Basic Extreme Learning Machine PyTorch implementation

Hi all, I wanted to share this little GitHub gist I made that is a basic implementation of the ELM, the "extreme learning machine". A quick explainer: it's a single-hidden-layer feedforward network whose hidden neurons get randomized input weights and biases that are locked/frozen and never trained. Only the output weights are learned, and not via backprop: they're solved analytically as a least-squares solution, i.e. the Moore-Penrose pseudo-inverse of the hidden layer's output matrix multiplied by the target outputs. There are many papers on the ELM that will describe it much better than I can.
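For concreteness, here's a minimal sketch of that idea in PyTorch. This is not the gist's exact code: the class and variable names are mine, and it assumes flattened inputs and one-hot targets.

```python
import torch

class ELM:
    def __init__(self, in_features, hidden_size, num_classes, device="cpu"):
        # Hidden layer weights/biases are random and frozen -- never trained.
        self.W = torch.randn(in_features, hidden_size, device=device)
        self.b = torch.randn(hidden_size, device=device)
        self.beta = None  # output weights (hidden_size, num_classes), solved analytically

    def _hidden(self, X):
        # Hidden layer activations H = sigmoid(X W + b)
        return torch.sigmoid(X @ self.W + self.b)

    def fit(self, X, T):
        # T is one-hot targets of shape (n_samples, num_classes).
        # Least-squares solution for the output weights: beta = pinv(H) @ T
        H = self._hidden(X)
        self.beta = torch.linalg.pinv(H) @ T

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(dim=1)

# Hypothetical usage on flattened MNIST (784 inputs, 10 classes):
#   elm = ELM(784, 1000, 10)
#   elm.fit(X_train, torch.nn.functional.one_hot(y_train, 10).float())
#   preds = elm.predict(X_test)
```

The whole "training" step is that single pseudo-inverse, which is why the runtimes below are seconds rather than epochs.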

In this demonstration we classify the MNIST dataset, and the results are seemingly quite good! Here are the results of a few runs (data preprocessing takes much longer than the actual training):

1000 Hidden Neurons:
Training Time: 2.3480 seconds
Prediction Time: 0.0872 seconds
Test Accuracy: 94.12%

5000 Hidden Neurons:
Training Time: 39.7465 seconds
Prediction Time: 0.3457 seconds
Test Accuracy: 97.28%

10000 Hidden Neurons:
Training Time: 213.8608 seconds
Prediction Time: 0.6887 seconds
Test Accuracy: 97.66%

Maybe too good to be true? Judge for yourself: