Ultra-sparse simple linear regression with PyTorch?

Hi all, I’m still a beginner with NNs and was wondering if I could set up an extremely simple linear regression, without any kind of interaction between features, using PyTorch. The idea is to fit an independent simple linear regression per feature, i.e. `y_i = w_i * x_i + b_i` for each feature `i`, and obtain the coefficients and intercepts across tens of thousands of features and hundreds of thousands of samples. This isn’t technically a NN, but the point is to use the GPU and PyTorch infrastructure to accelerate acquisition of the coefficients / intercepts.

The closest thing I can find is an autoencoder; however, in my case (a) there are no connections between different features, (b) there is only a single activation and no hidden layers, and (c) the error is calculated by comparing the output against a different dataset of the same dimensions.

Thanks for any help and apologies if this is stupid :slight_smile:

Bump. Any thoughts please?

You could use convolution kernels of size 1.
Using 12 features as an example: transform your input into a 1-dimensional spatial input and “spread” the 12 features along the channel dimension (`x = x.view(1, 12, 1)`, i.e. (batch, channels, length)).
Then define a 1D convolution with bias enabled, 12 input channels, 12 output channels, and the number of groups set to 12 to ensure the kernels don’t mix across channels (`c1d = torch.nn.Conv1d(in_channels=12, out_channels=12, kernel_size=1, groups=12, bias=True)`).
This way you learn a weight and a bias for each channel separately; afterwards you can reshape the output from 12 channels back to a spatial extent of 12 (`y = c1d(x); y = y.view(1, 1, 12)`).
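
Putting that together, here’s a minimal end-to-end sketch. The feature/sample counts, the synthetic data, and the Adam/MSE training loop are illustrative choices of mine, not something fixed by your setup; swap in your real tensors and move everything to the GPU with `.to('cuda')` for the actual workload:

```python
import torch

n_features = 12      # stand-in for the tens of thousands of features in practice
n_samples = 1000     # stand-in for the hundreds of thousands of samples

# Toy data: X holds the inputs, Y the targets (same shape, per the question).
X = torch.randn(n_samples, n_features)
true_w = torch.randn(n_features)
true_b = torch.randn(n_features)
Y = X * true_w + true_b + 0.1 * torch.randn(n_samples, n_features)

# Grouped 1x1 convolution: with groups == channels, each output channel sees
# only its own input channel, so we learn one weight and one bias per feature.
c1d = torch.nn.Conv1d(in_channels=n_features, out_channels=n_features,
                      kernel_size=1, groups=n_features, bias=True)

opt = torch.optim.Adam(c1d.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()

# Reshape to (batch, channels, length) = (n_samples, n_features, 1);
# each sample is a batch element, each feature a channel of length 1.
x = X.unsqueeze(-1)
y = Y.unsqueeze(-1)

for step in range(500):
    opt.zero_grad()
    pred = c1d(x)
    loss = loss_fn(pred, y)
    loss.backward()
    opt.step()

# The weight tensor has shape (n_features, 1, 1), one scalar per group,
# so flattening it gives the per-feature slopes directly.
slopes = c1d.weight.view(-1)   # shape (n_features,)
intercepts = c1d.bias          # shape (n_features,)
```

Because every group only touches its own channel, this is mathematically equivalent to fitting `n_features` independent `y = w*x + b` regressions, but all of them are updated in a single batched pass, which is exactly where the GPU helps.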

https://pytorch.org/docs/stable/nn.html?highlight=conv1d#torch.nn.Conv1d
I hope this helps :slight_smile:

That’s clever and very helpful. Using convolution kernels in this way would definitely not have occurred to me. Thanks.

Glad I could help :slight_smile: