Weight sharing within one layer

Hi,

I’m interested in implementing SortNet (SortNet: Learning to Rank by a Neural Preference Function | IEEE Journals & Magazine | IEEE Xplore). It’s an architecture designed for pairwise learning-to-rank that operates on a concatenation of two feature vectors. To enforce symmetrical behaviour, it adopts an interesting weight-sharing technique:
[figure from the paper illustrating the weight-sharing scheme between dual hidden neurons]
For each hidden neuron h_i, there exists a dual neuron h_{i'} with the weights swapped, so that:

  1. v_{x_k, i'} = v_{y_k, i} and v_{y_k, i'} = v_{x_k, i}, i.e., the weights from the inputs x_k, y_k to the hidden neuron i are swapped in the connections to i';
  2. w_{i', ≻} = w_{i, ≺} and w_{i', ≺} = w_{i, ≻}, i.e., the weights of the connections from the hidden neuron i to the outputs N_≻, N_≺ are swapped in the connections leaving i';
  3. b_i = b_{i'} and b_≻ = b_≺, i.e., the biases are shared between the dual hidden neurons i and i' and between the outputs N_≻ and N_≺.

I was not able to find a reference implementation. As I understand it, this setup would require tying individual elements of an nn.Linear's weight tensor to each other. Is there a way to achieve this in PyTorch?
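
The closest idea I've come up with so far is to not tie entries inside an existing nn.Linear at all, but to register only the "free" half of the parameters and assemble the full weight matrices in forward() with torch.cat, applying them through F.linear. Here is a rough sketch of what I mean (class and attribute names are mine, and I'm assuming a single hidden layer with sigmoid activations):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SortNetPairwise(nn.Module):
    """Sketch of a SortNet-style comparator with the weight sharing
    described above. in_features is the size of one input vector x
    (same for y); hidden is the number of hidden neuron pairs (i, i')."""

    def __init__(self, in_features: int, hidden: int):
        super().__init__()
        # Free parameters: weights from x and y to the "primal" hidden neurons i.
        self.v_x = nn.Parameter(torch.empty(hidden, in_features))
        self.v_y = nn.Parameter(torch.empty(hidden, in_features))
        self.b_h = nn.Parameter(torch.zeros(hidden))    # shared bias b_i = b_{i'}
        # Free output weights from the primal hidden neurons to N_≻ and N_≺.
        self.w_gt = nn.Parameter(torch.empty(hidden))   # w_{i, ≻}
        self.w_lt = nn.Parameter(torch.empty(hidden))   # w_{i, ≺}
        self.b_out = nn.Parameter(torch.zeros(1))       # shared bias b_≻ = b_≺
        for p in (self.v_x, self.v_y, self.w_gt, self.w_lt):
            nn.init.normal_(p, std=0.1)

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # Hidden weight matrix for the concatenated (x, y) input:
        #   rows for i  use [v_x | v_y],
        #   rows for i' use [v_y | v_x]   (rule 1: swapped input weights).
        W_primal = torch.cat([self.v_x, self.v_y], dim=1)   # (hidden, 2*in)
        W_dual = torch.cat([self.v_y, self.v_x], dim=1)     # (hidden, 2*in)
        W_hidden = torch.cat([W_primal, W_dual], dim=0)     # (2*hidden, 2*in)
        b_hidden = torch.cat([self.b_h, self.b_h], dim=0)   # rule 3 (hidden biases)

        inp = torch.cat([x, y], dim=-1)
        h = torch.sigmoid(F.linear(inp, W_hidden, b_hidden))

        # Output weight matrix:
        #   N_≻ sees [w_{i,≻} | w_{i',≻}] = [w_gt | w_lt]   (rule 2),
        #   N_≺ sees [w_{i,≺} | w_{i',≺}] = [w_lt | w_gt].
        W_out = torch.stack([
            torch.cat([self.w_gt, self.w_lt]),
            torch.cat([self.w_lt, self.w_gt]),
        ])                                                   # (2, 2*hidden)
        b_out = self.b_out.expand(2)                         # rule 3 (output biases)
        return torch.sigmoid(F.linear(h, W_out, b_out))
```

Quick check of the symmetry:

```python
net = SortNetPairwise(in_features=10, hidden=16)
x, y = torch.randn(4, 10), torch.randn(4, 10)
print(net(x, y))   # columns: score for x ≻ y, score for x ≺ y
print(net(y, x))   # should be the same values with the two columns swapped
```

If I read the paper correctly, swapping x and y should simply exchange the two output columns, which is the symmetry the construction is after, and because the shared entries are built from the same Parameters, the gradients accumulate on them automatically. Is this functional approach reasonable, or is there a more idiomatic way to tie entries within one layer?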