Maintaining orthogonality between layers during training

Hello everyone,

I have a model with two branches that use a 'soft parameter sharing' mechanism.
My question is:
How can I maintain orthogonality between layers during training?
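To make the setup concrete, here is a minimal sketch of what I have in mind. The architecture, layer sizes, and penalty weight below are placeholders, and the soft constraint (adding a term of the form ||W^T W - I||_F^2 to the loss) is just one candidate approach I have been considering, not necessarily the right one:

```python
import torch
import torch.nn as nn

def orthogonality_penalty(weight: torch.Tensor) -> torch.Tensor:
    """Soft orthogonality regularizer: ||W^T W - I||_F^2.

    Pushes the columns of `weight` toward an orthonormal set
    without enforcing a hard constraint.
    """
    gram = weight.t() @ weight
    eye = torch.eye(gram.size(0), device=weight.device, dtype=weight.dtype)
    return ((gram - eye) ** 2).sum()

class TwoBranchModel(nn.Module):
    """Placeholder standing in for my real two-branch model."""
    def __init__(self, in_dim: int = 32, hidden: int = 64, out_dim: int = 10):
        super().__init__()
        self.branch_a = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim)
        )
        self.branch_b = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim)
        )

    def forward(self, x: torch.Tensor):
        return self.branch_a(x), self.branch_b(x)

model = TwoBranchModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()
lam = 1e-4  # penalty weight (placeholder value)

# One training step on dummy data.
x, target = torch.randn(8, 32), torch.randn(8, 10)
out_a, out_b = model(x)
loss = criterion(out_a, target) + criterion(out_b, target)

# Add the soft orthogonality term for every linear layer in both branches.
for module in model.modules():
    if isinstance(module, nn.Linear):
        loss = loss + lam * orthogonality_penalty(module.weight)

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

I am also aware of torch.nn.utils.parametrizations.orthogonal, which reparametrizes a layer so its weight stays exactly orthogonal throughout training, but I am not sure whether such a hard constraint or a soft penalty like the one above is the better fit for soft parameter sharing.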

Many thanks in advance