Train a network to tolerate undesired weight shift during inference

Hi,

Is there anything that could be done during training to help the network be more resilient to undesired weight shifts during inference?

For instance, suppose we train a simple 784-512-10 MLP and load the trained weights onto some hardware, but the weight values drift downward due to reliability issues in the memory components, degrading the accuracy. Assuming a known rate of decrease (v) of the weight values, is there any way to enforce a safety margin on the weights so that good accuracy is retained up to a certain time (e.g., 1 year)?
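One direction I've been considering (just a minimal sketch, assuming PyTorch and a simple uniform-scaling drift model) is to simulate the worst-case drift during training: in each forward pass the weights are temporarily scaled down by a random factor up to the expected maximum drift, so the network learns to keep its accuracy under the shifted weights. The `max_drift` parameter and the `DriftAwareLinear` name are just illustrative assumptions, not an established API:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DriftAwareLinear(nn.Linear):
    def __init__(self, in_features, out_features, max_drift=0.1):
        super().__init__(in_features, out_features)
        # Worst-case fractional decrease, e.g. v * (1 year) -- an assumed drift model.
        self.max_drift = max_drift

    def forward(self, x):
        if self.training:
            # Sample a drift factor in [1 - max_drift, 1] and apply it to the
            # weights for this forward pass only; gradients still flow to the
            # original weights through the multiplication.
            drift = 1.0 - self.max_drift * torch.rand(1, device=self.weight.device)
            return F.linear(x, self.weight * drift, self.bias)
        return F.linear(x, self.weight, self.bias)

# The 784-512-10 MLP from the post, built with the drift-aware layers.
model = nn.Sequential(
    DriftAwareLinear(784, 512, max_drift=0.1),
    nn.ReLU(),
    DriftAwareLinear(512, 10, max_drift=0.1),
)
```

But I'm not sure whether this kind of noise/perturbation injection is the right way to build in a safety margin, or whether there is a more principled approach.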

Any inputs are appreciated! Thanks!

Could you explain this (interesting) use case a bit more?
Why would the weight values drift downward if your memory is unreliable?
Wouldn’t unreliable memory introduce random errors and thus corrupt the values in an unpredictable way?