Is there anything that can be done during training to make a network more resilient to undesired weight drift during inference?
For instance, suppose we train a simple 784-512-10 MLP and load the trained weights onto some hardware, but the weight values drift downward over time due to reliability issues in the memory components, degrading accuracy. Assuming a known decay rate (v) of the weight values, is there any way to enforce a safety margin on the weights so that good accuracy is retained up to a certain time (e.g., 1 year)?
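To make the setting concrete, here is a minimal sketch of the failure mode I mean (the multiplicative drift model, the rate v, and the random weights are just placeholders for illustration; a real test would use the actual trained weights and the hardware's measured decay behavior):

```python
import numpy as np

def drift(weights, v, t):
    """Apply downward multiplicative drift: each weight shrinks
    by a fraction v per unit time, compounded over t steps."""
    return {name: w * (1.0 - v) ** t for name, w in weights.items()}

# Toy 784-512-10 MLP with random weights standing in for trained values.
rng = np.random.default_rng(0)
weights = {
    "W1": rng.standard_normal((784, 512)) * 0.05,
    "W2": rng.standard_normal((512, 10)) * 0.05,
}

def forward(x, w):
    h = np.maximum(x @ w["W1"], 0.0)  # ReLU hidden layer
    return h @ w["W2"]                # logits

x = rng.standard_normal((1, 784))
logits_fresh = forward(x, weights)
# Hypothetical drift of 0.1% per day over one year:
logits_aged = forward(x, drift(weights, v=0.001, t=365))
```

The question is essentially whether a training-time regularizer or constraint could keep `logits_aged` close to `logits_fresh`, i.e. keep the decision boundaries stable as all weights shrink toward zero at the known rate.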
Any input is appreciated! Thanks!