Accuracy and robustness trade-off

Hello!

I am currently working on an autoencoder for sparse data. I train the autoencoder on background noise and then use it to look for anomalies in other datasets. The assumption is that if a sample is anomalous, the autoencoder will produce a large reconstruction error. However, this doesn't work very well, because some background examples have a larger error than the signal (anomalous) samples. My question is: is there a way to make a trade-off during training between the average error (accuracy) and the spread of errors, i.e. to reduce the standard deviation of the error distribution (robustness)?
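
To make the idea concrete, here is a rough sketch of the kind of loss I have in mind (PyTorch; the weight `lam`, the toy model, and the random data are only placeholders for illustration, not my actual setup):

```python
import torch
import torch.nn as nn

def mean_std_loss(x, x_hat, lam=0.1):
    """Mean reconstruction error plus a penalty on its spread.

    lam is a hypothetical weight trading accuracy (mean error) against
    robustness (std of per-sample errors); lam = 0 recovers plain MSE.
    """
    per_sample_err = ((x - x_hat) ** 2).mean(dim=1)  # one error per sample
    return per_sample_err.mean() + lam * per_sample_err.std()

# Minimal usage sketch with a toy autoencoder on random "background" data.
ae = nn.Sequential(nn.Linear(32, 8), nn.ReLU(), nn.Linear(8, 32))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
x = torch.randn(256, 32)  # stand-in for a batch of background noise
loss = mean_std_loss(x, ae(x), lam=0.1)
loss.backward()
opt.step()
```

Is something along these lines a reasonable approach, or is there a more principled way to control the width of the error distribution during training?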

Thank you in advance.