Study of the double descent phenomenon and comparison of minimax approximation with L2-regularization

Authors

  • M.I. Kryvosheia, Vinnytsia National Technical University

DOI:

https://doi.org/10.31649/1681-7893-2025-49-1-36-43

Keywords:

double descent, L2 regularization, minimax approximation, polynomial models, machine learning, anomalies

Abstract

This paper investigates the double descent phenomenon and proposes minimax approximation (the L∞ norm) as an alternative to L2 regularization for improving the quality of model approximation. Double descent describes how the error depends on model complexity: the error first decreases, then increases due to overfitting, and then decreases again. In contrast, experiments with an unregularized model showed a predominantly increasing error with only short periods of decline, which corresponds to an incomplete manifestation of the phenomenon. This is likely caused by anomalous points in the data that produced an exponential growth of the error at high polynomial degrees. Three approaches were compared: a classical model without regularization, a model with L2 regularization, and minimax approximation. L2 regularization adds a penalty on large coefficient norms, which stabilized the error and prevented overfitting, especially at high polynomial degrees (200+). Minimax approximation minimizes the maximum error, which provided better robustness to anomalies and outperformed L2 regularization at low degrees (up to 50). The results confirm that minimax approximation is more effective for problems with anomalies, while L2 regularization performs better for complex models with high polynomial degrees. The findings contribute to the understanding of the double descent phenomenon and show that the choice of approach should depend on data characteristics and model requirements.
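
The comparison can be illustrated numerically. The L2-regularized (ridge) fit minimizes ||Xw - y||^2 + λ||w||^2, while the minimax fit minimizes max_i |x_i^T w - y_i|, which can be recast as a linear program. The sketch below is illustrative only and is not the code used in the paper: the synthetic sine data, the injected anomalies, the polynomial degree, and the λ value are assumptions chosen for demonstration, and scipy.optimize.linprog is used to solve the minimax problem.

    # Minimal sketch: ridge (L2) regression vs. minimax (L-infinity) polynomial fitting
    # on synthetic data with injected anomalies. All parameters are illustrative.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)

    # Synthetic 1-D data with two anomalous points
    x = np.linspace(-1.0, 1.0, 60)
    y = np.sin(2.0 * np.pi * x) + 0.05 * rng.standard_normal(x.size)
    y[[10, 35]] += 1.5

    def design_matrix(x, degree):
        """Vandermonde design matrix for a polynomial of the given degree."""
        return np.vander(x, degree + 1, increasing=True)

    def fit_ridge(X, y, lam):
        """L2-regularized least squares: w = (X^T X + lam*I)^{-1} X^T y."""
        n = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

    def fit_minimax(X, y):
        """Minimax (L-infinity) fit as a linear program:
        minimize t subject to -t <= X w - y <= t."""
        m, n = X.shape
        c = np.concatenate([np.zeros(n), [1.0]])          # objective: t only
        A_ub = np.block([[X, -np.ones((m, 1))],           #  X w - t <=  y
                         [-X, -np.ones((m, 1))]])         # -X w - t <= -y
        b_ub = np.concatenate([y, -y])
        bounds = [(None, None)] * n + [(0.0, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        return res.x[:n]

    degree = 15            # illustrative; the paper sweeps much higher degrees
    X = design_matrix(x, degree)
    w_ridge = fit_ridge(X, y, lam=1e-3)
    w_minimax = fit_minimax(X, y)

    for name, w in [("ridge (L2)", w_ridge), ("minimax (Linf)", w_minimax)]:
        r = X @ w - y
        print(f"{name}: max|error| = {np.max(np.abs(r)):.3f}, RMSE = {np.sqrt(np.mean(r ** 2)):.3f}")

The printed maximum error and RMSE allow the two fits to be compared on the same data, mirroring the comparison of maximum-error behavior and overall accuracy described in the abstract.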

Author Biography

M.I. Kryvosheia, Vinnytsia National Technical University

Postgraduate student of the Department of Automation and Intelligent Information Technologies, Faculty of Intelligent Information Technologies and Automation

References

Golovach, A. B. Mathematical foundations of modeling complex systems. Kyiv: Naukova Dumka, 2018. – 286 p.

Burennikova, N. V., Zelinska, O. V., Ushkalenko, I. M., & Burennikov, Y. Yu. Optimization methods and models. Vinnytsia: VNTU, 2019. – 121 p.

Ogirko, O. I., & Galayko, N. V. Probability theory and mathematical statistics: a textbook. Lviv: Lviv State University of Internal Affairs, 2017. – 292 p.

Belkin, M., Hsu, D., Ma, S., & Mandal, S. Reconciling Modern Machine Learning and the Bias-Variance Trade-off. PNAS, 2019. – p. 23.

Ng, A. Y. Feature Selection, L1 vs. L2 Regularization, and Rotational Invariance. Proceedings of the 21st International Conference on Machine Learning, 2004. – p. 8.

Boyd, S., & Vandenberghe, L. Convex Optimization. Cambridge University Press, 2004. – 732 p.

Chen, T., & Guestrin, C. XGBoost: A Scalable Tree Boosting System. KDD, 2016. – pp. 785–790.

Bishop, C. M. Pattern Recognition and Machine Learning. Springer, 2006. – 738 p.


Published

2025-06-18

How to Cite

[1]
M. Kryvosheia, “Study of the double descent phenomenon and comparison of minimax approximation with L2-regularization”, Опт-ел. інф-енерг. техн., vol. 49, no. 1, pp. 36–43, Jun. 2025.

Issue

Section

OptoElectronic/Digital Methods and Systems for Image/Signal Processing
