Automatically Avoiding Overfitting in Deep Neural Networks by Using Hyper-Parameters Optimization Methods
DOI:
https://doi.org/10.3991/ijoe.v19i05.38153

Keywords:
Deep Learning, Hyper-Parameters Optimization, Regularization, Overfitting

Abstract
Overfitting is one of the main problems that deep learning faces. It yields classification results that appear highly accurate but are in fact misleading. Consequently, if the overfitting problem is not fully resolved, systems that rely on prediction or recognition and are sensitive to accuracy will produce untrustworthy results. Prior approaches have helped to lessen this issue but fell short of eliminating it entirely while preserving critical data. This paper proposes a novel approach that eliminates overfitting completely while guaranteeing the preservation of critical data. Numeric and image datasets are employed in two types of networks: convolutional and deep neural networks. After introducing three regularization techniques (L1, L2, and dropout), two optimization algorithms (Bayesian optimization and random search) are applied to select the hyper-parameters automatically, with the regularization technique itself treated as one of the automatically selected hyper-parameters. In addition to completely eliminating the overfitting issue, the obtained results showed an accuracy of 97.82% and 90.72% on the image dataset when using the Bayesian and random search techniques, respectively, and 95.3% and 96.5% when using the same algorithms on the numeric dataset.
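To illustrate the core idea described in the abstract, the sketch below shows how the regularization method itself can be exposed as a tunable hyper-parameter. This is a minimal sketch using the keras-tuner library, not the authors' code: the dataset shape, layer sizes, value ranges, and search budget are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's implementation):
# the choice among L1, L2, and dropout is itself a hyper-parameter
# handed to an automatic search (Bayesian or random).
import keras
from keras import layers, regularizers
import keras_tuner


def build_model(hp):
    model = keras.Sequential()
    model.add(layers.Input(shape=(784,)))  # assumed: flattened 28x28 images

    # Regularization technique as a searchable hyper-parameter.
    reg_kind = hp.Choice("regularizer", ["l1", "l2", "dropout"])
    reg = None
    if reg_kind == "l1":
        reg = regularizers.L1(hp.Float("l1", 1e-6, 1e-2, sampling="log"))
    elif reg_kind == "l2":
        reg = regularizers.L2(hp.Float("l2", 1e-6, 1e-2, sampling="log"))

    for i in range(hp.Int("num_layers", 1, 3)):
        model.add(layers.Dense(hp.Int(f"units_{i}", 32, 256, step=32),
                               activation="relu",
                               kernel_regularizer=reg))
        if reg_kind == "dropout":
            model.add(layers.Dropout(hp.Float("rate", 0.1, 0.5, step=0.1)))

    model.add(layers.Dense(10, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


# Swap BayesianOptimization for RandomSearch to compare the two strategies
# reported in the paper.
tuner = keras_tuner.BayesianOptimization(
    build_model, objective="val_accuracy", max_trials=20)
# tuner.search(x_train, y_train, epochs=10, validation_split=0.2)
```

Selecting on validation accuracy this way lets the search discard hyper-parameter combinations that fit the training data but generalize poorly, which is how the tuner steers away from overfitted configurations.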
Copyright (c) 2023 Zahraa Kadhim, Dr. Hasanen S. Abdullah, Dr. Khalil Ibrahim Ghathwan
This work is licensed under a Creative Commons Attribution 4.0 International License.