International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 186 - Number 32
Year of Publication: 2024
Authors: S. Jeevidha, S. Saraswathi
DOI: 10.5120/ijca2024923857
S. Jeevidha, S. Saraswathi. Learning Methods and Parameters used in Neural Architecture Search for Image Classification. International Journal of Computer Applications. 186, 32 (Aug 2024), 19-24. DOI=10.5120/ijca2024923857
Neural architecture search (NAS) is a branch of deep learning that is widely adopted in autonomous systems, object detection, image classification, and gaming. The overall performance of a neural architecture depends largely on its learning methods and parameters. Previously, a data scientist would choose a neural architecture and its enhancement parameters based on the model family, construct an ensemble of models, and perform training using domain knowledge. In recent decades, however, end users who are familiar with the data at hand do not necessarily have experience in designing neural architectures. For any image classification task, the best learning methods and parameters should be chosen with as few iterations as possible in order to reduce time and computing resources. As a result, there is growing interest in enhancing the parameters and learning methods of neural architecture search. Learning methods and parameters control the rate at which learning occurs and shape the model so that it achieves higher accuracy. This paper provides a review of the learning methods and parameters used in NAS techniques, such as learning rates, batch normalization, batch sizes, data augmentation, and evaluation parameters, as implemented in various image classification techniques.
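To make one of the reviewed parameters concrete, the sketch below illustrates the core computation behind batch normalization: activations in a mini-batch are normalized to zero mean and unit variance, then rescaled by a learnable scale (gamma) and shift (beta). This is a minimal illustrative example, not code from the paper; the function name and defaults are assumptions for the sketch.

```python
import math

def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Illustrative batch normalization over a batch of scalar activations.

    Normalizes to zero mean and unit variance, then applies a learnable
    scale (gamma) and shift (beta). eps guards against division by zero.
    """
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in batch]

# After normalization the batch has (approximately) zero mean and unit variance.
normed = batch_norm([1.0, 2.0, 3.0, 4.0])
```

In a real NAS pipeline this operation is applied per feature channel inside the network, and gamma and beta are updated by the optimizer along with the other weights.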