Authors: Zorana Štaka, Marko Mišić
Date of Publication: 20th March 2024
Abstract: Normalization techniques have been extensively used in deep learning methods to facilitate the learning process. Normalization decreases the impact of different scales of the input features, improves convergence of the model affected by changes in input features or trained weights, and positively affects the generalization and robustness of the model. Three main normalization techniques have been used in the open literature for research in the domain of computer vision: Batch, Layer, and Instance Normalization. Newer approaches, such as Switchable Normalization, suggest using different normalizers for different normalization layers in a deep neural network. In this work, we evaluate these four normalization techniques on a computer vision problem, leaf counting in the plant Arabidopsis thaliana. The four normalization techniques were evaluated using the evaluation metrics commonly used for the leaf counting problem: difference in count, absolute difference in count, accuracy, and mean squared error. The results show that the best models are those using Instance, Layer, and Switchable Normalization.
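For readers unfamiliar with how these normalizers differ, the sketch below illustrates the axes over which each one computes its statistics for an activation tensor in NCHW layout. It is a minimal NumPy illustration under simplifying assumptions: it omits the learnable affine (scale and shift) parameters, and the Switchable Normalization weights are fixed constants here rather than the softmax-normalized learned importance weights of the original method. It is not the implementation used in this work.

```python
import numpy as np

def normalize(x, axes, eps=1e-5):
    """Normalize x to zero mean and unit variance over the given axes."""
    mean = x.mean(axis=axes, keepdims=True)
    var = x.var(axis=axes, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

# Toy activation tensor in NCHW layout: batch of 4, 8 channels, 16x16 spatial map.
x = np.random.randn(4, 8, 16, 16)

bn = normalize(x, axes=(0, 2, 3))  # Batch Norm: per channel, across the batch and spatial dims
ln = normalize(x, axes=(1, 2, 3))  # Layer Norm: per sample, across all channels and spatial dims
inorm = normalize(x, axes=(2, 3))  # Instance Norm: per sample and per channel, over spatial dims only

# Switchable Norm (sketch): blend the three sets of statistics with importance
# weights (learned via softmax in the original method; fixed here for illustration),
# then normalize with the blended mean and variance.
axes_per_norm = [(0, 2, 3), (1, 2, 3), (2, 3)]
means = [x.mean(axis=a, keepdims=True) for a in axes_per_norm]
vars_ = [x.var(axis=a, keepdims=True) for a in axes_per_norm]
w = np.array([0.4, 0.3, 0.3])  # hypothetical importance weights summing to 1
mean_sn = sum(wi * m for wi, m in zip(w, means))
var_sn = sum(wi * v for wi, v in zip(w, vars_))
sn = (x - mean_sn) / np.sqrt(var_sn + 1e-5)
```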