Authors: Vaibhav Govindwar, Aman Akbani, Aishwarya Wanjari, Aachal Nadeshwar, Prachi Aghase, C R Pote
Date of Publication: 8th May 2024
Abstract: Understanding others' intentions through nonverbal cues such as facial expressions is crucial in human communication. This paper describes in detail how Convolutional Neural Network (CNN) models are designed and trained using tf.keras. The aim is to sort facial photographs into one of seven emotion classes; the model is built to learn the hidden nonlinearities in the input facial images that are critical for discriminating the type of emotion a person is expressing. The proposed model is based on Yann LeCun's LeNet-5 architecture: it uses subsampling, feature maps, and the Rectified Linear Unit (ReLU) activation function between the convolutional layers and the fully connected layers, with a softmax activation function at the output. The CNN models were trained on the FER-2013 dataset, which consists of 35,887 structured 48x48-pixel grayscale images: 28,709 in the training set and 7,178 in the testing set. The dataset is organized into two folders, train and test, each further separated into subfolders, one per emotion class. Dropout and batch normalization are employed to mitigate overfitting. Since this is a multiclass classification problem, the model is trained with categorical cross-entropy loss and an accuracy metric, and its performance is assessed by examining the training-epoch history. [13]
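The architecture described above can be sketched in tf.keras as follows. This is a minimal illustrative sketch, not the authors' exact model: the filter counts, kernel sizes, dense width, and dropout rate are assumptions; only the overall shape (convolution + ReLU, batch normalization, subsampling, dropout, 7-way softmax, categorical cross-entropy) follows the abstract.

```python
# Illustrative LeNet-5-style CNN for 48x48 grayscale FER-2013 images.
# Layer sizes are assumed for the sketch, not taken from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_emotion_cnn(num_classes=7, input_shape=(48, 48, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Convolution produces feature maps; ReLU adds non-linearity
        layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
        layers.BatchNormalization(),       # helps mitigate overfitting
        layers.MaxPooling2D((2, 2)),       # subsampling, as in LeNet-5
        layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),               # dropout against overfitting
        # Softmax output for the 7-class emotion problem
        layers.Dense(num_classes, activation="softmax"),
    ])
    # Categorical cross-entropy loss with an accuracy metric,
    # matching the training setup described in the abstract
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_emotion_cnn()
```

Calling `model.fit(...)` on the FER-2013 train split would then return a history object whose per-epoch loss and accuracy can be inspected, as the abstract describes.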
References: