Open Access Journal

ISSN: 2394-2320 (Online)

International Journal of Engineering Research in Computer Science and Engineering (IJERCSE)

Monthly Journal for Computer Science and Engineering


Visual Saliency Prediction

Authors: Anju P¹, Ashly Roy², Priya K.V³, Dr. M. Rajeswari⁴

Date of Publication: 20th May 2021

Abstract: Saliency prediction is one of the most challenging technologies in use, and it is based on the concept of object detection. The visual scene captured by the eye is represented as a saliency map. Initially, the prediction is performed with a supervised machine learning algorithm, the support vector machine (SVM). This algorithm is used for both classification and regression analysis, and it can handle both linear and non-linear problems. For feature extraction, the local binary pattern (LBP) operator is used; it transforms an image into an array (or image) of integer labels that describe the small-scale appearance of the image, and it is an efficient texture operator. However, this approach does not meet expectations: it is useful for smaller datasets, but larger datasets cannot be handled with this algorithm because it is time-consuming, the accuracy degrades when there is more noise, and interpreting the final model weights and the impact of individual features is harder to achieve. The prediction is therefore carried out with a convolutional neural network (CNN), a class of deep neural network consisting of an input layer, hidden layers, and an output layer. The prediction is performed with the LeNet architecture, the simplest and earliest CNN design. Saliency prediction is also carried out with VGG19, a variant of the VGG model, and a real-time prediction is proposed. The input images are randomly collected, and the SD-Saliency-900 and DUTS salient object detection datasets are used. Python IDLE is used for the implementation.
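
To make the pipeline described in the abstract concrete, the two sketches below illustrate, under stated assumptions, the kind of code the described approach implies; they are not the authors' implementation. The first sketch shows the baseline: local binary pattern (LBP) histograms used as texture features for a support vector machine. The helper name lbp_histogram, the 64x64 patch size, and the toy random data are assumptions introduced purely for illustration.

```python
# A minimal sketch (not the authors' code) of the LBP + SVM baseline described
# above: each image patch is summarised by a histogram of local binary pattern
# codes and classified with a support vector machine.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1  # LBP neighbourhood: 8 sampling points at radius 1 (assumed settings)

def lbp_histogram(gray):
    """Integer LBP codes for every pixel, reduced to a normalised histogram."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Toy stand-in data; real patches would come from SD-Saliency-900 / DUTS images.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(40, 64, 64)).astype(np.uint8)
labels = rng.integers(0, 2, size=40)           # 1 = salient patch, 0 = background

features = np.stack([lbp_histogram(img) for img in images])
clf = SVC(kernel="rbf").fit(features, labels)   # supervised classification step
print(clf.predict(features[:5]))
```

The second sketch is a minimal LeNet-style CNN in Keras, the simple early architecture the abstract names; the 32x32 grayscale input shape and the single salient/non-salient output are assumptions, and the training call is left commented out because the dataset loading code is not given in the source.

```python
# A minimal sketch (not the authors' code) of a LeNet-style CNN for saliency
# classification, assuming 32x32 grayscale inputs and a binary label.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(32, 32, 1)),
    layers.Conv2D(6, kernel_size=5, activation="tanh"),    # C1: 6 feature maps
    layers.AveragePooling2D(pool_size=2),                   # S2: subsampling
    layers.Conv2D(16, kernel_size=5, activation="tanh"),   # C3: 16 feature maps
    layers.AveragePooling2D(pool_size=2),                   # S4: subsampling
    layers.Flatten(),
    layers.Dense(120, activation="tanh"),                   # F5
    layers.Dense(84, activation="tanh"),                    # F6
    layers.Dense(1, activation="sigmoid"),                  # output: saliency score
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

# Hypothetical training call; X_train / y_train would come from the
# SD-Saliency-900 or DUTS images after resizing to 32x32:
# model.fit(X_train, y_train, epochs=10, batch_size=32)
```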

