Author: Rahul M
Date of Publication: 19th March 2020
Abstract: Machine Learning (ML) is the discipline that studies methods for automatically inferring models from data. Machine learning has been successfully applied in several areas of software engineering, ranging from behaviour extraction to testing to bug fixing, and more applications are yet to be defined. However, a better understanding of ML methods, their assumptions, and their guarantees would help software engineers adopt and identify the appropriate methods for their desired applications. In this work, we review and reflect on the applications of ML to software engineering, organised according to the models they produce and the methods they use. When testing software, it has been shown that there are substantial benefits to be gained from approaches that exercise unusual or unexplored interactions with a system, such as random testing, fuzzing, and exploratory testing. However, such approaches have a drawback in that the outputs of the tests need to be manually checked for correctness, representing a significant burden for the engineer. This proposed application presents a technique to support the process of identifying which tests have passed or failed by combining clustering and semi-supervised learning. We have shown that by using machine learning it is possible to cluster test cases in such a way that those corresponding to failures concentrate into smaller clusters. Examining the test outcomes in cluster-size order has the effect of prioritising the results: those that are checked early have a much higher likelihood of being failing tests.
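The clustering and prioritisation step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes test outcomes have already been encoded as numeric feature vectors (the encoding, the cluster count, and all names here are hypothetical), uses a basic k-means in place of whatever clustering the authors applied, and omits the semi-supervised component entirely. The idea it demonstrates is the one stated in the abstract: failing tests tend to form small clusters, so checking tests in ascending cluster-size order surfaces likely failures first.

```python
# Illustrative sketch: cluster test-output feature vectors, then order the
# tests so that members of the smallest clusters are manually checked first.
import math
import random


def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means over a list of coordinate tuples; returns one
    cluster label per point."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: math.dist(p, centroids[c]))
        # Recompute each centroid as the mean of its cluster members.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = tuple(sum(dim) / len(members)
                                     for dim in zip(*members))
    return labels


def prioritise_by_cluster_size(tests, labels):
    """Order tests so those in the smallest clusters come first,
    since small clusters are more likely to contain failures."""
    sizes = {}
    for lab in labels:
        sizes[lab] = sizes.get(lab, 0) + 1
    order = sorted(range(len(tests)), key=lambda i: sizes[labels[i]])
    return [tests[i] for i in order]


# Hypothetical usage: four tests whose outputs look alike, and one outlier.
features = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (5.0, 5.0)]
test_ids = ["t1", "t2", "t3", "t4", "t5"]
labels = kmeans(features, k=2)
print(prioritise_by_cluster_size(test_ids, labels)[0])  # the outlier, "t5"
```

The outlying test lands in a singleton cluster and is therefore examined first, which is the prioritisation effect the abstract claims.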