Automated Behavioral & Acceptance Testing of Refactoring Engines
Refactoring is an essential design technique: a behavior-preserving program transformation that improves the design of a program. Refactoring engines are tools that automate the application of refactorings, and, like any software, they can contain bugs. Developers of refactoring engines commonly implement refactorings in an ad hoc manner, since no guidelines exist for evaluating the correctness of refactoring implementations. To address this problem, automated Java program generators, referred to as JDOLLY and UDITA, systematically enumerate combinations of Java constructs to produce test programs. The refactoring under test is applied to each generated program to evaluate the correctness of the transformation, and failing transformations are classified. Before any major refactoring, however, acceptance tests must be added to the system to ensure equivalent program behavior before and after the changes. In the proposed system, new programs are therefore added, the refactorings are applied automatically using existing tools, and both behavioral and acceptance testing are performed on the manually written programs as well as the newly added ones. The advantage of our enhancement is that it reports significantly more true positives and significantly fewer false positives than the initial model, while ensuring that the system's requirements are not changed by refactoring. Finally, we evaluate the outcomes on compilable programs, both hand-written and automatically generated, under different refactoring practices.
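The core oracle described above, applying a refactoring to a generated program and checking that observable behavior is unchanged, can be sketched as follows. This is a minimal illustration, not the JDOLLY or UDITA implementation: a "program" is modeled as a pure function from input to output, and the hypothetical `refactor` method stands in for the refactoring engine under test.

```java
import java.util.function.Function;

// Minimal sketch of a behavioral-testing oracle for a refactoring engine.
// A real engine transforms Java source; here a program is abstracted as a
// pure Function so the before/after comparison is easy to demonstrate.
public class RefactoringOracle {

    // Stand-in for the refactoring under test (e.g. Rename, Pull Up Method).
    // A correct refactoring must produce a behaviorally equivalent program.
    static Function<Integer, Integer> refactor(Function<Integer, Integer> program) {
        // A real implementation would rewrite source code; this sketch
        // simply wraps the original, i.e. a trivially correct refactoring.
        return input -> program.apply(input);
    }

    // Behavioral test: for every generated input, the original and the
    // refactored program must produce the same output.
    static boolean behaviorPreserved(Function<Integer, Integer> original,
                                     int[] inputs) {
        Function<Integer, Integer> refactored = refactor(original);
        for (int in : inputs) {
            if (!original.apply(in).equals(refactored.apply(in))) {
                return false; // failing transformation: behavior changed
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // One "generated" test program and a small input set.
        Function<Integer, Integer> prog = x -> x * x + 1;
        System.out.println(behaviorPreserved(prog, new int[]{0, 1, -5, 42}));
    }
}
```

In the full approach, a transformation that fails this check is further classified, e.g. as a compilation error in the refactored program versus a genuine behavioral change, which is what allows the technique to separate true positives from false positives.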