A Methodology for the Classification and Detection of Software Design Defects
Table of Contents
Introduction
The System Development Life Cycle
System Planning
Systems Analysis
Systems Design
System Implementation
System Maintenance
SDLC Cost
Algorithm
Related work
Regression via Classification
The Regression via Classification framework
Discretizing the target variable
Transforming classifier outputs to numeric predictions
Our RvC implementation for software defect prediction
Classification algorithms
MCDM methods
Data Envelopment Analysis (DEA)
Technique for Order Preference by Similarity to Ideal Solution (TOPSIS)
Elimination and choice expressing reality (ELECTRE)
Preference Ranking Organisation Method for Enrichment of Evaluations (PROMETHEE)
Experimental study
Performance measures
Data sources
Experimental design
Discussion of results
Concluding remarks
References
Appendix
Introduction
Many measures have been proposed in the literature to capture the structural quality of object-oriented (OO) code and design (Chidamber and Kemerer, 1991; Chidamber and Kemerer, 1994; Li and Henry, 1993; Lee et al., 1995; Briand et al., 1997b; Henderson-Sellers, 1996; Hitz and Montazeri, 1995; Bieman and Kang, 1995; Lake and Cook, 1994; Lorenz and Kidd, 1994; Tegarden et al., 1992). Such measures are aimed at providing ways of assessing the quality of software, for example, in the context of large-scale software acquisition (Mayrand and Coallier, 1996). Such an assessment of design quality is objective, and the measurement can be automated. Once the necessary measurement instruments are in place, even large software systems can thus be assessed very quickly, at low cost, and with little human involvement. But how do we know which measures actually capture important quality aspects? Despite numerous theories about what constitutes good OO design, only empirical studies of actual systems' structure and quality can provide tangible answers. Unfortunately, only a few studies have so far investigated the actual impact of these measures on quality attributes such as fault-proneness (Basili et al., 1996; Briand et al., 1997b; Cartwright and Shepperd, 1999), productivity or effort (Chidamber et al., 1998), or the amount of maintenance modifications (Li and Henry, 1993).
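To illustrate how such structural measurement can be automated, the following sketch computes a simple variant of the Weighted Methods per Class (WMC) measure from the Chidamber and Kemerer suite, using unit weights (i.e., a plain method count per class). It operates on Python source via the standard-library ast module; the function name wmc and the sample source are illustrative choices, not part of the original paper.

```python
import ast

def wmc(source: str) -> dict:
    """WMC with unit method weights: count methods defined in each class."""
    tree = ast.parse(source)
    return {
        node.name: sum(isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))
                       for n in node.body)
        for node in ast.walk(tree)
        if isinstance(node, ast.ClassDef)
    }

sample = """
class Account:
    def deposit(self, amount): ...
    def withdraw(self, amount): ...

class Logger:
    def log(self, msg): ...
"""
print(wmc(sample))  # {'Account': 2, 'Logger': 1}
```

A production measurement instrument would, of course, parse the system's actual implementation language and apply non-unit weights (e.g., per-method complexity), but the principle is the same: once the parser is in place, the measure is computed mechanically over the whole code base.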
Many design methodologies treat design evaluation meetings (DEMs) as occasions in which, in contrast to design meetings, design activities are supposed to be marginal; often, such activities are even assumed not to occur at all. Empirical studies of design projects, however, provide evidence leading to the belief that design activities do take place in DEMs. Our aim is to identify and understand the activities that actually occur in this type of meeting.
The System Development Life Cycle
The Systems Development Life Cycle (SDLC) is a method for assuring that the Information Systems (IS) being developed meet the requirements and needs of an organization or company. This methodology establishes processes and guidelines governing the planning, analysis, design, implementation, and maintenance of IS within a company. The primary objectives of any SDLC are to deliver an IS that: 1) meets or exceeds customer expectations, 2) works efficiently and effectively with the current and planned information technology infrastructure, and 3) is inexpensive to maintain and cost-effective to upgrade (DOJ, 2000). The SDLC thus provides an overall structured approach to IS development, operation, and maintenance.
The five phases of the SDLC must be executed sequentially and are extremely dependent on ...