Recent developments have seen a growing number of proposals for suitable perspectives and approaches to evaluating information systems (IS) project performance. For many years, the de facto standard has been conformance to time, cost and specifications, also known as the triple constraint method (TCM) (Atkinson, 1999; White & Fortune, 2002). However, some researchers have questioned the suitability and completeness of this approach for effectively analysing the contribution of IS projects to organisations and their stakeholders (Atkinson, 1999; Cohen & Graham, 2001).
In response, several alternative approaches have been developed (Atkinson, 1999; Stewart, 2008) to help address this gap. However, an analysis of the project evaluation literature reveals that empirical investigation of these alternative approaches has also become a priority. Against this background, this research adopts the principles of design science (Hevner et al., 2004) and evaluates two recently developed measurement methods, the Project Performance Scorecard (PPS) (Barclay, 2008) and the Project Objectives Measurement Model (POMM) (Barclay & Osei-Bryson, 2008), against the standard approach, the TCM. Hevner et al. (2004) proposed that the utility, quality, and efficacy of a design artefact must be rigorously demonstrated via well-executed evaluation techniques, and that an artefact can be evaluated in terms of functionality, completeness, consistency, accuracy, performance, reliability, usability, fit with the organization, and other relevant quality attributes.
They further proposed several evaluation strategies, including: observational (e.g. case studies and field studies); descriptive (e.g. scenario construction); analytical (e.g. static analysis, architecture analysis); experimental (through controlled experiments and simulation); and testing (through functional or black-box and structural or white-box testing). As part of the design process, observational case studies were first used to help validate and justify the two artefacts/methods in previous studies (see Barclay, 2008; Barclay & Osei-Bryson, 2008). This approach was used because it allows for an in-depth study of the given artefact in an organisational context, which in turn provides feedback that may be used to improve the artefact (Hevner et al., 2004).

In the current exploratory study, the evaluation process is extended through the application of a mixed-method approach implementing both the descriptive and analytical techniques. This involves the development of a real-life project scenario and the evaluation of the structure of each method using static analysis (e.g. of complexity or performance) to identify attributes based on the perceptions of business and project practitioners. Static analysis is commonly used in software development to analyse the components and resources of a program without running it (Chess & McGraw, 2004). While the proposed methods are not software components, the technique is suitable here because practitioners are given the documented information (i.e. the components of each method) to analyse in terms of elements such as usability, perceived semantic quality and completeness. To support this process, the conceptual model proposed by Maes & Poels (2006) is used as the basis for developing the research measurement instrument.
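To make the software-engineering analogy concrete, the following is a minimal, self-contained sketch of static analysis in its original sense, written in Python purely for illustration (the sample source and the count_constructs helper are hypothetical and not part of any of the evaluated methods). It inspects program source as data, without ever executing it, much as practitioners in this study inspect each method's documented components without applying the method to a live project.

```python
import ast

# Hypothetical program source to be analysed; it is never executed.
SOURCE = """
def schedule_tasks(tasks):
    for t in tasks:
        t.start()
"""

def count_constructs(source: str) -> dict:
    """Walk the syntax tree of `source` without running it,
    tallying functions, loops, and calls as a crude structural measure."""
    tree = ast.parse(source)  # parsing only; no code is run
    counts = {"functions": 0, "loops": 0, "calls": 0}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            counts["functions"] += 1
        elif isinstance(node, (ast.For, ast.While)):
            counts["loops"] += 1
        elif isinstance(node, ast.Call):
            counts["calls"] += 1
    return counts

print(count_constructs(SOURCE))
# -> {'functions': 1, 'loops': 1, 'calls': 1}
```

In the same spirit, the practitioners in this study examine the static "components" of the PPS, POMM and TCM (their documented structures) and rate attributes such as usability and completeness, rather than observing the methods in execution on a project.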
Various evaluation methods have been applied to analyse the performance of projects in different ...