Quality of Estimations
How to Assess Reliability of Cost Predictions

Dr. Thomas Fehlmann
Euro Project Office AG
Zurich, Switzerland
e-Mail: [email protected]

Eberhard Kranich
Processes, Quality & IT (PQIT)
T-Systems International GmbH
Bonn, Germany
e-Mail: [email protected]
Abstract— Software project cost prediction is one of the unresolved problems of mankind. While today's civil engineering work is more or less under control, software projects are not. Cost overruns are so frequent that it is wise never to trust any initial cost estimate and to take precautions for higher cost. Nevertheless, finance managers need reliable estimates in order to fund software and ICT projects without running risks. Estimates are usually readily available – for instance, based on functional size and benchmarking. However, the question of how reliable these estimations are is often left out, or answered in a purely statistical manner that gives practitioners no clue what the overall statistical variations mean for them.

This paper explains how to make use of Six Sigma transfer functions that map cost drivers, such as those defined by a committee of GUFPI-ISMA, onto project cost. Transfer functions reverse the process of estimation: they show how much a project costs under suitable assumptions for the cost drivers. If cost drivers can be measured, and transfer functions can be determined with known accuracy, not only can project cost be predicted, but also the range and probability for such cost to occur.

Keywords— Project Cost Estimation, Cost Drivers, Transfer Function, Soft Skills, Lean Six Sigma

I. INTRODUCTION

Today's economies suffer heavily from the problems of conducting software projects within reasonable time and budget constraints. Because of this, many administrative and legally relevant processes still rely on complicated and error-prone manual and paper-based procedures, for instance for voting, health insurance, consumer billing, and intercompany transactions; and eGovernment is still a kind of dream, sixty years after it became technically possible to transport information electronically. Not many inventions in mankind's history took that long to take effect! For instance, the steam engine took less than twenty years until railways crossed all of Europe; between the construction of the 1885 Daimler/Maybach Petroleum Reitwagen (Riding Car) and the first mention of traffic jams in The San Francisco Call of October 20, 1904, there were less than twenty years.

Obviously, it was less risky to fund railways and car factories than today's software industry. Consequently, today's ICT world is a world of gaming, chatting, entertainment and consumption rather than one of value creation. It is fair to suggest that our inability to predict software project cost is the main reason for the inability of today's industry to create enough value to pay the debt bills of industrialized nations. Instead of great news about high returns from investments in integrated ICT systems, the daily news talks about the next round of debt crisis meetings.
A. Why is Software Cost Estimation so difficult?

Software engineering is not civil engineering, where you first create a plan, then execute the plan, and all you need to do is make sure the plan takes all eventualities into due consideration. Software has to explain complex tasks in a language simple enough that ICT systems are able to understand and execute it correctly. It is a translation process: it starts with some actual processes and some explicit and many more implicit requirements; it involves social behavior, organizational capability maturity, the ability to communicate and to formulate in different industry-specific languages, and the keeping of trust and continual engagement; and it eventually ends in an integrated man-machine system creating value.

Railways are less difficult to operate, and traffic jams easier to avoid, than making software run as expected.
B. Types of Software Project Cost Estimation

Since the early days, when manual coding was still the main task of developers, software project managers have attempted to predict software project costs by detailed analysis of tasks and duration. It was the time of sophisticated methodologies based on detailed Work Breakdown Structures (WBS). While a WBS helped management understand the complexity of software development, it turned out to be unreliable in predicting the actual work needed. Nevertheless, effort predictions based on work breakdown structures were not all bad – even if they tended to predict work other than that actually needed to complete the project. The major problem with these approaches is that they do not reflect the nature of software development – namely, uncovering what needs to be done to complete the project.
A recent reaction to WBS-based project estimations is agile development – planning not for the work details but for allocating the time slots needed to complete the project, and doing the work in fixed-time increments, e.g., sprints.
However, the most popular approach to cost estimation is benchmarking – based on one's own experience or on comparisons with industry. Because benchmarking has always suffered from the difficulties of collecting reliable and comparable data, we also consider the so-called Expert Estimation approach a kind of benchmarking – one that uses the memories of experienced developers instead of a numerical database. Expert estimation is particularly successful in agile development – collecting Story Points for sizing software projects and allocating enough sprints [5].
C. The ISBSG Benchmarking Database

Among the numerical database collections, the ISBSG database is certainly the most popular one [11]. It is an open collection of software development and maintenance projects gathered all over the world and across all kinds of industries. While other such collections exist, e.g., the proprietary QSM collection, none has the advantage of open, standardized and controlled collection practices. It is relatively easy to estimate a project – once its functional size is known, that is, once it is clear what needs to be built.

Unfortunately, this happens relatively late in the project, namely when requirements are known to a certain degree and at a defined granularity level. At this point in time, a large part of the project budget has possibly already been spent on finding out what these functional user requirements actually are. Nevertheless, for solution design, model-driven development, scope management of projects, and defect prediction and planning of software operations, functional sizing is the method of choice.

The ISBSG database comes with a large list of project attribute parameters that are used to compare different projects: industry, choice of modeling and coding language, team size, usage characteristics, methodology approach, architecture, and target platform. The database can be filtered for large, medium or small functional size, development platform, and development type (new development, enhancement, re-development, or customization).

Nevertheless, variations between functional size count and the effective effort needed are significant, as shown in [7]. Based on a sample of 16 MIS projects in the R10 database of 2009, with similar project attributes, there is almost no correlation between functional size and actual effort reported, see Figure 1. On the contrary, if the actual efforts of the same projects are analyzed using a parametric approach based on cost drivers, the actual cost can be explained perfectly, see Figure 2.

[Figure 1: ISBSG MIS Projects: Function Points vs. Actual Effort – functional size in FP plotted against actual effort in person days]

[Figure 2: Cost Driven Estimations vs. Actual Effort – predicted days plotted against actual days]
II. PARAMETRIC APPROACHES TO COST PREDICTION
A. Cost Drivers

Parametric approaches are the most promising for predicting software project cost. The idea became widely known through Barry Boehm in 1981, when he started publishing the range of COCOMO prediction models [4]. He based project estimation on a number of cost drivers.
[Figure 3: Cost Drivers with Different Slopes – impact functions for Low, Medium, and High cost driver settings]
Each such cost driver function has a different slope, which models how the cost driver influences overall effort. The slope is referred to as the a-parameter; the selected cost driver impact is denoted by $x$. Boehm used general system characteristics such as requirements volatility, functional size, technical complexity, impact on the current application, communication needs, and so on. These cost drivers are intuitively easy to understand, but they behave differently in requirements elicitation, in the design & development phases, and in application testing or documentation. Moreover, the cost drivers may behave differently among different products. Total effort prediction is a function of these cost drivers.
These cost drivers are good candidates for predicting the effort needed to implement the project tasks. Boehm characterizes project cost by a Cost Driver Profile. Users of the model rate each driver on a discrete scale marked "Low"–"Medium"–"High", characterizing the cost driving force. What "Medium" means must be defined: it should be fixed such that profiles remain comparable. Thus there is a need to state measurable standard value ranges for medium profile values, e.g., for team size or for people's skills, against which comparison is possible. Ideally, cost drivers should be measurable; however, usually only quite rough assessments are available for soft factors such as "skills level" or "need for communication".
The impact of cost drivers also varies among the development stages. Thus, as Figure 3 shows, the same cost driver may have a different impact depending on the product.
B. Measuring the Response of the Software Project Process

We need to analyze the response of our process in a way that allows distinguishing the various contributions of the cost drivers. An obvious choice is looking at cost per phase: for instance, distinguishing the cost of the requirements elicitation, analysis & design, technical implementation, solution integration, and start of operation phases already allows analyzing the impact of various cost drivers that relate to people and requirements volatility. Another approach is based on the five CMMI process areas Requirements Development (RD), Technical Solution (TS), Quality Assurance (QA), Product Integration (PI), and Project Management (PM). However, as the experience of ISBSG shows, it is very difficult to get reliable results for phases across organizations, since different, internally developed and applied methodologies are common.

The project process response should instead be measured by the effort spent on a few relevant effort components, shown in TABLE I.

TABLE I: MEASURABLE EFFORT TYPES

Roles\Effort Types | Talk           | DoIt         | Test          | Adm           | PM
Sponsor            | Meetings       |              |               |               |
Developer          | Meetings, Chat | Design, Code |               |               |
Tester             | Meetings, Chat |              | Integrate, QA |               |
Admin              | Meetings, Chat |              |               | Enable, Track |
Project Manager    | Meetings, Chat |              |               |               | Manage

In view of practicality, effort data should be collected as closely to roles and physical evidence as possible, in order to allow for comparisons among different organizations and methodologies. We propose to distinguish the effort spent on requirements elicitation in team and stakeholder communications (Talk), on work and rework needed (DoIt), on reviews and tests conducted (Test), on technical and financial project administration (Adm), e.g., documentation, configuration management, time and record keeping, and on project management (PM). Since this effort data is effort spent by the roles sponsor, developer, tester, administrator, and project manager, it is easier to collect and yields more reliable effort data. Note that cost drivers impact all effort types in the same way, e.g., high; however, the a-parameters have different slopes.

The profile vector for the project effort response thus runs over two levels: over the number of projects estimated or effort-measured, and, for each project, over the five effort types Talk, DoIt, Test, Adm, and PM. These points of measurement are called Estimation Items. The total dimension of the project effort response vector is five times the number of projects considered for process measurement; possibly a few dimensions less if not all projects cover all five effort types.
C. The Estimation Formula

Barry Boehm uses exponential functions for the impact of cost factors¹:

$$ y_i(x_i) = e^{a_i x_i} \quad (i) $$

where $a_i$ represents the slope of the cost driver $i$ and $x_i$ defines the impact of the cost driver $i$, see Figure 3. The products $a_i x_i$ refer to the cross-point values of the impact function $y_i(x_i)$. The practical reason for taking an exponential function is that you need to care neither about dimensions nor about any static minimum cost; experience shows, on the other hand, that cost factors have a tendency to soar once they start growing. The a-parameter takes care of all that. The difference between low impact and medium impact is much smaller than between medium and high impact. High impact has almost no upper limit, whereas low impact always has a limit: a minimal cost associated with it.

Barry Boehm combines those individual cost driver effects by multiplication into one exponential factor:

$$ y(x) = \prod_{i=1}^{n} e^{a_i x_i} = e^{\sum_{i=1}^{n} a_i x_i} \quad (ii) $$
where $n$ is the number of cost drivers and $y(x) = y(\langle x_1, x_2, \ldots, x_n \rangle)$ is the total cost influence per estimation item for the cost driver profile $x = \langle x_1, x_2, \ldots, x_n \rangle$. Note that the $x_i$ represent low–medium–high and can, without loss of generality, be set to equally spaced values around 1.0: $x_i = 0.5$, $1.0$, and $1.5$, respectively. No impact means $x_i = 0$, thus $e^{a_i x_i} = 1$. The impact function $y(x)$ can be calculated from the a-parameters $\langle a_1, a_2, \ldots, a_n \rangle$ for each cost driver vector $x$ that represents an estimation item.

¹ Note that Barry Boehm uses $a$ and $x$ the other way round, see [7].
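As an illustration only – the driver names, a-parameters, and profile values below are hypothetical, not calibrated data – the impact function (ii) reduces to a few lines of Python:

    import math

    # Hypothetical a-parameters (slopes) per cost driver, as obtained by calibration.
    a = {"requirements_volatility": 0.40,
         "technical_complexity":    0.25,
         "communication_needs":     0.15}

    # Cost driver profile: 0.0 = no impact, 0.5 = low, 1.0 = medium, 1.5 = high.
    x = {"requirements_volatility": 1.5,
         "technical_complexity":    1.0,
         "communication_needs":     0.5}

    # Formula (ii): y(x) = exp(sum of a_i * x_i) = product of exp(a_i * x_i).
    y = math.exp(sum(a[d] * x[d] for d in a))
    print(f"Total cost influence: {y:.2f}")  # ~2.52 for this profile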
D. Combining with Functional Size

If the model contains more than one cost driver that depends on functional size, we cannot use formula (ii) above. However, since modules should not interfere with each other, an additive model is more appropriate than a multiplication of influence factors. Let $f_j(x_j)$ denote the impact of functional size, where the index $1 \le j \le m$. The functional contributions $f_j(x_j)$ sum up to the Functional Cost Driver $FD$:

$$ FD(x) = \sum_{j=1}^{m} f_j(x_j) \quad (iii) $$

Exponentiation of the functional contributions $f_j(x_j)$ also yields excellent results, see [7], but makes the model unnecessarily complex. With only one functional cost driver $x_1$,

$$ FD(x) = a_1 x_1 \quad (iv) $$

fixes the logarithmic base for $x_1$. For instance, $a_1$ can be selected such that, say, 512 COSMIC Function Points correspond to $x_1 = 1$; the exponential factor then indicates the cost driving impact of functional size.

The Calculated Effort in Person Days (PD) for an estimation item with cost driver profile $x$ is therefore

$$ y(x) = FD(x) \cdot e^{\sum_{i=2}^{n} a_i x_i} \quad (v) $$

This is a simplification compared to (ii).
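A minimal sketch of formulas (iv) and (v), assuming the 512-CFP normalization mentioned above; the value of $a_1$ and the driver settings are invented for illustration:

    import math

    def functional_driver(cfp: float, a1: float = 100.0) -> float:
        # Formula (iv): FD = a1 * x1, with x1 scaled logarithmically so that
        # 512 COSMIC Function Points correspond to x1 = 1.
        x1 = math.log(cfp) / math.log(512)
        return a1 * x1

    def calculated_effort(cfp: float, a: list, x: list) -> float:
        # Formula (v): y(x) = FD(x) * exp(sum of a_i * x_i), where the sum
        # runs over the non-functional cost drivers i = 2..n.
        return functional_driver(cfp) * math.exp(sum(ai * xi for ai, xi in zip(a, x)))

    # 256 CFP with two non-functional drivers at medium (1.0) and low (0.5) impact:
    print(f"{calculated_effort(256.0, [0.3, 0.2], [1.0, 0.5]):.0f} PD")  # ~133 PD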
[Figure 4: One Sigma and Six Sigma estimations depend on the variance in the estimation stack – estimation items with tolerance ranges of one standard deviation (1σ) up to six (6σ)]
E. Calibration

The cost driver vector $x = \langle x_1, x_2, \ldots, x_n \rangle$ represents by $x_1$ the impact of functional size and by $x_2, \ldots, x_n$ the impact of the non-functional cost drivers. If there are enough estimation items with cost driver profile $x$ for which $y(x)$ is known, it is possible to calculate the a-parameters by multi-linear regression. The a-parameters hold for a series of similar estimation items. Since the low/medium/high cost driver profiles need to be taken into account, and since we also allow for no impact in the profile, at least 4 estimation items – with each cost driver once in the no, low, medium and high profile state – are necessary for calibration. Such a set of estimation items with known $y(x)$ is called an Estimation Stack. The more estimation items are available, the better: redundancy reduces measurement errors.
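A sketch of such a calibration, assuming the purely exponential model (ii) so that taking logarithms makes the regression linear in the a-parameters; the stack data below is fabricated for illustration:

    import numpy as np

    # Estimation stack: one row of cost driver settings per estimation item
    # (0.0 = none, 0.5 = low, 1.0 = medium, 1.5 = high).
    X = np.array([[0.0, 1.0, 1.5],
                  [0.5, 0.5, 1.0],
                  [1.0, 1.5, 0.0],
                  [1.5, 1.0, 0.5]])
    y = np.array([120.0, 95.0, 180.0, 210.0])  # measured effort per item (PD)

    # Model (ii): y = exp(X a)  =>  ln(y) = X a, linear in the a-parameters.
    a, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
    print("Calibrated a-parameters:", np.round(a, 3))

    # The calibrated stack then predicts new estimation items from their profiles.
    x_new = np.array([1.0, 1.0, 1.0])          # all three drivers at medium
    print("Predicted effort:", round(float(np.exp(x_new @ a)), 1), "PD")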
So, if cost prediction for software projects is that easy, why is it not successful standard practice today?

F. Quality of Predictions

There are a few problems. The first is certainly the data collection used for calibration. Very few organizations are mature enough to collect their project data, know their cost drivers, and keep them under control long enough to successfully predict project cost. And even if data can be collected, how do we know how accurate the calibration data is? This is the second problem. Collecting actual data is significantly more difficult than collecting expert estimations; that is why many organizations rely on expert estimations rather than on actual data when calibrating their estimation stacks. The third problem is probably the most intrinsic: since cost prediction is necessary for contracting ICT projects, it is not sufficient that some mature organization keeps collecting effort data and profiling its projects; its customers must have the possibility to compare and validate those calibration data.

While the first problem can be solved in high-maturity organizations, see e.g. [7], the new GUFPI-ISMA cost driver catalogue is a big step towards addressing the third issue, see section V. The remainder of this paper focuses on the second problem: how to assess the quality of estimations.

III. TRANSFER FUNCTIONS
A. Estimation Stacks as Transfer Functions

An estimation stack represents a transfer function that maps the cost driver profiles onto an $m$-ary vector of actual estimation item efforts, using (ii) for each estimation item $j = 1, \ldots, m$. This vector constitutes the response of the process of creating an estimation stack for the $m$ estimation items, each estimation item row depending on its cost driver profile $x_j = \langle x_{1,j}, x_{2,j}, \ldots, x_{n,j} \rangle$:

$$ \mathbf{y} = F(x) = \langle y(x_1), y(x_2), \ldots, y(x_m) \rangle \quad (vi) $$

This transfer function $F(x)$ can be used for predicting project cost, based on the settings of the cost driver profiles. Note that if the cost driver profiles remain restricted to discrete values, such as $x_i = 0.0$, $0.5$, $1.0$, and $1.5$, the number of estimations for a stack is limited to the number of possible cost driver profiles, thus to $4^n$ possible response predictions – each driver set to zero, low, medium, or high. Thus, every response of this cost prediction model comes with a known variation, with known accuracy. However, intermediate values for the $x_i$ are also feasible.
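For illustration, with three cost drivers the discrete profile space can be enumerated directly (a toy sketch; a real stack would substitute its own driver count):

    from itertools import product

    # Discrete cost driver levels: none, low, medium, high.
    levels = [0.0, 0.5, 1.0, 1.5]

    # All possible cost driver profiles for a model with n = 3 drivers.
    profiles = list(product(levels, repeat=3))
    print(len(profiles))  # 4**3 = 64 possible response predictions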
B. Selecting Cost Drivers

Transfer functions map process controls onto process responses – not the other way round. Since responses are typically known first, before the relevant critical controls, predicting the critical controls is a relevant issue for understanding transfer functions for processes.

Both process controls and process responses are vectors in a multidimensional event space, namely the space of suspected cost drivers for the project delivery process. Cost drivers have a value; they are more or less important. Cost drivers should be orthogonal to each other; that is, one cost driver value must not depend on other cost driver values. The condition is that the value of one cost driver never depends on any combination of other cost drivers.

However, cost drivers can compensate each other: if one cost driver has no impact, other cost drivers might provide the necessary impact to yield the observed process response.

With a sufficiently large number of cost drivers, it is always possible to find suitable a-parameters. Note that positive a-parameters increase cost; negative parameters decrease cost. Sometimes this is not straightforward. For instance, if you add "Need for extensive documentation" as a cost driver, it is not clear whether it increases or decreases cost. It might decrease the cost of quality assurance and thus of effort type "Test", but increase the effort spent on "DoIt".

The cost driver profiles $x_{i,j}$ must not contradict each other; this means every cost profile must drive cost consistently either up or down. If some projects react to a cost driver with a cost increase, while others with otherwise identical settings behave differently, it will not be possible to calculate the a-parameters by regression analysis. In other words, regression analysis will yield weird results without giving any hint.
C. Quality of Estimation Stacks

After calibration, i.e., the calculation of the a-parameters by multi-linear regression, the quality of an estimation stack can be measured by its variation $\sigma$, as seen in Figure 4. This assumes that project effort follows a normal distribution. The American Association of Cost Engineers has recently put this into question, see [2], suggesting a double triangular distribution that is skewed to allow for larger cost overruns than undercuts. In this case, a left-side $\sigma_l$ and a right-side $\sigma_r$ should be used.

The sigma value can also be expressed in terms of confidence intervals, a suitable metric for getting the right kind of management attention. For instance, a variation of $\sigma = 3.5$ corresponds to 99.8% confidence based on the 95th percentile.

However, even if the estimation stack has high confidence, this only demonstrates that the selected cost drivers model the estimation items in the stack. It remains unclear what happens when the stack is used for predicting new projects. To understand how to assess the quality of an estimation stack for prediction, we need to go somewhat deeper into the theory of transfer functions.
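Under the normality assumption, the variation of a calibrated stack can be sketched as the standard deviation of its residuals, and turned into a tolerance range; this continues the fabricated calibration example above:

    import numpy as np

    def stack_sigma(X: np.ndarray, y: np.ndarray, a: np.ndarray) -> float:
        # Residuals between measured efforts and the model's predictions.
        residuals = y - np.exp(X @ a)
        return float(np.std(residuals, ddof=1))

    def tolerance_range(estimate: float, sigma: float, k: float = 6.0):
        # A k-sigma tolerance range around the estimate (k = 6 for Six Sigma).
        return estimate - k * sigma, estimate + k * sigma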
D. Analyzing Transfer Functions for Software Projects

The cost driver profiles $x_j = \langle x_{1,j}, x_{2,j}, \ldots, x_{n,j} \rangle$ that define the cost of the $j$-th estimation item by the estimation function (v) yield the matrix $A = (x_{i,j})$ of dimensions $(n+1) \times m$; $n$ denotes the number of cost drivers as before, and $m$ is the size of the estimation stack. Let $\mathbf{y}_O$ denote the vector obtained by actual measurement of all $m$ estimation items. Obviously $\mathbf{y}_O \neq \mathbf{y}$, but if the model is capable, $\mathbf{y}_O \cong \mathbf{y}$ holds. The aim is to predict how capable the model based on the chosen cost driver profiles actually is.
The transfer function can be linearized by looking at the cost drivers. The matrix $A = (x_{i,j})$ defines a linear mapping

$$ \mathbf{z} = A\mathbf{x} = \langle z_1, \ldots, z_m \rangle, \quad \text{where } z_j = \sum_i a_i x_{i,j} \quad (vii) $$

The vector $\mathbf{z} = \langle z_1, \ldots, z_m \rangle$ is called the Effort Profile Vector.
Note that the cost $y_j$ of the $j$-th project is not a function of the organization's effort profile component $z_j$, but of the cost driver profile components $x_{i,j}$ – fixed for the $j$-th project – and of the cost drivers according to formula (v). Thus the vector $\mathbf{z}$ is not directly measurable; it can only be determined indirectly by measuring the cost of each estimation item $y_j(x)$ for the full estimation stack $\mathbf{y}$, and finally using (vii) to derive $\mathbf{z} = A\mathbf{x}$. The effort profile vector $\mathbf{z}$ is characteristic for the vector $\mathbf{y}$ by (vii).

Characteristic effort profile vectors also exist for the measured (or expert-estimated) estimation item vector $\mathbf{y}_O$; they are denoted by $\mathbf{z}_O$. Clearly, if $\mathbf{z}_O \cong \mathbf{z}$, then also $\mathbf{y}_O \cong \mathbf{y}$. However, given $\mathbf{y}_O$, the effort profile vector $\mathbf{z}_O$ cannot easily be measured or calculated.
[Figure 5: Observed Response and Explained Response – the observed response $\mathbf{y}_O$ of the process (cost measured) is compared with the response $\mathbf{y} = F(x)$ predicted by the cost drivers (predicted cost); the controls are the cost drivers $x_1, \ldots, x_5$, the response comprises the effort types Talk, DoIt, Test, Adm, and PM; the prediction accuracy is $\|\mathbf{y}_O - F(E(\mathbf{y}_O))\|$]
E. Eigenvectors as Quality Criteria

An eigenvector $\mathbf{v}$ of a square matrix $M$ has the characteristic property that $M\mathbf{v} = \lambda\mathbf{v}$; $\lambda$ is called its eigenvalue. By normalization of $\mathbf{v}$, the eigenvalue can be assumed to be $\lambda = 1$. The existence of an eigenvector means that repeated application of the square matrix $M$ keeps the result stable. This is an indication that the measurements are not random but stable. Eigenvectors are therefore used to level out measurement errors; this is common practice in physics, but also in decision theories like the Analytic Hierarchy Process (AHP) of Saaty [14] and in Google's search algorithms [10].

Assume some expert has characterized an estimation stack by its cost driver profiles, and denote this analysis function by $E$; thus $x = E(\mathbf{y}_O)$ and $\mathbf{y} = F(x)$. Let $\mathbf{z} = A\mathbf{x}$ be as defined in (vii). Let $A^\intercal$ denote the transpose of the matrix $A$. $A^\intercal$ is called dual, as it reverses the cause-effect direction of the cost drivers, thus eliminating errors in cost driver assessments. For an eigenvector $\mathbf{z}_E$ of $AA^\intercal$, we have $\mathbf{x} = A^\intercal\mathbf{z}_E$, or $A\mathbf{x} = AA^\intercal\mathbf{z}_E$. The matrix $AA^\intercal$ is positive definite and symmetric by construction and thus has real eigenvectors.

Therefore, if $\mathbf{z}$ is near to an eigenvector of $AA^\intercal$, i.e., $\|\mathbf{z}_E - \mathbf{z}\| \cong 0$, the cost driver profile matrix $A = (x_{i,j})$ defines a stable estimation system in the sense that there are no contradicting cost profiles, and thus calculating the a-parameters by regression analysis is safe for the estimation items. The transpose $A^\intercal\mathbf{z}$ predicts the solution of the equation $A\mathbf{x} = \mathbf{z}$, eliminating contradictions injected by estimators. Such an estimation stack meets the quality criteria for model estimations shown in Figure 5 and can be used to predict other projects that rely on the cost drivers used for the estimation stack.
F. The Quality Criteria for Cost Drivers

Let $A = (x_{i,j})$ be the cost profile matrix as before. The difference between the effort profile vector $AA^\intercal\mathbf{z}_E$ obtained from the eigenvector $\mathbf{z}_E$ and the effort profile $\mathbf{z} = A\mathbf{x}$ obtained from the cost driver profile $\mathbf{x}$ is called the Convergence Gap:

$$ \|\mathbf{z} - \mathbf{z}_E\| = \|\mathbf{z} - AA^\intercal\mathbf{z}_E\| \quad (viii) $$

assuming the eigenvalue $\lambda = 1$ in $AA^\intercal\mathbf{z}_E = \lambda\mathbf{z}_E$. The vector difference (viii) is an indicator of the Prediction Accuracy, the minimum difference between model estimations and actual cost measurements:

$$ \|\mathbf{y}_O - F(E(\mathbf{y}_O))\| \quad (xi) $$

Consult [6], [8] and [12] for how to use eigenvectors of transfer functions for validating cause-and-effect relations; for the application of eigenvectors in large-dimensional vector spaces see [10]; [13] introduces the general theory. Compare Figure 6 for a visualization of the convergence gap in the case of only three cost drivers, following an idea presented by Schurr in [16] when explaining how AHP works.
The calculation of eigenvectors is easily possible with any industry-standard linear algebra package; this is not a topic for this paper. Eigenvector theory thus validates the choice of cost drivers, but it cannot ascertain their correct label and meaning. Note that the eigenvector calculation does not replace the regression analysis needed to obtain the a-parameters. It enhances their calculation by removing inconsistencies in the cost profiles, thus improving the quality of cost driver profiling.
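The convergence gap (viii) can be sketched with such a package as follows; here the principal eigenvector of $AA^\intercal$ stands in for $\mathbf{z}_E$ and is rescaled to the length of $\mathbf{z}$, reflecting the normalization $\lambda = 1$:

    import numpy as np

    def convergence_gap(A: np.ndarray, z: np.ndarray) -> float:
        # Principal eigenvector of the symmetric matrix A A^T.
        eigenvalues, eigenvectors = np.linalg.eigh(A @ A.T)
        z_e = eigenvectors[:, np.argmax(eigenvalues)]
        # Rescale to the length of z and fix the sign ambiguity.
        z_e *= np.linalg.norm(z) / np.linalg.norm(z_e)
        if z_e @ z < 0:
            z_e = -z_e
        # Formula (viii): distance between the effort profile vector
        # and the eigenvector.
        return float(np.linalg.norm(z - z_e))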
[Figure 6: Small and Large Convergence Gap for Three Cost Drivers (Schurr [16]) – the convergence gap is the distance between the eigenvector and the cost driver profile vectors]
G. Research Topic: Benefits for Estimation Stacks

The eigenvector property (viii) allows categorizing estimation stacks according to the eigenvector criteria. Since many eigenvectors of $AA^\intercal$ exist, it is possible to select one with a minimum number of cost drivers. For this, it suffices to look at the vector $A^\intercal\mathbf{z}$ and identify those cost driver components of $A^\intercal\mathbf{z}$ whose impacts are close to zero. Thus it seems possible to create estimation stacks with a limited selection out of the possible cost driver factors, taking only those that eventually impact the total cost estimate. This reduces the effort needed for measuring or agreeing on cost driver profiles, and allows creating estimation stacks for different categories of projects.

However, this conjecture is still under investigation, and no practical experience with this method is known so far.
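The selection idea might be sketched as follows – a speculative illustration of this research topic, not a validated procedure: compute $A^\intercal\mathbf{z}$ and discard the cost drivers whose components are close to zero.

    import numpy as np

    def relevant_drivers(A: np.ndarray, z: np.ndarray, eps: float = 0.05) -> list:
        # Components of A^T z close to zero indicate cost drivers with little
        # impact on the total cost estimate; keep only the others.
        impact = A.T @ z
        cutoff = eps * float(np.abs(impact).max())
        return [i for i, v in enumerate(impact) if abs(v) > cutoff]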
IV. LIMITATIONS

The consistency check for the matrix $A = (x_{i,j})$ cannot validate the semantics of the cost drivers; it only limits the difference (xi). The labels must be determined by identifying the meaning of each cost driver in the real world. However, the choice of cost drivers can be ascertained in the following sense: it is possible to find estimation stacks that allow for a consistent selection of cost drivers.

Although the method cannot verify that the selected cost drivers are correct, it can ascertain that their measurements are consistent and that the estimation model can duly be used for predicting project cost. The convergence gap between model prediction and actual project cost indicates how good the estimation stack used for cost prediction actually is.
V. THE GUFPI-ISMA PRODUCTIVITY IMPACT FACTORS

Since 2009, a working group of GUFPI-ISMA has collected, from various sources including COCOMO and ISBSG, the cost drivers suspected to account for soft project cost (TABLE II). The advantage of such a collection – if accepted internationally and used for profiling software projects – is that cost estimations become comparable between organizations. Such an achievement would probably be among the most important in information technology since von Neumann stored code and data on the same device.

TABLE II: GUFPI-ISMA PRODUCTIVITY IMPACT FACTORS

Personal:    H1 Domain Knowhow; H2 Personnel Capability; H3 Technology Knowledge;
             H4 Team Turnover; H5 Management Capability; H6 Team Size
Process:     P1 Organization Maturity; P2 Schedule Constraints; P3 Requirement
             Completeness; P4 Reuse; P5 Project Type; P6 Methodology;
             P7 Stakeholder Cohesion; P8 Project/Program Integration;
             P9 Project Logistics
Product:     S1 Product Size; S2 Product Architecture; S3 Product Complexity;
             S4 Other Product Properties; S5 Required Documentation;
             S6 System Integration; S7 Required Reusability
Technology:  T1 Programming Language; T2 Development Tools; T3 Technical
             Environment; T4 Technology Change; T5 Technical Constraints

However, up to now it is not known whether these cost drivers are able to explain the observed cost in ICT projects; some preliminary research has just started. It is also not clear how such cost drivers can be measured in a repeatable, unambiguous way across different organizations, and possibly even across different types of software projects. Establishing an estimation stack for the Productivity Impact Factors (PIF) is difficult because of the large number of cost drivers. To calculate the a-parameters for the PIFs, it would be helpful to have representative sample project data in which only a few cost drivers vary at a time, not all of them indiscriminately.
VI. CONCLUSION

The concept of transfer functions is very powerful and easily adaptable to software development cost estimation. We have laid the theoretical groundwork for the quality of estimations; the practical implementation is yet another challenge. The reward is huge: reliable project cost estimations, or at least estimations with a known accuracy, will help information and communication technology deliver the economic benefits it promised long ago.

REFERENCES
[1] Abran, A. et al., "The COSMIC functional size measurement method – Version 3.0.1 – Measurement manual," COSMIC Corp., Montréal, Canada (2009)
[2] American Association of Cost Engineers, "Recommended Practice No. 41R-08," AACE, Inc. (2008)
[3] Buglione, L., Trudel, S., "Guideline for sizing agile projects with COSMIC," Proceedings of IWSM/MetriKon/Mensura 2010, Stuttgart, Germany (2010)
[4] Boehm, B., "COCOMO II," Addison-Wesley, New York, NY (2002)
[5] Cohn, M., "Agile estimating and planning," Prentice Hall, New Jersey (2005)
[6] Fehlmann, Th., "The impact of linear algebra on QFD," International Journal of Quality & Reliability Management, Vol. 21, No. 9, pp. 83–96, Emerald Group Publishing Ltd., Bradford, UK (2005)
[7] Fehlmann, Th., "Using Six Sigma for project estimations – an application of statistical methods for software metrics," MetriKon 2009, Kaiserslautern, Germany (2009)
[8] Fehlmann, Th., Kranich, E., "Transfer functions, eigenvectors and QFD in concert," 17th International QFD Symposium, ISQFD 2011, Stuttgart, Germany (2011)
[9] Fenton, N.E., Neil, M., Marquez, D., "Using Bayesian networks to predict software defects and reliability," Proceedings of the Institution of Mechanical Engineers, Part O, Journal of Risk and Reliability, pp. 701–712 (2008)
[10] Gallardo, P.F., "Google's secret and linear algebra," EMS Newsletter 63, March 2007, pp. 10–15, Universidad Autónoma de Madrid, Spain (2007)
[11] Hill, P. (ed.), "Practical software project estimation," 3rd edition, McGraw-Hill, New York, NY (2010)
[12] Hu, M., Antony, J., "Enhancing design decision-making through development of proper transfer function in design for Six Sigma," International Journal of Six Sigma and Competitive Advantage 3, pp. 33–55 (2007)
[13] Kressner, D., "Numerical methods for general and structured eigenvalue problems," Lecture Notes in Computational Science and Engineering, Vol. 46, Springer-Verlag, Heidelberg, Germany (2005)
[14] Saaty, Th., "Decision-making with the AHP: Why is the principal eigenvector necessary," European Journal of Operational Research, Vol. 145, pp. 85–91, Elsevier Science B.V. (2003)
[15] Santillo, L., Moretto, G., on behalf of the SBC (GUFPI-ISMA), "A general taxonomy of productivity impact factors," Software Benchmarking Committee, Gruppo Utenti Function Point Italia – Italian Software Metrics Association, IWSM-MENSURA, Stuttgart (2010)
[16] Schurr, S., "Evaluating AHP questionnaire feedback with statistical methods," 17th International QFD Symposium, ISQFD 2011, Stuttgart, Germany (2011)