PROOF COVER SHEET

Author(s):
Josip Mikulić
Article Title:
Rethinking the importance grid as a research tool for quality managers
Article No:
CTQM593857
Enclosures:
1) Query sheet
2) Article proofs
Dear Author,
1. Please check these proofs carefully. It is the responsibility of the corresponding author to check
these and approve or amend them. A second proof is not normally provided. Taylor & Francis cannot
be held responsible for uncorrected errors, even if introduced during the production process. Once
your corrections have been added to the article, it will be considered ready for publication.
For detailed guidance on how to check your proofs, please see
http://journalauthors.tandf.co.uk/production/checkingproofs.asp.
2. Please review the table of contributors below and confirm that the first and last names are
structured correctly and that the authors are listed in the correct order of contribution. This
check is to ensure that your name will appear correctly online and when the article is indexed.
Sequence  Prefix  Given name(s)  Surname   Suffix
1                 Josip          Mikulić
2                 Darko          Prebežac
Queries are marked in the margins of the proofs. Unless advised otherwise, submit all corrections
and answers to the queries using the CATS online correction form, and then press the “Submit All
Corrections” button.
AUTHOR QUERIES
General query: You have warranted that you have secured the necessary written permission from the
appropriate copyright owner for the reproduction of any text, illustration, or other material in your
article. (Please see http://journalauthors.tandf.co.uk/preparation/permission.asp.) Please check that
any required acknowledgements have been included to reflect this.
AQ1: Please check the edit made in the sentence 'However, this paper argues...'.
AQ2: The references Oliver (1997) and Oh (2001) are cited in the text but are not present in the reference list. Please add them to the list or delete the citations.
AQ3: The sentence 'However, for attributes categorised...' does not seem very clear. Please check.
AQ4: Please check the edit made in the sentence 'On the one hand...'.
AQ5: Is 'attribute relevance' the expansion of the acronym AR? If not, please advise whether we may change 'relevance' to AR throughout the file.
AQ6: Is 'attribute determinance' the expansion of the acronym AD? If not, please advise whether we may change 'determinance' to AD throughout the file.
AQ7: Please check the edit made in the sentence 'However, airlines are...'.
AQ8: Please check the edit made in the sentence 'It is important to note...'.
AQ9: Please check the edit made in the sentence 'Since the AP level of...'.
AQ10: Please check the edit made in the sentence 'The case study in this paper...'.
AQ11: Please provide in-text citations for the references Füller & Matzler (2008) and Mikulić & Prebežac (2011a).
AQ12: Please expand the acronyms IAA and DAI.
AQ13: We have changed the values with dot separators to comma separators. Please check.
Total Quality Management
Vol. 00, No. 0, Month 2011, 1–14
Rethinking the importance grid as a research tool for quality
managers
Josip Mikulić* and Darko Prebežac
Department of Tourism, Faculty of Economics and Business, University of Zagreb, J.F. Kennedy
Square 6, 10000 Zagreb, Croatia
The importance grid (IG) is a research tool developed for the purpose of categorising
product/service attributes according to the Kano model, thus making it a tempting
technique for quality managers. However, this paper argues that the IG lacks a clear
theoretical foundation, which is why it is not recommended for its intended purpose.
Nevertheless, it is shown that a reinterpretation of the IG can provide valuable
information for the purpose of prioritising product/service attributes for
improvement. It is further suggested to regard the IG and the penalty-reward
contrast analysis (PRCA) not as competing techniques, as it is usually assumed in
the literature, but rather as complementary approaches. The managerial value of the
rethought IG in combination with a modified PRCA (determinance-asymmetry
analysis) is demonstrated in a case study on airline passenger satisfaction with
airport services.
Keywords: importance grid; attribute importance; customer satisfaction; Kano model
Introduction
The importance grid (IG) was developed by Harvey Thompson, an IBM consultant, for the
purpose of categorising product/service attributes according to the Kano model (Kano,
Seraku, Takahashi, & Tsuji, 1984). In the scholarly literature, the technique was first mentioned by Vavra (1997), and ever since it has been applied to explore the Kano model in
various product settings (e.g. Matzler & Hinterhuber, 1998; Yang, 2005; Riviere, Monrozier, Rogeaux, Pages, & Saporta, 2006) and services settings (e.g. Martensen & Gronholdt,
2001; Fuchs, 2002; Fuchs & Weiermair, 2003, 2004; Matzler, Sauerwein, & Heischmidt,
2003; Bartikowski & Llosa, 2004; Busacca & Padula, 2005).
To classify product/service attributes into the different Kano categories, the IG uses
scores of explicit and implicit attribute importance (AI). Explicit AI, also referred to as
stated importance, is obtained directly from the customer (e.g. by means of direct rating, ranking or constant-sum scales), whereas implicit AI is obtained indirectly, most commonly by regressing attribute-level performance against a global measure of performance
(e.g. overall satisfaction). This type of importance thus is also referred to as derived importance. The IG literature reports the use of standardised regression coefficients (Fuchs, 2002;
Matzler & Sauerwein, 2002; Matzler, Sauerwein, & Stark, 2002; Busacca & Padula, 2005;
Peters, 2005), partial correlation coefficients (Matzler et al., 2002, 2003; Fuchs & Weiermair, 2003, 2004; Bartikowski & Llosa, 2004), and zero-order correlation coefficients
(Matzler et al., 2002). The two AI measures are then used to construct a two-dimensional
*Corresponding author. Email: [email protected]
ISSN 1478-3363 print/ISSN 1478-3371 online
© 2011 Taylor & Francis
DOI: 10.1080/14783363.2011.593857
http://www.informaworld.com
grid, which is divided into four quadrants, most frequently by using grand means of implicit
and explicit AI scores as thresholds. Depending on the attributes’ positionings across the
grid, three distinct categories of attributes can be identified: basic factors (BFs), excitement
factors (EFs) and performance factors (PFs) (Figure 1).
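The construction just described can be sketched in a few lines. The following Python fragment is a minimal illustration, not the authors' implementation: the function names are hypothetical, implicit AI is derived here via ordinary least squares on standardised data, and the quadrant thresholds are the grand means, as in the text.

```python
import numpy as np

def derived_importance(perf, overall):
    """Implicit (derived) AI: standardised regression coefficients of
    attribute-level performance on overall satisfaction."""
    X = (perf - perf.mean(axis=0)) / perf.std(axis=0)
    y = (overall - overall.mean()) / overall.std()
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1:]  # drop the intercept

def importance_grid(explicit_ai, implicit_ai):
    """Kano category per attribute, grand means as thresholds."""
    e_t, i_t = np.mean(explicit_ai), np.mean(implicit_ai)
    cats = []
    for e, i in zip(explicit_ai, implicit_ai):
        if e >= e_t and i >= i_t:
            cats.append("PF+")  # high explicit, high implicit
        elif e >= e_t:
            cats.append("BF")   # high explicit, low implicit
        elif i >= i_t:
            cats.append("EF")   # low explicit, high implicit
        else:
            cats.append("PF-")  # low explicit, low implicit
    return cats
```

Because the thresholds are data-centred grand means, any variation in the two AI measures forces attributes into different quadrants, a point taken up below.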
BFs, which are also referred to as must-be requirements or hygiene factors, are attributes with a strong negative impact on overall satisfaction (OS) in case of low-level performance, but which do not have a significant positive impact on OS when performance is
high. An example of a BF mentioned by several authors is airline safety (e.g. Matzler &
Sauerwein, 2002; Mikulić & Prebežac, 2008). If safety performance on a flight was low,
this would certainly have a strong negative impact on the passenger’s OS. Conversely,
high performance would not be likely to affect the passenger’s OS in a significant way,
because all airlines are usually expected to be safe, and they usually are. In contrast to
BFs, EFs are attributes with a positive impact on OS when provided at a satisfactory or
higher level, but which do not have a significant negative impact on OS when absent,
or when objective performance is low. These attributes are also referred to as attractive
quality elements or value-enhancing factors. Using the same example of a passenger
flight, an EF might be a diverse offer of in-flight movies. If delivered, this attribute
would be likely to contribute to the creation of passenger satisfaction (positive impact
on OS); however, when only a small choice of movies was provided (or when not provided
at all), this attribute would not necessarily be likely to significantly impact the passenger’s
OS in a negative way. Due to these performance-level-dependent dynamics in the attributes’ impact on OS, both BFs and EFs are said to have an asymmetric impact on OS.
In contrast, PFs, which are also referred to as one-dimensional quality elements or
linear factors, have a symmetric impact on OS – that is, high performance positively
impacts OS and low performance negatively impacts OS. A distinction is further made
between PFs with high importance (PF+; high explicit and implicit AI) and PFs with
low importance (PF−; low explicit and implicit AI). The latter category of attributes is
considered less important in explaining customer satisfaction and dissatisfaction.
The managerial value of information about key drivers of customer satisfaction, such
as provided by the IG, is immense. However, due to a lack of theoretical foundation, and
the lack of convergent validity between the IG and other methods for assessing the Kano
model (e.g. Matzler & Sauerwein, 2002; Fuchs & Weiermair, 2003; Busacca & Padula,
Figure 1. The importance grid.
2005; Witell & Löfgren, 2007), the technique has never gained much popularity among
practitioners and researchers. Following the call of Matzler, Bailom, Hinterhuber, Renzl
and Pichler (2004), this paper thus aims to discuss the technique’s underlying assumptions
and to pinpoint crucial logical and conceptual shortcomings. Based on insights from the
discussion, a reinterpretation of the IG is put forward, and a case study is used to show
how the ‘reinterpreted IG’ in combination with a determinance-asymmetry analysis
(DAA) (Mikulić & Prebežac, 2008) can provide valuable guidance to quality managers.
Another important implication of the discussion is that the IG and the penalty-reward contrast analysis (PRCA) should not be regarded as competing or conflicting techniques for
operationalising the Kano model, as it is generally assumed in the literature, but rather
as complementary approaches which, in combination, provide managers with surplus
information in prioritising product/service attributes for improvement.
Theoretical foundations of the IG
When using the IG for categorising product/service attributes according to the Kano
model, several implicit assumptions are made. Thus, in order to theoretically validate
the technique, it is necessary to assess whether its underlying assumptions are logical
and theoretically grounded.
Assumption 1: Explicit and implicit measures of AI assess different concepts. Since the
IG uses two different AI measures to determine an attribute’s category according to the
Kano model, the most basic assumption is that measures of explicit and implicit AI
assess different concepts. The literature dealing with the measurement of AI provides
theoretical confirmation for this assumption, as it is acknowledged that importance is a
multidimensional concept, and that different importance measures assess its different
dimensions (Myers & Alpert, 1968, 1977; Jaccard, Brinberg, & Ackerman, 1986; Van
Ittersum, Pennings, Wansink, & van Trijp, 2007). Moreover, several studies failed to
confirm nomological (e.g. Harte & Koele, 1995) and convergent validity (e.g. Wiley,
MacLachlan, & Moinpour, 1977) between derived measures (i.e. implicit AI) and direct
ratings (i.e. explicit AI), which are the two most frequently used measures in the IG.
Hence, the assumption that explicit and implicit measures assess different concepts (or
different dimensions of AI) is also empirically grounded. Moreover, it is noteworthy
that several authors have raised serious concerns about the reliability of AI measurement
in customer satisfaction research because of the ambiguity of AI measures (Oliver, 1997;
Oh, 2001; Matzler & Sauerwein, 2002). This reinforces the recommendation not to regard
explicit and implicit measures of AI as exchangeable measures for the same concept.
Assumption 2: Explicit AI is an indicator of an attribute’s dissatisfaction-generating
potential (DGP). When explicit AI is high, the attribute is either categorised as a BF or
a PF with a high degree of importance (PF+). These are the two attribute categories
according to the Kano model which strongly impact OS in a negative way when performing low. Put the other way round, a necessary precondition for an attribute to have a strong
negative impact on OS when performing low is that the attribute has a high level of explicit
AI. The relevant literature does not explicitly confirm this assumption, but it seems reasonable that attributes which are perceived to be very important by consumers (i.e. have high
explicit AI) negatively impact their OS when absent or performing low. However, this
should not be taken as a rule, but rather as a rule of thumb. Moreover, there is no
theory explaining that attributes which are perceived less important by customers do
not bear (a large) potential to generate dissatisfaction when performing low, which is
also implicitly assumed in the IG. The assumption that explicit AI is an indicator of
DGP should therefore be treated with care.
Assumption 3: Implicit AI is an indicator of an attribute’s satisfaction-generating
potential (SGP). When implicit AI is high, the attribute is either categorised as an EF or
a PF+. These are the two attribute categories with a strong positive impact on OS
when performance is high. Put the other way round, a necessary precondition for an attribute to have a strong positive impact on OS when performing high is that it has a high level
of implicit AI. This assumption also seems reasonable since implicit AI is in fact a measure
of an attribute’s impact on OS. However, for attributes categorised as EFs, the IG neglects that a high level of implicit AI does not necessarily imply that the attribute’s impact on OS is unidirectional – that is, that high performance positively impacts OS while low performance does not negatively impact it. Consider the illustrative
example of only one attribute impacting OS (Figure 2).
The unstandardised regression coefficient (i.e. implicit AI) in both relationships is
b = 0.5. However, attribute A clearly is an EF (the impact increases towards higher levels
of attribute performance – i.e. positively asymmetric relationship), whereas attribute B is
a BF (the impact increases towards lower levels of attribute performance – i.e. negatively
asymmetric relationship). Accordingly, the assumption of implicit AI being an indicator of
an attribute’s SGP should be made with care, because it could as well be an indicator of an
attribute’s DGP. Therefore, one should rather regard implicit AI as an indicator of an attribute’s overall impact on OS, encompassing its potentials to generate both satisfaction and
dissatisfaction (Mikulić & Prebežac, 2008). In order to analyse whether an attribute has an
equal, larger or smaller SGP than DGP, the attribute’s overall impact on OS could be split
up into its impacts in cases of low-level performance and high-level performance, as proposed in the penalty-reward contrast approach introduced by Brandt (1987), or using the
moderated regression approach proposed by Lin, Yang, Chan and Sheu (2010).
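The identical-coefficient point can be checked numerically. The sketch below uses hypothetical quadratic response curves (not the data behind Figure 2): one attribute's impact on OS grows towards high performance, the other's towards low performance, yet both yield the same fitted linear coefficient.

```python
import numpy as np

ap = np.linspace(1, 5, 101)           # attribute-performance ratings
ef = 0.125 * (ap - 1) ** 2            # EF-like: impact grows at high AP
bf = -0.125 * (5 - ap) ** 2           # BF-like: impact grows at low AP

slope_ef = np.polyfit(ap, ef, 1)[0]   # least-squares linear coefficient
slope_bf = np.polyfit(ap, bf, 1)[0]
# both fitted slopes are ~0.5: the linear weight cannot
# distinguish the two opposite asymmetries
```

A regression coefficient alone therefore says nothing about whether the impact is concentrated at the low or the high end of the performance scale.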
Assumption 4: Relative attribute positionings reveal the different attribute categories
according to the Kano model. Since the IG applies a data-centred approach to determine
the threshold values of the different Kano categories (i.e. the crosshairs that divide the IG
into four quadrants), the other attributes included in the analysis serve as the reference points for determining any given attribute’s category. A logical consequence
of such an approach is that the analysis will always yield a classification of attributes into
EFs, BFs and PFs when there are any differences in both explicit and implicit AI of attributes, which is usually the case. In their empirical comparison of different methods for
performing Kano classifications, Witell and Löfgren (2007) highlight this as a
serious threat to the validity of the IG. Another consequence of a data-centred
Figure 2. Statistically derived importance weights: BFs versus EFs.
approach is that the category of an attribute may change with different sets of analysed
attributes, as has been empirically confirmed by Mikulić and Prebežac (2011b).
To be meaningful, however, the classification of an attribute should be robust and consistent across any set of analysed attributes. Thus, a data-centred quadrant analysis, such as the IG, is applicable only for drawing conclusions based on relative attribute positionings, not for classifying attributes into absolute, predefined categories. Consequently, this assumption
should be regarded as a major reason for a lack of convergent validity with other methods
for assessing the Kano model in earlier studies.
Reinterpretation of the IG
The previous discussion revealed that one of four implicit assumptions of the IG is highly
problematic (assumption 4), whereas two should be taken with care (assumptions 2 and 3).
Consequently, it is not recommended to use the IG for categorising product/service attributes according to the Kano model. The questions which remain are how to properly interpret IG results, and whether these results have any managerial value.
To provide answers to these questions, it is first necessary to specify what is exactly
assessed with the measures typically used in the technique (i.e. direct AI ratings and statistically derived AI), and to evaluate their informational value. On the one hand, such
direct ratings assess the customer-perceived importance of attributes, which can be
described as an attitude-like importance statement that is based on personal values and
desires (Batra, Homer, & Kahle, 2001). This type of importance is also referred to as relevance (AR) by several authors (e.g. Myers & Alpert, 1977; Van Ittersum et al., 2007). On
the other hand, statistically derived AI measures, such as regression-based weights, indicate an attribute’s importance in explaining variations in an outcome variable (e.g. OS),
based on experiential data. This type of importance is also referred to as determinance
(AD) (Myers & Alpert, 1977). Starting from these definitions, it does not, in
fact, seem unreasonable to assume a strong positive correlation between AR and
AD – that is, attributes that are perceived to be more important (high AR) should have
a stronger impact on OS (high AD), and vice versa. If this were truly the case, it would,
in fact, be unnecessary to use both types of AI measures, because they provide equivalent
information. However, when reasoning this way, it must not be neglected that calculations
of AD indicators are usually based on data from single case studies, which has important
implications for the AR – AD relationship. Such case-based data do not always cover the
whole range of possible levels of objective attribute performance (AP), and neither do the
data always cover significant variations in objective AP for all attributes that are subject to
analysis. Accordingly, relatively low AD of an attribute with relatively high AR may
simply be attributed to a lack of variation in objective AP, whereas relatively high AD
of an attribute with relatively lower AR might be attributed to relatively higher variation
in objective AP. The first situation might be the case with core product/service attributes,
which are usually rated highly important, and which are typically provided at a satisfactory
(high) level by most competing product/service providers in a market. Furthermore, it is
also important to consider that variations in AP, as assessed in case-based studies, may for
some attributes be within the customer’s zone of tolerance (ZOT), thus deflating their
impact on OS, but for others be outside the ZOT, thus inflating their impact (Johnston,
1995). To illustrate the points made, let us reconsider the earlier mentioned example of
flight safety as an attribute of an airline. It is rather obvious that this attribute would
yield very high importance ratings in a passenger survey (i.e. high AR), and it would
Figure 3. The RDG.
probably emerge as the most ‘important’ attribute. However, airlines are typically safe and
do not vary in safety performance, which is why the attribute would not necessarily be
likely to exhibit a proportionately strong impact on OS (i.e. relatively low AD if the
flight was safe). Consequently, it should not be surprising if an attribute with relatively
lower AR (e.g. in-flight service) showed a relatively higher impact on OS than the
safety attribute, especially if the AP of the lower relevance attribute crossed the passenger’s ZOT, either in a positive or a negative direction. Conversely, if there actually were safety problems during a flight, the highly relevant attribute would certainly
become a highly determinant one (in accordance with its high AR), and all other attributes
might become less determinant, or even indeterminant.
Although this is a hypothetical and extreme example, it makes clear that measures of
AR and AD assess quite distinct concepts, and that it is not reasonable to always assume a
strong positive correlation between them in case-based studies. On the one hand, AD is a
highly dynamic concept, as it is a function of its own relevance, performance and, possibly, even the performance of other attributes, whereas AR, on the other hand, is a relatively stable attitude-like concept that may, however, change over time. Now, if we
return to our question about the informational value of these two types of AI measures,
it can be concluded that the measures are complementary, rather than conflicting or competing, as already acknowledged in the literature (e.g. Myers & Alpert, 1977; Van Ittersum
et al., 2007). Such measures of AD help to reveal more or less active key drivers of customer satisfaction in a particular research setting, whereas measures of AR reveal those
attributes that generally have a larger or smaller potential to affect the customer’s OS,
if not performing in accordance with the customer’s expectations. From a managerial perspective, the two measures thus provide valuable surplus information in combination,
because they help to uncover those attributes that are both highly relevant and determinant,
and that should therefore have highest priority in improvement strategies. Accordingly, the
value of the IG should not be sought in its questionable ability to operationalise the Kano
model, but rather in its high reliability for the purpose of prioritising product/service attributes for improvement. Moreover, if combined with data on AP, the IG, in fact, becomes a
three-dimensional importance–performance analysis (IPA; Martilla & James, 1977) with
enhanced reliability compared to traditional approaches that employ unidimensional operationalisations of AI. In particular, the reinterpreted IG, which will subsequently be
referred to as the relevance–determinance grid (RDG), facilitates a classification of attributes
into four categories with different relative priority levels (Figure 3).
• Higher impact core attributes (high AR/high AD): These attributes are perceived as highly important by customers, and they have a strong influence on OS. Management should therefore assign this category the highest general priority in improvement strategies and focus on it primarily in order to achieve a competitive advantage.
• Higher impact secondary attributes (low AR/high AD): These attributes are perceived as less important for providing the core product/service, but they nevertheless have a large influence on OS. Attributes from this category usually form the augmented product/service. Managers who are seeking opportunities to differentiate themselves from the competition should focus on this category. It is important to note that the importance of these attributes may be underestimated if only a measure of AR is used as the decision criterion.
• Lower impact core attributes (high AR/low AD): These attributes are perceived as very important by the customer, but they have a relatively lower impact on OS. Attributes from this category are fully expected by the customer, and they are usually provided by all competing product/service providers at a satisfactory level. Managers should treat such attributes with care, because they might, in fact, be latent dissatisfiers with a strong negative impact on OS in case of performance failures. In this regard, these attributes are similar to BFs in the original IG. In general, managers should track innovations regarding these attributes, as they could result in a competitive advantage.
• Lower priority attributes/lower impact secondary attributes (low AR/low AD): Compared with other attributes, these have lower levels of both AR and AD. Management should assign this category lower general priority than the other three in improvement strategies. Managers should, however, be aware that this category may comprise latent satisfiers that have not fully developed their potential, because objective AP is low, and/or because more relevant and/or determinant attributes perform below customer-desired levels.
It is important to note that the RDG should only be used to derive managerial implications based on the relative positioning of attributes with regard to each other. This is especially important if attributes are located close to each other but in different quadrants of the grid. A certain degree of flexibility should thus be retained when interpreting RDG results.
However, although the RDG helps to reveal the most critical attributes that need to
be improved, a shortcoming is that the analysis does not provide insight into possible
asymmetric effects in the AP – AD relationship, which is particularly valuable information in cases when two or more attributes with similar AP levels are located
nearby in the RDG. It is thus suggested to use the RDG in combination with a DAA
(Mikulić & Prebežac, 2008), an extension of Brandt’s PRCA, that facilitates revealing
satisfiers (positive determinance-asymmetry (DA)), dissatisfiers (negative DA) and
hybrid (linear) attributes (zero DA) (Brandt, 1987). As a rule of thumb, for attributes with
similar AP and similar location in the RDG, dissatisfiers should have higher priority than
satisfiers when AP is low, whereas it should be the other way round when AP is high. Since such an
approach accounts for possible diminishing and increasing returns in OS caused by
rising AP perceptions, it is supposed to result in more effective increases of OS
(Mikulić & Prebežac, 2008).
Case study
To demonstrate the value of a combination of the RDG and DAA in prioritising quality
improvements, data from an airport satisfaction survey are used. Improvement priorities
are derived in two steps. In the first step, the RDG is used to infer general priorities
based on AR, AD and AP, whereas a DAA (Mikulić & Prebežac, 2008, 2011b) is used
in a second step to refine the prioritisation, if necessary. Since the DAA is basically an
extension of the PRCA (Brandt, 1987), it is noteworthy that the (reinterpreted) IG and
the PRCA appear as highly complementary approaches for the purpose of attribute prioritisation, rather than as competing or conflicting approaches for operationalising the Kano
model, as it is generally assumed in the literature (e.g. Fuchs & Weiermair, 2004).
Measures and sample
The data were collected in face-to-face interviews by means of a standardised questionnaire at a Croatian international airport over a period of 1 week in fall 2008. The questionnaire encompassed eight service attributes: (1) ‘ease of way-finding’, (2) ‘availability of
flight information’, (3) ‘check-in efficiency’, (4) ‘dining/drinking possibilities’, (5) ‘shopping possibilities’, (6) ‘comfort level of the building’, (7) ‘courtesy of airport staff’ and (8)
‘offer of flights’. AP and OS with passenger services offered by the airport were measured
with rating scales from 1 (‘very low’) to 5 (‘very high’). AR (i.e. customer-perceived
importance) was measured with Likert scales from 1 (‘I do not agree at all’) to 5 (‘I completely agree’). In total, 1017 fully completed and usable questionnaires entered the data
analysis.
Analysis and results
In the first step, the input data for the RDG and DAA were calculated. For the RDG, AP
scores were regressed against OS with the airport services to obtain indicators of AD (R² = 0.535). Arithmetic means were further calculated to obtain indicators of AR and AP.
For the DAA, another multiple regression analysis was performed with two sets of
binary-coded AP ratings as predictors and OS as the criterion variable (R² = 0.453)
(Equation (1)):
OS = b_0 + \sum_{i \in I} (p_i d_{p,i} + r_i d_{r,i}) + \varepsilon,    (1)

where b_0 is the constant; p_i is the incremental change in OS as a consequence of very low AP ratings of attribute i, i ∈ I (penalty score); r_i is the incremental change in OS as a consequence of very high AP ratings of attribute i (reward score); d_{p,i} is the dummy variable for attribute i, with a value of 1 for the lowest AP ratings and a value of 0 for all other ratings; d_{r,i} is the dummy variable for attribute i, with a value of 1 for the highest AP ratings and a value of 0 for all other ratings; and \varepsilon is the error term.
A comparison of p_i and r_i then reveals the direction of DA (da_i):
• |p_i| > r_i: negative da_i; attribute i has a stronger effect on OS when its AP is perceived to be low than when it is perceived to be high.
• |p_i| ≈ r_i: symmetric da_i; attribute i has approximately equal effects on OS when its AP is perceived to be low and when it is perceived to be high.
• |p_i| < r_i: positive da_i; attribute i has a weaker effect on OS when its AP is perceived to be low than when it is perceived to be high.
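This dummy-variable regression can be sketched in Python as follows. It is a minimal illustration under the assumptions of a 1-5 rating scale and a hypothetical function name; penalties and rewards are recovered jointly with ordinary least squares, as in the PRCA.

```python
import numpy as np

def prca(perf, os_ratings, low=1, high=5):
    """Penalty-reward contrast analysis (Brandt, 1987): regress OS on two
    dummy sets per attribute, one flagging the lowest and one the highest
    AP ratings. Returns (penalties p_i, rewards r_i)."""
    n, k = perf.shape
    d_p = (perf == low).astype(float)    # 1 for lowest AP ratings
    d_r = (perf == high).astype(float)   # 1 for highest AP ratings
    A = np.column_stack([np.ones(n), d_p, d_r])
    coef, *_ = np.linalg.lstsq(A, os_ratings, rcond=None)
    return coef[1:1 + k], coef[1 + k:]   # p_i (expected < 0), r_i (> 0)
```

Mid-range ratings fall into neither dummy set, so the intercept absorbs the baseline OS level and p_i and r_i capture only the contrasts at the scale extremes.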
It should be noted that statistical insignificance of p_i scores may be attributed to generally satisfactory objective AP, resulting in very few or no cases of lowest AP ratings. In the case of insignificant p_i scores combined with high AR, an attribute is likely to be a latent dissatisfier with a potentially strong negative impact on OS in case of low objective AP. Indicators of
the degree of DA (DAI_i), in the range [−1, 1], can be obtained as follows (Mikulić & Prebežac, 2008):

DAI_i = (r_i − |p_i|) / (|p_i| + r_i),    ∀ i ∈ I,    (2)
• A value of −1 means that low AP perceptions cause a decrease in OS, but high AP perceptions do not cause an increase in OS (perfect dissatisfier).
• A value of 0 means that the increase in OS caused by high AP perceptions equals the decrease in OS caused by low AP perceptions (perfect hybrid).
• A value of 1 means that high AP perceptions cause an increase in OS, but low AP perceptions do not cause a decrease in OS (perfect satisfier).
The calculated indices are presented in Table 1.
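Given penalty and reward scores, Equation (2) reduces to a one-liner. In the sketch below (the helper name is hypothetical) the penalty is entered with its negative sign, as the DGP values are reported in Table 1.

```python
def dai(penalty, reward):
    """Determinance-asymmetry index in [-1, 1]: -1 = perfect dissatisfier,
    0 = perfect hybrid, 1 = perfect satisfier. The penalty enters in
    absolute value, since p_i from Equation (1) is negative."""
    p = abs(penalty)
    return (reward - p) / (p + reward)
```

For example, the normalised penalty/reward pair of the first attribute in Table 1 (−0.689, 0.311) yields a DAI of −0.378, matching the reported value.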
In the next step, the RDG was constructed using scores of AR and AD. The threshold values for dividing the grid into four quadrants were set at the grand means of AR (AR_GM = 4.25) and AD (AD_GM = 0.111). Furthermore, service attributes performing below average (i.e. below the grand mean of AP scores; AP_GM = 3.94) were marked with a minus (−), and attributes performing above average were marked with a plus (+) (Figure 4).
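The quadrant assignment and the +/− performance marking just described can be sketched as follows; the function name is hypothetical, the category labels follow the RDG description above, and the default thresholds are the grand means reported for this study.

```python
def rdg_category(ar, ad, ar_gm=4.25, ad_gm=0.111, ap=None, ap_gm=3.94):
    """Quadrant of the relevance-determinance grid, optionally suffixed
    with (+)/(-) for above/below-average attribute performance."""
    if ar >= ar_gm:
        quad = "higher impact core" if ad >= ad_gm else "lower impact core"
    else:
        quad = "higher impact secondary" if ad >= ad_gm else "lower priority"
    if ap is not None:
        quad += " (+)" if ap >= ap_gm else " (-)"
    return quad
```

Applied to Table 1, attribute 6 (AR 4.47, AD 0.150, AP 3.85), for instance, falls into the higher impact core quadrant with a below-average performance mark.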
The RDG reveals that the (6) ‘comfort level of the building’ and the (8) ‘offer of flights’ should be assigned highest priority in improvement strategies. Both attributes are categorised as higher impact core attributes and both perform below average. As a next step, the airport management should focus on (5) ‘shopping possibilities’ and (4) ‘dining/drinking possibilities’, which are categorised as higher impact secondary attributes that perform below average. It is noteworthy that the improvement priority of these two attributes would have been completely underestimated if only a measure of
Table 1. Input data for the RDG and IAA.

Service attribute                         AR      AD           AP      DGP       SGP      DAI
1. Ease of way-finding                    4.51    0.056*       4.37    −0.689    0.311    −0.378
2. Availability of flight information     4.23    0.038 (ns)   4.26    −0.588    0.412    −0.176
3. Check-in efficiency                    4.50    0.135***     4.32    −0.491    0.509     0.018
4. Dining/drinking possibilities          3.58    0.117**      3.18    −0.505    0.495    −0.010
5. Shopping possibilities                 3.63    0.124***     3.49    −0.496    0.504     0.008
6. Comfort level of the building          4.47    0.150***     3.85    −0.477    0.523     0.046
7. Courtesy of airport staff              4.64    0.146***     4.17    −0.364    0.636     0.273
8. Offer of flights                       4.40    0.132***     3.89    −0.479    0.521     0.041
Grand means                               4.25    0.111        3.94

Note: ns, not significant. *p < 0.1. **p < 0.01. ***p < 0.001.
J. Mikulić and D. Prebežac
Figure 4. RDG results.
Note: AP scores are shown in brackets. Attributes marked with a + perform above average and attributes marked with a − perform below average.
AR had been used as a decision criterion, since both attributes have relatively high AD, but the lowest AR scores in the analysed attribute set. Conversely, if only AD had been used as a decision criterion, the improvement priority of these two attributes would have been overestimated, because they are the two worst performing attributes, but have similar AD to the two higher impact core attributes (i.e. (6) and (8)). The remaining four attributes are less problematic as they perform above average, but they can nevertheless be prioritised. Accordingly, the next two attributes to be considered for improvement should be (7) ‘courtesy of airport staff’ and (3) ‘check-in efficiency’, which are categorised as higher impact core attributes; (1) ‘ease of way-finding’, which is categorised as a lower impact core attribute, should follow, and lowest priority should be assigned to (2) ‘availability of flight information’, which is categorised as a lower priority attribute.
Next, the DAA was conducted to test for possible asymmetries in the AP–AD relationship and to refine the prioritisation obtained through the RDG, if necessary (Figure 5).
The DAA reveals an approximately symmetric AP–AD relationship for five attributes (SGP ≈ |DGP|) – that is, for (3) ‘check-in efficiency’, (4) ‘dining/drinking possibilities’, (5) ‘shopping possibilities’, (6) ‘comfort level of the building’ and (8) ‘offer of flights’. These attributes could be described as hybrid factors, meaning that rising AP perceptions cause approximately linear returns in OS. However, significant asymmetries are present with regard to the remaining three attributes. On the one hand, (1) ‘ease of way-finding’ and (2) ‘availability of flight information’ cause diminishing returns in OS as AP perceptions rise. Since the AP level of these two attributes is relatively high (P1 = 4.37 and P2 = 4.26, respectively), improving these attributes would be unlikely to cause significant increases in OS, which is why they do not necessitate particular attention. On the other hand, (7) ‘staff courtesy’ causes increasing returns in OS as AP perceptions rise. Its AP level is quite high (P7 = 4.17), but since it is a highly determinant satisfier (AD7 = 0.146), increasing its AP would be likely to cause further significant increases in OS. Accordingly, this attribute should be considered for improvement right after the four attributes that perform below average. However, the same recommendation had already been provided by the RDG.
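The direction of the returns discussed above can also be read off the DAI scores in Table 1. The following sketch uses a fixed cut-off of 0.1 for ‘approximately symmetric’, which is a hypothetical choice made for illustration; the analysis in this paper judges symmetry from the statistical significance of the asymmetry rather than from a fixed threshold:

```python
# Sketch: mapping a DAI score to the expected shape of the AP-OS relationship.
# The 0.1 cut-off for "approximately symmetric" is a hypothetical choice.
def returns_in_os(dai: float, cutoff: float = 0.1) -> str:
    if abs(dai) < cutoff:
        return "approximately linear returns (hybrid)"
    if dai < 0:
        return "diminishing returns (dissatisfier)"
    return "increasing returns (satisfier)"

print(returns_in_os(-0.378))  # (1) ease of way-finding
print(returns_in_os(0.018))   # (3) check-in efficiency
print(returns_in_os(0.273))   # (7) courtesy of airport staff
```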
Figure 5. DAA results.
To conclude, although the DAA revealed several significant asymmetric effects, there is no need for any refinements of the prioritisation obtained by the RDG. However, a noteworthy finding is that (7) ‘staff courtesy’, which is categorised as a satisfier, has a larger absolute negative impact on OS when AP perceptions are low (PC7 = −0.092) than either of the identified dissatisfiers – that is, (1) ‘ease of way-finding’ (PC1 = −0.062) and (2) ‘availability of flight information’ (PC2 = −0.040). Accordingly, if all these attributes performed low, then, contrary to common opinion, the satisfier, rather than the dissatisfiers, should be improved first, which reinforces the recommendation made by Mikulić and Prebežac (2008) not to use DA (i.e. a categorisation of attributes into satisfiers, dissatisfiers and hybrids) as a first-order criterion in decision-making about improvement priorities.
Conclusions
The IG was developed as a research tool for categorising product/service attributes in the
Kano model. A discussion of the technique’s underlying implicit assumptions, however,
failed to validate it as a technique for its originally intended purpose. Nevertheless,
based on a specification of the informational value of the measures typically used in the
IG, a logical and theoretically grounded reinterpretation of IG results was put forward
in this paper. In a case example on passenger satisfaction with airport services, the reinterpreted IG, referred to as RDG, facilitated the identification of four attribute categories
with different general priority levels: (i) ‘higher impact core attributes’, (ii) ‘higher impact
secondary attributes’, (iii) ‘lower impact core attributes’ and (iv) ‘lower priority attributes’. To derive improvement priorities with the objective of effectively increasing overall passenger satisfaction, the RDG was paired with data on AP, thus turning it into an IPA with enhanced reliability compared with traditional approaches that apply unidimensional operationalisations of AI. Moreover, a DAA was used in a second step to refine the prioritisation obtained from the RDG; however, the analysis revealed that refinements were not necessary.
Implications for managers
Managers who use measures of AI in prioritising product/service attributes for improvement should be aware that explicit AI measures (e.g. direct importance ratings) and
implicit AI measures (e.g. coefficients obtained by regressing AP data against a global
performance measure) assess quite distinct concepts. Explicit AI measures assess the
relevance of attributes, whereas implicit AI measures assess the determinance of
attributes. In fact, attributes that customers do not perceive as important (low relevance) may well emerge as having a strong impact on OS in case-based studies (high determinance), and vice versa. It is thus suggested to combine both types of AI
measures in decision-making, since very different implications regarding the prioritisation
of attributes may emerge depending on the type of AI measure used.
Moreover, managers who base their decisions about improvement priorities on a
classification of product/service attributes into satisfiers, dissatisfiers and hybrids
should be aware that implications might be misleading if absolute levels of determinance
remain unconsidered. The case study in this paper revealed that the only attribute which was categorised as a satisfier (‘courtesy of the airport staff’) actually had a larger absolute potential to create dissatisfaction than either of the identified dissatisfiers (‘ease of way-finding’ and ‘availability of flight information’). Consequently, if all these attributes performed low, and if the primary objective was to decrease overall dissatisfaction, then, contrary to common opinion, the satisfier, rather than the dissatisfiers, should be assigned highest improvement priority. The authors of this study thus suggest prioritising attributes for
improvement in two steps. In the first step, an RDG should be used to obtain a general
prioritisation of attributes, based on their relevance, determinance and performance,
whereas a DAA should be used, in a second step, to gain insight into possible increasing
or diminishing returns in OS caused by rising levels of performance ratings. Results from
the latter analysis can then be used to refine the prioritisation of attributes with similar performance levels, relevance and determinance, if necessary. A noteworthy practical advantage of the proposed analytical framework is that it can be applied to data which are
collected in typical customer satisfaction surveys.
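The penalty and reward indices (pi, ri) on which this framework relies are typically estimated with a penalty–reward contrast analysis (Brandt, 1987), i.e. a regression of OS on two dummy variables per attribute. The following minimal sketch illustrates the idea on simulated survey data for a single attribute; all numbers and variable names are invented for illustration:

```python
import numpy as np

# Penalty-reward contrast analysis, sketched on simulated survey data:
# OS is regressed on two dummies, one flagging low AP ratings and one
# flagging high AP ratings. The fitted coefficients are the penalty
# (p_i, expected negative) and reward (r_i, expected positive) indices.
rng = np.random.default_rng(0)
n = 500
ap = rng.integers(1, 6, size=n)           # attribute performance, 1-5 scale
low = (ap <= 2).astype(float)             # penalty dummy: low AP ratings
high = (ap >= 4).astype(float)            # reward dummy: high AP ratings
os_ = 3.5 - 0.8 * low + 0.4 * high + rng.normal(0, 0.3, size=n)

X = np.column_stack([np.ones(n), low, high])
coef, *_ = np.linalg.lstsq(X, os_, rcond=None)
intercept, p_i, r_i = coef
dai = (r_i - abs(p_i)) / (abs(p_i) + r_i)  # Equation (2)
print(f"penalty p = {p_i:.2f}, reward r = {r_i:.2f}, DAI = {dai:.2f}")
```

In an actual survey analysis, the penalty and reward dummies of all attributes would commonly enter a single regression against the global OS measure, so that each pair of coefficients is estimated while controlling for the other attributes.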
References
Bartikowski, B., & Llosa, S. (2004). Customer satisfaction measurement: Comparing four methods
of attribute categorisations. The Service Industries Journal, 24(4), 67–82.
Batra, R., Homer, P.M., & Kahle, L.R. (2001). Values, susceptibility to normative influence, and
attribute importance weights: A nomological analysis. Journal of Consumer Psychology,
11(2), 115–128.
Brandt, R.D. (1987). A procedure for identifying value-enhancing service components using customer satisfaction survey data. In C.F. Suprenant (Ed.), Add value to your service: The key to
success (pp. 61–64). Chicago, IL: American Marketing Association.
Busacca, B., & Padula, G. (2005). Understanding the relationship between attribute performance and
overall satisfaction: Theory, measurement and implications. Marketing Intelligence &
Planning, 23(6), 543–561.
Fuchs, M. (2002). Benchmarking indicator-systems and their potential for tracking guest satisfaction. Tourism: An Interdisciplinary Journal, 50(2), 141–155.
Fuchs, M., & Weiermair, K. (2003). New perspectives on satisfaction research in tourism destinations. Tourism Review, 58(3), 6–14.
Fuchs, M., & Weiermair, K. (2004). Destination benchmarking: An indicator-system’s potential for
exploring guest satisfaction. Journal of Travel Research, 42(3), 212–225.
Füller, J., & Matzler, K. (2008). Customer delight and market segmentation: An application of the
three-factor theory of customer satisfaction on life style groups. Tourism Management, 28(1),
116–126.
Harte, J.M., & Koele, P. (1995). A comparison of different methods for the elicitation of attribute
weights: Structural modeling, process tracing, and self-reports. Organizational Behavior
and Human Decision Processes, 64(1), 49–64.
Jaccard, J., Brinberg, D., & Ackerman, L.J. (1986). Assessing attribute importance: A comparison of
six methods. Journal of Consumer Research, 12(4), 463–468.
Johnston, R. (1995). The zone of tolerance: Exploring the relationship between service transactions
and satisfaction with the overall service. International Journal of Service Industry
Management, 6(2), 46–61.
Kano, N., Seraku, N., Takahashi, F., & Tsuji, S. (1984). Attractive quality and must-be quality.
Hinshitsu (Quality, The Journal of the Japanese Society for Quality Control), 14(2), 39–48.
Lin, S.P., Yang, C.L., Chan, Y.H., & Sheu, C. (2010). Refining Kano’s ‘quality attributes-satisfaction’ model: A moderated regression approach. International Journal of Production
Economics, 126(2), 255–263.
Martensen, A., & Gronholdt, L. (2001). Using employee satisfaction measurement to improve
people management: An adaptation of Kano’s quality type. Total Quality Management,
12(7/8), 949–957.
Martilla, J.A., & James, J.C. (1977). Importance–performance analysis. Journal of Marketing,
41(1), 77–99.
Matzler, K., Bailom, F., Hinterhuber, H.H., Renzl, B., & Pichler, J. (2004). The asymmetric relationship between attribute-level performance and overall customer satisfaction: A reconsideration
of the importance –performance analysis. Industrial Marketing Management, 33(4), 271–277.
Matzler, K., & Hinterhuber, H.H. (1998). How to make product development projects more successful by integrating Kano’s model of customer satisfaction into quality function deployment?
Technovation, 18(1), 25–38.
Matzler, K., & Sauerwein, E. (2002). The factor structure of customer satisfaction: An empirical test
of the importance grid and the penalty-reward-contrast analysis. International Journal of
Service Industry Management, 13(4), 314–332.
Matzler, K., Sauerwein, E., & Heischmidt, K.A. (2003). Importance–performance analysis revisited:
The role of the factor structure of customer satisfaction. The Service Industries Journal, 23(2),
112–129.
Matzler, K., Sauerwein, E., & Stark, C. (2002). Methoden zur Identifikation von Basis-, Leistungs- und Begeisterungsfaktoren. In H.H. Hinterhuber & K. Matzler (Eds.), Kundenorientierte Unternehmensführung: Kundenorientierung – Kundenzufriedenheit – Kundenbindung (pp. 265–289). Wiesbaden: Gabler Verlag.
Mikulić, J., & Prebežac, D. (2008). Prioritizing improvement of service attributes using impact range-performance analysis and impact-asymmetry analysis. Managing Service Quality, 18(6), 559–576.
Mikulić, J., & Prebežac, D. (2011a). Evaluating hotel animation programs at Mediterranean sun and sea resorts: An impact-asymmetry analysis. Tourism Management, 32(3), 688–696.
Mikulić, J., & Prebežac, D. (2011b). A critical review of techniques for classifying quality attributes in the Kano model. Managing Service Quality, 21(1), 46–66.
Myers, J.H., & Alpert, M.I. (1968). Determinant buying attitudes: Meaning and measurement.
Journal of Marketing, 32, 13–20.
Myers, J.H., & Alpert, M.I. (1977). Semantic confusion in attitude research: Salience vs. importance
vs. determinance. Advances in Consumer Research, 4(1), 106–110.
Peters, M. (2005). Entrepreneurial skills in leadership and human resource management evaluated by
apprentices in small tourism businesses. Education + Training, 47(8/9), 575–591.
Riviere, P., Monrozier, R., Rogeaux, M., Pages, J., & Saporta, G. (2006). Adaptive preference target:
Contribution of Kano’s model of satisfaction for an optimized preference analysis using a
sequential consumer test. Food Quality and Preference, 17(7/8), 572–581.
Van Ittersum, K., Pennings, J.M.E., Wansink, B., & van Trijp, H.C.M. (2007). The validity of attribute-importance measurement: A review. Journal of Business Research, 60(11), 1177–1190.
Vavra, T.G. (1997). Improving your measurement of customer satisfaction: A guide to creating, conducting, analyzing and reporting customer satisfaction measurement program. Milwaukee,
WI: ASQC Quality Press.
Wiley, J.B., MacLachlan, D.L., & Moinpour, R. (1977). Comparison of stated and inferred parameter values in additive models: An illustration of a paradigm. Advances in Consumer
Research, 4(1), 98–105.
Witell, L., & Löfgren, M. (2007). Classification of quality attributes. Managing Service Quality,
17(1), 54–73.
Yang, C.C. (2005). The refined Kano’s model and its application. Total Quality Management and
Business Excellence, 16(10), 1127–1137.