
Relationship between Pair-wise and MC/DC Testing:
Initial Experimental Results
Sergiy Vilkomir and David Anderson
Department of Computer Science
East Carolina University
Greenville, NC 27858
[email protected]
Abstract— While pair-wise testing has shown a high level of fault detection in many situations, it has not done so for testing logical expressions in software applications. The modified condition/decision coverage (MC/DC) approach was especially developed for testing logical expressions, but test generation according to this approach can be complicated and time-consuming. From a practical point of view, combining pair-wise and MC/DC testing would integrate the benefits of both approaches; however, this would require solving many research problems. As an initial step in this direction, our paper evaluates the level of MC/DC coverage for pair-wise test cases in different situations and compares this level to MC/DC coverage for random test cases. To this end, we conducted experimental testing of logical expressions of different sizes, complexities, and numbers of input variables. Our experimental results show that the pair-wise test cases had a higher level of MC/DC coverage than the random test cases, achieving more than 70% MC/DC coverage. This allows us to suggest pair-wise testing as a promising method to be used as the basis for MC/DC test generation.
Keywords— software testing; MC/DC; pair-wise; coverage
I.  INTRODUCTION
Combinatorial methods [1, 2] are promising modern
approaches in software testing. In particular, pair-wise testing
[3], which requires using all pairs of all values for any pair of
input parameters, has shown a high level of fault detection.
Another approach, t-wise testing [4], which is an extension of
pair-wise testing for t from 3 to 6, requires more test cases but
provides even better results.
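For illustration (this example is not taken from the paper and was not generated with the ACTS tool mentioned later), the sketch below shows a pair-wise test set for four Boolean inputs: five test cases are enough for every pair of inputs to take all four value combinations, compared with 16 test cases for exhaustive testing. The verification loop simply checks that property.

    // Illustrative pair-wise (2-wise) test set for four Boolean inputs a, b, c, d.
    // Five test cases cover all four value combinations for every pair of inputs,
    // whereas exhaustive testing would need 2^4 = 16 test cases.
    public class PairwiseExample {
        static final boolean[][] TESTS = {
            {false, false, false, false},
            {false, true,  true,  true },
            {true,  false, true,  true },
            {true,  true,  false, true },
            {true,  true,  true,  false},
        };

        public static void main(String[] args) {
            int n = TESTS[0].length;
            for (int i = 0; i < n; i++) {
                for (int j = i + 1; j < n; j++) {
                    boolean[] seen = new boolean[4];              // combinations FF, FT, TF, TT
                    for (boolean[] row : TESTS) {
                        seen[(row[i] ? 2 : 0) + (row[j] ? 1 : 0)] = true;
                    }
                    for (boolean s : seen) {
                        if (!s) throw new AssertionError("pair (" + i + ", " + j + ") is not fully covered");
                    }
                }
            }
            System.out.println("All pairs of inputs are covered by " + TESTS.length + " test cases.");
        }
    }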
However, pair-wise and t-wise approaches are not very
effective for testing logical expressions in software [5, 6].
Many other approaches have been suggested for this purpose,
including modified condition/decision coverage (MC/DC) [7].
The main MC/DC idea is that each logical expression
(decision) should take “True” and “False” outcomes and each
elementary logical condition should “affect a decision’s
outcome independently” [8]. This separate consideration of
each condition causes MC/DC to be an effective and stable
testing approach [9] but, at the same time, makes the task of
test generation complicated and time-consuming.
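As a small, textbook-style illustration (not an example from this paper), consider the decision (a && b) || c with three conditions. Four test cases are sufficient for MC/DC here: for each condition there is a pair of tests that differ only in that condition and produce different decision outcomes, demonstrating the condition's independent effect.

    // Illustrative MC/DC test set for the decision (a && b) || c (three conditions).
    // For each condition there is a pair of tests that differ only in that condition
    // and change the decision outcome, i.e., the condition affects the outcome independently.
    public class McdcExample {
        static boolean decision(boolean a, boolean b, boolean c) {
            return (a && b) || c;
        }

        //                           a      b      c
        static final boolean[][] TESTS = {
            {true,  true,  false},   // T1 -> true
            {false, true,  false},   // T2 -> false  (T1 vs T2: only a differs)
            {true,  false, false},   // T3 -> false  (T1 vs T3: only b differs)
            {true,  false, true }    // T4 -> true   (T3 vs T4: only c differs)
        };

        public static void main(String[] args) {
            int[][] pairs = { {0, 1}, {0, 2}, {2, 3} };   // independence pairs for a, b, c
            for (int cond = 0; cond < pairs.length; cond++) {
                boolean[] t1 = TESTS[pairs[cond][0]];
                boolean[] t2 = TESTS[pairs[cond][1]];
                if (decision(t1[0], t1[1], t1[2]) == decision(t2[0], t2[1], t2[2])) {
                    throw new AssertionError("condition " + cond + " shows no independent effect");
                }
            }
            System.out.println("Each of the 3 conditions independently affects the decision outcome.");
        }
    }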
From a practical point of view, combining pair-wise and
MC/DC testing would integrate the benefits of both
approaches. However, this would require solving many
research problems. Therefore, it is important to understand the
relationships between pair-wise and MC/DC testing. More
specifically, it is necessary to evaluate the level of MC/DC
coverage for pair-wise test cases in different situations, as well
as to investigate whether using pair-wise test cases as the basis
for further MC/DC testing provides any benefits compared
with other techniques of test selection. At present, the main technique for such comparisons is random testing, which many researchers have used to compare levels of fault detection and code coverage [10].
This paper provides initial experimental results in these
directions, and the remaining text is structured as follows:
Section 2 describes the main research question and the
approach to experimental testing. Section 3 contains the main
experimental results and provides an evaluation of MC/DC
coverage for pair-wise test cases. Conclusions and directions of
future work are presented in Section 4.
II.  RESEARCH QUESTIONS AND METHODS OF INVESTIGATION
Our aim in this paper was to investigate the following two research questions:
•  RQ1: What is the MC/DC coverage level for pair-wise test cases?
•  RQ2: Is the MC/DC coverage level for pair-wise test cases higher than the level for random test cases?
Our approach was experimental testing of logical expressions. For this purpose, we generated logical expressions of different sizes from 3 to 8, where the size of an expression is the number of distinct logical variables it contains. For each size, we generated two types of expressions: simple expressions, in which each logical variable appears only once, and complex expressions, in which some variables appear several times. We evaluated MC/DC coverage separately for each type and for all expressions together. Test cases (both pair-wise and random) were generated in two different modes:
•  Mode 1: Different numbers of input variables. The numbers of inputs coincided with the sizes of the expressions.
•  Mode 2: Fixed numbers of input variables (10 and 20 inputs) for different sizes of expressions.
The five main steps of our investigation were as follows:
1. Creation of “artificial” software programs that contain
various sets of logical expressions. These programs had no real
functionality but allowed us to evaluate coverage of logical
expressions. In total, 338 expressions were generated.
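The generated programs themselves are not listed in the paper; the fragment below is only a hypothetical sketch of what such an "artificial" program might look like, using two of the complex size-4 expressions that appear later in Table 2. Each method does nothing except return the value of one decision, so executing a test case exercises that decision and nothing else, which is all that is needed to evaluate coverage of logical expressions.

    // Hypothetical sketch of an "artificial" program: each method only evaluates one
    // logical expression (decision), so a coverage tool can measure MC/DC for that
    // decision in isolation. The two expressions below appear in Table 2 of this paper.
    public class ArtificialExpressions {
        public static boolean expr1(boolean a, boolean b, boolean c, boolean d) {
            return (a || !c) && (b || c && d);
        }

        public static boolean expr2(boolean a, boolean b, boolean c, boolean d) {
            return (b || !d) && (a || d) && (!c || !b);
        }
    }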
2. Generation of pair-wise test cases and testing software from
Step 1. For pair-wise test generation, we used the ACTS tool
[11] created by the National Institute of Standards and
Technology (NIST). In total, 8 pair-wise test sets with 56 test
cases were generated.
3. Generation of random test cases and testing software from Step 1. Several random test sets were generated for each expression size. The sizes of the random test sets were the same as those of the pair-wise test sets. In total, 28 random test sets with 204 test cases were generated.
4. Evaluation of the MC/DC coverage for pair-wise and
random test cases. For this purpose, we used the CodeCover
[12] tool developed at the University of Stuttgart, Germany.
The Boolean analyzer view of CodeCover is shown in Fig. 1.
5. Comparison of MC/DC coverage for pair-wise and random
test cases.
Fig. 1. Boolean analyzer view of CodeCover
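CodeCover reports its own Boolean-analyzer variant of MC/DC, and the exact items it counts are not described in this paper. Purely as an illustration of the idea of scoring a fixed test set against a decision, the sketch below uses a simplified unique-cause interpretation: a condition counts as covered if the test set contains two tests that differ only in that condition and produce different decision outcomes. It assumes a simple expression in which every input variable appears exactly once, so each variable corresponds to one condition; the example decision and test set are hypothetical, not data from the experiments.

    import java.util.function.Predicate;

    // Simplified MC/DC scoring sketch (unique-cause form). This is NOT the metric
    // computed by CodeCover; it only illustrates scoring a test set against a decision.
    // Assumes a simple expression: every input variable appears exactly once, so each
    // variable corresponds to one condition.
    public class McdcScoreSketch {

        // Fraction of conditions for which the test set contains a pair of tests that
        // differ only in that condition and produce different decision outcomes.
        static double mcdcScore(Predicate<boolean[]> decision, boolean[][] tests, int numVars) {
            int covered = 0;
            for (int v = 0; v < numVars; v++) {
                if (hasIndependencePair(decision, tests, v, numVars)) {
                    covered++;
                }
            }
            return (double) covered / numVars;
        }

        static boolean hasIndependencePair(Predicate<boolean[]> decision, boolean[][] tests,
                                           int var, int numVars) {
            for (int i = 0; i < tests.length; i++) {
                for (int j = i + 1; j < tests.length; j++) {
                    boolean differOnlyInVar = tests[i][var] != tests[j][var];
                    for (int k = 0; k < numVars && differOnlyInVar; k++) {
                        if (k != var && tests[i][k] != tests[j][k]) {
                            differOnlyInVar = false;
                        }
                    }
                    if (differOnlyInVar && decision.test(tests[i]) != decision.test(tests[j])) {
                        return true;
                    }
                }
            }
            return false;
        }

        public static void main(String[] args) {
            // Hypothetical simple expression of size 4 and a hand-made 4-test set.
            Predicate<boolean[]> decision = t -> (t[0] || t[1]) && (t[2] || t[3]);
            boolean[][] tests = {
                {true,  false, true,  false},   // true
                {false, false, true,  false},   // false (covers variable a against test 1)
                {true,  false, false, false},   // false (covers variable c against test 1)
                {false, true,  true,  false}    // true  (covers variable b against test 2)
            };
            // Variable d has no such pair in this set, so the score is 3/4 = 75%.
            System.out.printf("MC/DC score: %.0f%%%n", 100 * mcdcScore(decision, tests, 4));
        }
    }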
The total scope of the experimental testing for Mode 1 is
shown in Table 1.
TABLE I.  SCOPE OF THE EXPERIMENTAL TESTING FOR MODE 1

Number of    Number of logical    Pair-wise                          Random                             Total
variables    expressions          test sets   test cases   runs      test sets   test cases   runs      runs
3            12                   1           4            48        3           12           144       192
4            22                   1           6            132       3           18           396       528
5            20                   1           6            120       3           18           360       480
6            20                   1           7            140       3           21           420       560
7            20                   1           7            140       3           21           420       560
8            20                   1           8            160       3           24           480       640
Total        114                  6           38           740       18          114          2220      2960

For Mode 2, we generated 224 logical expressions and two pair-wise test sets, one containing 8 test cases with 10 variables and another containing 10 test cases with 20 variables. Five test sets of random test cases were generated for 10 variables and another five test sets for 20 variables. The sizes of the random test sets were the same as those of the pair-wise test sets. In total for Mode 2, we ran the test cases 12,096 times.
III.  EXPERIMENTAL RESULTS
A. Experimental Results for Mode 1
We evaluated the MC/DC coverage for each separate
expression for the pair-wise and random test cases. The
examples of our results for complex expressions of size 4 are
presented in Table 2, where PW is a pair-wise test set and R1,
R2, and R3 are random test sets.
TABLE II.  MC/DC COVERAGE FOR COMPLEX EXPRESSIONS OF SIZE 4

                                            MC/DC coverage, %
Expression                                  PW      R1      R2      R3
(a||!c)&&(b||c&&d)                          90      60      70      90
(b||!d)&&(a||d)&&(!c||!b)                   91.7    75      66.7    66.7
(a&&c)||(!a&&b)||(!a&&b&&d)                 71.4    71.4    57.1    71.4
(a&&b)||(!a&&c)||!d                         100     80      90      100
(a&&!b)||(b&&d||c)                          100     50      50      90
(b&&(c||a))||(!c&&d||!b)                    50      41.7    50      75
(a||d||c)&&(b||!d)                          60      60      70      80
(b&&!c)||(d&&c)||(!a&&!b)||(!d&&a)          75      87.5    87.5    50
(a&&b&&c&&d)||(!a&&b||!c&&d)                81.2    93.8    43.8    68.8
(a||!b)&&(c||!d)&&(!a||!d)                  83.3    100     58.3    100
Average for random test cases               -       71.9    64.3    79.2
Total average                               80.3    71.8 (all random test sets)
We also evaluated MC/DC coverage separately for
expressions of different sizes and separately for simple and
complex expressions. Comparisons of MC/DC coverage
between simple and complex expressions are given in Fig. 2
for pair-wise test cases and in Fig. 3 for random test cases.
Comparisons of MC/DC coverage between pair-wise and
random test cases are given in Fig. 4 for simple expressions
and in Fig. 5 for complex expressions.
Fig. 2. MC/DC coverage for pair-wise test cases (Mode 1)
Pair-wise test cases had a higher level of MC/DC coverage
compared to random test cases for all sizes (from 3 to 8) of
simple expressions. The average levels of MC/DC coverage
are 67.1% for pair-wise vs. 62.5% for random testing. For
some sizes of complex expressions, MC/DC coverage for
random tests was higher than for the pair-wise, but on average
for complex expressions this level was still slightly higher for
pair-wise (66.5% vs. 64.3%).
TABLE III.  MC/DC COVERAGE FOR MODE 1

                           MC/DC coverage, %
Size of         Simple expressions     Complex expressions     All expressions
expressions     PW        R            PW        R             PW        R
3               77.8      75.9         77.7      74.2          77.7      75.1
4               76        74.3         80.3      71.8          78.2      72.6
5               70        64.7         73.4      66.4          71.7      65.6
6               64.2      60.3         62.7      66.6          62.3      63.5
7               59.3      50.2         60.4      57.8          59.9      54.0
8               57.5      53.9         49.4      52.9          53.5      53.4
Average         67.1      62.5         66.5      64.3          66.8      63.4
Fig. 3. MC/DC coverage for random test cases (Mode 1)
Fig. 4. MC/DC coverage for simple expressions (Mode 1)
Fig. 5. MC/DC coverage for complex expressions (Mode 1)
The summary of all results for Mode 1 is presented in
Table 3. The final results for the MC/DC coverage for all
expressions are 66.8% for pair-wise and 63.4% for random
tests.
B. Experimental Results for Mode 2
In Mode 2, the MC/DC coverage levels for both pair-wise and random test cases are higher than in Mode 1, for both 10 and 20 input variables. The difference between pair-wise and random testing is also more significant in all cases.
Comparisons of MC/DC coverage between pair-wise and
random test cases of 10 variables are given in Fig. 6 for simple
expressions and in Fig. 7 for complex expressions. Similar
results for 20 variables are presented in Figs. 8 and 9. In all
cases, MC/DC coverage for pair-wise test cases was higher
than for random test cases. The maximum difference is around
15%–16% (16.7% for simple expressions of size 3 and 15.6%
for simple expressions of size 5; 10 input variables).
Fig. 6. MC/DC coverage for simple expressions (Mode 2–10 inputs)
Fig. 7. MC/DC coverage for complex expressions (Mode 2–10 inputs)
Fig. 8. MC/DC coverage for simple expressions (Mode 2–20 inputs)
The summary of all results for Mode 2 is presented in
Tables 4 and 5. For 10 variables, MC/DC coverage is 73.8%
for pair-wise vs. 63.8% for random testing. For 20 variables,
the MC/DC coverage is 77.8% for pair-wise vs. 70.6% for
random testing. The final results of MC/DC coverage for all
expressions together are 75.8% for pair-wise and 67.2% for
random tests.
Fig. 9. MC/DC coverage for complex expressions (Mode 2–20 inputs)
TABLE IV.  MC/DC COVERAGE FOR MODE 2 (10 INPUTS)
MC/DC coverage, %
Random
R1
R2
R3
Average
77.3
67.5 85.4 80.4
69.2
68.1 66.9 74.6
67.9
69.4 71.8 66.9
65.6
71.7 63.0 65.8
Size
PW
3
4
5
6
93.3
79.9
78.2
76.8
7
8
64.5
57.7
55.2
52.8
59.2
49.4
58.2
57.1
Average
73.8
63.8
64.0
65.8
R4
R5
70.8
74.0
63.0
82.3
62.6
68.4
57.7
53.8
66.3
45.2
53.3
61.0
55.7
50.3
65.5
61.5
62.0
TABLE V.  MC/DC COVERAGE FOR MODE 2 (20 INPUTS)
MC/DC coverage, %
Random
R1
R2
R3
Average
88.3
82.6 90.7 92.0
75.2
71.9 74.8 71.6
76.3
71.2 75.5 73.1
PW
3
4
5
93.1
81.1
77.7
6
7
8
77.4
71.3
72.4
70.2
63.0
57.9
Average
77.8
70.6
71.6
62.8
54.5
68.1
73.5
62.3
64.4
72.3
67.7
62.8
56.5
69.1
Size
R4
R5
87.9
74.7
88.5
82.9
79.4
82.1
74.0
58.9
55.1
70.5
64.1
68.2
59.0
73.1
IV.  CONCLUSIONS AND FUTURE WORK
This paper presents the initial results of experimental testing with the aim of evaluating MC/DC coverage levels for pair-wise test cases. The evaluations were done for separate logical expressions in two different modes and, in both modes, for groups of simple and complex expressions and for expressions of different sizes. In each case, MC/DC coverage for pair-wise test cases was compared to MC/DC coverage for random test cases of the same size.
Our results evaluated the MC/DC coverage level for pair-wise test cases as about 70% (66.8% for Mode 1 and 75.8% for Mode 2). In practically all cases, MC/DC coverage of pair-wise testing was higher than for random testing, with the difference ranging from 2% up to 16%. The results show that pair-wise test cases could be a promising basis for MC/DC testing.
As future work, we are going to investigate the following:
•  MC/DC coverage for mixed sets of logical expressions where expression sizes are distributed in proportions that reflect real life (more short expressions than long expressions).
•  MC/DC coverage of t-wise test cases.
•  MC/DC coverage of pair-wise and t-wise test cases for real software programs.
Our final aims are the creation of methods that extend pair-wise test cases to achieve MC/DC coverage close to 100%, and the development of a tool to automate those methods.
ACKNOWLEDGMENT
This work was performed under financial assistance award 70NANB14H264 from the U.S. Department of Commerce, National Institute of Standards and Technology. The research is also partially supported by the National Science Foundation REU Site Award 1262933.
REFERENCES
[1] D. R. Kuhn, R. Kacker, and Y. Lei, Introduction to Combinatorial Testing, Chapman and Hall/CRC, 2013, 341 pages.
[2] M. Grindal, J. Offutt, and S. F. Andler, “Combination testing strategies:
a survey,” Software Testing, Verification and Reliability, vol. 15, no. 3,
March 2005, pp. 167–199.
[3] D. R. Kuhn, R. Kacker, Y. Lei, and J. Hunter, “Combinatorial Software
Testing, ” IEEE Computer, vol. 42, no. 8, August 2009.
[4] D. R. Kuhn, Y. Lei, and R. Kacker, “Practical Combinatorial Testing Beyond Pairwise,” IEEE IT Professional, June 2008, pp. 19–23.
[5] S. Vilkomir, O. Starov, and R. Bhambroo, “Evaluation of t-wise
Approach for Testing Logical Expressions in Software,” Proceedings of
the IEEE Sixth International Conference on Software Testing,
Verification and Validation Workshops (ICSTW 2013), 18–20 March
2013, Luxembourg, Luxembourg, pp. 249–256.
[6] W. Ballance, S. Vilkomir, and W. Jenkins. “Effectiveness of Pair-wise
Testing for Software with Boolean Inputs,” Proceedings of the Fifth
International Conference on Software Testing, Verification and
Validation (ICST 2012), April 17-21, 2012, Workshop on Combinatorial
Testing (CT-2012), Montreal, Canada, pp. 580-585.
[7] J. Chilenski and S. Miller, “Applicability of Modified
Condition/Decision Coverage to Software Testing,” Software
Engineering Journal, September 1994, pp. 193-200.
[8] RTCA, DO-178B, “Software Considerations in Airborne Systems and Equipment Certification,” 1992.
[9] S. Vilkomir, K. Kapoor and J. Bowen. “Tolerance of Control-Flow
Testing Criteria,” Proceedings of 27th IEEE Annual International
Computer Software and Applications Conference (COMPSAC 2003),
Dallas, Texas, USA, 3-6 November 2003, pp. 182-187.
[10] T. Y. Chen, F-C Kuo, H. Liu, and W. E. Wong, “Code Coverage of
Adaptive Random Testing,” IEEE Transactions on Reliability, Volume
62, Issue 1, March 2013, pp. 226-237.
[11] NIST, 2015, ACTS tool, http://csrc.nist.gov/groups/SNS/acts/
[12] CodeCover, 2015 http://codecover.org