Software Testing

Vidyalankar
T.Y. B.Sc. (IT)
Software Testing
Prelim Question Paper Solution
1. (a) GENERAL PRINCIPLES OF TESTING
Principle 1: Testing shows the presence of defects, not their absence
Principle 2: Exhaustive testing is not possible
Principle 3: Testing activities should start as early as possible
Principle 4: Defects tend to cluster together
Principle 5: The pesticide paradox
Principle 6: Testing is context dependent
Principle 7: The fallacy of assuming that no failures means a useful system
1. (b) THE FUNDAMENTAL TESTING PROCESS
In order to accomplish a structured and controllable software development effort,
software development models and development processes are used. There
are many different models: examples are the Waterfall-model, the V-model, the
Spiral Model, different incremental or evolutionary models, and the “agile” or
“lightweight” methods like XP (Extreme Programming), which are popular
nowadays. For the development of object-oriented software systems, the
Rational Unified Process is discussed.
All these models define a systematic way to achieve an orderly way of
working during the project. In most cases, phases and design steps are defined.
They have to be completed with a result in the form of a document. A phase
completion, often termed a milestone, is achieved when the required
documents are completed and conform to the given quality criteria. Usually,
roles dedicated to specific tasks in software development are defined. These
tasks have to be accomplished by the project staff. Sometimes, in addition to the
models, the techniques and processes to be used in the particular phases are
described. With the aid of models, a detailed planning of the resources (time,
personnel, infrastructure, etc.) can be performed. In a project, the development
models define the collective and mandatory tasks to be accomplished by
everyone involved, and their chronological sequence.
The first fundamental model was the Waterfall-model. It is impressively simple
and very well known. Only when one development level is completed will the next
one be initiated. Only between adjacent levels are there feedback loops that
allow, if necessary, required revisions in the previous level. The crucial
disadvantage of this model is that testing is understood as a "one time" action at
the end of the project just before the release to operation. The test is seen as a
"final inspection", an analogy to a manufacturing inspection before handing over
the product to the customer.
1013/BSc/IT/TY/Pre_Pap/2013/CP/Reg/ST_Soln
Fig. 1: Waterfall-model
The description of tasks in the process models discussed above is not sufficient
as an instruction on how to perform structured tests in software projects. In
addition to the embedding of testing in the whole development process, a more
detailed process for the testing tasks themselves is needed. This means that the
“content” of the development task testing must be split into smaller subtasks, as
follows:test planning and control, test analysis and design, test implementation
and execution, evaluation of test exit criteria and reporting, and test closure
activities.
1. (c) Psychology of testing involves the following aspects:
(i) Human error : Many people make mistakes, but they do not like to admit
them. One goal of software testing is to uncover discrepancies between the
software and the specification or customer needs.
The failures found must be reported to the developer.
(ii) Developers test : Can the developer test his own program? There is no
universally valid answer. If the tester is also the author of the program, or
was involved in its development, then they must examine their own work
very critically.
But there are only a few people who are able to keep the necessary distance
from a self-created product. Who really likes to find their own errors?
(iii) Independent testing team : Independent testing tends to increase the quality
and comprehensiveness of the test. The independent third-party tester can
look at the test object without bias (partiality).
It is not their product, and the possible assumptions and misunderstandings
of the developer are not necessarily those of the tester.
Testers must acquire the necessary knowledge of the test object in order to
create test cases.
However, testers bring a deeper knowledge of testing which the developer
does not have; therefore an independent professional group can test the
software effectively.
(iv) Error reporting / reporting of failures : It is the job of the software tester to
report the failures observed to the test manager.
The manner of this reporting can contribute to co-operation between
developers and testers, or have a negative influence on the important
communication between these two groups.
Proving other people's errors is not an easy job and requires diplomacy and
tact.
(v) Mutual comprehension : Mutual knowledge encourages co-operation
between software testers and software developers.
Developers should know the basics of testing and testers should have a basic
knowledge of software development; due to this, the overall software testing
becomes easy and simple.
1. (d) Testing is necessary due to following reasons :
i) Understand the customer requirements and test the software with respect to
those requirements.
ii) Software testing is necessary to verify and validate the software.
e.g. : V-model.
where each phase of the software development is validated through testing.
iii) Testing is necessary to verify that the software will execute successfully on
any platform, in any environment, at any time.
iv) Without software testing it is difficult to deploy the software to the customer.
v) Development people assume that whatever they have developed is as per
customer requirements and will always work. But this is not so every time;
therefore software testing is to be done on the developed software to assess
whether it really works or not.
vi) Different entities are involved in different phases of software development.
Their work may not match exactly with each other or with the
requirement statements. The gap between requirements, design, and coding
may not be traceable unless testing is performed with respect to the
requirements.
vii) Developers may have excellent coding skills, but integration issues can
be present when different units do not work together, even though they
work independently.
viii) One must bring individual units together to make the final product;
therefore integration testing is necessary to verify whether the software is
working properly as a whole.
2. (a) Validation Model or V-Model :
[Fig.: V-model, pairing the software development phases (System Analysis, System Design, System Coding, Implementation) with the corresponding validation phases (Acceptance Testing, Integration Testing, Unit Testing, Validation & System Testing)]

Validation model is used to perform validation testing with respect to the
software development phases.
(i) In system analysis the developer gathers all the requirements from the
customer with respect to the software under development. Once all the
requirements are gathered, acceptance testing will be performed to validate
the requirements, and only those requirements which are accepted during
“acceptance testing” will be part of the SRS document.
(ii) In system design all the requirements gathered from the customer are
considered, and the logical flow of the software or system is designed using
flow charts, data flow diagrams, or algorithms. In system design, low-level
as well as high-level aspects of the system are considered in the design.
Once the design is complete, integration testing is performed to validate the
software design.
In integration testing the software tester performs top-down and bottom-up
integration to check whether the system design can be implemented or
not.
(iii) In system coding the logical flow of the system is converted into a
software program; that is, software coding is done in this phase. To
validate the software code, unit testing is performed. In unit testing the
software tester tests the code line by line or module by module.
In unit testing the tester checks for:
• Interface errors
• Errors in boundary conditions
• Local data structures
• Error handling paths
(iv) In system testing the entire software is tested and executed so that it gives
error-free performance, and if there are any bugs, errors, or faults, they
can be eliminated during software testing.
To validate the software, validation and system testing are done.
In validation testing the software tester executes the software and validates
all the requirements which are present in the requirements specification
document.
2. (b) TEST TYPES: THE TARGETS OF TESTING
A test type is focused on a particular test objective, which could be the testing of
a function to be performed by the component or system; a non-functional quality
characteristic, such as reliability or usability; the structure or architecture of the
component or system; or related to changes, i.e. confirming that defects have
been fixed (confirmation testing, or re-testing) and looking for unintended
changes (regression testing). Depending on its objectives, testing will be
organized differently. For example, component testing aimed at performance
would be quite different to component testing aimed at achieving decision
coverage.
(a) Testing of Function (functional testing)
The function of a system (or component) is ‘what it does’. This is typically
described in a requirements specification, a functional specification, or in
use cases. There may be some functions that are ‘assumed’ to be provided
that are not documented that are also part of the requirement for a system,
though it is difficult to test against undocumented and implicit requirements.
Functional tests are based on these functions, described in documents or
understood by the testers and may be performed at all test levels (e.g. test
for components may be based on a component specification).
Functional testing considers the specified behavior and is often also referred
to as black-box testing. This is not entirely true, since black-box testing also
includes non-functional testing.
Testing functionality can be done from two perspectives: requirements-based
or business-process-based.
Requirements-based testing uses a specification of the functional requirements for the system as the basis for designing tests. A good way to start is
to use the table of contents of the requirements specification as an initial test
inventory or list of items to test (or not to test). We should also prioritize the
requirements based on risk criteria (if this is not already done in the
specification) and use this to prioritize the tests. This will ensure that the
most important and most critical tests are included in the testing effort.
Business-process-based testing uses knowledge of the business processes.
Business processes describe the scenarios involved in the day-to-day
business use of the system. For example, a personnel and payroll system
may have a business process along the lines of: someone joins the
company, he or she is paid on a regular basis, and he or she finally leaves
the company. Use cases originate from object-oriented development, but are
nowadays popular in many development life cycles. They also take the
business processes as a starting point, although they start from tasks to be
performed by users. Use cases are a very useful basis for test cases from a
business perspective.
(b) Testing of Software Product Characteristics (non-functional testing)
A second target for testing is the testing of the quality characteristics, or non-functional attributes of the system (or component or integration group). Here
we are interested in how well or how fast something is done. We are testing
something that we need to measure on a scale of measurement, for example
time to respond.
Non-functional testing, like functional testing, is performed at all test levels.
Non-functional testing includes, but is not limited to, performance testing,
load testing, stress testing, usability testing, maintainability testing, reliability
testing and portability testing. It is the testing of 'how well' the system works.
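The idea of testing against a scale of measurement can be sketched in a few lines. The `respond` operation below is a hypothetical stand-in for real work; the point is that the assertion targets a measured response time, not a functional result.

```python
import time

def respond(query):
    """Hypothetical operation whose response time is the quality under test."""
    time.sleep(0.01)            # stand-in for real processing
    return f"result for {query}"

def worst_response_time(func, *args, repeats=5):
    """Worst-case observed response time, in seconds, over several runs."""
    worst = 0.0
    for _ in range(repeats):
        start = time.perf_counter()
        func(*args)
        worst = max(worst, time.perf_counter() - start)
    return worst

# The non-functional test asserts against a limit on the measured scale.
worst = worst_response_time(respond, "order 42")
assert worst < 1.0, f"response took {worst:.3f}s, limit is 1.0s"
```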
The characteristics and their sub-characteristics are, respectively:
• functionality, which consists of five sub-characteristics: suitability, accuracy,
security, interoperability and compliance; this characteristic deals with functional testing;
• reliability, which is defined further into the sub-characteristics maturity
(robustness), fault-tolerance, recoverability and compliance;
• usability, which is divided into the sub-characteristics understandability,
learnability, operability, attractiveness and compliance;
• efficiency, which is divided into time behavior (performance), resource utilization and compliance;
• maintainability, which consists of five sub-characteristics: analyzability,
changeability, stability, testability and compliance;
• portability, which also consists of five sub-characteristics: adaptability,
installability, co-existence, replaceability and compliance.
(c) Testing of Software Structure/Architecture (structural testing)
The third target of testing is the structure of the system or component. If we
are talking about the structure of a system, we may call it the system
architecture. Structural testing is often referred to as ‘white-box’ or ‘glass-box’
because we are interested in what is happening ‘inside the box’.
Structural testing is most often used as a way of measuring the thoroughness
of testing through the coverage of a set of structural elements or coverage
items. It can occur at any test level, although it is true to say that it tends to
be mostly applied at the component and integration levels and generally is
less likely at higher test levels, except for business-process testing. At component
integration level it may be based on the architecture of the system, such as a
calling hierarchy. A system, system integration or acceptance testing test
basis could be a business model or menu structure.
At component level, and to a lesser extent at component integration testing,
there is good tool support to measure code coverage. Coverage
measurement tools assess the percentage of executable elements (e.g.
statements or decision outcomes) that have been exercised (i.e. covered) by
a test suite. If coverage is not 100%, then additional tests may need to be
written and run to cover those parts that have not yet been exercised. This of
course depends on the exit criteria.
The techniques used for structural testing are structure-based techniques,
also referred to as white-box techniques. Control flow models are often used
to support structural testing.
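What a coverage measurement tool records can be imitated by hand on a toy function. This is only a sketch of the principle: the function `classify` and its outcome labels are invented, and real tools instrument the code automatically rather than with explicit `covered.add` calls.

```python
# Manually instrumented decision outcomes, mimicking a coverage tool's records.
covered = set()

def classify(x):
    if x < 0:                       # decision 1
        covered.add("d1-true")
        return "negative"
    covered.add("d1-false")
    if x == 0:                      # decision 2
        covered.add("d2-true")
        return "zero"
    covered.add("d2-false")
    return "positive"

ALL_OUTCOMES = {"d1-true", "d1-false", "d2-true", "d2-false"}

# A test suite that happens to miss one decision outcome:
for value in (-5, 0):
    classify(value)

coverage = len(covered & ALL_OUTCOMES) / len(ALL_OUTCOMES) * 100
print(f"decision coverage: {coverage:.0f}%")   # 75%: "d2-false" never exercised

# Writing one additional test closes the gap, as the text describes:
classify(7)
```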
(d) Testing Related to Changes (confirmation and regression testing)
Confirmation Testing (re-testing)
When a test fails and we determine that the cause of the failure is a software
defect, the defect is reported, and we can expect a new version of the
software that has had the defect fixed. In this case we will need to execute
the test again to confirm that the defect has indeed been fixed. This is known
as confirmation testing (also known as re-testing).
When doing confirmation testing, it is important to ensure that the test is
executed in exactly the same way as it was the first time, using the same
inputs, data and environment.
Regression Testing
Like confirmation testing, regression testing involves executing test cases
that have been executed before. The difference is that, for regression testing,
the test cases probably passed the last time they were executed (compare
this with the test cases executed in confirmation testing-they failed the last
time).
The term ‘regression testing’ is something of a misnomer. It would be better if
it were called ‘anti-regression’ testing because we are executing tests with
the intent of checking that the system has not regressed (that is, it does not
now have more defects in it as a result of some change). More specifically,
the purpose of regression testing is to verify that modifications in the software
or the environment have not caused unintended adverse side effects and
that the system still meets its requirements.
Regression tests are executed whenever the software changes, either as a
result of fixed or new or changed functionality. It is also a good idea to
execute them when some aspect of the environment changes, for example
when a new version of a database management system is introduced or a
new version of a source code compiler is used.
2. (c) There are four levels of testing suggested by the V-model, namely:
(i) Component or module testing
(ii) Integration testing
(iii) System testing
(iv) Acceptance testing
(i) Component testing :
Component testing is also known as unit or module or program testing.
It searches for defects in, and verifies the functioning of, software components
(modules, programs, objects, classes, etc.) that are separately testable.
(ii) Integration testing :
Once all the modules are tested separately, they can be integrated using
integration testing, and finally the entire software is tested.
Integration testing tests interfaces between components and interactions with
different parts of the system, such as the O.S. and software and hardware interfaces.
There are three types of incremental integration testing: top-down, bottom-up,
and regression.
Regression :
Whenever integration testing is performed, regression testing has to be carried out.
Regression testing is performed whenever any new module gets added into the
system, any existing module gets deleted from the system, or any structural
change happens in the system.
To analyze the impact:
• Regression tests are executed whenever the software changes as a result of
fixed, new, or changed functionality.
• More specifically, the purpose of regression testing is to verify that
modifications in the software or the environment have not caused unintended
adverse side effects and that the system still meets its requirements.
• It is a good idea to execute them when some aspect of the environment
changes, e.g. a change in the DBMS, a change in the software architecture or
design, or a new version of the source code compiler being used.
2. (d) MAINTENANCE TESTING
Once deployed, a system is often in service for years or even decades. During
this time the system and its operational environment are often corrected, changed
or extended. Testing that is executed during this life cycle phase is called
‘maintenance testing’.
(a) Impact Analysis and Regression Testing
Usually maintenance testing will consist of two parts:
 testing the changes
 regression tests to show that the rest of the system has not been
affected by the maintenance work.
In addition to testing what has been changed, maintenance testing includes
extensive regression testing to parts of the system that have not been
changed. A major and important activity within maintenance testing is impact
analysis. During impact analysis, together with stakeholders, a decision is
made on what parts of the system may be unintentionally affected and
therefore need careful regression testing. Risk analysis will help to decide
where to focus regression testing; it is unlikely that the team will have time to
repeat all the existing tests.
If the test specifications from the original development of the system are
kept, one may be able to reuse them for regression testing and to adapt them
for changes to the system. This may be as simple as changing the expected
results for your existing tests. Sometimes additional tests may need to be
built. Extension or enhancement to the system may mean new areas have
been specified and tests would be drawn up just as for the development. It is
also possible that updates are needed to an automated test set, which is
often used to support regression testing.
(b) Triggers for Maintenance Testing
As stated maintenance testing is done on an existing operational system. It
is triggered by modifications, migration, or retirement of the system.
Modifications include planned enhancement changes (e.g. release-based),
corrective and emergency changes, and changes of environment, such as
planned operating system or database upgrades, or patches to newly
exposed or discovered vulnerabilities of the operating system. Maintenance
testing for migration (e.g. from one platform to another) should include
operational testing of the new environment, as well as the changed software.
Maintenance testing for the retirement of a system may include the testing of
data migration or archiving, if long data-retention periods are required.
Planned Modifications
The following types of planned modification may be identified:
 perfective modifications (adapting software to the user’s wishes, for instance
by supplying new functions or enhancing performance);
 adaptive modifications (adapting software to environmental changes such as
new hardware, new systems software or new legislation);
 corrective planned modifications (deferrable correction of defects).
3. (a) Types of formal review
There are three main types of formal review:
i) Technical review:
a) A technical review is a discussion meeting that focuses on achieving
consensus about the technical content of the document under review.
b) Compared to inspections, technical reviews are less formal, and there is
little or no focus on defect identification on the basis of reference
documents, intended readership, and rules.
c) During a technical review, defects are found by experts who focus on
the content of the document.
d) To perform a technical review, technical experts are needed.
e) The goals of technical review are:
• Assess the value of technical concepts and alternatives in the product
and project environment.
• Establish consistency in the use and representation of technical
concepts.
• Ensure that technical concepts are used correctly.
The key characteristics of technical review are:
• It is a documented defect-detection process that involves peers
and technical experts.
• It is performed as a peer review without management
participation.
• Ideally, it is led by a trained moderator, but possibly also by a
technical expert.
ii) Inspection:
a) Inspection is the most formal type of review. The document under inspection
is prepared and checked thoroughly by the reviewers before the meeting,
by comparing the work product with its sources and other referenced
documents.
b) In the inspection meeting the defects found are logged, and any
discussion is postponed until the discussion phase. This makes the
inspection meeting a very efficient meeting.
c) The goals of inspection are:
• It helps the author to improve the quality of the document under
inspection.
• It removes defects efficiently and as early as possible.
• It trains new employees in the organization's development process.
The key characteristics of inspections are:
• It is usually led by a trained moderator.
• It involves peers to examine the product.
• Rules and checklists are used for the inspection of the document.
iii) Walkthrough:
a) A walkthrough is characterized by the author of the document under review
guiding the participants through the document and his/her thought processes,
to achieve a common understanding and to gather feedback.
b) Within a walkthrough the author does most of the preparation. The
participants, who are selected from different departments and
backgrounds, are not required to do a detailed study of the document in
advance.
c) The content of the document is explained step by step by the author.
d) The specific goals of a walkthrough depend on its role in the creation of
the document:
• The general goals are to present the document to stakeholders both within
and outside the software discipline, in order to gather information regarding
the topic documented.
• To explain and evaluate the contents of the document.
• To establish a common understanding of the document.
Key characteristics of a walkthrough are:
• The walkthrough meeting is led by the author; often a separate scribe
is present.
• Separate pre-meeting preparation by reviewers is optional.
• With the help of checklists and standard rules the document can
be validated.
3. (b) Static analysis is an examination of the software that differs from more traditional
dynamic testing in a number of ways:
(i) Static analysis is performed on requirements, design, or code, without
executing the software.
(ii) Static analysis is performed before the formal review.
(iii) Static analysis is not related to dynamic properties of the software such as code
coverage, branch coverage, and statement coverage.
(iv) The goal of static analysis is to find defects, whether or not they may cause
failures.
As with reviews, static analysis finds defects rather than failures.
Three methods are used in static analysis:
(i) Coding standards :
Checking of coding standards is the most well-known feature. The first action to
be taken is to define or adopt a coding standard.
Usually a coding standard consists of a set of programming rules, naming
conventions, and layout specifications.
The main advantage of using a coding standard is that it saves a lot of
effort, and by adopting a well-known coding standard there will probably be
checking tools available.
There are three reasons why tool support is needed for checking coding standards:
• The number of rules in a standard is usually so large that no one can
remember them all.
• Some context-sensitive rules that demand reviews of several files are
very hard for a human being to check.
• If people spend time checking the coding standard, that will
distract them from other defects that they might otherwise find.
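Such a checking tool can be sketched in a few lines. The two rules here (a maximum line length and a ban on tab characters) are invented examples of the kind of rule a coding standard might contain, not rules from any particular standard.

```python
import re

# Two illustrative rules from a hypothetical coding standard.
MAX_LINE_LENGTH = 79
TAB = re.compile(r"\t")

def check_coding_standard(source):
    """Return (line number, rule) pairs for every violation found."""
    violations = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if len(line) > MAX_LINE_LENGTH:
            violations.append((lineno, "line too long"))
        if TAB.search(line):
            violations.append((lineno, "tab character used"))
    return violations

sample = "def f():\n\treturn 1\n"
print(check_coding_standard(sample))   # [(2, 'tab character used')]
```

Because the tool never tires and applies every rule on every line, it frees the human reviewer to look for the defects only a human can find.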
(ii) Code metrics (cyclomatic complexity) :
When we perform static code analysis, information is usually calculated
about structural attributes of the code, such as comment frequency, depth of
nesting, cyclomatic number (path number), and number of lines of code (LOC).
This information can be computed not only as the design and code are being
created but also as changes are made to the system, to see if the design or
code is becoming bigger, more complex, or more difficult to understand and
maintain.
The cyclomatic complexity metric is based on the number of decisions in a
program. It is important to testers because it provides an indication of the
amount of testing necessary to practically avoid defects.
Consider the example:

IF A = 354 THEN
    IF B > C THEN
        A = B
    ELSE
        A = C
    END IF
END IF
PRINT A

C = E − N + 2P … (formula of cyclomatic complexity)

where E is the number of edges, N the number of nodes, and P the number of
connected components of the control flow graph.
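For a single-entry, single-exit routine, the formula reduces to counting decision points plus one, which can be demonstrated with a rough sketch over Python's `ast` module. The counting rules below (if/while/for and boolean operators) are a common simplification, not a full implementation of the graph formula.

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Rough cyclomatic complexity: 1 plus one per decision point.

    Counts if/while/for statements and boolean operators; for a
    single-entry, single-exit routine this matches C = E - N + 2P.
    """
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.While, ast.For)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            decisions += len(node.values) - 1
    return decisions + 1

# The nested-IF example from the text, transcribed into Python.
SAMPLE = """
def choose(a, b, c):
    if a == 354:
        if b > c:
            a = b
        else:
            a = c
    return a
"""
print(cyclomatic_complexity(SAMPLE))   # 3: two IF decisions + 1
```

A result of 3 suggests at least three test cases are needed to exercise every independent path through the routine.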
(iii) Code structure :
• There are many different kinds of structural measures, each of which tells us
something about the effort required to write the code in the first place, to
understand the code when making a change, or to test the code using
particular tools.
• There are several aspects of code structure to be considered:
(a) Control flow structure :
It addresses the sequence in which the instructions are executed. Control
flow analysis is used to identify unreachable or dead code.
(b) Data flow structure :
It follows the trail of a data item as it is accessed and modified by the
code.
Using data flow measures, the tester comes to know how data acts when it
is transformed by the program or software.
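A toy control-flow check of the kind mentioned under (a) can be written against Python's `ast` module. The rule implemented here (statements that follow a return, raise, break, or continue in the same block are unreachable) is a deliberately simplified example of what real control flow analysis does.

```python
import ast

def find_dead_code(source: str) -> list:
    """Report line numbers of statements that follow a return, raise,
    break, or continue in the same block (simple unreachable-code check)."""
    dead = []
    for node in ast.walk(ast.parse(source)):
        body = getattr(node, "body", None)
        if not isinstance(body, list):
            continue
        terminated = False
        for stmt in body:
            if terminated:
                dead.append(stmt.lineno)
            if isinstance(stmt, (ast.Return, ast.Raise, ast.Break, ast.Continue)):
                terminated = True
    return dead

SRC = """def f(x):
    return x * 2
    print("never runs")
"""
print(find_dead_code(SRC))   # [3]
```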
(c) Data structure :
It refers to the organization of the data itself, independent of the program code.
When data is arranged in the form of a linked list, queue, stack, or any other
well-defined structure, the algorithms for creating, modifying, and deleting it are
more likely to be well-defined.
The data structure provides a lot of information about the difficulty of writing
programs to handle the data and of designing test cases to show program correctness.
3. (c) Roles and Responsibility of Review:
The participants in any type of formal review should have adequate knowledge of
the review process. The roles and responsibilities of the people who are present
during the review process are as follows:
i) Moderator:
a) The moderator is the leader of the review process. He/she determines, in
co-operation with the author, the type of review and the approach of the
review process.
b) In the planning phase of the review, the moderator performs the scheduling.
c) The moderator also defines the exit criteria, i.e. the number of defects allowed
per page in a review process.
d) The moderator assigns the review documents as well as the standard rules or
checklists to the reviewers for review.
e) The moderator performs the entry check and the follow-up on the rework.
f) The moderator also leads the possible discussions and stores the data that is
collected.
ii) Author:
a) The author is the writer of the document under review.
b) The basic goal of the author should be to learn as much as possible with
regard to improving the quality of the document.
c) The task of the author is to illuminate unclear areas and to understand
the defects found.
d) The author is present during the review meeting as well as in the rework
phase of the review process.
iii) Scribe/Recorder:
a) The main aim of the scribe or recorder is to record the defects
noted by the reviewers during the logging phase.
b) During the logging meeting the scribe has to record each defect
mentioned and any suggestion for process improvement.
c) The scribe can explain to the author the nature of the defects found
during the review process.
d) The scribe is present during the review meeting.
iv) Reviewer:
a) The reviewer is also called a checker or inspector. The task of the
reviewer is to review any material for defects, mostly prior to the review
meeting.
b) The level of domain knowledge or technical expertise needed by the
reviewer depends on the type of review.
c) During the review process the reviewer will get the document to be reviewed
as well as a checklist or standard against which to review that document.
d) The reviewer must be present in the preparation, review meeting, and
follow-up phases of the review process.
v) Manager:
a) Manager is also known as the chairperson of the review process.
b) The manager is involved in the review process as he/she decides on the
execution of reviews, allocates time in the project schedule, and determines
whether the review process objectives have been met.
c) The manager will also take care of any review training requested by the
participants.
d) The manager is also responsible for clarifying any people issues involved in
the entire review process.
3. (d) Success Factors for Reviews
Implementing (formal) reviews is not easy as there is no one way to success and
there are numerous ways to fail. The next list contains a number of critical
success factors that improve the chances of success when implementing
reviews. It aims to answer the question, 'How do you start (formal) reviews?'.
i) Find a 'champion'
ii) Pick Things that Really Count
iii) Explicitly Plan and Track Review Activities
iv) Train Participants
v) Manage People Issues
vi) Follow the Rules but keep it Simple
vii) Continuously Improve Process and Tools
viii) Report Results
ix) Just do it!
4. (a) Equivalence Partitioning
The domain of possible input data for each input data element is divided into
equivalence classes.
An equivalence class is a group of data values which the tester assumes
the test object processes in the same way.
Testing one representative of an equivalence class is seen as sufficient
because it is assumed that for any other input value of the same equivalence
class the test object will not show different behavior. Besides equivalence classes
for correct input, those for incorrect input values must be tested as well.
The following are the strategies used to determine the equivalence classes.
(i) For the input as well as for the output, identify the restrictions and conditions in
the specification, and for every restriction equivalence classes are defined.
(ii) If a continuous numerical domain is specified, then create one valid and two
invalid equivalence classes.
e.g.: input range
10 ≤ x ≤ 20
Valid: x ∈ {10, 11, …, 20}
Invalid: x < 10
x > 20
(iii) If a specific value should be entered, then one valid and two invalid
equivalence classes are designed (valid equal to the value; invalid less
than or more than the value).
e.g.: input specific value
x = 5
Valid: x = 5
Invalid: x < 5
x > 5
(iv) If a set of values is specified where each value may possibly be treated
differently, then one valid and one invalid equivalence class are created, where
the valid equivalence class contains all valid input values and the invalid class
contains all other possible values.
e.g.: X = {a, e, i, o, u}
Valid: x ∈ {a, e, i, o, u}
Invalid: x ∈ {b, c, …}
(v) If there is a condition that must be fulfilled, then create one valid and one invalid
equivalence class to test the condition fulfilled and not fulfilled.
e.g.:
Input Boolean value
Valid: True
Invalid: False
Test completion criteria for equivalence partitioning : A test completion
criterion for the test by equivalence class partitioning can be defined as the
percentage of executed equivalence classes in comparison to the total
number of specified equivalence classes:
EC coverage = (number of tested equivalence classes / total number of equivalence classes) × 100%
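As a sketch, the range example above (10 ≤ x ≤ 20) and the coverage measure can be expressed in code; the class names and representative values here are illustrative choices, not prescribed by the technique:

```python
# Equivalence classes for the input range 10 <= x <= 20 (example above):
# one valid class and two invalid classes.
classes = {
    "valid (10..20)": lambda x: 10 <= x <= 20,
    "invalid (< 10)": lambda x: x < 10,
    "invalid (> 20)": lambda x: x > 20,
}

# One representative value per equivalence class.
representatives = {"valid (10..20)": 15, "invalid (< 10)": 5, "invalid (> 20)": 25}

# Check that every representative really falls into its class.
for name, member in classes.items():
    assert member(representatives[name])

# EC coverage = tested classes / total classes * 100 %.
tested = ["valid (10..20)", "invalid (< 10)"]   # suppose only two classes were run
ec_coverage = len(tested) / len(classes) * 100
print(f"EC coverage: {ec_coverage:.0f}%")       # -> EC coverage: 67%
```

With all three representatives executed, EC coverage would reach 100%.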
Boundary value analysis
Definition : It checks the borders of the equivalence classes.
On every border, the exact boundary value and both nearest adjacent values (inside and
outside of the equivalence class) are tested. Thereby the minimal possible
increment in both directions should be used. Therefore, three test cases result for
every boundary: the boundary value itself and the adjacent values just inside and
just outside of the class.
Boundary value analysis is suitable when the input condition of an equivalence class
defines a range.
Hints for test case design by BVA:
(a) For an input domain, the boundaries and adjacent values outside the domain
must be considered, e.g.:
If the domain is [−1.0; +1.0],
test data = −1.0; +1.0; −1.001; +1.001; −0.999; +0.999.
(b) An input file has a restricted number of data records, between one and one hundred;
the test values will be one, one hundred, zero and 101.
(c) If the output domain serves as the basis, then the analysis can be done as
follows :
the output of the test object is an integer value between 500 and 1000.
The test outputs to be achieved are 500, 1000, 499 and 1001.
(d) If complex data structures are given as input or output, then, for instance, an empty
list or a zero matrix can be considered as a boundary value.
Test completion criteria for boundary values can be defined as:
BV coverage = (number of tested boundary values / total number of boundary values) × 100%
4. (b) Comparison of specification-based and structure-based testing

[Figure: two test set-ups. Specification-based testing (black-box testing, BBT): test cases feed inputs to the software; code + data are hidden and only the results, outputs and actions are observed. Structure-based testing (white-box testing, WBT): test cases feed inputs to the software with code + data visible to the tester.]

1. In specification-based testing, the software code and data are not visible to the
   tester; therefore the behavior of the software is tested by applying sets of input
   conditions called test cases.
   In structure-based testing, the software code and data are visible to the tester;
   therefore the internal structure of the software can be tested.
2. Specification-based testing is also called black-box testing and is suitable when
   the entire specification of the software is known.
   Structure-based testing is also known as glass-box or white-box testing and is
   suitable when the code of the software is to be tested.
3. Specification-based testing is used in the later stages of software development,
   i.e. after integration.
   Structure-based testing is used in the early stages of software development,
   i.e. once coding is done.
4. To perform specification-based testing, less technical knowledge is required.
   To perform structure-based testing, high technical knowledge is required
   because the tester deals with code.
5. Specification-based testing requires less time; therefore it is suitable for
   large-scale software.
   Structure-based testing requires more time; therefore it is suitable for
   small-scale software or programs.
6. Specification-based testing tries to eliminate the following categories of errors:
   a) interface errors;
   b) local data structure errors;
   c) boundary condition errors;
   d) initialization and termination errors.
   Structure-based testing tries to eliminate the following types of errors:
   a) independent path errors;
   b) loop errors;
   c) data flow errors;
   d) error-handling path errors.
7. The methods used in specification-based testing are:
   a) equivalence partitioning;
   b) boundary value analysis;
   c) decision table testing;
   d) state transition testing;
   e) use case testing.
   Structure-based testing uses:
   a) statement coverage;
   b) branch coverage;
   c) path coverage.
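To illustrate the contrast, a hypothetical `discount` function (invented for this sketch) can be tested both ways: a black-box test checks inputs only against specified outputs, while a white-box test is chosen to drive both branches of the code:

```python
# A hypothetical function under test: 10% discount for orders of 100 or more.
def discount(amount):
    if amount >= 100:
        return amount * 0.9
    return amount

# Specification-based (black-box): only input -> expected output,
# chosen from the specification; the code is not consulted.
assert discount(200) == 180.0
assert discount(50) == 50

# Structure-based (white-box): test values chosen so that both branches
# of the 'if' are executed at least once (100% branch coverage here).
assert discount(100) == 90.0   # 'then' branch (boundary of the condition)
assert discount(99) == 99      # 'else' branch
```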
4. (c) Intuitive and Experience Based Test Case Determination
Besides the methodical approach, intuitive determination of test cases should be
performed. The systematically identified test cases may be complemented by
intuitive test cases. Intuitive testing can uncover faults overlooked by systematic
testing.
ERROR GUESSING
Error guessing is a technique that should always be used as a complement to
other more formal techniques. The success of error guessing is very much
dependent on the skill of the tester, as good testers know where the defects are
most likely to lurk. Some people seem to be naturally good at testing and others
are good testers because they have a lot of experience either as a tester or
working with a particular system and so are able to pin-point its weaknesses.
This is why an error-guessing approach, used after more formal techniques have
been applied to some extent, can be very effective. In using more formal
techniques, the tester is likely to gain a better understanding of the system, what
it does and how it works. With this better understanding, he or she is likely to be
better at guessing ways in which the system may not work properly.
There are no rules for error guessing. The tester is encouraged to think of
situations in which the software may not be able to cope. Typical conditions to try
include division by zero, blank (or no) input, empty files and the wrong kind of
data (e.g. alphabetic characters where numeric are required). If anyone ever
says of a system or the environment in which it is to operate ‘That could never
happen’, it might be a good idea to test that condition, as such assumptions
about what will and will not happen in the live environment are often the cause of
failures. A structured approach to the error-guessing technique is to list possible
defects or failures and to design tests that attempt to produce them. These defect
and failure lists can be built based on the tester's own experience or that of other
people, available defect and failure data, and from common knowledge about
why software fails.
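The typical conditions listed above (blank input, wrong kind of data, and so on) can be collected as a defect list and replayed against the test object; the `parse_quantity` function here is a hypothetical example:

```python
# A hypothetical test object: parse a quantity field entered by the user.
def parse_quantity(text):
    if text is None or text.strip() == "":
        raise ValueError("quantity is required")
    if not text.strip().isdigit():
        raise ValueError("quantity must be numeric")
    return int(text)

# Error-guessing list: blank/no input, wrong kind of data, zero, huge values.
guesses = [None, "", "   ", "abc", "0", "999999999"]

for value in guesses:
    try:
        result = parse_quantity(value)
        print(f"{value!r} -> accepted: {result}")
    except ValueError as failure:
        print(f"{value!r} -> rejected: {failure}")
```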
Exploratory Testing
Exploratory testing is a hands-on approach in which testers are involved in
minimum planning and maximum test execution. The planning involves the
creation of a test charter, a short declaration of the scope of a short (1 to 2 hour)
time-boxed test effort, the objectives and possible approaches to be used.
The test design and test execution activities are performed in parallel typically
without formally documenting the test conditions, test cases or test scripts. This
does not mean that other, more formal testing techniques will not be used. For
example, the tester may decide to use boundary value analysis but will think
through and test the most important boundary values without necessarily writing
them down. Some notes will be written during the exploratory-testing session, so
that a report can be produced afterwards.
Test logging is undertaken as test execution is performed, documenting the key
aspects of what is tested, any defects found and any thoughts about possible
further testing. A key aspect of exploratory testing is learning: learning by the
tester about the software, its use, its strengths and its weaknesses. As its name
implies, exploratory testing is about exploring, finding out about the software,
what it does, what it doesn’t do, what works and what doesn’t work. The tester is
constantly making decisions about what to test next and where to spend the
(limited) time.
This is an approach that is most useful when there are no or poor specifications
and when time is severely limited. It can also serve to complement other, more
formal testing, helping to establish greater confidence in the software. In this
way, exploratory testing can be used as a check on the formal test process by
helping to ensure that the most serious defects have been found.
4. (d) State Transition Test
(i) State : a condition in which the system waits for one or more events.
(ii) Start state : the state the system is in when it is first activated.
(iii) End state : a state in which the system terminates.
(iv) Event/action : an event triggers a state transition; a transition is labelled
'event / action' when an action is carried out as the transition fires.

[Figure: state transition diagram of a turnstile machine, with states 'locked' and 'unlocked'; labels include 'insert coin' (locked → unlocked), 're-insert coin', 'dial numbers', 'communication end', 'time lapse' and 'connection failure'.]

[Figure: state transition diagram of an ATM machine, with states Idle, Card Validation, Enter PIN, Select Transaction and Processing; transitions: invalid card → back to Idle, invalid PIN → Enter PIN again, invalid transaction → Select Transaction again, valid transaction → Processing.]

[Figure: state transition diagram of a stack, with states Empty, Filled and Full; transitions: push (Empty → Filled), push [height = max − 1] (Filled → Full), pop [height = 1] (Filled → Empty), push + pop (Filled → Filled).]
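The turnstile diagram above can be coded as a transition table and exercised by a state transition test; the table below is a sketch covering only the coin/push events, and the 'stay in current state' rule for undefined pairs is an assumption of this example:

```python
# Transition table for the turnstile: (state, event) -> next state.
transitions = {
    ("locked", "insert coin"): "unlocked",
    ("unlocked", "push"): "locked",
}

def fire(state, event):
    # Undefined (state, event) pairs are treated as 'stay in current state'.
    return transitions.get((state, event), state)

# A state transition test: start state, a sequence of events, expected end state.
state = "locked"                              # start state
for event in ["insert coin", "push", "push"]:
    state = fire(state, event)
assert state == "locked"                      # expected end state
print("turnstile test passed, end state:", state)
```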
5. (a) Project risks
However, testing is an activity like the rest of the project and thus it is subject to
risks that endanger the project. To deal with the project risks that apply to testing,
we can use the same concepts we apply to identifying, prioritizing and managing
product risks.
Remembering that a risk is the possibility of a negative outcome, what project risks
affect testing? There are direct risks such as the late delivery of the test items to
the test team or availability issues with the test environment. There are also indirect
risks such as excessive delays in repairing defects found in testing or problems
with getting professional system administration support for the test environment.
For any risk, product or project, you have four typical options:
 Mitigate: Take steps in advance to reduce the likelihood (and possibly the
impact) of the risk.
 Contingency: Have a plan in place to reduce the impact should the risk
become an outcome.
 Transfer: Convince some other member of the team or project stakeholder
to reduce the likelihood or accept the impact of the risk.
 Ignore: Do nothing about the risk, which is usually a smart option only when
the level of risk is low.
Here are some typical risks along with some options for managing them.
 Logistics or product quality problems that block tests: These can be
mitigated through careful planning, good defect triage and management, and
robust test design.
 Test items that won't install in the test environment: These can be
mitigated through smoke (or acceptance) testing prior to starting test phases
or as part of a nightly build or continuous integration. Having a defined
uninstall process is a good contingency plan.
 Excessive change to the product that invalidates test results or
requires updates to test cases, expected results and environments:
These can be mitigated through good change-control processes, robust test
design and light weight test documentation. When severe incidents occur,
transference of the risk by escalation to management is often in order.
 Insufficient or unrealistic test environments that yield misleading
results: One option is to transfer the risks to management by explaining the
limits on test results obtained in limited environments. Mitigation - sometimes
complete alleviation - can be achieved by outsourcing tests such as
performance tests that are particularly sensitive to proper test environments.
Here are some additional risks to consider and perhaps to manage:
 Organizational issues such as shortages of people, skills or training,
problems with communicating and responding to test results, bad
expectations of what testing can achieve and complexity of the project team
or organization.
 Supplier issues such as problems with underlying platforms or hardware,
failure to consider testing issues in the contract or failure to properly respond
to the issues when they arise.
 Technical problems related to ambiguous, conflicting or unprioritized
requirements, an excessively large number of requirements given other
project constraints, high system complexity and quality problems with the
design, the code or the tests.
5. (b) Estimation techniques
There are two techniques for estimation covered by the ISTQB Foundation
Syllabus. One involves consulting the people who will do the work and other
people with expertise on the tasks to be done. The other involves analyzing
metrics from past projects and from industry data.
Asking the individual contributors and experts involves working with experienced
staff members to develop a work-breakdown structure for the project. With that
done, you work together to understand, for each task, the effort, duration,
dependencies, and resource requirements. The idea is to draw on the collective
wisdom of the team to create your test estimate. Using a tool such as Microsoft
Project or a whiteboard and sticky-notes, you and the team can then predict the
testing end-date and major milestones. This technique is often called 'bottom up'
estimation because you start at the lowest level of the hierarchical breakdown in
the work-breakdown structure - the task - and let the duration, effort,
dependencies and resources for each task add up across all the tasks.
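The 'bottom up' idea can be sketched in a few lines: task-level estimates from the work-breakdown structure simply add up (the tasks and figures below are invented for illustration, not from any real project):

```python
# Work-breakdown structure: task -> estimated effort in person-days.
# The tasks and numbers are illustrative only.
wbs = {
    "review test basis":  2.0,
    "design test cases":  5.0,
    "prepare test data":  3.0,
    "execute test cases": 8.0,
    "report and retest":  4.0,
}

# Bottom-up estimation: effort for each lowest-level task adds up
# across all the tasks to give the test estimate.
total_effort = sum(wbs.values())
print(f"Total test effort: {total_effort} person-days")  # -> 22.0 person-days
```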
5. (c) Test approaches or strategies
The choice of test approaches or strategies is one powerful factor in the success
of the test effort and the accuracy of the test plans and estimates. This factor is
under the control of the testers and test leaders.
 Analytical: For example, the risk-based strategy involves performing a risk
analysis using project documents and stakeholder input, then planning,
estimating, designing, and prioritizing the tests based on risk.
 Model-based: For example, you can build mathematical models for loading
and response for e-commerce servers, and test based on that model. If the
behavior of the system under test conforms to that predicted by the model,
the system is deemed to be working. Model-based test strategies have in
common the creation or selection of some formal or informal model for critical
system behaviors, usually during the requirements and design stages of the
project.
 Methodical: For example, you might have a checklist that you have put
together over the years that suggests the major areas of testing to run or you
might follow an industry-standard for software quality, such as ISO 9126, for
your outline of major test areas. You then methodically design, implement
and execute tests following this outline. Methodical test strategies have in
common the adherence to a pre-planned, systematized approach that has
been developed in-house, assembled from various concepts developed in-house and gathered from outside, or adapted significantly from outside ideas
and may have an early or late point of involvement for testing.
 Process- or standard-compliant: For example, you might adopt the IEEE
829 standard for your testing, using books such as [Craig, 2002] or [Drabick,
2004] to fill in the methodological gaps. Alternatively, you might adopt one of
the agile methodologies such as Extreme Programming. Process- or
standard-compliant strategies have in common reliance upon an externally
developed approach to testing, often with little - if any - customization and
may have an early or late point of involvement for testing.
 Dynamic: For example, you might create a lightweight set of testing guide
lines that focus on rapid adaptation or known weaknesses in software.
Dynamic strategies, such as exploratory testing, have in common
concentrating on finding as many defects as possible during test execution
and adapting to the realities of the system under test as it is when delivered,
and they typically emphasize the later stages of testing. See, for example,
the attack-based approach and the exploratory approach.
 Consultative or directed: For example, you might ask the users or
developers of the system to tell you what to test or even rely on them to do
the testing. Consultative or directed strategies have in common the reliance
on a group of non-testers to guide or perform the testing effort and typically
emphasize the later stages of testing simply due to the lack of recognition of
the value of early testing.
 Regression-averse: For example, you might try to automate all the tests of
system functionality so that, whenever anything changes, you can re-run
every test to ensure nothing has broken. Regression-averse strategies have
in common a set of procedures - usually automated - that allow them to
detect regression defects. A regression-averse strategy may involve
automating functional tests prior to release of the function, in which case it
requires early testing, but sometimes the testing is almost entirely focused on
testing functions that already have been released, which is in some sense a
form of post-release test involvement.
 Risks: Testing is about risk management, so consider the risks and the level
of risk. For a well-established application that is evolving slowly, regression is
an important risk, so regression-averse strategies make sense. For a new
application, a risk analysis may reveal different risks if you pick a risk-based
analytical strategy.
 Skills: Strategies must not only be chosen, they must also be executed. So,
you have to consider which skills your testers possess and lack. A standard-compliant
strategy is a smart choice when you lack the time and skills in your
team to create your own approach.
 Objectives: Testing must satisfy the needs of stakeholders to be successful.
If the objective is to find as many defects as possible with a minimal amount
of up-front time and effort invested - for example, at a typical independent
test lab - then a dynamic strategy makes sense.
 Regulations: Sometimes you must satisfy not only stakeholders, but also
regulators. In this case, you may need to devise a methodical test strategy
that satisfies these regulators that you have met all their requirements.
 Product: Some products such as weapons systems and contract-development
software tend to have well-specified requirements. This leads to
synergy with a requirements-based analytical strategy.
 Business: Business considerations and business continuity are often
important. If you can use a legacy system as a model for a new system, you
can use a model-based strategy.
5. (d) RISK AND TESTING
As you read this section, make sure to attend carefully to the glossary terms
product risk, project risk, risk and risk-based testing.
(a) Risks and levels of risk
Risk is a word we all use loosely, but what exactly is risk? Simply put, it's the
possibility of a negative or undesirable outcome. In the future, a risk has
some likelihood between 0% and 100%; it is a possibility, not a certainty. In
the past, however, either the risk has materialized and become an outcome
or issue or it has not; the likelihood of a risk in the past is either 0% or 100%.
The likelihood of a risk becoming an outcome is one factor to consider when
thinking about the level of risk associated with its possible negative
consequences. The more likely the outcome is, the worse the risk. However,
likelihood is not the only consideration.
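Likelihood and impact together give the level of risk; a common sketch multiplies the two to rank risks (the 1..5 scales and the risk items below are invented for illustration):

```python
# Level of risk = likelihood x impact, both on an illustrative 1..5 scale.
risks = [
    ("late delivery of test items",  4, 3),
    ("test environment unavailable", 2, 5),
    ("key tester leaves project",    1, 4),
]

# Sort by descending level of risk so the worst risks are handled first.
prioritized = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for name, likelihood, impact in prioritized:
    print(f"{name}: level {likelihood * impact}")
```

This is only a ranking aid; as the text notes, likelihood is one factor and the severity of the consequences is the other.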
(b) Product risks
You can think of a product risk as the possibility that the system or software
might fail to satisfy some reasonable customer, user, or stakeholder
expectation. (Some authors refer to 'product risks' as 'quality risks' as they
are risks to the quality of the product.) Unsatisfactory software might omit
some key function that the customers specified, the users required or the
stakeholders were promised. Unsatisfactory software might be unreliable and
frequently fail to behave normally. Unsatisfactory software might fail in ways
that cause financial or other damage to a user or the company that user
works for. Unsatisfactory software might have problems related to a
particular quality characteristic, which might not be functionality, but rather
security, reliability, usability, maintainability or performance.
6. (a) Potential Benefits of using Tools
There are many benefits that can be gained by using tools to support testing,
whatever the specific type of tool. Benefits include:
 reduction of repetitive work;
 greater consistency and repeatability;
 objective assessment;
 ease of access to information about tests or testing.
Risks of using Tools
Although there are significant benefits that can be achieved using tools to
support testing activities, there are many organizations that have not achieved
the benefits they expected.
Simply purchasing a tool is no guarantee of achieving benefits, just as buying
membership in a gym does not guarantee that you will be fitter. Each type of tool
requires investment of effort and time in order to achieve the potential benefits.
There are many risks that are present when tool support for testing is introduced
and used, whatever the specific type of tool. Risks include:
 unrealistic expectations for the tool;
 underestimating the time, cost and effort for the initial introduction of a tool;
 underestimating the time and effort needed to achieve significant and
continuing benefits from the tool;
 underestimating the effort required to maintain the test assets generated by the
tool;
 over-reliance on the tool.
6. (b) Test Design Tools
Features or characteristics of test design tools include support for :
 generating test input values from:
 requirements;
 design models (state, data or object);
 code;
 graphical user interfaces;
 test conditions;
 generating expected results, if an oracle is available to the tool.
Test Data Preparation Tools
Features or characteristics of test data preparation tools include support to:
 extract selected data records from files or databases;
 'massage' data records to make them anonymous or not able to be identified
with real people (for data protection);
 enable records to be sorted or arranged in a different order;
 generate new records populated with pseudo-random data, or data set up
according to some guidelines, e.g. an operational profile;
 construct a large number of similar records from a template, to give a large
set of records for volume tests, for example.
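The 'massage' feature can be sketched as a small anonymizing transformation; the record fields here are hypothetical, and deriving a stable pseudonym from a hash is one design choice among several:

```python
import hashlib

# 'Massage' extracted records so they cannot be identified with real people,
# while keeping the fields the tests actually need. Field names are hypothetical.
def anonymize(record):
    masked = dict(record)
    # A stable pseudonym derived from a hash, so the same person
    # always maps to the same anonymized test value.
    digest = hashlib.sha256(record["name"].encode()).hexdigest()[:8]
    masked["name"] = f"user_{digest}"
    masked["email"] = f"user_{digest}@example.invalid"
    return masked

record = {"name": "Alice Example", "email": "alice@example.com", "balance": 120}
print(anonymize(record))   # name/email replaced, balance kept for the test
```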
Test Execution Tools
The main reason for this is that a captured script is very difficult to maintain
because :
 It is closely tied to the flow and interface presented by the GUI.
 It may rely on the circumstances, state and context of the system at the time
the script was recorded. For example, a script will capture a new order
number assigned by the system when a test is recorded. When that test is
played back, the system will assign a different order number and reject sub
sequent requests that contain the previously captured order number.
 The test input information is 'hard-coded', i.e. it is embedded in the individual
script for each test.
Features or characteristics of test execution tools include support for :
 capturing (recording) test inputs while tests are executed manually;
 storing an expected result in the form of a screen or object to compare to, the
next time the test is run;
 executing tests from stored scripts and optionally data files accessed by the
script (if data-driven or keyword-driven scripting is used);
 dynamic comparison (while the test is running) of screens, elements, links,
controls, objects and values;
 ability to initiate post-execution comparison;
 logging results of tests run (pass/fail, differences between expected and
actual results);
 masking or filtering of subsets of actual and expected results, for example
excluding the screen-displayed current date and time which is not of interest
to a particular test;
 measuring timings for tests;
 synchronizing inputs with the application under test, e.g. wait until the
application is ready to accept the next input, or insert a fixed delay to
represent human interaction speed;
 sending summary results to a test management tool.
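Data-driven scripting, mentioned above, keeps the test inputs out of the script and in a data file; a minimal sketch, with an invented `login` function standing in for the application under test:

```python
import csv
import io

# A hypothetical function under test.
def login(user, password):
    return user == "admin" and password == "secret"

# Test inputs and expected results live in a data file, not in the script
# (an in-memory CSV here; a real tool would read it from disk).
data_file = io.StringIO(
    "user,password,expected\n"
    "admin,secret,pass\n"
    "admin,wrong,fail\n"
    "guest,secret,fail\n"
)

# The execution script replays every row and logs pass/fail.
for row in csv.DictReader(data_file):
    actual = "pass" if login(row["user"], row["password"]) else "fail"
    print(f"{row['user']}: expected {row['expected']}, actual {actual}")
    assert actual == row["expected"]
```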
6. (c) Requirements Management Tools
Are requirements management tools really testing tools? Some people may say
they are not, but they do provide some features that are very helpful to testing.
Because tests are based on requirements, the better the quality of the
requirements, the easier it will be to write tests from them. It is also important to
be able to trace tests to requirements and requirements to tests.
Features or characteristics of requirements management tools include support
for :
 storing requirement statements;
 storing information about requirement attributes;
 checking consistency of requirements;
 identifying undefined, missing or 'to be defined later' requirements;
 prioritizing requirements for testing purposes;
 traceability of requirements to tests and tests to requirements, functions or
features;
 traceability through levels of requirements;
 interfacing to test management tools;
 coverage of requirements by a set of tests (sometimes).
6. (d) Incident Management Tools
This type of tool is also known as a defect-tracking tool, a defect-management
tool, a bug-tracking tool or a bug-management tool. However, 'incident
management tool' is probably a better name for it because not all of the things
tracked are actually defects or bugs; incidents may also be perceived problems,
anomalies (that aren't necessarily defects) or enhancement requests. Also what
is normally recorded is information about the failure (not the defect) that was
generated during testing - information about the defect that caused that failure
would come to light when someone (e.g. a developer) begins to investigate the
failure.
Incident reports go through a number of stages from initial identification and
recording of the details, through analysis, classification, assignment for fixing,
fixed, re-tested and closed, as described in Chapter 5. Incident management
tools make it much easier to keep track of the incidents over time.
Features or characteristics of incident management tools include support for :
 storing information about the attributes of incidents (e.g. severity);
 storing attachments (e.g. a screen shot);
 prioritizing incidents;
 assigning actions to people (fix, confirmation test, etc.);
 status (e.g. open, rejected, duplicate, deferred, ready for confirmation test,
closed);
 reporting of statistics/metrics about incidents (e.g. average time open,
number of incidents with each status, total number raised, open or closed).