Testing material


Introduction to Testing

Overview of Testing

Software testing is often used in association with the terms Verification and Validation. Verification is the checking or testing of items, including software, for conformance and consistency with an associated specification. Software testing is one kind of verification, which also uses techniques such as reviews, analysis, inspections and walkthroughs. Validation is the process of checking if what has been specified is what the user actually wanted.
• Validation: Are we building the right product?
• Verification: Are we building the product right?

Debugging

Debugging is the process of analyzing and locating bugs when software does not behave as expected. Although the identification of some bugs will be obvious from playing with the software, a methodical approach to software testing is a much more thorough means of identifying them. Debugging is therefore an activity that supports testing, but cannot replace testing. However, no amount of testing can be guaranteed to discover all bugs.

What is testing?
Testing is the process of executing a program with the intent of finding an error. A good test case is one that has a high probability of finding an as-yet undiscovered error. A successful test is one that uncovers an as-yet undiscovered error.

Why testing?
The development of software systems involves a series of production activities where opportunities for injection of human fallibilities are enormous. Errors may begin to occur at the very inception of the process where the requirements may be erroneously or imperfectly specified. Because of human inability to perform and communicate with perfection, software development is accompanied by a quality assurance activity.


Economics of Testing
“Too little testing is a crime - too much testing is a sin”. The risk of under testing is directly translated into system defects present in the production environment. The risk of over testing is the unnecessary use of valuable resources in testing systems that have no defects, or so few defects that the cost of testing far exceeds the value of detecting them.
Most of the problems associated with testing occur from one of the following causes:
• Failure to define testing objectives
• Testing at the wrong phase in the cycle
• Use of ineffective test techniques


Extent of Testing

The cost-effectiveness of testing is illustrated in the above diagram. As the cost of testing increases, the number of undetected defects decreases. The left side of the diagram represents the undertest situation and the right side the overtest situation. On the undertest side, the cost of testing is less than the resultant loss from undetected defects. At some point, the two lines cross and an overtest condition begins. In this situation, the cost of testing to uncover defects exceeds the losses from those defects. A cost-effective perspective means testing until the optimum point is reached, which is the point beyond which the cost of testing exceeds the value received from the defects uncovered.

Tester's Roles
Testing is the core competence of any testing organization.
So, what does a tester do?

• Understand the Application Under Test
• Prepare test strategy
• Assist with preparation of test plan.
• Design high-level conditions
• Develop test scripts
• Understand the data involved
• Execute all assigned test cases
• Log defects in the defect tracking system
• Retest fixed defects
• Assist the test leader with his/her duties
• Provide feedback in defect triages
• Automate test scripts
• Understanding of SQL

Types of testing

Testing can be basically classified as:

White Box Testing
• Aims to establish that the code works as designed
• Examines the internal structure and implementation of the program
• Target specific paths through the program
• Needs accurate knowledge of the design, implementation and code

Black box testing
• Aims to establish that the code meets the requirements
• Tends to be applied later in the lifecycle
• Mainly aimed at finding deviations in behavior from the specification or requirement.
• Causes are inputs, effects are observable outputs


Alpha Testing
A customer conducts Alpha testing at the developer's site. The software is used in a natural setting, with the developer recording errors and usage problems. Alpha tests are conducted in a controlled environment by the developer.

Beta Testing
Beta testing is conducted at one or more customer sites by the end user(s) of the software. The developer will not be present at the customer's place. So, the Beta test is a 'live' application of the software in an environment that cannot be controlled by the developer. The customer records all the problems (real or apparent) that are encountered during beta testing and reports them to the developer at regular intervals. As a result of the problems reported during the beta test, the software developer makes modifications and then prepares for the release of the software product to the entire customer base.

Integrated Systems Testing

Integrated System Testing (IST) is a systematic technique for validating the construction of the overall Software structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit tested modules and test the overall Software structure that has been dictated by design. IST can be done either as Top down integration or Bottom up Integration.

Top Down: The approach is to check integrity starting from the apex of the system and moving down to check the integrity of the various modules.

Bottom up: The approach is to check integrity starting from a module and checking integrity up to the apex.


User Acceptance Testing
User Acceptance Testing (UAT) is performed by Users or on behalf of the users to ensure that the Software functions in accordance with the Business Requirement Document. UAT focuses on the following aspects:

• All functional requirements are satisfied
• All performance requirements are achieved
• Other requirements like transportability, compatibility, error recovery etc. are satisfied
• Acceptance criteria specified by the user are met.

Difference between IST and UAT

Particulars           IST                         UAT
Baseline Document     Functional Specification    Business Requirement
Data                  Simulated                   Live Data
Environment           Controlled                  Simulated Live
Perspective           Functionality               User style
Location              Off Site                    On Site
Tester Composition    Test Company                Test Company & Real Users
Purpose               Validation & Verification   User Needs

Performance Testing

Performance testing is designed to test the run-time performance of software within the context of an integrated system. It is not until all system elements are fully integrated and certified as free of defects that the true performance of a system can be ascertained.
Performance tests are often coupled with stress testing and often require both hardware and software instrumentation. That is, it is necessary to measure resource utilization in an exacting fashion. External instrumentation can monitor execution intervals and log events. By instrumenting the system, the tester can uncover situations that lead to degradation and possible system failure.


Test Preparation Process

Baseline Documents
Construction of an application and its testing are done using certain documents. These documents are written in sequence, each derived from the previous one.

Business Requirement
This document describes the user's needs for the application. It is produced over a period of time, going through various levels of requirements. It should also portray functionalities that are technically feasible within the stipulated time frames for delivery of the application.
As this contains user-perspective requirements, User Acceptance Testing is based on this document.

How to read a Business Requirement?
In case of the Integrated Test Process, this document is used to understand the user requirements and find the gaps between the User Requirement and Functional Specification.
User Acceptance Test team should break the business requirement document into modules depending on how the user will use the application. While reading the document, test team should put themselves as end users of the application. This document would serve as a base for UAT test preparation.

Functional Specification

This document describes the functional needs, design of the flow and user maintained parameters. These are primarily derived from Business Requirement document, which specifies the client's business needs.
The proposed application should adhere to the specifications specified in this document. This is used henceforth to develop further documents for software construction and validation and verification of the software.
In order to achieve synchronization between the software construction and testing processes, the Functional Specification (FS) serves as the base document.

How to read a Functional Specification?
The testing process begins by first understanding the functional specifications. The FS is normally divided into modules. The tester should understand the entire functionality that is proposed in the document by reading it thoroughly.
It is natural for a tester at this point to get confused on the total flow and functionality. In order to overcome these, it is advisable for the tester to read the document multiple times, seeking clarifications then and there until clarity is achieved.
Testers are then given a module or multiple modules for validation and verification. These modules then become the tester's responsibility.
The Tester should then begin to acquire an in-depth knowledge of their respective modules. In the process, these modules should be split into segments like field level validations, module rules, business rules etc. In order to do this precisely, the tester should interpret each module's importance and its role within the application.
A high level understanding of the data requirements for respective modules is also expected from the tester at this point.
Interaction with test lead at this juncture is crucial to draw a testing approach, like an end-to-end test coverage or individual test. (Explained later in the document)


Tester's Reading Perspective
Functional specifications are sometimes written assuming some level of knowledge on the part of the testers and constructors. We can categorize the explanations as:

Explicit Rules: Functionality expressed as conditions clearly in writing, in the document.

Example
The date in a particular field should be the system date.

Implicit Rules: Functionality that is implied based on what is expressed as a specification/condition or requirement of a user.

Example
The FS would mention the following for a deposit creation:
Start Date Field: Should be = or > than the system date
Maturity Date Field: Should be = or > than the system date
Under these conditions, the implied specification derived is that the Start Date should not be equal to the Maturity Date.
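
As an illustration, the explicit rules and the derived implicit rule could be checked as below. This is a minimal sketch in Python; the function and field names are hypothetical, not taken from any actual FS.

    from datetime import date

    def validate_deposit(start_date, maturity_date, system_date=None):
        """Check the two explicit date rules and the implied rule.
        Hypothetical field names, for illustration only."""
        system_date = system_date or date.today()
        errors = []
        # Explicit rule: Start Date should be = or > than the system date
        if start_date < system_date:
            errors.append("Start Date is before the system date")
        # Explicit rule: Maturity Date should be = or > than the system date
        if maturity_date < system_date:
            errors.append("Maturity Date is before the system date")
        # Implicit rule derived above: the two dates must not coincide
        if start_date == maturity_date:
            errors.append("Start Date equals Maturity Date")
        return errors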

The tester must also bear in mind, the test type i.e. Integrated System Testing (IST) or User Acceptance Testing (UAT). Based on this, he should orient his testing approach.

Design Specification
This document is prepared based on the functional specification. It contains the system architecture, table structures and program specifications. This is ideally prepared and used by the construction team. The Test Team should also have a detailed understanding of the design specification in order to understand the system architecture.

System Specification
This document is a combination of the functional specification and design specification. This is used in the case of small applications or an enhancement to an application. Under such situations it may not be advisable to make two documents.

Prototype
This is a look-and-feel representation of the proposed application. It basically shows the placement of the fields, the modules and the generic flow of the application. The main objective of the prototype is to demonstrate the understanding of the application to the users and obtain their buy-in before actual design and construction begins.
The development team also uses the prototype as a guide to build the application. This is usually done using HTML or MS PowerPoint with user interaction facility.

Scenarios in Prototype
The flow and positioning of the fields and modules are projected using several possible business scenarios derived from the application functionality.
Testers should not expect all possible scenarios to be covered in the prototype.

Flow of Prototype
The flow and positioning are derived from the initial documentation of the project. A project is normally dynamic during its initial stages, and hence the tester should bear in mind the changes to the specification, if any, while using the prototype to develop test conditions.
It is a value addition to the project when the tester can identify mismatches between the specifications and the prototype, as the application can be rectified in the initial stages itself.

Test Strategy
Actual writing of a strategy involves aspects that define the working arrangement between the testing organization and the client. Testers must basically understand some of the issues that are discussed in the strategy document, which are outlined below.

Please refer to Annexure One for complete details of a Test Strategy

The testing process may take the form of an End-to-End approach or individual segment testing using various values.

End-to-End: The test path uses the entire flow provided in the application for completion of a specified task. Within this process, various test conditions and values are covered and the results analyzed. There may be a possibility of reporting several defects relating to the segments while covering the test path. The advantage of using this approach is to minimize combinations and permutations of conditions/values and to ensure coverage and integration.

Individual Segment Testing: Several conditions and values are identified for testing at the unit level. These are tested as separate cases.

Automation Strategy
Automation of the testing process is done to reduce the effort during regression testing. In some cases automating the entire testing process may not be possible due to technical and time constraints. The possible automation strategies that could be adopted, depending on the type of the project, are:

Selective: Critical and complex cases are identified. These test cases are generally automated to simplify the testing process and save time.

Complete: As the term suggests, all test cases technically possible are automated.

Performance Strategy
The client specifies the standards for performance testing. These generally contain:
• Response time
• Number of Virtual Users
Using the above information, a Usage Pattern of the application is derived and documented in the strategy. Issues discussed in the performance strategy document are:

Resources: Personnel trained in the performance testing tool are identified. Date-wise utilization of these resources is laid down.

Infrastructure: Generation of virtual users requires a huge amount of RAM. The performance team should be given a machine that is suitable for the performance tool.

Report: The types of reports that will be generated after the tests are discussed. Reports are ideally in the form of graphs. Reports generated are:
• Detailed Transaction Report (By Virtual User)
• Throughput Graph
• Hits per Second Graph
• Transactions per Second Graph
• Transaction Response Time Graph
• Transaction Performance Summary Graph
• Transaction Distribution Graph

Risk Analysis

Risks associated with the project are analyzed and mitigations are documented here. The types of risk covered are:

Schedule Risk: Factors that may affect the schedule of testing are discussed.

Technology Risk: Risks relating to the hardware and software of the application are discussed here.

Resource Risk: Test team availability on slippage of the project schedule is discussed.

Support Risk: Clarifications required on the specification and availability of personnel for the same is discussed.

Effort Estimation
The function points in the Functional Specifications will be used as the basis for the purpose of estimating the effort needed for the project. The average of the different estimates from the Peers in the test team will be taken as the basis for calculation of the effort required.
There could be some variation in the planned versus actual effort. An effort estimation review will be done by a Senior Consultant to identify gaps, if any.
In case of the UAT, function points are taken from the Business Requirement document.

Infrastructure
The hardware and software requirements for testing the application are documented. Apart from this, any other requirements should also be documented. Infrastructure that has to be provided by the client is also specified.

High Level Test Conditions

It represents the possible values that can be attributed to a particular specification. The importance of determining the conditions lies in:
• Deciding on the architecture of testing approach
• Evolving design of the test scripts
• Ensuring coverage

Understanding the maximum conditions for a specification
At this point the tester will have a fair understanding of the application and his module. The functionality can be broken into
• Field level rules
• Module level rules
• Business rules
• Integration rules
• Processing logic

It may not be possible to segment the specifications into the above categories in all applications. It is left to the test team to decide on the applicable segmentation.

For the segments identified by the test team, the possible condition types that can be built are:

Positive condition: Polarity of the value given for test is to comply with the condition existence.

Negative condition: Polarity of the value given for test is not to comply with the condition existence.

Boundary condition: Polarity of the value given for test is to assess the extreme values of the condition.

User Perspective condition: Polarity of the value given for test is to analyze the practical usage of the condition.

Example:

Condition                           Positive   Negative   Boundary       User Perspective
Interest Percentage for a deposit   10         -10        99, 100, 101   9.4325 (with four decimals)
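
These condition types map naturally onto a table of test values. Below is a minimal sketch using pytest; the validator and its assumed 0-100 valid range are invented for illustration, not taken from any actual specification.

    import pytest

    def interest_is_valid(pct):
        # Assumed rule for illustration: interest lies between 0 and 100 inclusive
        return 0 <= pct <= 100

    @pytest.mark.parametrize("value, expected", [
        (10,     True),   # positive condition
        (-10,    False),  # negative condition
        (99,     True),   # boundary condition
        (100,    True),   # boundary condition
        (101,    False),  # boundary condition
        (9.4325, True),   # user perspective: four decimal places
    ])
    def test_interest_percentage(value, expected):
        assert interest_is_valid(value) == expected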


Queries on Functional Specification
Preparation of test conditions would lead to certain queries arising because of
• Gap between the understanding of the tester and specification
• Implied specification
• Prototype being contradictory
• Implementation issues
• Design restrictions
• Contradictions within the functional specification document
• Contradictions between other application documents
• Contradictions between functional specification and real time applicability

These queries can be clarified using the following resources

Domain Consultant: Normally the author of the FS, and an expert in the field of the application.

Test Manager: Person responsible for the management and co-ordination of the project.

Test Lead: Person responsible for the testing processes and team.

Peer Group: Other members of the test team.


Intelligent Testing
Most testers tend to follow a combination-and-permutation method for arriving at the test conditions. The above section explains a way in which coverage of the test values without exhaustive combinations and permutations would suffice for complete testing.
This option may not be applicable for all applications.

Another important part of intelligent testing would be to reduce the number of test values for a segment. Generally, more than one negative test value can be incorporated for one segment in one test path.

Traceability

BR and FS
The requirements specified by the users in the business requirement document may not be exactly translated into a functional specification. Therefore, a trace on specifications between the functional specification and the business requirements is done on a one-to-one basis. This helps in finding the gaps between the documents. These gaps are then closed by the author of the FS, or deferred after discussions.
Testers should understand these gaps and use them as an addendum to the FS, after getting them signed off by the author of the FS. The final form of the FS may vary from the original, as deferring or taking in a gap may have a ripple effect on the application. Sometimes, these ripple effects may not be reflected in the FS. Addendums may sometimes affect the entire system and the test case development. There the traceability discussed in section 2.5.2 gains importance.

FS and Test Conditions
Test conditions built by the tester are traced to the FS to ensure full coverage of the baseline document. If gaps are found, the tester must then build conditions for them. In this process, testers must keep in mind the rules specified in test condition writing (3.4.4).

Gap Analysis
This is the terminology used for finding the difference between 'what it should be' and 'what it is'. As explained, it is done on the Business Requirement to FS and FS to test conditions. Mathematically, it becomes evident that the business requirements, which are the user's needs, are tested, as the Business Requirement and test conditions are matched.

Simplifying the above,
A = Business Requirement
B = Functional Specification
C = Test Conditions

A = B, B = C, Therefore A = C

Another way of looking at this process is to eliminate as many mismatches as possible at every stage of the process, thereby giving the customer an application which will satisfy their needs.
In the case of UAT, there is a direct translation of specifications from the Business Requirement to Test Conditions, leaving a smaller loss of understandability.
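
The A = B = C argument can be pictured as a set comparison over specification identifiers. A minimal sketch, with invented IDs:

    # Hypothetical specification identifiers traced at each stage
    business_requirements = {"BR-01", "BR-02", "BR-03"}
    functional_specs      = {"BR-01", "BR-02"}            # BR-03 is a gap
    test_conditions       = {"BR-01", "BR-02"}

    # Gap analysis: 'what it should be' minus 'what it is'
    br_to_fs_gaps = business_requirements - functional_specs
    fs_to_tc_gaps = functional_specs - test_conditions

    print("BR -> FS gaps:", br_to_fs_gaps)   # {'BR-03'}
    print("FS -> TC gaps:", fs_to_tc_gaps)   # set()

    # When both gap sets are empty, A = B and B = C, therefore A = C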

Tools Used
The entire process of traceability is time consuming. In order to simplify it, Rational Software has developed a tool that maintains the specifications of the documents, which are then mapped correspondingly. The specifications have to be loaded into the system by the user. Even though this is a time-consuming process, it helps in finding the 'ripple' effect of altering a specification. The impact on test conditions can immediately be identified using the trace matrix.

Test bed
High Level Planning
In order to exercise the conditions and values that are to be tested, the application should be populated with data. There are two ways of populating the data into the tables of the application.

Intelligent: Data is tailor-made for every condition and value, having reference to its condition. These will aid in triggering certain action by the application. By constructing such intelligent data, few data records will suffice for the testing process.

Example:
Business rule: if the Interest to be Paid is more than 8% and the Tenor of the deposit exceeds one month, then the system should give a warning.

To populate an Interest to be Paid field of a deposit, we can give 9.5478 and make the Tenor as two months for a particular deposit.

This will trigger the warning in the application.
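
A minimal sketch of designing one intelligent record for this rule; the field names and the warning check are assumptions for illustration:

    # Business rule from the example: warn when interest > 8% and tenor > 1 month
    def should_warn(interest_pct, tenor_months):
        return interest_pct > 8 and tenor_months > 1

    # A single intelligent record is enough to trigger the warning path
    deposit = {"interest_pct": 9.5478, "tenor_months": 2}
    assert should_warn(deposit["interest_pct"], deposit["tenor_months"])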

Unintelligent: Data is populated in mass, corresponding to the table structures. Its values are chosen at random, and not with reference to the conditions derived. This type of population can be used for testing the performance of the application and its behavior to random data. It will be difficult for the tester to identify his requirements from the mass data.

Example:
Using the above example, to find a suitable record with Interest exceeding 8% and the Tenor being more than one month is difficult.

Having now understood the difference between intelligent and unintelligent data, and also at this point having a good idea of the application, the tester should be able to design intelligent data for his test conditions.
An application may have its own hierarchy of data structures, which are interconnected.

Example
A client may have different accounts in different locations. Each of these locations
may have multiple account types.


Based on the above structure and the conditions, the tester must carefully match the individual or End-to-End test scenarios to this data hierarchy.

In cases of End-to-End testing, the same client can be used for multiple test paths, varying the location and/or accounts. The tester must also note the dependencies in the data hierarchy when designing the data.
Each condition in the test path will have specific data associated with it. So, when the test path is executed, the tester can be sure of triggering the conditions he has designed the test case for.
Below is a real-time illustration of how a high-level data design is made.

For a particular test script xx/001, the types of accounts required and the client types are mentioned clearly. Also, the values that will be entered during order capture are decided prior to test execution, leaving no surprises for the tester.
Validations and business rules that take effect after certain events are also calculated in advance.

Feeds analysis
Most applications are fed with inputs at periodic intervals, like end of day or every hour etc. Some applications may be stand-alone, i.e. all processes happen within their database and no external inputs of processed data are required.
In the case of applications having feeds received from other machines, the feeds are sent in a predesigned format. These feeds, at the application end, will be processed by local programs and populated into the respective tables.
It is therefore essential for testers to understand the data mapping between the feeds and the database tables of the application. Usually, a document is published in this regard.
The high-level data designed previously should be translated into the feed formats in order to populate the application's database.
Feeds format
A feed format is translated or converted into a tabular format in Microsoft Word, with all the fields as the table headings. Testers then begin to convert the high-level data into an application-readable format through this document.
These are imported into Excel and finally into an MS Access database. They are then converted into actual feed files, depending on the platform on which the application works.
In the case of a stand-alone application, a similar process should be adopted, converting high-level data into table-structure data for populating the tables.
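
The conversion step can be pictured as below: a minimal sketch that writes high-level records into a delimited feed file. The layout, delimiter and field order are invented; a real feed would follow the published data mapping document.

    import csv

    # Hypothetical high-level data rows for cash accounts
    records = [
        {"account": "1/100001/102", "portfolio": "P001", "currency": "SGD", "balance": "15000.00"},
        {"account": "1/100002/102", "portfolio": "P002", "currency": "USD", "balance": "250.00"},
    ]

    # Write a pipe-delimited feed file in the order the upload program expects
    with open("cash_accounts.feed", "w", newline="") as f:
        writer = csv.writer(f, delimiter="|")
        for r in records:
            writer.writerow([r["account"], r["portfolio"], r["currency"], r["balance"]])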
Below is a real time sample of feed format for cash accounts.


Here, account 1/100001/102, which was described in the real-time example of 2.6.1, is translated into feed format. The account number, the portfolio in which the account is held, the currency of the account, the balances etc. are explained clearly. There are also some application-specific codes and statuses populated.

Final Set-up
The feed files are then uploaded into the application using a series of programs. In the case of unintelligent data, a tool would be used to generate mass data specific to the application, by specifying the application's requirements to the tool. These will then be uploaded into the application.
Once these are uploaded, the data might have to be interconnected to the application's business logic. This may be necessary for both types of applications, stand-alone and feed-fed.

Example
Linking accounts to Location Singapore and then linking them to a client using some code numbers, which may be unique to a client.
In the case of UAT, the test team does not simulate test data. Live production data frozen in a separate UAT environment is used for executing test cases.

Test Case

Test Case Formation
At this stage, the Tester has clarity on how the application is to be tested. It now becomes necessary to aid the actual test action with test cases. Test cases are written based on the test conditions. A test case is the phrased form of a test condition, which becomes readable and understandable by all.

Explicit writing
There are three headings under which a test case is written. Namely,

Description: Here the details of the test on a specification or condition are written

Data and Pre-requirements: Here either the data for the test or specification is mentioned. Pre-requirements for the test to be executed should also be clearly mentioned.

Expected Results: The expected result on execution of the instruction in the description is mentioned. In general, it should reflect, in detail, the result of the test execution.

While writing a case, to make the test case explicit, the tester should include the following:
• Reference to the rules and specifications under test, in words, with minimal technical jargon
• Check on data shown by the application should refer to the table names if possible
• Location of the fields or if a new window displayed must be specified clearly
• Names of the fields and screens should also be explicit.

Expected Results
The outcome of executing an instruction may have a single or multiple impact on the application. The resultant behavior of the application after test execution is the expected result.

Single Expected Result: Has a single impact from the instruction executed

Example
Description: Click on the hyperlink 'New Deposit' at the top left hand corner of the Main
Menu Screen
Expected Result: New Time deposit Screen should be displayed

Multiple Expected Result: Has multiple impacts from executing the instruction

Example
Description: Click on the hyperlink 'New Deposit' at the top left hand corner of the Main
Menu Screen
Expected Result: New Time deposit Screen should be displayed.
Customer contact date should be prefilled with the system date.

Language used in the expected results should not have ambiguity. The results expressed, should be clear and have only one interpretation possible. It is advisable to use the term 'Should' in the expected results.

Pre-Requirements
Test cases cannot generally be executed in the normal state of the application. Below is a list of possible pre-requirements that could be attached to a test case:

1. Enable or disable external interfaces
Example
The server of Reuters, a foreign exchange rate information service organization, is to be connected to the application.

2. The time at which the test case is to be executed
Example
Test to be executed after 2:30 p.m. in order to trigger a warning

3. Dates that are to be maintained (pre-date or post-date) in the database before testing, as it is sometimes not possible to predict the dates of testing and populate certain date fields when they are to trigger certain actions in the application
Example
The maturity date of a deposit should be the date of the test. So, it is difficult to give the value of the maturity date while designing data or preparing test cases.

4. Deletion of certain records to trigger an action by the application
Example
A document availability indicator field is to be made null, so as to trigger a warning from the application

5. Change values if required to trigger an action by the application
Example
Change the value of the interest for a deposit so as to trigger a warning by the application

Data definition
Data for executing the test cases should be clearly defined in the test cases. They should indicate the values that will be entered into the fields and also indicate the default values of the field.
Example:
Description: Enter client's name
Data: John Smith
OR
Description: Check the default value of the interest for the deposit
Data: $100

In cases where calculations are involved, the test case should indicate the calculated value in the expected results.
Example
Description: Check the default value of the interest for the deposit
Data: $100
This value ($100) should be calculated using the formula specified well in advance during data design.
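
For instance, the expected value can be pre-computed during data design so that the test case states it exactly. A minimal sketch; the simple-interest formula and figures are assumptions for illustration:

    # Assumed formula for illustration: simple interest = principal * rate * tenor_years
    principal, annual_rate, tenor_years = 10000, 0.04, 0.25

    expected_interest = round(principal * annual_rate * tenor_years, 2)
    print(expected_interest)  # 100.0 -> the "$100" stated in the test case above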

In brief, the entire process should be within the control of the tester, and no action is outside the tester's anticipation.

Test Script
This will sequence the flow of an End-to-End test path or the sequence of executing the individual test conditions.
A test case specifies the test to be performed on each segment. Though the sequences of a path are analyzed, navigation to the test conditions is not available in the test cases.
Test scripts should ideally start from the login screen of the application. Doing this helps in two ways
• Start conditions are always the same, and uniformity can be achieved
• Automation of test scripts requires start and end conditions i.e. the automation tool will look for the screen to be the same, as specified in its code. Then the tool will automatically run the series of cases without intervention by the user. So, the test scripts must start and end in the same originating screen.
The test scripts must explain the navigation paths very clearly and explicitly. The objective of this is to allow flexibility regarding who executes the cases.
Test script sequences must also take into account the impact of previous cases, i.e. if a certain record was deleted earlier, the test should not subsequently search for details of that record.
In short, the test cases in series form the test script in the case of an End-to-End test approach. For individual test conditions, the navigation and the test instruction together form a test case, and this constitutes a test script.
In practice, for the End-to-End test approach, test scripts are written straightaway incorporating the test cases. It is only for explanation that these were categorized into two steps.

Interaction with development team
Interaction between the testing team and development team should begin while writing the test scripts. Any interaction prior to test case writing would ideally bias both teams. Screen shots of the various screens should be obtained from the development team, and the teams should interact on the design of the application.
The tester should not make any changes to the test script at this point based on what the development team has presented. Contradictions between the test scripts and the actual application are left to the Project Manager's decision only. Recommendations from the test team on the contradictions would be a value addition.

Review of Test Scripts
Test cases are given to the project leader and managers for review. The test scripts are then sent to the client for review. Based on the review, changes may have to be made to the entire block that was built, i.e. test conditions, test data and test scripts. The client marks their comments in the 'comments' column of the test preparation script. Testers should understand that if a change is made in the test script, then it requires changes in the test conditions and data.

Activity Report
The test lead should report to his/her test manager on the day-to-day activity of the test team. The report should basically contain the plan for the next day, activities pending, activities completed during the day, etc.

Backend Testing
The process of testing also involves the management of data, which at times is required to flow as an input into the application (referred to as feeds) from external data stores/applications, or data that is generated within the scope of the same application. The required data is to be extracted and processed to produce information as per the requirements specified by the business requirements document.

The process of absorbing data into an application could be varied, depending on factors like:
• Nature of data required
• Format of data received
• Limitations of the system supplying data, in presenting data in a required pattern.

Understanding the application
The understanding of the application, as specified in the Business requirements and Functional Specification, plays a key role in testing. The business requirements and the functional specifications draw a parallel between the requirements and the offering of the proposed system. This gives an understanding of the data requirements for the application.

Data Feeds Management
The process of Data Feeds Management involves the study of the data requirements of the business, and the gathering of data in a format most suited to handle the process of upload and presentation of data. The Project Management team does this study, and the outcome is a document called the Data Acquisition Document, which identifies the data required.

Data Requirements for Integrated System Testing (IST)
The data requirements expected of the application as given by the functional specification may not be dealt with in an exhaustive manner. Data requirements could be related to both feeds and the maintenance required to control program execution.
The Data Acquisition Document gives the input regarding the data feeds intended for the application, its format, the description of each field, also referred to as the data element, the data type and its size. This forms the basis for the data content of the application.
The data for use by the IST team can either be in the form of live feeds from the Product Processors or simulated feeds. The possibility of receiving live feeds from Product Processors for the purpose of testing cannot always be exercised, for reasons which may vary from secrecy issues to problems in supplying the required data. To overcome this, feeds are simulated to substitute for the live feeds. The feeds are generated after a study of the layouts and the various data conditions that may be required to test the functionality of the programs that are used to upload them into the application.

The Data Generation Tool, developed indigenously, is used to automate the task of generating simulated feeds. It is used to generate volumes of data, which at times could be unintelligent data depending on the complexity of the data requirement.
The tool needs to be customized, to generate data as per the requirements of the application under test.

Data Upload Process
Data feeds are uploaded and stored in the application database by executing a sequence of programs. The Design Specification Document contains information about the tables used in the application database. The development team provides the inputs relating to the programs that are to be executed, their sequence of execution, the functionality of each program, the maintenance required to ensure proper program execution, and the tables acted upon by each program and the changes caused to them.
The impact the programs have on the business process, their status and the results tracking mechanism are to be understood. The representation of data in tables, correlation of data across tables and their representation in the user interface are vital components that help ensure the verification and conformance of the data displayed for its preciseness.

Test Preparation
The test conditions are arrived at using the inputs provided regarding the functionality of the process. The process of deriving test conditions, and subsequently test cases, is the same as described in earlier sections. The results of the test cases could reflect in the application interface or may need to be checked in the tables.

Example
Consider the case of a program that is supposed to calculate the percentage contribution of each account held to the assets or liabilities of the customer's holding. The program in this case has to group the customer's total assets and liabilities and calculate the percentage contribution of each account's asset or liability in the totality of the customer's assets or liabilities.

The program is run and the percentage contributions checked as displayed in the user interface. Also, the percentages could be calculated in the database itself and checked with the corresponding percentages displayed in the user interface.
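
The cross-check can also be done outside the application. A minimal sketch, with invented account figures:

    # Hypothetical asset balances per account for one customer
    accounts = {"ACC-1": 5000.0, "ACC-2": 3000.0, "ACC-3": 2000.0}

    total = sum(accounts.values())
    contributions = {acc: round(100 * bal / total, 2) for acc, bal in accounts.items()}

    print(contributions)  # {'ACC-1': 50.0, 'ACC-2': 30.0, 'ACC-3': 20.0}
    # These figures are then compared with the percentages the program displays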

Master and Parameter Tables
An application could contain various operation modes, support varied products and have varying values as inputs depending on the product category serviced or the operation modes. This entails the use of tables that contain the list of values that could be in use. To control the application in performing differing actions in response to a particular input value, parameter tables are used to identify the possible permissible values, which also could be used to control application flow.

Maintenance
Before the process of testing is commenced, verification is to be carried out to check that the tables in the database are populated with the required data. This specification will be available as part of the program functionality, and the task of checking its correctness and ensuring that it adheres to the specifications lies with the tester.

Non Certified Testing
Testing may sometimes have to be performed on applications that do not have baseline documents. This situation may arise when:
• The application is already in use and not accepted by the users
• An enhancement to the application is proposed
Under these situations, the conventional process explained earlier may not be possible, and the following preparation methodology is used.

Documents
All possible documents should be collected from the client regarding the application. Testers must go through these documents, understand them and extract as much information as possible.
The most important of these would be the database layout document. This document explains the data structure and layout.

Screen Shots
Screen shots from the application should be captured. These screen shots should represent the application screen by screen and maintaining the flow pattern of the application.

Mapping
The tester at this point will have both the database layout and the front-end screen shots. The database should be carefully mapped by understanding the entries made in the front end (input) and the values displayed in the front end (output). The purpose of each field, screen and functionality should also be understood. The tester should arrive at clarity on the input and output of the application.
In these cases, the tester should use his discretion to decide the validations required at the field, module and application level, depending on the application's purpose.
Once these are done, the test team can start building test conditions for the application and from then on proceed with the normal test preparation style.
From the above, one can infer that the process only verifies what the application can do and does not validate its integrity. Hence it is not possible to certify and guarantee the application.



Test Execution Process

The preparation to test the application is now over. The test team should next plan the execution of the test on the application. In this section, we will see how test execution is performed.

Stages of Testing

Three passes
Tests on the application are done in stages. Test execution takes place in three passes, or sometimes four passes, depending on the state of the application. They are:

Pre IST or Pass 0: This is done to check the health of the system before the start of the test process. This stage may not be applicable to most test processes. Free-form testing is adopted in this stage.

Comprehensive or Pass 1: All the test scripts developed for testing are executed. In some cases the application may not have certain module(s) ready for test; these will be covered comprehensively in the next pass. The testing here should cover not only all test cases but also the business cycles as defined in the application.

Discrepancy or Pass 2: All test scripts that resulted in a defect during the comprehensive pass should be executed. In other words, all defects that have been fixed should be retested. Function points that may be affected by the defect should also be taken up for testing. Automated test scripts captured during pass one are used here. This type of testing is called Regression testing. Test scripts for defects that are not yet fixed will be executed only after the defects are fixed.

Sanity or Pass 3: This is the final round in the test process. This is done either at the client's site or at the testing organization, depending on the strategy adopted. This is done in order to check whether the system is sane enough for the next stage, i.e. UAT or production as the case may be, in an isolated environment. Ideally, the defects fixed in the previous pass are checked, and free-form testing is done to ensure integrity.

These categories apply for both UAT and IST.

Pre- Requirements for Testing
Version Identification Values
The application would contain several program files for it to function. The version of these files and a unique identification number for each file is a must for change management.
These numbers are generated for every program file on transfer from the development machine to the test environment. The number attributed to each program file is unique, and if any change is made to the program file between the time it is transferred to the test environment and the time it is transferred back to development for correction, it can be detected using these numbers. These identification methods vary from one client to another.
These values have to be obtained from the development team by the test team. This helps in identifying unauthorized transfers or usage of application files by both parties involved.
The responsibility of acquiring, comparing and tracking these values before and after softbase transfer lies with the test team.
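
One way such identification values can be produced is by hashing each program file and comparing the values before and after the transfer. A minimal sketch; the directory name and file extension are invented, and actual mechanisms vary from client to client as noted above:

    import hashlib
    from pathlib import Path

    def fingerprint(path):
        """Return a stable identification value for one program file."""
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    # Record fingerprints when the softbase is transferred to test ...
    before = {p.name: fingerprint(p) for p in Path("softbase").glob("*.prg")}

    # ... and compare after the round trip to detect unauthorized changes
    after = {p.name: fingerprint(p) for p in Path("softbase").glob("*.prg")}
    changed = [name for name in before if before[name] != after.get(name)]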


Interfaces for the application
In some applications, external interfaces may have to be connected or disconnected. In both cases, the development team should certify that the application would function in an integrated fashion. Actual navigation to and from an interface may not be covered in black box testing.

Unit and Module test plan sign off
To begin an integrated test on the application, the development team should have completed tests on the software at the unit and module levels.

Unit and Module Testing: Unit testing focuses verification effort on the smallest unit of software design. Using the Design Specification as a guide, important control paths and field validations are tested. This is normally white box testing.
The client and the development team must sign off this stage, and hand over the test plan and defect report for these tests to the testing team.
In the case of UAT, the IST sign-off report must be handed over to the UAT team before the commencement of UAT.

Test Plan

This document is a deliverable to client. It contains actual plan for test execution with details to the minute.

Test Execution Sequence
Test scripts can be executed either in a random format or in a sequential fashion. Some applications have concepts that would require sequencing of the test cases before actual execution. The details of the execution are documented in the test plan.
Sequencing can also be done on the modules of the application, as one module would populate or formulate information required for another.

Allocation of test cases among the team
The test team should decide on the resources that will execute the test scripts. Ideally, the tester who designed the test script for a module executes the test. In some cases, due to a shortage of time or resources at that point of time, additional test scripts might have to be executed by other members of the team.

Clear documentation of responsibilities is done in the test plan.

Allocation of test cases on different passes
It may not be possible to execute all test scripts in the first pass. Some of the reasons for this could be:
• Functionality may sometimes be introduced at a later stage, and the application may not support it, or the test team may not be ready with the preparation
• External interfaces to the application may not be ready
• The client might choose to deliver some part of the application for testing, and the rest may be delivered during other passes

Targets for completion of Phases
Time frames for the passes have to be decided and committed to the client well in advance of the start of testing. Some of the factors considered for doing so are:
Number of cases/scripts: Depending on the number of test scripts and the resources available, completion dates are prepared.
Complexity of Testing: In some cases the number of test cases may be less, but the complexity of the tests may be a factor. The testing may involve time-consuming calculations or responses from external interfaces etc.
Number of Errors: This is used very exceptionally. Pre-IST testing is done to check the health of the application soon after the preparations are done. The number of errors reported there can be taken as a benchmark.

Annexure Two shows the Test Plan Template normally used for preparing a Test Plan for a project.

Automation of Test Cases

Tools Used

WinRunner is used for the automation of test cases. It is a capture-and-playback tool.

All test cases are first executed manually and simultaneously captured using WinRunner. Some advantages of this tool are:
• It is adaptable to web and client-server applications
• Automated test results can be generated
• Compatibility with other tools used like Test Director
• It is easy to learn and use the tool
• Reduces human errors
• Saves time in test execution
• Used for Regression Testing

Capturing During First pass
As we now understand, the tool has to capture the test sequence, inputs, outputs and calculations involved in the test cases. This is done during the first pass of the test. While executing the test cases, testers should capture them using the tool. Automation experts in the team will provide guidance.

Intelligent Automation
Like intelligent testing, automation can also be chosen carefully. Depending on the strategy, i.e. complete or selective, the test cases are automated.
In the case of selective automation, only critical and complex test cases are automated. Time-consuming test cases are also automated, as this reduces the time taken for regression testing.

Defect Management

What is a defect?
A Defect is a product anomaly or flaw. Defects include such things as omissions and imperfections found during testing phases. Symptoms (flaws) of faults contained in software that is sufficiently mature for production will be considered as defects. Deviations from expectations that are to be tracked and resolved are also termed defects.
An evaluation of defects discovered during testing provides the best indication of software quality. Quality is the indication of how well the system meets the requirements. So in this context defects are identified as any failure to meet the system requirements.

Defect evaluation is based on methods that range from simple number count to rigorous statistical modeling.
Rigorous evaluation uses assumptions about the arrival or discovery rates of defects during the testing process. The actual data about defect rates are then fit to the model. Such an evaluation estimates the current system reliability and predicts how the reliability will grow if testing and defect removal continue. This evaluation is described as system reliability growth modeling.
The life cycle of a defect is explained diagrammatically below.


Types of Defects
Defects that are detected by the tester are classified into categories by the nature of the defect. The following is the classification:

Showstopper (X): The impact of the defect is severe and the system cannot go into the production environment without resolving the defect since an interim solution may not be available.

Critical (C): The impact of the defect is severe, however an interim solution is available. The defect should not hinder the test process in any way.

Non critical (N): All defects that are not in the X or C category are deemed to be in the N category. These are also the defects that could potentially be resolved via documentation and user training. These can be Graphical User Interface (GUI) defects or some minor field-level observations.

Defect reporting by tester
Defects or bugs, when detected in the application by the tester, must be duly reported through an automated tool. The particulars that have to be filled in by a tester are:

Defect Id: Number associated with a particular defect, and henceforth referred by its ID

Date of execution: The date on which the test case which resulted in a defect was executed

Defect Category: These are explained in the next section, ideally decided by the test leader

Severity: As explained, it can be Critical, Non-Critical and Showstopper

Module ID: Module in which the defect occurred

Status: Raised, Authorized, Deferred, Fixed, Re-raised, Closed and Duplicate.

Defect description: Description as to how the defect was found, the exact steps that should be taken to simulate the defect, other notes and attachments if any.

Test Case Reference No: The number of the test case and script in combination which resulted in the defect

Owner: The name of the tester who executed the test case

Test case description: The instructions in the test cases for the step in which the error occurred

Expected Result: The expected result after the execution of the instructions in the test case descriptions

History of the defect: Normally taken care of by the automated tool used for defect tracking and reporting.

Attachments: The screen shot showing the defect should be captured and attached

Responsibility: The identified member of the development team responsible for fixing the defect.
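
Taken together, the particulars above amount to a defect record. A minimal sketch of such a record as a data structure; the concrete values are invented, while the severities and statuses are those listed above:

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class Defect:
        defect_id: str
        date_of_execution: date
        category: str        # set by the test lead, e.g. "Change Technology"
        severity: str        # "Showstopper", "Critical" or "Non critical"
        module_id: str
        status: str          # Raised / Authorized / Deferred / Fixed / Re-raised / Closed / Duplicate
        description: str
        test_case_ref: str
        owner: str
        expected_result: str
        attachments: list = field(default_factory=list)

    # Invented example record
    bug = Defect("D-001", date.today(), "Change Technology", "Critical",
                 "Deposits", "Raised", "Warning not shown for 9.5% interest",
                 "xx/001", "Tester A", "Warning should be displayed")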


Defect Tracking by Test Lead

The test lead categorizes the defects, after meetings with the clients, as:

Modify Cases: Test cases are to be modified. This may arise when the tester's understanding is incorrect.

Discussion Items: Arises when there is a difference of opinion between the test and the development team. This is marked to the Domain consultant for final verdict.

Change Technology: Arises when the development team has to fix the bug.

Data Related: Arises when the defect is due to data and not coding.

User Training: Arises when the defect is not severe or is technically not feasible to fix, and it is decided to train the user on the defect instead. This should ideally not be critical.

New Requirement: Inclusion of functionality after discussion

User Maintenance: Masters and parameters maintained by the user causing the defect.

Observation: Any other observation that is not classified in the above categories, like a user-perspective GUI defect.
Reporting is done for defect evaluation and also to ensure that the development team is aware of the defects found and is in the process of resolving them. A detailed report of the defects is generated every day and given to the development team for their feedback on defect resolution. A summary report is also generated to evaluate the rate at which new defects are found and the rate at which defects are tracked to closure.
Defect counts are reported as a function of time, creating a Defect Trend diagram or report, and as a function of one or more defect parameters like category or status, creating a Defect Density report. These types of analysis provide a perspective on the trends or distribution of defects that reveal the system's reliability.
It is expected that defect discovery rates will eventually diminish as the testing and fixing progresses. A threshold can be established below which the system can be deployed. Defect counts can also be reported based on their origin in the implementation model, allowing detection of “weak modules” or “hot spots”: parts of the system that are fixed again and again, indicating some fundamental design flaw.
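
The two counts described above can be computed directly from the defect log. A minimal sketch, with invented sample records:

    from collections import Counter

    # Invented sample defect records
    defects = [
        {"raised_on": "2005-01-10", "module": "Deposits"},
        {"raised_on": "2005-01-10", "module": "Accounts"},
        {"raised_on": "2005-01-11", "module": "Deposits"},
    ]

    trend = Counter(d["raised_on"] for d in defects)   # Defect Trend: count per day
    density = Counter(d["module"] for d in defects)    # Defect Density: count per module

    print(trend)    # Counter({'2005-01-10': 2, '2005-01-11': 1})
    print(density)  # Counter({'Deposits': 2, 'Accounts': 1})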

Defects included in an analysis of this kind are confirmed defects. Not every reported defect reflects an actual flaw, as some may be enhancement requests, out of the scope of the system, or descriptions of an already reported defect. However, there is value in looking at and analyzing why many of the defects being reported are either duplicates or not confirmed defects.

Other Tools

The tools that are used to track and report defects are:

Clear Quest (CQ): It belongs to the Rational Test Suite and is an effective tool for Defects Management. CQ functions on a native Access database and maintains a common database of defects. With CQ the entire defect process can be customized. For example, a process can be designed in such a manner that a defect, once raised, needs to be authorized and then fixed before it can attain the status of retesting. Such a systematic defect flow process can be established and the history for the same maintained. Graphs and reports can be customized, and metrics can be derived out of the maintained defect repository.

Test Director (TD): Test Director is an automated test management tool developed by Mercury Interactive to help organize and manage all phases of the software testing process, including planning, creating tests, executing tests, and tracking defects. Test Director enables us to manage user access to a project by creating a list of authorized users and assigning each user a password and a user group, such that perfect control can be exercised over the kinds of additions and modifications a user can make to the project. Apart from manual test execution, the WinRunner automated test scripts of the project can also be executed directly from Test Director. Test Director activates WinRunner, runs the tests, and displays the results. Apart from the above, it is used:
• To report defects detected in the software.
• As a sophisticated system for tracking software defects.
• To monitor defects closely from initial detection until resolution.
• To analyze our Testing Process by means of various graphs and reports.


Defects Meetings
Meetings are conducted at the end of every day between the test team and the development team to discuss test execution and defects. Here, defects are categorized as explained in section 4.5.4.

Before meetings with the development team, the test team should have internal discussions with the test lead on the defects reported. This process ensures that all reported defects are accurate and authentic to the best knowledge of the test team.

Defects Publishing
Defects that are authorized are published in a mutually accepted medium such as the Internet, an intranet, or email. Typically they are published on the intranet; depending on the client's requirements, defects are published either on the client's intranet or on the Internet.
The reports that are published are:
• Daily defect report
• Summarized defect report for the individual passes
• Final defect report
The format used for publishing the defects is given below, with some examples.

Test Down Times

During the execution of the test, schedules prepared earlier may slip due to certain factors. The test team should duly record the time lost to these factors.

Server problems
The test team may come across problems with the server on which the application is deployed. Possible causes are:
• The main server hosting the application may have problems with the number of instances running on it, slowing down the system
• The network connection to the main server, or the internal network, may go down
• Software compatibility issues between the application and any middleware may cause concerns, delaying the test start
• A new version of the database or middleware may not be fully compatible with the application
• Improper installation of system applications may cause delays
• Interfaces with other applications may not be compatible with the existing hardware setup

Problems on the Testing/Development side
Delays can also originate from the test or development teams, for example:
• The data designed may not be sufficient or compatible with the application (missing some parameters of the data)
• Maintenance of the parameters may not be sufficient for the application to function
• The version transferred for testing may not be the right one
• Delays in the transfer mechanism, i.e. FATS, due to technical problems

Show Stopper
Schedules may slip not only for the above-mentioned reasons; the health of the software may also make it unfit for testing. For example:

Module Show Stopper: Testing some components of the application may not be possible due to fundamental errors in them, stopping further testing of those modules.

Application Show Stopper: Testing the application as a whole may not be possible due to fundamental errors in it, stopping further testing.

Given below is a sample of a test down time log, with examples.

Soft base Transfer

Between Passes
Soft base is the term used to describe the application software during the test and construction process. Control of the soft base should rest with either the development team or the test team, depending on the process time frame, i.e. whether construction or testing is in progress. There should also be control over the version that is released for either construction or testing.

The soft base is transferred to the test environment, after which the first pass of testing starts. During the first pass, the defects discovered are fixed in the development environment on the same version of the soft base that is being tested. At the end of the first pass, after the test team has completed the execution of all the test cases, the fixed version of the soft base is transferred into the test environment to commence the second pass of testing.

FATS
FATS stands for Fully Automated Transfer System. FATS is an automated version control mechanism adopted by Citibank India Technologies for source code transfer from the development server to the test control machine. In the case of UAT, source code is transferred from the development server to the UAT machine. FATS uses SCCS (Source Code Control System) of Unix for version control. Source code is first transferred from the development server to the FATS server, which generates a unique identification for each program file to be transferred. For completion of the transfer there are three security levels, namely:
• Project Manager password
• User password
• Quality Assurance (QA) password

Ideally, the project manager from the client side first checks the file out of the FATS transfer to verify its integrity. Then the user acknowledges the file against the user requirements. Finally, QA clears the file against quality standards.
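
FATS itself is proprietary, so purely as an illustration of this three-level clearance flow, here is a toy Python sketch with invented names:

# Invented approval levels mirroring the three FATS passwords
APPROVAL_ORDER = ["project_manager", "user", "qa"]

def transfer_status(file_id, approvals):
    """A file transfer completes only after the PM integrity check,
    the user acknowledgement, and the QA quality clearance, in order."""
    for level in APPROVAL_ORDER:
        if level not in approvals:
            return f"{file_id}: waiting for {level} clearance"
    return f"{file_id}: transfer complete"

print(transfer_status("PRG001", {"project_manager"}))
print(transfer_status("PRG001", {"project_manager", "user", "qa"}))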

Revision of Test Cases
At times, the transfer of the soft base, from its release for correction to its transfer back into the test environment, can take time. During this period testers should:
• Include test scripts for new functionalities incorporated during execution
• Enrich data and test cases based on what happened during the previous pass of testing
• Complete documentation for the previous passes
• Modify test cases which are classified as 'MC'
• Perform free-form testing on the soft base available
Performance Testing

Execution

To execute the performance test, test cases are planned and developed for the application. The test cases simulate a series of activities carried out by the user; these are called virtual scripts. Virtual scripts are generated by a virtual generator that runs in the background, creating test code from the specific navigation steps that the user performs. A script-recording device marks the test cases as transactions. Several such sets of scripts (scenarios), running simultaneously under different load conditions, form the basis of performance testing.
The analysis of response times provides a view of the behavior of the application, and of specific transactions, under different load conditions. A successful transaction passes the criteria outlined for it in the User Acceptance criteria defined in the specifications. A stable transaction responds to the loading of extra concurrent users in a predictable manner with respect to its operational norm of success as established in the User Acceptance criteria. A successful and stable transaction fulfills both of the above conditions.
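
As a toy illustration of this kind of analysis (not how LoadRunner or any specific tool works internally), the following Python sketch runs a stand-in transaction under concurrent virtual users and summarizes the response times:

import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def transaction():
    """Stand-in for a recorded virtual script; a real test would
    drive the application through its actual navigation steps."""
    start = time.perf_counter()
    time.sleep(0.05)                      # simulated server response
    return time.perf_counter() - start

# One scenario: 20 concurrent virtual users, 100 transactions total
with ThreadPoolExecutor(max_workers=20) as pool:
    times = list(pool.map(lambda _: transaction(), range(100)))

print("average response:", statistics.mean(times))
print("95th percentile:", sorted(times)[int(len(times) * 0.95)])
# The transaction is "successful" if these stay within the User
# Acceptance criteria, and "stable" if they degrade predictably
# as the number of concurrent users is increased.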

Tools
LoadRunner enables you to test your system under controlled and peak load conditions. To load your system, LoadRunner simulates an environment where multiple users work concurrently. To generate load, LoadRunner runs multiple Virtual Users that are distributed over a network. Using a minimum of hardware resources, these Virtual Users provide consistent, repeatable, and measurable load to exercise the system just as real users would. While the system is under load, LoadRunner accurately measures, monitors, and analyzes the system's performance. LoadRunner's in-depth reports and graphs provide the information needed to evaluate the performance of the system.

Post Test Process


Sign off Criteria
In order to acknowledge the completion of the test process and certify the application, the following have to be completed:
• All passes have been completed
• All test cases have been executed
• All defects raised during test execution have either been closed or deferred
• Show stoppers in the last pass of the test have been rectified

Authorities
The following personnel have the authority to sign off the test execution process:

Client: The owners of the application under test.

Project Manager: Person who managed the project

Project Lead: Person who managed the test process

Deliverables
Internal
The following are the internal deliverables:

Test Preparation Scripts: Test scripts that were sent to clients, and the corrections made
Conditions Matrix: High-level test conditions
Data Sheets: Sheets used for designing the test data
Minutes of meetings/discussions: Team meetings, meetings with clients, etc.
Project archives: These are explained further on in the document
External
The following are the deliverables to clients:
• Test Strategy
• Effort Estimation
• Test Plan (includes Test scripts)
• Test Results
• Traceability matrix
• Pass defect report
• Final Defect report
• Final Summary Report
• Defect Analysis
• Metrics

Software Quality Assurance (SQA) Deliverable

The following are the SQA deliverables:
• The Test Plan is a document which needs SQA approval and sign-off
• Test results, though they do not require sign-off by the SQA authority, need to be delivered for SQA perusal
• The Traceability document, Test Scripts with review comments (without the result status), Defect Report format, Defect Analysis and tool evaluation document (for selection of tools for the project) should be part of the Test Plan
• Test Scripts bearing the module name
• Mails requesting release, Risk Assessment Memo, Effort Estimation, Project & Defect Metrics
• Test Results including the Final Defect Report & Executive Summary (Test Results with Pass/Fail status)

Bad: Problems faced during the test process are highlighted, such as:
• Communication issues with project management team
• Lack of functional/ technical knowledge and guidance
• Lack of automation experience
• Sign off done verbally
• Baseline documents not frozen
• Baseline documents not signed off
• Baseline document not made available
• Incomplete softbase transferred for testing
• No Process to analyze risks
• Status of unit testing may not be available
• No proper version control by the development team

Ugly: Low points and mistakes made during the test process are highlighted, such as:

• Estimation of effort not accurate or deviation not within permissible limits
• Profiles of members not matching to application under test
• Introduction of new tools without understanding their complexity and compatibility with our methods and approaches
• Lack of a proper feel for the application
• No risk mitigation plan in place

Good: High points during the test process are highlighted, such as:

• New test approaches used
• Completion of the project within the projected time
• Preparation of a date-wise test schedule
• Publishing defects on time
• Domain knowledge acquired through the project and domain consultants
• Tracking of revisions in the test deliverables

Metrics

Defect Metrics
Analysis of the defect report is done for management and client information. The metrics are categorized as:

Defect Age: Defect age is the time duration between the point of introduction of a defect and the point of its closure. This gives a fair idea of the defect set to be included in the smoke test during regression.

Defect Analysis: The analysis of the defects can be done based on the severity, occurrence and category of the defects. As an example, Defect Density is a metric which gives the ratio of defects in specific modules to the total defects in the application. Further analysis and derivation of metrics can be done based on the various components of defect management.
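
A minimal Python sketch of these two defect metrics, using made-up dates and modules purely for illustration:

from datetime import date

# Hypothetical defects: (module, date introduced, date closed)
defects = [
    ("login",   date(2024, 1, 2), date(2024, 1, 9)),
    ("login",   date(2024, 1, 3), date(2024, 1, 5)),
    ("reports", date(2024, 1, 4), date(2024, 1, 6)),
]

# Defect Age: duration from introduction to closure
for module, introduced, closed in defects:
    print(module, "defect age:", (closed - introduced).days, "days")

# Defect Density: defects in a specific module / total defects
login_count = sum(1 for module, _, _ in defects if module == "login")
print("login defect density:", login_count / len(defects))   # 2/3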

Test Management Metrics
Analysis of test management is done for management and client information. The metrics are categorized as:

Schedule: Schedule Variance is a metric determined by the ratio of the planned duration to the actual duration of the project.

Effort: Effort variance is a metric determined by the ratio of the planned effort to the actual effort expended on the project.
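
Both variances, computed exactly as defined above (planned divided by actual), on made-up sample figures:

# Illustrative figures only
planned_duration, actual_duration = 30, 36    # days
planned_effort, actual_effort = 200, 250      # person-hours

schedule_variance = planned_duration / actual_duration   # ~0.83
effort_variance = planned_effort / actual_effort         # 0.80

# A value below 1 means the project overran its plan.
print("schedule variance:", round(schedule_variance, 2))
print("effort variance:", round(effort_variance, 2))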

Debriefs
With Test Team
Completion of a project brings knowledge enrichment to the team members. Both the positive and the negative aspects of this knowledge should be shared with management and peer groups.

Experiences are classified as:
B - Bad
U - Ugly
G - Good
The Bad, Ugly and Good lists given earlier are examples of each classification.

Sample Test Cases


Test cases for a Date field can be:

1) Test which formats are allowed, e.g. mm/dd/yy or MM/DD/YYYY.
2) Test the boundary values for day and month.
3) Test for a null date/month/year.
4) Test for a negative date/month/year.
5) Check for 30th Feb, an invalid date (see the sketch below).
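
A minimal Python sketch of checks 2-5, built on the standard datetime module; validate_date is an invented helper, not part of any library:

from datetime import date

def validate_date(day, month, year):
    """Hypothetical validation helper: returns True for a real
    calendar date, False otherwise."""
    try:
        date(year, month, day)
        return True
    except ValueError:
        return False

# Boundary values for day and month
assert validate_date(1, 1, 2024)        # minimum day/month
assert validate_date(31, 12, 2024)      # maximum day/month
assert not validate_date(32, 1, 2024)   # just outside the boundary

# Null-like and negative values
assert not validate_date(0, 0, 0)
assert not validate_date(-1, 5, 2024)

# 30th Feb must be rejected
assert not validate_date(30, 2, 2024)
print("all date checks passed")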



Test cases for an ATM machine

1. The machine accepts the ATM card
2. The machine rejects an expired card
3. Successful entry of the PIN
4. Unsuccessful operation due to entering the wrong PIN three times
5. Successful selection of language
6. Successful selection of account type
7. Unsuccessful operation due to an invalid account type
8. Successful selection of the amount to be withdrawn
9. Successful withdrawal
10. Expected message when the amount is greater than the day limit
11. Unsuccessful withdrawal due to lack of money in the ATM
12. Expected message when the amount to withdraw is greater than the available balance
13. Unsuccessful withdrawal due to clicking cancel after inserting the card


Test cases for a Pen
1) Test whether it is a ball pen or an ink pen.
2) Test the ink color (blue, red, etc.)
3) Write with the pen to see whether it works or not
4) Check the brand name and logo
5) Check the body color
6) Check the body material
7) Drop the pen from a reasonable height (durability)
8) Check whether it is a click pen or a pen with a cap
9) Check the pen's weight
10) Check the refill


Test cases for "a computewr shutdown operation"
1)verify shutdown selection using start menu (mouse).
2)verify shut down selection using altf4 (keyboard).
3)verify shutdown operation.
4)verify shutdown operation using poweroff.

Testcases for "ATM withdrawl operation with all rules and regulations"
1)verify card insertion.
2)verify card insertion with wrong angle insertion.
3)verify language selection.
4)verify pin number entry.
5)verify operation when wrong pin number entered three times.
6)verify withdrawl option selection.
7)verify operation when you selected wrongly account type w.r.t that inserted card.
8)verify account type selection.
9)verify amount entry.
10)verify withdrawl operatio when amount>possible balance.
11)verify withdrawl operation when amount >day limit of bank.
12)verify operation with network problem.
13)verify cancel after inserting the card.

Automation Testing versus Manual Testing


Automation Testing versus Manual Testing Guidelines
I met with my team's automation experts a few weeks back to get their input on when to automate and when to test manually. The general rule of thumb has always been to use common sense. If you're only going to run the test once or twice, or the test is really expensive to automate, it is most likely a manual test. But then again, what good is saying "use common sense" when you need to come up with a deterministic set of guidelines on how and when to automate?

Pros of Automation
•If you have to run a set of tests repeatedly, automation is a huge win for you
•It gives you the ability to run automation against code that frequently changes to catch regressions in a timely manner
•It gives you the ability to run automation in mainstream scenarios to catch regressions in a timely manner (see What is a Nightly)
•It aids in testing a large test matrix (different languages on different OS platforms). Automated tests can be run at the same time on different machines, whereas manual tests would have to be run sequentially.
Cons of Automation
•It costs more to automate. Writing the test cases and writing or configuring the automation framework you're using costs more initially than running the test manually.
•You can't automate visual references. For example, if you can't tell the font color via code or the automation tool, it is a manual test.
Pros of Manual
•If the test case only runs twice per coding milestone, it most likely should be a manual test. It costs less than automating it.
•It allows the tester to perform more ad-hoc (random) testing. In my experience, more bugs are found via ad-hoc testing than via automation. And the more time a tester spends playing with the feature, the greater the odds of finding real user bugs.
Cons of Manual
•Running tests manually can be very time consuming
•Each time there is a new build, the tester must rerun all required tests - which after a while would become very mundane and tiresome.
Other deciding factors
•What you automate depends on the tools you use. If the tools have any limitations, those tests are manual.
•Is the return on investment worth automating? Is what you get out of automation worth the cost of setting up and supporting the test cases, the automation framework, and the system that runs the test cases?
Criteria for automating
There are two sets of questions to determine whether automation is right for your test case:

Is this test scenario automatable?
1. Yes, and it will cost a little
2. Yes, but it will cost a lot
3. No, it is not possible to automate
How important is this test scenario?
1. I must absolutely test this scenario whenever possible
2. I need to test this scenario regularly
3. I only need to test this scenario once in a while
If you answered #1 to both questions, definitely automate that test.
If you answered #1 or #2 to both questions, you should automate that test.
If you answered #2 to both questions, you need to consider whether it is really worth the investment to automate (a sketch of this decision logic follows below).
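
One way to make these guidelines deterministic is to encode them directly; the following Python sketch is just one reading of the rules above (the "answered #2 to both" case is treated as "consider", per the last rule):

def should_automate(automatable, importance):
    """automatable: 1 = cheap, 2 = costly, 3 = not possible
    importance:  1 = whenever possible, 2 = regularly, 3 = rarely"""
    if automatable == 3:
        return "cannot automate: see the options below"
    if automatable == 1 and importance == 1:
        return "definitely automate"
    if automatable == 2 and importance == 2:
        return "consider whether it is worth the investment"
    if importance <= 2:
        return "should automate"
    return "run it manually: too rare to justify the cost"

print(should_automate(1, 1))   # definitely automate
print(should_automate(2, 1))   # should automate
print(should_automate(2, 2))   # consider whether it is worth the investment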

What happens if you can’t automate?
Let's say that you have a test that you absolutely need to run whenever possible, but it isn't possible to automate. Your options are:
• Reevaluate – do I really need to run this test this often?
• What’s the cost of doing this test manually?
• Look for new testing tools
• Consider test hooks

Blackbox Testing


Black box testing focuses on the functionality of the system as a whole. The term 'behavioral testing' is also used for black box testing, and white box testing is sometimes called 'structural testing'. Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn't strictly forbidden, but it's still discouraged.
Each testing method has its own advantages and disadvantages. There are some bugs that cannot be found using only black box or only white box testing. The majority of applications are tested by the black box testing method. We need to cover the majority of test cases so that most of the bugs will be discovered by black box testing.
Black box testing occurs throughout the software development and testing life cycle, i.e. in the unit, integration, system, acceptance and regression testing stages.
Tools used for Black Box testing:
Black box testing tools are mainly record and playback tools. These tools are used for regression testing, to check whether a new build has introduced any bug into previously working application functionality. These record and playback tools record test cases in the form of scripts such as TSL, VBScript, JavaScript or Perl.

Advantages of Black Box Testing
- The tester can be non-technical.
- Used to verify contradictions between the actual system and the specifications.
- Test cases can be designed as soon as the functional specifications are complete.

Disadvantages of Black Box Testing
- The test inputs need to be drawn from a large sample space.
- It is difficult to identify all possible inputs in limited testing time, so writing test cases is slow and difficult.
- There is a chance of leaving paths unidentified during this testing.

Methods of Black box Testing:
Graph Based Testing Methods:
Each and every application is built up of objects. All such objects are identified and a graph is prepared. From this object graph, each object relationship is identified and test cases are written accordingly to discover errors.
Error Guessing:
This is purely based on the previous experience and judgment of the tester. Error guessing is the art of guessing where errors may be hidden. There are no specific tools for this technique; it relies on writing test cases that cover the error-prone paths of the application.
Boundary Value Analysis:
Many systems have a tendency to fail at boundaries, so testing the boundary values of an application is important. Boundary Value Analysis (BVA) is a functional testing technique in which the extreme boundary values are chosen. Boundary values include maximum, minimum, just inside/outside the boundaries, typical values, and error values.
• Extends equivalence partitioning
• Test both sides of each boundary
• Look at output boundaries for test cases too
• Test min, min-1, max, max+1, typical values
BVA techniques:
1. Number of variables
For n variables, BVA yields 4n + 1 test cases (e.g. two variables yield 9 test cases).
2. Kinds of ranges
Generalizing ranges depends on the nature or type of variables
Advantages of Boundary Value Analysis
1. Robustness Testing: Boundary Value Analysis plus values that go beyond the limits
2. Tests min-1, min, min+1, nom, max-1, max, max+1
3. Forces attention to exception handling
Limitations of Boundary Value Analysis
Boundary value testing is efficient only for variables with fixed boundary values.
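
A small Python sketch that generates BVA test cases and confirms the 4n + 1 count; the helper names are invented for illustration:

def bva_values(minimum, maximum, nominal):
    # The five standard BVA values for one variable
    return [minimum, minimum + 1, nominal, maximum - 1, maximum]

def bva_test_cases(variables):
    """Hold every variable at its nominal value and vary one variable
    at a time through its five boundary values; the all-nominal case
    is shared, giving 4n + 1 distinct test cases for n variables."""
    nominals = [nom for _, _, nom in variables]
    cases = {tuple(nominals)}
    for i, (low, high, nom) in enumerate(variables):
        for value in bva_values(low, high, nom):
            case = list(nominals)
            case[i] = value
            cases.add(tuple(case))
    return cases

# Two variables: day in 1..31 (nominal 15), month in 1..12 (nominal 6)
cases = bva_test_cases([(1, 31, 15), (1, 12, 6)])
print(len(cases))   # 4 * 2 + 1 = 9
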
Equivalence Partitioning:
Equivalence partitioning is a black box testing method that divides the input domain of a program into classes of data from which test cases can be derived.
How is this partitioning performed while testing:
1. If an input condition specifies a range, one valid and two invalid classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class is defined.
4. If an input condition is Boolean, one valid and one invalid class is defined.
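
A short Python sketch of rule 1 for a hypothetical input that must lie in the range 1-100, with one representative value picked from each equivalence class:

def accepts(x):
    """Hypothetical system under test: accepts values in 1..100."""
    return 1 <= x <= 100

# One valid and two invalid classes for a range input (rule 1)
equivalence_classes = {
    "valid (1 <= x <= 100)": 50,
    "invalid (x < 1)": 0,
    "invalid (x > 100)": 101,
}

# Testing one representative per class stands in for the whole class
for name, representative in equivalence_classes.items():
    print(name, "->", "accepted" if accepts(representative) else "rejected")
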
Comparison Testing:
In this method, different independent versions of the same software are compared with each other during testing.

Common Interview Questions


1. Tell me about yourself?
The most often asked question in interviews. You need to have a short
statement prepared in your mind. Be careful that it does not sound
rehearsed. Limit it to work-related items unless instructed otherwise.
Talk about things you have done and jobs you have held that relate to
the position you are interviewing for. Start with the item farthest
back and work up to the present.

2. Why did you leave your last job?
Stay positive regardless of the circumstances. Never refer to a major
problem with management and never speak ill of supervisors, co-workers
or the organization. If you do, you will be the one looking bad. Keep
smiling and talk about leaving for a positive reason such as an
opportunity, a chance to do something special or other forward-looking
reasons.
3. What experience do you have in this field?
Speak about specifics that relate to the position you are applying for.
If you do not have specific experience, get as close as you can.

4. Do you consider yourself successful?
You should always answer yes and briefly explain why. A good
explanation is that you have set goals, and you have met some and are
on track to achieve the others.

5. What do co-workers say about you?
Be prepared with a quote or two from co-workers. Either a specific
statement or a paraphrase will work. Jill Clark, a co-worker at Smith
Company, always said I was the hardest worker she had ever known. It
is as powerful as Jill having said it at the interview herself.

6. What do you know about this organization?
This question is one reason to do some research on the organization
before the interview. Find out where they have been and where they are
going.

7. What have you done to improve your knowledge in the last year?
Try to include improvement activities that relate to the job. A wide
variety of activities can be mentioned as positive self-improvement.
Have some good ones handy to mention.

8. Are you applying for other jobs?

Be honest but do not spend a lot of time in this area. Keep the focus
on this job and what you can do for this organization. Anything else is
a distraction.

9. Why do you want to work for this organization?
This may take some thought and certainly, should be based on the
research you have done on the organization. Sincerity is extremely
important here and will easily be sensed. Relate it to your long-term
career goals.

10. Do you know anyone who works for us?
Be aware of the policy on relatives working for the organization. This
can affect your answer even though they asked about friends not
relatives. Be careful to mention a friend only if they are well thought
of.

11. What kind of salary do you need?
A loaded question. A nasty little game that you will probably lose if
you answer first. So, do not answer it. Instead, say something like,
That’s a tough question. Can you tell me the range for this position?
In most cases, the interviewer, taken off guard, will tell you. If not,
say that it can depend on the details of the job. Then give a wide
range.

12. Are you a team player?
You are, of course, a team player. Be sure to have examples ready.
Specifics that show you often perform for the good of the team rather
than for yourself are good evidence of your team attitude. Do not brag,
just say it in a matter-of-fact tone. This is a key point.

13. How long would you expect to work for us if hired?
Specifics here are not good. Something like this should work: I’d like
it to be a long time. Or As long as we both feel I’m doing a good job.

14. Have you ever had to fire anyone? How did you feel about that?
This is serious. Do not make light of it or in any way seem like you
like to fire people. At the same time, you will do it when it is the
right thing to do. When it comes to the organization versus the
individual who has created a harmful situation, you will protect the
organization. Remember firing is not the same as layoff or reduction in
force.

15. What is your philosophy towards work?
The interviewer is not looking for a long or flowery dissertation here.
Do you have strong feelings that the job gets done? Yes. That’s the
type of answer that works best here. Short and positive, showing a
benefit to the organization.

16. If you had enough money to retire right now, would you?
Answer yes if you would. But since you need to work, this is the type
of work you prefer. Do not say yes if you do not mean it.

17. Have you ever been asked to leave a position?
If you have not, say no. If you have, be honest, brief and avoid saying
negative things about the people or organization involved.

18. Explain how you would be an asset to this organization?
You should be anxious for this question. It gives you a chance to
highlight your best points as they relate to the position being
discussed. Give a little advance thought to this relationship.

19. Why should we hire you?
Point out how your assets meet what the organization needs. Do not
mention any other candidates to make a comparison.

20. Tell me about a suggestion you have made?
Have a good one ready. Be sure and use a suggestion that was accepted
and was then considered successful. One related to the type of work
applied for is a real plus.

21. What irritates you about co-workers?
This is a trap question. Think real hard but fail to come up with
anything that irritates you. A short statement that you seem to get
along with folks is great.

22. What is your greatest strength?
Numerous answers are good, just stay positive. A few good examples:
Your ability to prioritize, Your problem-solving skills, Your ability
to work under pressure, Your ability to focus on projects, Your
professional expertise, Your leadership skills, Your positive attitude.

23. Tell me about your dream job?
Stay away from a specific job. You cannot win. If you say the job you
are contending for is it, you strain credibility. If you say another
job is it, you plant the suspicion that you will be dissatisfied with
this position if hired. The best is to stay generic and say something
like: A job where I love the work, like the people, can contribute and
can’t wait to get to work.

24. Why do you think you would do well at this job?
Give several reasons and include skills, experience and interest.

25. What are you looking for in a job?
See answer # 23

26. What kind of person would you refuse to work with?
Do not be trivial. It would take disloyalty to the organization,
violence or lawbreaking to get you to object. Minor objections will
label you as a whiner.

27. What is more important to you: the money or the work?
Money is always important, but the work is the most important. There is
no better answer.

28. What would your previous supervisor say your strongest point is?
There are numerous good possibilities:
Loyalty, Energy, Positive attitude, Leadership, Team player, Expertise,
Initiative, Patience, Hard work, Creativity, Problem solver.

29. Tell me about a problem you had with a supervisor?
Biggest trap of all. This is a test to see if you will speak ill of
your boss. If you fall for it and tell about a problem with a former
boss, you may well blow the interview right there. Stay positive and
develop a poor memory about any trouble with a supervisor.

30. What has disappointed you about a job?
Don't get trivial or negative. Safe areas are few but can include:
Not enough of a challenge. You were laid off in a reduction. The company
did not win a contract, which would have given you more responsibility.

31. Tell me about your ability to work under pressure?
You may say that you thrive under certain types of pressure. Give an
example that relates to the type of position applied for.

32. Do your skills match this job or another job more closely?
Probably this one. Do not give fuel to the suspicion that you may want
another job more than this one.

33. What motivates you to do your best on the job?
This is a personal trait that only you can say, but good examples are:
Challenge, Achievement, Recognition.

34. Are you willing to work overtime? Nights? Weekends?
This is up to you. Be totally honest.

35. How would you know you were successful on this job?
Several ways are good measures:
You set high standards for yourself and meet them. Your outcomes are a
success. Your boss tells you that you are successful.

36. Would you be willing to relocate if required?
You should be clear on this with your family prior to the interview if
you think there is a chance it may come up. Do not say yes just to get
the job if the real answer is no. This can create a lot of problems
later on in your career. Be honest at this point and save yourself
future grief.

37. Are you willing to put the interests of the organization ahead of your own?
This is a straight loyalty and dedication question. Do not worry about
the deep ethical and philosophical implications. Just say yes.

38. Describe your management style?
Try to avoid labels. Some of the more common labels, like progressive,
salesman or consensus, can have several meanings or descriptions
depending on which management expert you listen to. The situational
style is safe, because it says you will manage according to the
situation, instead of one size fits all.

39. What have you learned from mistakes on the job?
Here you have to come up with something or you strain credibility. Make
it a small, well-intentioned mistake with a positive lesson learned. An
example would be working too far ahead of colleagues on a project and
thus throwing coordination off.

40. Do you have any blind spots?
Trick question. If you know about blind spots, they are no longer blind
spots. Do not reveal any personal areas of concern here. Let them do
their own discovery on your bad points. Do not hand it to them.

41. If you were hiring a person for this job, what would you look for?
Be careful to mention traits that are needed and that you have.

42. Do you think you are overqualified for this position?
Regardless of your qualifications, state that you are very well
qualified for the position.

43. How do you propose to compensate for your lack of experience?
First, if you have experience that the interviewer does not know about,
bring that up: Then, point out (if true) that you are a hard working
quick learner.

44. What qualities do you look for in a boss?
Be generic and positive. Safe qualities are knowledgeable, a sense of
humor, fair, loyal to subordinates and holder of high standards. All
bosses think they have these traits.

45. Tell me about a time when you helped resolve a dispute between others?
Pick a specific incident. Concentrate on your problem solving technique
and not the dispute you settled.

46. What position do you prefer on a team working on a project?
Be honest. If you are comfortable in different roles, point that out.

47. Describe your work ethic?
Emphasize benefits to the organization. Things like, determination to
get the job done and work hard but enjoy your work are good.

48. What has been your biggest professional disappointment?
Be sure that you refer to something that was beyond your control. Show
acceptance and no negative feelings.

49. Tell me about the most fun you have had on the job?
Talk about having fun by accomplishing something for the organization.

50. Do you have any questions for me?
Always have some questions prepared. Questions that show how you will be an asset to the organization are good. "How soon will I be able to be productive?" and "What type of projects will I be able to assist on?" are examples.