Sample Test Cases

Written on 7:38 AM by MURALI KRISHNA

Test cases for a Date field can be:

1) Test the allowed format, e.g. mm/dd/yy or MM/DD/YYYY (whichever the specification allows).
2) Test the boundary values for the day and month.
3) Test for a null date/month/year.
4) Test for a negative date/month/year.
5) Check for impossible dates such as 30th Feb.
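
A minimal sketch of how a few of these checks could be automated with pytest, assuming a hypothetical validate_date() helper that accepts the MM/DD/YYYY format (the helper name and format are illustrative, not from the list above):

```python
import pytest
from datetime import datetime

def validate_date(date_string):
    """Hypothetical helper: True if date_string is a valid MM/DD/YYYY date."""
    try:
        datetime.strptime(date_string, "%m/%d/%Y")
        return True
    except (ValueError, TypeError):
        return False

@pytest.mark.parametrize("value, expected", [
    ("01/31/2024", True),    # allowed format
    ("31/01/2024", False),   # day and month swapped
    ("02/30/2024", False),   # 30th Feb does not exist
    ("00/15/2024", False),   # month below its boundary
    ("13/15/2024", False),   # month above its boundary
    (None, False),           # null date
])
def test_date_field(value, expected):
    assert validate_date(value) == expected
```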



Test cases for an ATM Machine

1. Machine accepts the ATM card.
2. Machine rejects an expired card.
3. Successful entry of the PIN number.
4. Unsuccessful operation after entering a wrong PIN three times.
5. Successful selection of language.
6. Successful selection of account type.
7. Unsuccessful operation due to an invalid account type.
8. Successful selection of the amount to be withdrawn.
9. Successful withdrawal.
10. Expected message when the amount is greater than the day limit.
11. Unsuccessful withdrawal due to lack of money in the ATM.
12. Expected message when the amount to withdraw is greater than the available balance.
13. Unsuccessful withdrawal when Cancel is pressed after inserting the card.


Test cases for a Pen
1) Test whether it is a ball pen or an ink pen.
2) Test the ink color (blue, red, etc.).
3) Write with the pen to see whether it works or not.
4) Check the brand name and logo.
5) Check the body color.
6) Check the body material.
7) Drop the pen from a reasonable height and check that it still writes.
8) Check whether it is a click pen or a pen with a cap.
9) Check the pen's weight.
10) Check the refill.


Test cases for "a computewr shutdown operation"
1)verify shutdown selection using start menu (mouse).
2)verify shut down selection using altf4 (keyboard).
3)verify shutdown operation.
4)verify shutdown operation using poweroff.

Testcases for "ATM withdrawl operation with all rules and regulations"
1)verify card insertion.
2)verify card insertion with wrong angle insertion.
3)verify language selection.
4)verify pin number entry.
5)verify operation when wrong pin number entered three times.
6)verify withdrawl option selection.
7)verify operation when you selected wrongly account type w.r.t that inserted card.
8)verify account type selection.
9)verify amount entry.
10)verify withdrawl operatio when amount>possible balance.
11)verify withdrawl operation when amount >day limit of bank.
12)verify operation with network problem.
13)verify cancel after inserting the card.
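
A minimal sketch of how a few of these rules (5, 10 and 11 above) could be automated; the Atm class, its method names and the limit values are purely illustrative, not taken from any real bank system:

```python
class Atm:
    MAX_PIN_ATTEMPTS = 3      # rule 5: card blocked after three wrong PINs
    DAY_LIMIT = 500           # rule 11: illustrative day limit

    def __init__(self, pin, balance):
        self.pin = pin
        self.balance = balance
        self.failed_attempts = 0
        self.card_blocked = False

    def enter_pin(self, pin):
        if pin != self.pin:
            self.failed_attempts += 1
            if self.failed_attempts >= self.MAX_PIN_ATTEMPTS:
                self.card_blocked = True
            return False
        self.failed_attempts = 0
        return True

    def withdraw(self, amount):
        if self.card_blocked:
            return "card blocked"
        if amount > self.DAY_LIMIT:
            return "exceeds day limit"       # rule 11
        if amount > self.balance:
            return "insufficient balance"    # rule 10
        self.balance -= amount
        return "dispensed"


def test_three_wrong_pins_block_the_card():
    atm = Atm(pin="1234", balance=300)
    for _ in range(3):
        atm.enter_pin("0000")
    assert atm.withdraw(100) == "card blocked"

def test_amount_over_day_limit_is_rejected():
    atm = Atm(pin="1234", balance=1000)
    atm.enter_pin("1234")
    assert atm.withdraw(600) == "exceeds day limit"

def test_amount_over_balance_is_rejected():
    atm = Atm(pin="1234", balance=200)
    atm.enter_pin("1234")
    assert atm.withdraw(300) == "insufficient balance"
```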

Automation Testing versus Manual Testing

Written on 7:35 AM by MURALI KRISHNA

Automation Testing versus Manual Testing Guidelines
I met with my team's automation experts a few weeks back to get their input on when to automate and when to manually test. The general rule of thumb has always been to use common sense. If you're only going to run the test once or twice, or the test is really expensive to automate, it is most likely a manual test. But then again, what good is saying "use common sense" when you need to come up with a deterministic set of guidelines on how and when to automate?

Pros of Automation
•If you have to run a set of tests repeatedly, automation is a huge win for you
•It gives you the ability to run automation against code that frequently changes to catch regressions in a timely manner
•It gives you the ability to run automation in mainstream scenarios to catch regressions in a timely manner (see What is a Nightly)
•Aids in testing a large test matrix (different languages on different OS platforms). Automated tests can be run at the same time on different machines, whereas the manual tests would have to be run sequentially.
Cons of Automation
•It costs more to automate. Writing the test cases and writing or configuring the automation framework you're using costs more initially than running the test manually.
•You can't automate visual verification; for example, if you can't tell the font color via code or the automation tool, it is a manual test.
Pros of Manual
•If the test case only runs twice per coding milestone, it most likely should be a manual test; that costs less than automating it.
•It allows the tester to perform more ad hoc (random) testing. In my experience, more bugs are found via ad hoc testing than via automation. And the more time a tester spends playing with the feature, the greater the odds of finding real user bugs.
Cons of Manual
•Running tests manually can be very time consuming
•Each time there is a new build, the tester must rerun all required tests - which after a while would become very mundane and tiresome.
Other deciding factors
•What you automate depends on the tools you use. If the tools have any limitations, those tests are manual.
•Is the return on investment worth automating? Is what you get out of automation worth the cost of setting up and supporting the test cases, the automation framework, and the system that runs the test cases?
Criteria for automating
There are two sets of questions to determine whether automation is right for your test case:

Is this test scenario automatable?
1. Yes, and it will cost a little
2. Yes, but it will cost a lot
3. No, it is not possible to automate
How important is this test scenario?
1. I must absolutely test this scenario whenever possible
2. I need to test this scenario regularly
3. I only need to test this scenario once in a while
If you answered #1 to both questions – definitely automate that test
If you answered #1 or #2 to both questions – you should automate that test
If you answered #2 to both questions – you need to consider if it is really worth the investment to automate
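
As a rough sketch, here is one way the guideline above could be encoded; the function and its return strings are my own illustration, not a formal rule:

```python
def automation_recommendation(automation_cost, importance):
    """automation_cost and importance are the answers (1-3) to the two questions above."""
    if automation_cost == 3:
        return "cannot automate - reevaluate, run manually, look for tools or test hooks"
    if automation_cost == 1 and importance == 1:
        return "definitely automate"
    if automation_cost == 2 and importance == 2:
        return "consider whether it is really worth the investment"
    if automation_cost <= 2 and importance <= 2:
        return "should automate"
    return "probably keep it as a manual test"

print(automation_recommendation(1, 1))  # definitely automate
print(automation_recommendation(2, 2))  # consider whether it is really worth the investment
```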

What happens if you can’t automate?
Let's say that you have a test that you absolutely need to run whenever possible, but it isn't possible to automate. Your options are:
• Reevaluate – do I really need to run this test this often?
• What’s the cost of doing this test manually?
• Look for new testing tools
• Consider test hooks

Blackbox Testing

Written on 7:32 AM by MURALI KRISHNA

Black box testing focuses on the functionality of the system as a whole. The term 'behavioral testing' is also used for black box testing, and white box testing is sometimes called 'structural testing'. Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn't strictly forbidden, but it's still discouraged.
Each testing method has its own advantages and disadvantages. There are some bugs that cannot be found using only black box or only white box testing. The majority of applications are tested by the black box testing method, and we need to cover the majority of test cases so that most of the bugs will be discovered by it.
Black box testing occurs throughout the software development and testing life cycle, i.e. in the unit, integration, system, acceptance and regression testing stages.
Tools used for Black Box testing:
Black box testing tools are mainly record and playback tools. These tools are used for regression testing, to check whether a new build has introduced any bug into previously working application functionality. Record and playback tools record test cases in the form of scripts such as TSL, VBScript, JavaScript or Perl.

Advantages of Black Box Testing
- Tester can be non-technical.
- Used to verify contradictions in actual system and the specifications.
- Test cases can be designed as soon as the functional specifications are complete

Disadvantages of Black Box Testing
- The test inputs need to be drawn from a large sample space.
- It is difficult to identify all possible inputs in limited testing time, so writing test cases is slow and difficult.
- There is a chance of leaving paths unexercised during this testing.

Methods of Black box Testing:
Graph Based Testing Methods:
Every application is built up of some objects. All such objects are identified and a graph is prepared. From this object graph, each object relationship is identified and test cases are written accordingly to discover the errors.
Error Guessing:
This is purely based on the previous experience and judgment of the tester. Error guessing is the art of guessing where errors may be hidden. There are no specific tools for this technique; the tester writes test cases aimed at the areas where, in their judgment, errors are most likely to be hiding.
Boundary Value Analysis:
Many systems have a tendency to fail at the boundaries, so testing the boundary values of the application is important. Boundary Value Analysis (BVA) is a functional testing technique in which the extreme boundary values are chosen. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values.
Extends equivalence partitioning
Test both sides of each boundary
Look at output boundaries for test cases too
Test min, min-1, max, max+1, typical values
BVA techniques:
1. Number of variables
For n variables: BVA yields 4n + 1 test cases.
2. Kinds of ranges
Generalizing ranges depends on the nature or type of variables
Advantages of Boundary Value Analysis
1. Robustness Testing - Boundary Value Analysis plus values that go beyond the limits
2. Min - 1, Min, Min +1, Nom, Max -1, Max, Max +1
3. Forces attention to exception handling
Limitations of Boundary Value Analysis
Boundary value testing is efficient only for variables that have fixed boundary values.
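
A minimal sketch of generating the standard boundary values for one variable, assuming a hypothetical numeric range; for n independent variables, letting each variable take its boundary values while the others stay nominal gives the 4n + 1 cases mentioned above:

```python
def bva_values(minimum, maximum):
    """Standard boundary-value-analysis test values for one variable."""
    nominal = (minimum + maximum) // 2
    return [minimum, minimum + 1, nominal, maximum - 1, maximum]

def robustness_values(minimum, maximum):
    """Robustness testing adds values just beyond the limits (min-1 and max+1)."""
    return [minimum - 1] + bva_values(minimum, maximum) + [maximum + 1]

# Example: a 'day of month' field that must be between 1 and 31.
print(bva_values(1, 31))         # [1, 2, 16, 30, 31]
print(robustness_values(1, 31))  # [0, 1, 2, 16, 30, 31, 32]
```
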
Equivalence Partitioning:
Equivalence partitioning is a black box testing method that divides the input domain of a program into classes of data from which test cases can be derived.
How is this partitioning performed while testing:
1. If an input condition specifies a range, one valid and two invalid classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class is defined.
4. If an input condition is Boolean, one valid and one invalid class is defined.
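
A minimal sketch of the first rule (a range condition), assuming a hypothetical age field that must lie between 18 and 60: one valid class and two invalid classes, each represented by a single value:

```python
import pytest

def is_valid_age(age):
    """Hypothetical range condition: age must be between 18 and 60 inclusive."""
    return 18 <= age <= 60

@pytest.mark.parametrize("representative, expected", [
    (35, True),    # valid class: values inside 18..60
    (10, False),   # invalid class: values below the range
    (75, False),   # invalid class: values above the range
])
def test_age_equivalence_classes(representative, expected):
    assert is_valid_age(representative) == expected
```
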
Comparison Testing:
In this method, different independent versions of the same software are compared against each other during testing.

Common Interview Questions

Written on 7:21 AM by MURALI KRISHNA

1. Tell me about yourself?
The most often asked question in interviews. You need to have a short
statement prepared in your mind. Be careful that it does not sound
rehearsed. Limit it to work-related items unless instructed otherwise.
Talk about things you have done and jobs you have held that relate to
the position you are interviewing for. Start with the item farthest
back and work up to the present.

2. Why did you leave your last job?
Stay positive regardless of the circumstances. Never refer to a major
problem with management and never speak ill of supervisors, co-workers
or the organization. If you do, you will be the one looking bad. Keep
smiling and talk about leaving for a positive reason such as an
opportunity, a chance to do something special or other forward-looking
reasons.
3. What experience do you have in this field?
Speak about specifics that relate to the position you are applying for.
If you do not have specific experience, get as close as you can.

4. Do you consider yourself successful?
You should always answer yes and briefly explain why. A good
explanation is that you have set goals, and you have met some and are
on track to achieve the others.

5. What do co-workers say about you?
Be prepared with a quote or two from co-workers. Either a specific
statement or a paraphrase will work. Jill Clark, a co-worker at Smith
Company, always said I was the hardest worker she had ever known. It
is as powerful as if Jill had said it at the interview herself.

6. What do you know about this organization?
This question is one reason to do some research on the organization
before the interview. Find out where they have been and where they are
going.

7. What have you done to improve your knowledge in the last year?
Try to include improvement activities that relate to the job. A wide
variety of activities can be mentioned as positive self-improvement.
Have some good ones handy to mention.

8. Are you applying for other jobs?

Be honest but do not spend a lot of time in this area. Keep the focus
on this job and what you can do for this organization. Anything else is
a distraction.

9. Why do you want to work for this organization?
This may take some thought and certainly should be based on the
research you have done on the organization. Sincerity is extremely
important here and will easily be sensed. Relate it to your long-term
career goals.

10. Do you know anyone who works for us?
Be aware of the policy on relatives working for the organization. This
can affect your answer even though they asked about friends not
relatives. Be careful to mention a friend only if they are well thought
of.

11. What kind of salary do you need?
A loaded question. A nasty little game that you will probably lose if
you answer first. So, do not answer it. Instead, say something like,
That’s a tough question. Can you tell me the range for this position?
In most cases, the interviewer, taken off guard, will tell you. If not,
say that it can depend on the details of the job. Then give a wide
range.

12. Are you a team player?
You are, of course, a team player. Be sure to have examples ready.
Specifics that show you often perform for the good of the team rather
than for yourself are good evidence of your team attitude. Do not brag,
just say it in a matter-of-fact tone. This is a key point.

13. How long would you expect to work for us if hired?
Specifics here are not good. Something like this should work: "I'd like
it to be a long time," or "As long as we both feel I'm doing a good job."

14. Have you ever had to fire anyone? How did you feel about that?
This is serious. Do not make light of it or in any way seem like you
like to fire people. At the same time, you will do it when it is the
right thing to do. When it comes to the organization versus the
individual who has created a harmful situation, you will protect the
organization. Remember firing is not the same as layoff or reduction in
force.

15. What is your philosophy towards work?
The interviewer is not looking for a long or flowery dissertation here.
Do you have strong feelings that the job gets done? Yes. That’s the
type of answer that works best here. Short and positive, showing a
benefit to the organization.

16. If you had enough money to retire right now, would you?
Answer yes if you would. But since you need to work, this is the type
of work you prefer. Do not say yes if you do not mean it.

17. Have you ever been asked to leave a position?
If you have not, say no. If you have, be honest, brief and avoid saying
negative things about the people or organization involved.

18. Explain how you would be an asset to this organization?
You should be anxious for this question. It gives you a chance to
highlight your best points as they relate to the position being
discussed. Give a little advance thought to this relationship.

19. Why should we hire you?
Point out how your assets meet what the organization needs. Do not
mention any other candidates to make a comparison.

20. Tell me about a suggestion you have made?
Have a good one ready. Be sure and use a suggestion that was accepted
and was then considered successful. One related to the type of work
applied for is a real plus.

21. What irritates you about co-workers?
This is a trap question. Think real hard but fail to come up with
anything that irritates you. A short statement that you seem to get
along with folks is great.

22. What is your greatest strength?
Numerous answers are good, just stay positive. A few good examples:
Your ability to prioritize, Your problem-solving skills, Your ability
to work under pressure, Your ability to focus on projects, Your
professional expertise, Your leadership skills, Your positive attitude.

23. Tell me about your dream job?
Stay away from a specific job. You cannot win. If you say the job you
are contending for is it, you strain credibility. If you say another
job is it, you plant the suspicion that you will be dissatisfied with
this position if hired. The best is to stay generic and say something
like: A job where I love the work, like the people, can contribute and
can’t wait to get to work.

24. Why do you think you would do well at this job?
Give several reasons and include skills, experience and interest.

25. What are you looking for in a job?
See answer # 23

26. What kind of person would you refuse to work with?
Do not be trivial. It would take disloyalty to the organization,
violence or lawbreaking to get you to object. Minor objections will
label you as a whiner.

27. What is more important to you: the money or the work?
Money is always important, but the work is the most important. There is
no better answer.

28. What would your previous supervisor say your strongest point is?
There are numerous good possibilities:
Loyalty, Energy, Positive attitude, Leadership, Team player, Expertise,
Initiative, Patience, Hard work, Creativity, Problem solver.

29. Tell me about a problem you had with a supervisor?
Biggest trap of all. This is a test to see if you will speak ill of
your boss. If you fall for it and tell about a problem with a former
boss, you may well blow the interview right there. Stay positive and
develop a poor memory about any trouble with a supervisor.

30. What has disappointed you about a job?
Don't get trivial or negative. Safe areas are few but can include:
not enough of a challenge; you were laid off in a reduction in force; the
company did not win a contract, which would have given you more responsibility.

31. Tell me about your ability to work under pressure?
You may say that you thrive under certain types of pressure. Give an
example that relates to the type of position applied for.

32. Do your skills match this job or another job more closely?
Probably this one. Do not give fuel to the suspicion that you may want
another job more than this one.

33. What motivates you to do your best on the job?
This is a personal trait that only you can say, but good examples are:
Challenge, Achievement, Recognition.

34. Are you willing to work overtime? Nights? Weekends?
This is up to you. Be totally honest.

35. How would you know you were successful on this job?
Several ways are good measures:
You set high standards for yourself and meet them. Your outcomes are a
success. Your boss tells you that you are successful.

36. Would you be willing to relocate if required?
You should be clear on this with your family prior to the interview if
you think there is a chance it may come up. Do not say yes just to get
the job if the real answer is no. This can create a lot of problems
later on in your career. Be honest at this point and save yourself
future grief.

37. Are you willing to put the interests of the organization ahead of your own?
This is a straight loyalty and dedication question. Do not worry about
the deep ethical and philosophical implications. Just say yes.

38. Describe your management style?
Try to avoid labels. Some of the more common labels, like progressive,
salesman or consensus, can have several meanings or descriptions
depending on which management expert you listen to. The situational
style is safe, because it says you will manage according to the
situation, instead of one size fits all.

39. What have you learned from mistakes on the job?
Here you have to come up with something or you strain credibility. Make
it a small, well-intentioned mistake with a positive lesson learned. An
example would be working too far ahead of colleagues on a project and
thus throwing coordination off.

40. Do you have any blind spots?
Trick question. If you know about blind spots, they are no longer blind
spots. Do not reveal any personal areas of concern here. Let them do
their own discovery on your bad points. Do not hand it to them.

41. If you were hiring a person for this job, what would you look for?
Be careful to mention traits that are needed and that you have.

42. Do you think you are overqualified for this position?
Regardless of your qualifications, state that you are very well
qualified for the position.

43. How do you propose to compensate for your lack of experience?
First, if you have experience that the interviewer does not know about,
bring that up. Then point out (if true) that you are a hard-working,
quick learner.

44. What qualities do you look for in a boss?
Be generic and positive. Safe qualities are knowledgeable, a sense of
humor, fair, loyal to subordinates and holder of high standards. All
bosses think they have these traits.

45. Tell me about a time when you helped resolve a dispute between others?
Pick a specific incident. Concentrate on your problem solving technique
and not the dispute you settled.

46. What position do you prefer on a team working on a project?
Be honest. If you are comfortable in different roles, point that out.

47. Describe your work ethic?
Emphasize benefits to the organization. Things like, determination to
get the job done and work hard but enjoy your work are good.

48. What has been your biggest professional disappointment?
Be sure that you refer to something that was beyond your control. Show
acceptance and no negative feelings.

49. Tell me about the most fun you have had on the job?
Talk about having fun by accomplishing something for the organization.

50. Do you have any questions for me?
Always have some questions prepared. Questions that show you will be an asset to the organization are good. "How soon will I be able to be productive?" and "What type of projects will I be able to assist on?" are examples.

Testing Dictionary

Written on 10:34 PM by MURALI KRISHNA

Testing Dictionary

Acceptance Testing
Formal testing conducted to enable a user, customer, or other authorized entity to determine whether to accept a system or component.

Actual Outcome
The behaviour actually produced when the object is tested under specified conditions.

Adding Value
Adding something that the customer wants that was not there before.

Ad hoc Testing
Testing carried out using no recognized test case design technique.

Alpha Testing
Simulated or actual operational testing at an in-house site not otherwise involved with the software developers.

Arc Testing
A test case design technique for a component in which test cases are designed to execute branch outcomes.

Backus-Naur Form
A metalanguage used to formally describe the syntax of a language.

Basic Block
A sequence of one or more consecutive, executable statements containing no branches.

Basis Test Set
A set of test cases derived from the code logic, which ensure that 100% branch coverage is achieved.

Bebugging
The process of intentionally adding known faults to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of faults remaining in the program.

Behaviour
The combination of input values and preconditions and the required response for a function of a system. The full specification of a function would normally comprise one or more behaviours.

Benchmarking
Comparing your product to the best competitors'.

Beta Testing
Operational testing at a site not otherwise involved with the software developers.

Big-bang Testing
Integration testing where no incremental testing takes place prior to all the system's components being combined to form the system.

Black Box Testing
Test case selection that is based on an analysis of the specification of the component without reference to its internal workings. Testing by looking only at the inputs and outputs, not at the insides of a program.
Sometimes also called "Requirements Based Testing". You don't need to be a programmer, you only need to know what the program is supposed to do and be able to tell whether an output is correct or not.

Bottom-up Testing
An approach to testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.

Boundary Value
An input value or output value which is on the boundary between equivalence classes, or an incremental distance either side of the boundary.

Boundary Value Analysis
A test case design technique for a component in which test cases are designed which include representatives of boundary values.

Boundary Value Coverage
The percentage of boundary values of the component's equivalence classes, which have been exercised by a test case suite.

Boundary Value Testing
A test case design technique for a component in which test cases are designed which include representatives of boundary values.

Branch
A conditional transfer of control from any statement to any other statement in a component, or an unconditional transfer of control from any statement to any other statement in the component except the next statement, or when a component has more than one entry point, a transfer of control to an entry point of the component.

Branch Condition
A condition within a decision.

Branch Condition Combination Coverage
The percentage of combinations of all branch condition outcomes in every decision that have been exercised by a test case suite.

Branch Condition Combination Testing
A test case design technique in which test cases are designed to execute combinations of branch condition outcomes.

Branch Condition Coverage
The percentage of branch condition outcomes in every decision that have been exercised by a test case suite.

Branch Condition Testing
A test case design technique in which test cases are designed to execute branch condition outcomes.

Branch Coverage
The percentage of branches that have been exercised by a test case suite.

Branch Outcome
The result of a decision (which therefore determines the control flow alternative taken).

Branch Point
A program point at which the control flow has two or more alternative routes.

Branch Testing
A test case design technique for a component in which test cases are designed to execute branch outcomes.

Bring to the Table
Refers to what each individual can contribute to a meeting, for example a design or brainstorming meeting.

Bug
A manifestation of an error in software. A fault, if encountered may cause a failure.

Bug Seeding
The process of intentionally adding known faults to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of faults remaining in the program.

C-use
A data use not in a condition.

Capture/Playback Tool
A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time.

Capture/Replay Tool
A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time.

CAST
Acronym for computer-aided software testing.

Cause-effect Graph
A graphical representation of inputs or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.

Cause-effect Graphing
A test case design technique in which test cases are designed by consideration of cause-effect graphs.

Certification
The process of confirming that a system or component complies with its specified requirements and is acceptable for operational use.

Chow's Coverage Metrics
The percentage of sequences of N-transitions that have been exercised by a test case suite.

Code Coverage
An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
If we have a set of tests that gives Code Coverage, it simply means that, if you run all the tests, every line of the code is executed at least once by some test.

Code-based Testing
Designing tests based on objectives derived from the implementation (e.g., tests that execute specific control flow paths or use specific data items).

Compatibility Testing
Testing whether the system is compatible with other systems with which it should communicate.

Complete Path Testing
A test case design technique in which the test case suite comprises all combinations of input values and preconditions for component variables.

Component
A minimal software item for which a separate specification is available.

Component Testing
The testing of individual software components.

Computation Data Use
A data use not in a condition. Also called C-use.

Concurrent (or Simultaneous) Engineering
Integrating the design, manufacturing, and test processes.

Condition
A Boolean expression containing no Boolean operators. For instance, A < B is a condition but A and B is not.

Condition Coverage
The percentage of branch condition outcomes in every decision that have been exercised by a test case suite.

Condition Outcome
The evaluation of a condition to TRUE or FALSE.

Conformance Criterion
Some method of judging whether or not the component's action on a particular specified input value conforms to the specification.

Conformance Testing
The process of testing that an implementation conforms to the specification on which it is based.

Continuous Improvement
The PDSA process of iteration which results in improving a product.

Control Flow
An abstract representation of all possible sequences of events in a program's execution.

Control Flow Graph
The diagrammatic representation of the possible alternative control flow paths through a component.

Control Flow Path
A sequence of executable statements of a component, from an entry point to an exit point.

Conversion Testing
Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

Correctness
The degree to which software conforms to its specification.

Coverage
The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test case suite.
A measure applied to a set of tests. The most important are called Code Coverage and Path Coverage. There are a number of intermediate types of coverage defined, moving up the ladder from Code Coverage (weak) to Path Coverage (strong, but hard to get). These definitions are called things like Decision/Condition Coverage, but they're seldom important in the real world. For the curious, they are covered in detail in Glenford Myers' book, The Art of Software Testing.

Coverage Item
An entity or property used as a basis for testing.

Customer Satisfaction
Meeting or exceeding a customer's expectations for a product or service.

Data Coupling
Data coupling is where one piece of code interacts with another by modifying a data object that the other code reads. Data coupling is normal in computer operations, where data is repeatedly modified until the desired result is obtained. However, unintentional data coupling is bad. It causes many hard-to-find bugs, including the "side effect" bugs that appear when existing systems are changed. It's the reason why you need to test all paths to do really tight testing. The best way to combat it is with Regression Testing, to make sure that a change didn't break something else.

Data Definition
An executable statement where a variable is assigned a value.

Data Definition C-use Coverage
The percentage of data definition C-use pairs in a component that are exercised by a test case suite.

Data Definition C-use Pair
A data definition and computation data use, where the data use uses the value defined in the data definition.

Data Definition P-use Coverage
The percentage of data definition P-use pairs in a component that are exercised by a test case suite.

Data Definition P-use Pair
A data definition and predicate data use, where the data use uses the value defined in the data definition.

Data Definition-use Coverage
The percentage of data definition-use pairs in a component that are exercised by a test case suite.

Data Definition-use Pair
A data definition and data use , where the data use uses the value defined in the data definition.

Data Definition-use Testing
A test case design technique for a component in which test cases are designed to execute data definition-use pairs.

Data Flow Coverage
Test coverage measure based on variable usage within the code. Examples are data definition-use coverage, data definition P-use coverage, data definition C-use coverage, etc.

Data Flow Testing
Testing in which test cases are designed based on variable usage within the code.

Data Use
An executable statement where the value of a variable is accessed.

Debugging
The process of finding and removing the causes of failures in software.

Decision
A program point at which the control flow has two or more alternative routes.

Decision Condition
A condition within a decision.

Decision Coverage
The percentage of decision outcomes that have been exercised by a test case suite.

Decision Outcome
The result of a decision (which therefore determines the control flow alternative taken).

Design
The creation of a specification from concepts.

Design-based Testing
Designing tests based on objectives derived from the architectural or detail design of the software (e.g., tests that execute specific invocation paths or probe the worst case behaviour of algorithms).

Desk Checking
The testing of software by the manual simulation of its execution.

Dirty Testing
Testing aimed at showing software does not work.

Documentation Testing
Testing concerned with the accuracy of documentation.

Domain
The set from which values are selected.

Domain Testing
A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.

Dynamic Analysis
The process of evaluating a system or component based upon its behaviour during execution.

Driver
A throwaway little module that calls something we need to test, because the real module that will be calling it isn't available.
For example, suppose module A needs module X to fire it up.
X isn't here yet, so we write a small driver to call A in its place, as sketched below.
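A minimal sketch along those lines; the module names and the add_tax() function are illustrative, not from the text:

```python
# "Module A": the code under test. In the real system, the missing module X
# would be the one calling it.
def add_tax(price):
    return round(price * 1.08, 2)

# Throwaway driver: stands in for module X, simply calls A and checks the result.
def driver():
    result = add_tax(100.00)
    assert result == 108.0
    print("A returned", result)

if __name__ == "__main__":
    driver()
```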

Emulator
A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.

Entry Point
The first executable statement within a component.

Equivalence Class
A portion of the component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification.
An Equivalence Class (EC) of input values is a group of values that all cause the same sequence of operations to occur.
In Black Box terms, they are all treated the same way according to the specs. Different input values within an Equivalence Class may give different answers, but the answers are produced by the same procedure.
In Glass Box terms, they all cause execution to go down the same path.

Equivalence Partition
A portion of the component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification.

Equivalence Partition Coverage
The percentage of equivalence classes generated for the component, which have been exercised by a test case suite.

Equivalence Partition Testing
A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.

Error
A human action that produces an incorrect result.

Error Guessing
A test case design technique where the experience of the tester is used to postulate what faults might occur, and to design tests specifically to expose them.

Error Seeding
The process of intentionally adding known faults to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of faults remaining in the program.

Executable Statement
A statement which, when compiled, is translated into object code, which will be executed procedurally when the program is running and may perform an action on program data.

Exercised
A program element is exercised by a test case when the input value causes the execution of that element, such as a statement, branch, or other structural element.

Exhaustive Testing
A test case design technique in which the test case suite comprises all combinations of input values and preconditions for component variables.

Exit Point
The last executable statement within a component.

Facility Testing
Test case selection that is based on an analysis of the specification of the component without reference to its internal workings.

Failure
Deviation of the software from its expected delivery or service.

Fault
A manifestation of an error in software. A fault, if encountered may cause a failure.

Feasible Path
A path for which there exists a set of input values and execution conditions, which causes it to be executed.

Feature Testing
Test case selection that is based on an analysis of the specification of the component without reference to its internal workings.

Flow Charting
Creating a 'map' of the steps in a process.

Functional Chunk
The fundamental unit of testing.
Its precise definition is "The smallest piece of code for which all the inputs and outputs are meaningful at the spec level." This means that we can test it Black Box, and design the tests before the code arrives without regard to how it was coded, and also tell whether the results it gives are correct.

Functional Specification
The document that describes in detail the characteristics of the product with regard to its intended capability.

Functional Test Case Design
Test case selection that is based on an analysis of the specification of the component without reference to its internal workings.

Glass Box Testing
Test case selection that is based on an analysis of the internal structure of the component.
Testing by looking only at the code.
Sometimes also called "Code Based Testing".
Obviously you need to be a programmer and you need to have the source code to do this.

Incremental Integration
A systematic way for putting the pieces of a system together one at a time, testing as each piece is added. We not only test that the new piece works, but we also test that it didn't break something else by running the RTS (Regression Test Set).

Incremental Testing
Integration testing where system components are integrated into the system one at a time until the entire system is integrated.

Independence
Separation of responsibilities, which ensures the accomplishment of objective evaluation.

Infeasible Path
A path, which cannot be exercised by any set of possible input values.

Input
A variable (whether stored within a component or outside it) that is read by the component.

Input Domain
The set of all possible inputs.

Input Value
An instance of an input.

Inspection
A group review quality improvement process for written material. It consists of two aspects: product improvement (of the document itself) and process improvement (of both document production and inspection).

Installability Testing
Testing concerned with the installation procedures for the system.

Instrumentation
The insertion of additional code into the program in order to collect information about program behaviour during program execution.

Instrumenter
A software tool used to carry out instrumentation.

Integration
The process of combining components into larger assemblies.

Integration Testing
Testing performed to expose faults in the interfaces and in the interaction between integrated components.

Interface Testing
Integration testing where the interfaces between system components are tested.

Isolation Testing
Component testing of individual components in isolation from surrounding components, with surrounding components being simulated by stubs.

LCSAJ
A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.

LCSAJ Coverage
The percentage of LCSAJs of a component, which are exercised by a test case suite.

LCSAJ Testing
A test case design technique for a component in which test cases are designed to execute LCSAJs.

Logic-coverage Testing
Test case selection that is based on an analysis of the internal structure of the component.

Logic-driven Testing
Test case selection that is based on an analysis of the internal structure of the component.

Maintainability Testing
Testing whether the system meets its specified objectives for maintainability.

Manufacturing
Creating a product from specifications.

Metrics
Ways to measure: e.g., time, cost, customer satisfaction, quality.

Modified Condition/Decision Coverage
The percentage of all branch condition outcomes that independently affect a decision outcome that have been exercised by a test case suite.

Modified Condition/Decision Testing
A test case design technique in which test cases are designed to execute branch condition outcomes that independently affect a decision outcome.

Multiple Condition Coverage
The percentage of combinations of all branch condition outcomes in every decision that have been exercised by a test case suite.

Mutation Analysis
A method to determine test case suite thoroughness by measuring the extent to which a test case suite can discriminate the program from slight variants (mutants) of the program. See also bebugging (error seeding).

N-switch Coverage
The percentage of sequences of N-transitions that have been exercised by a test case suite.

N-switch Testing
A form of state transition testing in which test cases are designed to execute all valid sequences of N-transitions.

N-transitions
A sequence of N+1 transitions.

Negative Testing
Testing aimed at showing software does not work.

Non-functional Requirements Testing
Testing of those requirements that do not relate to functionality. i.e. performance, usability, etc.

Operational Testing
Testing conducted to evaluate a system or component in its operational environment.

Oracle
A mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test.

Outcome
Actual outcome or predicted outcome. This is the outcome of a test.

Output
A variable (whether stored within a component or outside it) that is written to by the component.

Output Domain
The set of all possible outputs.

Output Value
An instance of an output.

P-use
A data use in a predicate.

Partition Testing
A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.

Path
A sequence of executable statements of a component, from an entry point to an exit point.

Path Coverage
The percentage of paths in a component exercised by a test case suite.
A set of tests that gives Path Coverage for some code if the set goes down each path at least once.
The difference between this and Code Coverage is that Path Coverage means not just "visiting" a line of code, but also includes how you got there and where you're going next. It therefore uncovers more bugs, especially those caused by Data Coupling.
However, it's impossible to get this level of coverage except perhaps for a tiny critical piece of code.
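
A minimal sketch of the difference, using a hypothetical two-decision function: two tests can execute every line (full Code Coverage) while still leaving two of the four paths untried:

```python
def classify(a, b):
    # decision 1
    if a > 0:
        sign = "positive"
    else:
        sign = "non-positive"
    # decision 2
    if b > 0:
        size = "large"
    else:
        size = "small"
    return sign, size

# These two tests execute every statement (full Code Coverage)...
assert classify(1, -1) == ("positive", "small")
assert classify(-1, 1) == ("non-positive", "large")

# ...but cover only 2 of the 4 paths. Path Coverage also needs:
assert classify(1, 1) == ("positive", "large")
assert classify(-1, -1) == ("non-positive", "small")
```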

Path Sensitizing
Choosing a set of input values to force the execution of a component to take a given path.

Path Testing
A test case design technique in which test cases are designed to execute paths of a component.

Performance Testing
Testing conducted to evaluate the compliance of a system or component with specified performance requirements.

Portability Testing
Testing aimed at demonstrating the software can be ported to specified hardware or software platforms.

Precondition
Environmental and state conditions which must be fulfilled before the component can be executed with a particular input value.

Predicate
A logical expression, which evaluates to TRUE or FALSE, normally to direct the execution path in code.

Predicate Data Use
A data use in a predicate.

Predicted Outcome
The behaviour predicted by the specification of an object under specified conditions.

Process
What is actually done to create a product.

Program Instrumenter
A software tool used to carry out instrumentation.

Progressive Testing
Testing of new features after regression testing of previous features.

Pseudo-random
A series which appears to be random but is in fact generated according to some prearranged sequence.

Quality Tools
Tools used to measure and observe every aspect of the creation of a product.

Recovery Testing
Testing aimed at verifying the system's ability to recover from varying degrees of failure.

Regression Testing
Retesting of a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
Re-running a set of tests that used to work to make sure that changes to the system didn't break anything. It's usually run after each set of maintenance or enhancement changes, but is also very useful for Incremental Integration of a system.

RTS (Regression Test Set)
The set of tests used for Regression Testing. It should be complete enough so that the system is defined to "work correctly" when this set of tests runs without error.

Requirements-based Testing
Designing tests based on objectives derived from requirements for the software component (e.g., tests that exercise specific functions or probe the non-functional constraints such as performance or security).

Result
Actual outcome or predicted outcome. This is the outcome of a test.

Review
A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users or other interested parties for comment or approval.

Security Testing
Testing whether the system meets its specified security objectives.

Serviceability Testing
Testing whether the system meets its specified objectives for maintainability.

Simple Subpath
A subpath of the control flow graph in which no program part is executed more than necessary.

Simulation
The representation of selected behavioural characteristics of one physical or abstract system by another system.

Simulator
A device, computer program or system used during software verification, which behaves or operates like a given system when provided with a set of controlled inputs.

Six-Sigma Quality
Meaning 99.99966% perfect; only 3.4 defects in a million.

Source Statement
An entity in a programming language, which is typically the smallest indivisible unit of execution.

SPC
Statistical process control; used for measuring the conformance of a product to specifications.

Specification
A description of a component's function in terms of its output values for specified input values under specified preconditions.

Specified Input
An input for which the specification predicts an outcome.

State Transition
A transition between two allowable states of a system or component.

State Transition Testing
A test case design technique in which test cases are designed to execute state transitions.

Statement
An entity in a programming language, which is typically the smallest indivisible unit of execution.

Statement Coverage
The percentage of executable statements in a component that have been exercised by a test case suite.

Statement Testing
A test case design technique for a component in which test cases are designed to execute statements.

Static Analysis
Analysis of a program carried out without executing the program.

Static Analyzer
A tool that carries out static analysis.

Static Testing
Testing of an object without execution on a computer.

Statistical Testing
A test case design technique in which a model is used of the statistical distribution of the input to construct representative test cases.

Storage Testing
Testing whether the system meets its specified storage objectives.

Stress Testing
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.

Structural Coverage
Coverage measures based on the internal structure of the component.

Structural Test Case Design
Test case selection that is based on an analysis of the internal structure of the component.

Structural Testing
Test case selection that is based on an analysis of the internal structure of the component.

Structured Basis Testing
A test case design technique in which test cases are derived from the code logic to achieve 100% branch coverage.

Structured Walkthrough
A review of requirements, designs or code characterized by the author of the object under review guiding the progression of the review.

Stub
A skeletal or special-purpose implementation of a software module, used to develop or test a component that calls or is otherwise dependent on it. A little throwaway module that can be called to make another module work (and hence be testable).
For example, if we want to test module A but it needs to call module B, which isn't available, we can use a quick little stub for B. It just answers "hello from b" or something similar; if asked to return a number it always returns the same number - like 100.
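
A minimal sketch along those lines; the names and canned values ("hello from b", 100) are only illustrative:

```python
# Quick throwaway stub for the missing module B: it always gives the same canned answers.
def stub_b_greeting():
    return "hello from b"

def stub_b_count():
    return 100          # if asked for a number, it always returns the same number

# Module A, the code under test, calls "B" through the stubs.
def module_a():
    return f"{stub_b_greeting()} ({stub_b_count()} items)"

assert module_a() == "hello from b (100 items)"
```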


Subpath
A sequence of executable statements within a component.

Symbolic Evaluation
A static analysis technique that derives a symbolic expression for program paths.

Syntax Testing
A test case design technique for a component or system in which test case design is based upon the syntax of the input.

System Testing
The process of testing an integrated system to verify that it meets specified requirements.

Technical Requirements Testing
Testing of those requirements that do not relate to functionality. i.e. performance, usability, etc.

Test
Testing the product for defects.

Test Automation
The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.

Test Case
A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

Test Case Design Technique
A method used to derive or select test cases.

Test Case Suite
A collection of one or more test cases for the software under test.

Test Comparator
A test tool that compares the actual outputs produced by the software under test with the expected outputs for that test case.

Test Completion Criterion
A criterion for determining when planned testing is complete, defined in terms of a test measurement technique.

Test Coverage
The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test case suite.

Test Driver
A program or test tool used to execute software against a test case suite.

Test Environment
A description of the hardware and software environment in which the tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.

Test Execution
The processing of a test case suite by the software under test, producing an outcome.

Test Execution Technique
The method used to perform the actual test execution, e.g. manual, capture/playback tool, etc.

Test Generator
A program that generates test cases in accordance to a specified strategy or heuristic.

Test Harness
A testing tool that comprises a test driver and a test comparator.

Test Measurement Technique
A method used to measure test coverage items.

Test Outcome
Actual outcome or predicted outcome. This is the outcome of a test.

Test Plan
A record of the test planning process detailing the degree of tester independence, the test environment, the test case design techniques and test measurement techniques to be used, and the rationale for their choice.

Test Procedure
A document providing detailed instructions for the execution of one or more test cases.

Test Records
For each test, an unambiguous record of the identities and versions of the component under test, the test specification, and actual outcome.

Test Script
Commonly used to refer to the automated test procedure used with a test harness.

Test Specification
For each test case, the coverage item, the initial state of the software under test, the input, and the predicted outcome.

Test Target
A set of test completion criteria.

Testing
The process of exercising software to verify that it satisfies specified requirements and to detect errors.

Thread Testing
A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.

Top-down Testing
An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

Total Quality Management (TQM)
Controlling everything about a process.

Unit Testing
The testing of individual software components.
A Unit, as we use it in these techniques, is a very small piece of code with only a few inputs and outputs. The key thing is that these inputs and outputs may be implementation details (counters, array indices, pointers) and are not necessarily real world objects that are described in the specs. If all the inputs and outputs are real-world things, then we have a special kind of Unit, the Functional Chunk, which is ideal for testing. Most ordinary Units are tested by the programmer who wrote them, although a much better practice to have them subsequently tested by a different programmer.
Caution: There is no agreement out there about what a "unit" or "module" or subsystem" etc. really is. I've heard 50,000 line COBOL programs described as "units". So the Unit defined here is, in fact, a definition.

Usability Testing
Testing the ease with which users can learn and use a product.

Validation
Determination of the correctness of the products of software development with respect to the user needs and requirements.

Verification
The process of evaluating a system or component to determine whether the products of the given development phase satisfy the conditions imposed at the start of that phase.

Volume Testing
Testing where the system is subjected to large volumes of data.

Walkthrough
A review of requirements, designs or code characterized by the author of the object under review guiding the progression of the review.

White Box Testing
Test case selection that is based on an analysis of the internal structure of the component.

Testing Tools FAQs

Written on 10:29 PM by MURALI KRISHNA

Testing Tools FAQs

1. What are the entry and exit points of a test plan?
A. The entry point of a test plan is the functional description to be tested; the required test procedure is the exit point.
2. What is the use of "swing time" in LoadRunner?
A. The elapsed time indicates the swing time (used to determine the performance of the application under load).
3. What is internationalization?
A. Internationalization means designing an application so that it can be adapted to different languages and regions without code changes.
4. What are the different types of functions in WinRunner?
A. Analog functions, Context Sensitive functions, Customization functions, Miscellaneous and Standard functions.
5. What types of testing are involved in black-box testing?
A. Acceptance Testing, Control-flow, Data flow and integrity, capability, stress testing, user-interface, regression, performance, potential bugs, beta test, release test, utilities.
6. What is integration testing?
A. A test involving more than one module is called integration testing.
7. What is unit testing?
A. A test at module level is called unit testing.
8.What are the documents prepared for ISO?
A. Quality manual, quality procedures, work instructions and formats.
9. What is the difference between Load & Stress testing?
A. Load testing is used to evaluate behaviour under an incrementally increasing load, while stress testing is used to find bugs when the application is working at the maximum volume of resources.
10. What are CMM levels? Explain? Key process areas? (V & III)
A. There are 5 levels: Initial, Repeatable, Defined, Managed and Optimizing.
11. What is memory leakage?
A. Memory that is allocated during execution but never de-allocated (freed).
12. What are external functions?
A. External functions are functions defined in one test and used in other tests.
13. What are data table functions?
A. ddt_open(), ddt_update_from_db(), ddt_save(), ddt_get_row_count(), ddt_set_row(), ddt_val(), ddt_close().
14. What are the files using in winrunner?
A. Script files, checklist files, expected values files, data table files, guimap files, start up script files.
15. What is exception handing?
A. To remove runtime testing problems, we can use exception handling.
16. What are exceptions in winrunner?
A. TSL, Object, Pop-up and URL
17. What is GUI map file? Explain?
A. GUI map file consists of “logical names” and ‘physical descriptions’ of objects & windows, which are recognized for each object/window during recording and running.
18. What are the add- ins in winrunner?
A. Web Test, ActiveX, Power Builder and Visual Basic.
19. What is visual recorder in Load runner?




24. How to open GUI map file?
A. Using GUI map editor.
25. How can it be known when to stop testing?
A. Common criteria: all planned test cases have been executed, coverage targets are met, the defect discovery rate has fallen below an agreed level, or the deadline/budget has been reached.

26. What are the parameters in LoadRunner?
A. Elapsed time, transactions, response time, hits per second and throughput.
27. How do you read text from an application?
A. Using the "get text" functions (and "select text" for web applications).
28. How can you map custom classes to standard classes?
A. By using Virtual Object Wizard.
29. What is s/w testing?
A. Verification and Validation of an Application.
30. Why do you need a test plan? What is a test plan?
A. A test plan specifies the process and scheduling of testing for an application.
31. What is big bang testing? Or informal testing?
A. Big bang (informal) testing means testing the total system in one go, rather than incrementally.
32. What is formal testing?
A. Formal (or incremental) testing means testing units one at a time and integrating them step by step.
33. What is mutation testing?
A. Making small changes to the code to check whether the existing test cases detect them.
34. What is regression testing?
A. Regression testing is the re-execution of test cases on a new or modified version of the application.
35. What is stress, security testing?
A. Security testing means testing whether the application handles authorization and encryption/decryption cases correctly; stress testing means testing the application at or beyond its maximum resource levels.
36. What is a test case?
A. A test case is a small, verifiable check of one piece of functionality.
37. What is the difference between a function and a test case?
A. A function is what the application is supposed to do; test cases are derived from that functionality to verify it.
38. Contents of a test plan?
A. Test team, test scheduling, test factors, approach, functionality list and test cases.
39. Difference between the waterfall model and the iterative model?
A. Waterfall is a single-thread (one-pass) process whereas iterative is a multiple-thread (repeated-pass) process.
40. Where are private and public functions used in a script?
A. Private functions apply only to the current test; public functions can be used anywhere.
41. What is system testing?
A. A Test at system level.
42. What are the major bugs you find in the application you have tested?
A. Load test errors.
43. What are the minor bugs you find in the application you have tested?
A. User interface errors.
44. What is the life cycle of bug reporting?
A. Bug detection => bug reproduction => report creation => bug submission => bug fixing => re-testing/closure.
45. On what basis the test plan is prepared?
A. Test manager develops Test Plan based on development plan(SRS and FRS)
46. What you will do after preparing test cases?
A. Test procedure preparation after selection of test case.
47. What is the formula used in white-box testing?
A. In white-box testing the tester concentrates on how the program is implemented (its internal structure and logic). The commonly used formula is cyclomatic complexity, V(G) = E - N + 2 (edges minus nodes plus two), which gives the number of independent paths that must be covered.
48. When test cases will be prepared, after coding or before coding?
A. Test cases are prepared at design level.
49. What are the documents you have in testing environment?
A. Testing policy, testing strategy, testing methodology, test plan, test cases, test scenarios, test procedures, test scripts, logs and bug reports.
50. What makes a good s/w quality assurance engineer?
A. A QA engineer produces suggestions to testers to increase the strength of the testing process.
51. What is testing life cycle? What is bug tracking life cycle?
A. Life cycle means testing at all development stages.
52. What if the s/w has so many bugs that it cannot really be tested?
A. Bebugging (not debugging) is a process of deliberately seeding known defects into the software in order to estimate how many real defects remain.
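A rough illustration of the seeding idea, assuming the standard estimation formula (the formula and all numbers below are illustrative and are not stated in the FAQ):

# Defect seeding ("bebugging") estimation sketch - assumed formula, made-up numbers.
seeded_planted = 20      # defects deliberately inserted before testing
seeded_found = 15        # seeded defects that testing detected
real_found = 60          # genuine defects that testing detected

# Testing found 15/20 = 75% of the seeded defects, so assume it also found
# roughly 75% of the real defects.
estimated_total_real = real_found * seeded_planted / seeded_found   # 80.0
estimated_remaining = estimated_total_real - real_found             # 20.0
print(estimated_total_real, estimated_remaining)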
53. How does a Client/Server environment effect testing?
A. In Client/Server (C/S) testing, testers follow these steps:
i. Assess readiness (integration of the C/S process)
ii. Assess key process.
iii. Perform testing.
54. How can WWW be tested?
A. To test web applications, testers follow these steps:
i. Select Web based risks
ii. Select Web based tests
iii. Select Web based tools
iv. Perform tests
55. What is s/w quality assurance?
A. QA is about ensuring customer satisfaction with the application, covering both its features and its flaws.
56. Can you tell me why s/w has bugs?
A. Software is developed by different people at different levels; miscommunication, changing requirements, programming errors and time pressure all introduce bugs.
57. What are verification, validation and walkthrough in the s/w development process?
A. Verification: checking the application against the corresponding development documents.
Validation: checking whether the application's functionality matches the customer's expectations.
Walkthrough: a review of the total application functionality.
58. What are 5 common problems in the s/w development process?
A. i. Poor requirements
ii. Improper scheduling
iii. Testing with the wrong criteria
iv. Miscommunication
v. Changing requirements (feature creep)
59. What is s/w quality?
A. Customer satisfaction with the delivered features, relative to the number of flaws.
60. Will automated testing tools make testing easier?
A. Automated testing reduces the testing effort when the application has:
i. Many external interfaces
ii. Several types of external interfaces
iii. Frequent releases
iv. Sufficient maturity, etc.
61. What are the silk test tools to manage, execute and interprets your scripts?
A. In silk testing, we can follow below test process:
i. Plan inclusion
ii. Recapturing
iii. Recording(script)
iv. Run the script
v. Analyze defects
vi. Manual report
62. What is smoke testing? Out time editor, the result processor, the debugger?
A. Smoke testing is an initial, high-level test of a build's major functionality to indicate whether the build is stable enough for further testing; a failure is an early indication of a bug.
63. How do you set the parameters when testing a Client/Server application?
A. In C/S load testing, we can rely on elapsed time, transaction and response times.
64. What is recovery system? Can you give some functions that override some of the default behaviour?
A. Default script enter, Default script exit.
65. Why does Silk Test see objects as custom windows?
A. Silk Test sees such objects as user-defined (custom) because the tool is being used to test a Java application whose controls it does not recognize as standard classes.
66. Can you test an application in silk test that is running on another system?
A. Yes

67. Can you test multiple different applications simultaneously?
A. Yes. We can concentrate on more test cases simultaneously because each application test has different test cases.
68. What is the spawn statement? Why is it used?
A. The spawn statement runs a block of statements in a separate, parallel thread of execution (rendezvous is then used to synchronize the threads), for example when driving more than one application or machine at the same time.
69. What are application state, base state and call state?
A. Application State: the state the application is in at a given point.
Base State: the known, stable starting state to which the recovery system returns the application before each test case.
Call State: the state of the application at the point a function or test case is called.
70. What is the difference between WinRunner and Silk Test?
A. WinRunner uses procedural TSL test scripts, whereas Silk Test uses object-oriented 4Test scripts; they also differ in how they handle multiple processes versus a single thread.
71. How does winrunner identifies GUI objects?
A. Logical names and physical descriptions.
72. How do you program tests with TSL?
A. By editing the recorded script and adding TSL programming constructs (conditions, loops, user-defined functions) so the test runs at the executable level.
73. What is the tl_step function used for?
A. To insert a user-defined pass/fail status and message into the test log.
74. What is the difference between the message functions pause() and report_msg()?
A. pause() displays a message box and suspends test execution until it is dismissed; report_msg() writes a message to the test results.
75. What is a batch test?
A. Execution of a group of tests together, unattended, as a single batch.
76. What is DON in silk?
A. In silk test, silk agent is Don, It displays at status bar.
77. What is the base class for all the classes in Silk Test?
A. AnyWin is the base class from which the GUI window classes in Silk Test are derived.
78. What is the use of style bits in Custom class in silk?
A. The class of object in verify window is called style of custom class.
79. What is extension in silk?
A. .t is the extension of test case, .pln for test plan.
80. What does the .inc file consist of in Silk?
A. .inc is the extension of an include file; the test frame (window declarations) is stored in an include file.
81. how do you create a user defined class in silk
A. It allows user defined classes in script.
82. What are agents in silk?
A. Capturing agents, recording and running agents.
83. How do you refer to your test case in the .t file through the test plan?
A. .pln is the extension of the test plan, and it references the .t file containing the test case.
84. Will TSL supports function overloading & operator overloading?
A. TSL allows function overloading, but not operator overloading.
85. If you purchase software, what tests do you perform?
A. Purchased software is called off-the-shelf software. To test this application, we can perform functionality testing.
86. Did you see user defined TSL in your project?
A. Yes
87. How many functions did you write in your project?
A. It depends on the requirement; usually more than one user-defined function is written.
88. What are the key features a bug tracking database must offer?
A. A bug tracking database should provide storage, retrieval and maintenance of defect reports and related test documentation.
89. Write a program in TSL to get data from data table and feed them your edited fields in the application?
A. Data driven test(ddt).
90. What is negative testing?
A. A test designed around fail criteria, i.e. checking that the application properly rejects invalid input or handles invalid conditions.
91. What is change management process?
A. Change management process is used to test the changes in existing application, which is in maintenance stage.
92. What is the configuration management process?
A. The configuration management process is used to identify, control and track the configuration items (code, documents, environments) of the application as it changes.
93. What is version control?
A. Version control is the tracking and management of changes between versions of the software and its artifacts, so that each new version's maturity can be compared with the previous one.
94. What is defect tracking?
95. What is security testing?
A. Checking whether the application follows the correct authorization and encryption/decryption procedures.
96. What is good code?
A. Code that meets customer requirements and delivers more functionality with fewer statements.
97. What is digital signature test?
A. By bitmap testing.
98. What is sort testing?
A. Used to test the sorting techniques/features in your application.
99. What is memory leakage testing?
A. Stress testing identifies memory leakages.
100. What is data driven automation?
A. To run existing test case with different input values, we can use Data driven test.
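The FAQ above refers to WinRunner's ddt_* functions; as a language-neutral illustration of the same data-driven idea, here is a minimal sketch using pytest's parametrize (pytest and the add() function are assumptions made for illustration, not part of the tools discussed):

import pytest

def add(a, b):            # hypothetical function under test
    return a + b

# One test case, driven by several rows of input data.
@pytest.mark.parametrize("a, b, expected", [
    (1, 2, 3),
    (0, 0, 0),
    (-1, 1, 0),
])
def test_add(a, b, expected):
    assert add(a, b) == expected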
101. What is compatibility testing?
A. Compatibility testing checks whether the application works correctly in different environments (operating systems, browsers, hardware) and with the other systems it must interface with.
102. What is defect removal efficiency? If A is the no. of bugs found in the alpha test and B is the no. of bugs found in the beta test, what is the ratio?
A. Defect removal efficiency (DRE) is defined by the formula:
DRE = A / (A + B), where A is the number of bugs found in the earlier test
and B is the number of bugs found in the later test.
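A worked example of the formula with made-up numbers:

# Defect removal efficiency for illustrative values.
A = 90            # bugs found in the earlier (e.g. alpha / pre-release) test
B = 10            # bugs found in the later (e.g. beta / post-release) test
dre = A / (A + B)
print(dre)        # 0.9, i.e. 90% of the known defects were removed in the earlier stage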
103. How will you choose a tool for test automation? How will you find out whether the tool works well with your existing system?
A. To select a tool, we can consider:
i. The scripting style of the tool
ii. Ease of updating scripts
iii. Reusability of scripts
iv. Readability of scripts
v. Batch execution support
104. How can data caching have a negative effect on load testing results?
A. If the server answers many requests from cache, the measured response times look better than they would be for first-time (uncached) requests, so the results are misleadingly optimistic; memory shortages and buffers that are never cleared can likewise distort results.
105. What are the benefits of creating multiple actions within any virtual user script?
A. We can use more than one action to measure the performance of each operation separately within a single test.
106. How do you scope, organize and execute your project?
A. According to standards and customer requirements.
107. What sort of things would you put down in your bug report?
A. Bug severity/priority or main factors to list out defects.
108. Should we test every possible combination/scenario for a program?
A. Exhaustive testing is usually not feasible; the tester tries to cover the most important combinations, prioritized by risk.
109. What metrics do you feel are important to publish in an organization?
A. In an organization, we can use LOC (lines of code) related and function point related metrics.
110. What is your worst experience in project?
A. Finding bugs, reproducing bugs is important in bug tracking, but all bugs are not reproducible.
111. What is your experience in code analyzers?
A. Pseudo code analyzers are used to estimate the logic of program without execution.

112. How are you involved in the bug-fix cycle between development and QA?
A. The tester works in the middle, between QA and the developers, taking each defect through the bug tracking stages.
113. How do you know your code has met specification when there are no specifications?
A. A tester normally identifies the functionality of a program from its specifications. If there is no specification, he/she can rely on
i. the expected functionality estimated by acting as the customer,
ii. the previous version, and
iii. direct communication with the customer.
114. What type of documents would you need for QA/QC Testing?
A. Testing policy=>strategy=>methodology=>…etc.
115. How you participated in integration testing?
A. Participate in integration testing with compound test cases of modules.
116. How would you ensure 100% coverage of testing?
A. Testers prepare test cases related to all functional items to ensure 100% of testing coverage.
117. What are basic and core practices for a QA specialist?
A. QA Analyst identifies all functional items and give possible cases to test.
118. What are the basic elements in a defect report?
A. A defect report consists of the following (a small illustrative sketch of such a report as a data structure follows this list):
i. program to be tested
ii. Tester name
iii. Date and time
iv. Severity
v. Summary
vi. Reason(optional)
vii. Priority
viii. Assigned to etc…
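A minimal sketch of such a report as a data structure; the field names and sample values are illustrative only, not a prescribed format:

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DefectReport:
    # Fields mirror the elements listed above; names are illustrative.
    program: str
    tester: str
    summary: str
    severity: str            # e.g. "Critical", "Major", "Minor"
    priority: str            # e.g. "High", "Medium", "Low"
    assigned_to: str = ""
    reason: str = ""         # optional, as noted above
    reported_at: datetime = field(default_factory=datetime.now)

report = DefectReport(
    program="Deposit creation screen",
    tester="A. Tester",
    summary="Maturity date accepts a value earlier than the start date",
    severity="Major",
    priority="High",
)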
119. How do you prioritize testing tasks within a project?
A. The priority is decided based on the severity of the defect and the importance of the functionality covered by the test case.
120. Do you know of metrics that help you estimate the size of the testing effort?
A. We can use benchmark testing to develop such metrics.
121. Discuss economics of automation and role of metrics in testing?
A. Economics of automation depends on no.of external interfaces, types of interface and maturity in application.
122. What methodologies do you need to develop test cases?
A. To develop test cases, we can use functionality list and standards of the organization.
123. Difference between test strategy and test plan?
A. Test strategy specifies the mapping between the factors and development stages. Test plan specifies actual process and schedule of testing of that project.
124. If you do stress testing, what conclusions can you arrive at?
A. Whether the application behaves correctly when run at the maximum levels of resource usage.
125. What is the difference between CMM and CMMI?
A. CMMI is CMM Integrated: it merges the separate CMM models into a single integrated model.
126. What does compilation mean w.r.t. LoadRunner?
A. From the LoadRunner point of view, compilation means that the Vuser script has correct syntax and semantics.
127. What is s/w defect life cycle?
128. What is equivalence partitioning?
A. Equivalence partitioning divides the input domain into classes of values that the application should treat in the same way, so that one test case per class is enough to cover that class.
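As a small illustration (the "age" field, its valid range and the is_valid_age function are all hypothetical):

# Illustrative partitions for a hypothetical "age" field that must accept 18-60.
# Values within one partition should behave the same way, so one value per partition is enough.
partitions = {
    "below range (invalid)": 10,     # any value < 18
    "within range (valid)": 35,      # any value 18-60
    "above range (invalid)": 75,     # any value > 60
}

def is_valid_age(age):               # hypothetical validation rule under test
    return 18 <= age <= 60

for name, value in partitions.items():
    print(name, value, is_valid_age(value))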
129. What type of scripting techniques for test automation do you know?
A. C-oriented TSL, VB-oriented SQA suite and Java-oriented silk test.
130. What criteria would you use to select web transactions for load testing?
A. Web server related criteria, e.g. the e-business Vuser type in LoadRunner.
131. Explain some techniques for developing s/w components w.r.t testability?
A. V-model
132. Describe components of a typical test plan, such as tools for interactive products and database products, as well as cause-and-effect graphs and data flow diagrams?
A. Based on the DFD, testers work out an approach for testing each factor.
133. When have you had to focus on data integrity?
A. In integration testing we concentrate on data integrity.
134. How do you import a DLL file in a TSL script?
A. By using load_dll() (or a compiled module) and declaring the DLL's functions as extern before calling them.
135. How to write virtual API scripts in Test Director?
A. Test Director provides a facility of launching option connected to functional testing tool, which creates test script.
136. What is requirement phase testing?
A. Walk through, Reviews and Inspections.
137. What is Design phase testing?
A. Reviews, Inspections and prototypes.
138. What is program phase testing?
A. White-box testing/structural.
139. What is back-end testing? How is it done?
A. Database testing is called back-end testing; it is done by running queries directly against the database to verify the data created or modified through the front end.
140. Why would you follow the waterfall model?
A. When there are no time and cost restrictions, we can choose the waterfall model to develop an application.
141. When to use iterative model, Spiral model, RAD model?
142. What are the different bug-tracking tools?
A. Test Director, Test manager etc.

Test Execution Process

Written on 10:27 PM by MURALI KRISHNA

Test Execution Process

The preparation to test the application is now over. The test team should next plan the execution of the test on the application. In this section, we will see how test execution is performed.

Stages of Testing

Three passes
Tests on the application are done in stages. Test execution takes place in three passes, or sometimes four, depending on the state of the application. They are:

Pre IST or Pass 0: This is done to check the health of the system before the start of the test process. This stage may not be applicable to most test processes. Free-form testing is adopted in this stage.

Comprehensive or Pass 1: All the test scripts developed for testing are executed. In some cases the application may not have certain module(s) ready for test; these will be covered comprehensively in the next pass. The testing here should cover not only all test cases but also the business cycles defined in the application.

Discrepancy or Pass 2: All test scripts that resulted in a defect during the comprehensive pass should be executed again. In other words, all defects that have been fixed should be retested. Function points that may be affected by the defect should also be taken up for testing. Automated test scripts captured during pass one are used here. This type of testing is called regression testing. Test scripts for defects that are not yet fixed are executed only after the defects are fixed.

Sanity or Pass 3: This is the final round in the test process. It is done either at the client's site or at the same organization, depending on the strategy adopted, in order to check whether the system is sane enough for the next stage (UAT or production, as the case may be) in an isolated environment. Ideally, the defects fixed in the previous pass are re-checked, and free-form testing is conducted to ensure the integrity of the system.

Test Preparation Process

Written on 10:23 PM by MURALI KRISHNA

Test Preparation Process

Baseline Documents
Construction of an application and testing are done using certain documents. These documents are written in sequence, each of it derived from the previous document.
Business Requirement
This document describes the users' needs for the application. It is produced over a period of time, going through various levels of requirements. It should also portray the functionality that is technically feasible within the stipulated time frames for delivery of the application.
As this contains user-perspective requirements, the User Acceptance Test is based on this document.
How to read a Business Requirement?
In case of the Integrated Test Process, this document is used to understand the user requirements and find the gaps between the User Requirement and Functional Specification.
User Acceptance Test team should break the business requirement document into modules depending on how the user will use the application. While reading the document, test team should put themselves as end users of the application. This document would serve as a base for UAT test preparation.
Functional Specification
This document describes the functional needs, design of the flow and user maintained parameters. These are primarily derived from Business Requirement document, which specifies the client's business needs.
The proposed application should adhere to the specifications specified in this document. This is used henceforth to develop further documents for software construction and validation and verification of the software.
In order to achieve synchronization between the software construction and testing processes, the Functional Specification (FS) serves as the base document.
How to read a Functional Specification?
The testing process begins by first understanding the functional specifications. The FS is normally divided into modules. The tester should understand the entire functionality that is proposed in the document by reading it thoroughly.
It is natural for a tester at this point to get confused on the total flow and functionality. In order to overcome these, it is advisable for the tester to read the document multiple times, seeking clarifications then and there until clarity is achieved.
Testers are then given a module or multiple modules for validation and verification. These modules then become the tester's responsibility.
The tester should then begin to acquire an in-depth knowledge of their respective modules. In the process, these modules should be split into segments such as field-level validations, module rules, business rules, etc. To do this precisely, the tester should interpret each module's importance and its role within the application.
A high level understanding of the data requirements for respective modules is also expected from the tester at this point.
Interaction with test lead at this juncture is crucial to draw a testing approach, like an end-to-end test coverage or individual test. (Explained later in the document)
Tester's Reading Perspective
Functional specification is sometimes written assuming some level of knowledge of the Testers and constructors. We can categorize the explanations by
Explicit Rules: Functionality expressed as conditions clearly in writing, in the document.
Example
Date of a particular field should be system date
Implicit Rules: Functionality that is implied based on what is expressed as a specification/condition or requirement of a user.
Example
FS would mention the following for a deposit creation
Start Date Field: Should be = or > than the system date
Maturity Date Field: Should be = or > than the system date
Under this condition, the implied specification derived is that the start date should not be equal to the maturity date.
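As a small illustration of how a tester might encode both the explicit and the implicit rule above (the validate_deposit function and the dates are hypothetical, for illustration only):

from datetime import date

def validate_deposit(start, maturity, today=None):
    # Hypothetical check combining the explicit and implicit rules above.
    today = today or date.today()
    assert start >= today, "Explicit rule: start date must be = or > system date"
    assert maturity >= today, "Explicit rule: maturity date must be = or > system date"
    assert maturity != start, "Implicit rule: maturity date must not equal the start date"

today = date(2024, 1, 1)
validate_deposit(date(2024, 1, 2), date(2024, 2, 1), today)    # passes all rules
# validate_deposit(date(2024, 1, 2), date(2024, 1, 2), today)  # would fail the implicit rule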
The tester must also bear in mind, the test type i.e. Integrated System Testing (IST) or User Acceptance Testing (UAT). Based on this, he should orient his testing approach.
Design Specification
This document is prepared based on the functional specification. It contains the system architecture, table structures and program specifications. This is ideally prepared and used by the construction team. The Test Team should also have a detailed understanding of the design specification in order to understand the system architecture.
System Specification
This document is a combination of the functional specification and the design specification. It is used in the case of small applications or an enhancement to an application; in such situations it may not be advisable to make two documents.
Prototype
This is a look-and-feel representation of the application that is proposed. It basically shows the placement of the fields, modules and the generic flow of the application. The main objective of the prototype is to demonstrate the understanding of the application to the users and obtain their buy-in before actual design and construction begin.
The development team also uses the prototype as a guide to build the application. This is usually done using HTML or MS PowerPoint with user interaction facility.
Scenarios in Prototype
The flow and positioning of the fields and modules are projected using several possible business scenarios derived from the application functionality.
Testers should not expect all possible scenarios to be covered in the prototype.
Flow of Prototype
The flow and positioning are derived from the initial documentation of the project. A project is normally dynamic during its initial stages, and hence the tester should bear in mind the changes to the specification, if any, while using the prototype to develop test conditions.
It is a value addition to the project when tester can identify mismatches between the specifications and prototype, as the application can be rectified in the initial stages itself.

Different Types of Testing

Written on 3:10 AM by MURALI KRISHNA

Performance testing:
1. Performance testing is designed to test the run-time performance of software within the context of an integrated system. It is not until all system elements are fully integrated and certified as free of defects that the true performance of a system can be ascertained.
2. Performance tests are often coupled with stress testing and often require both hardware and software instrumentation. That is, it is necessary to measure resource utilization in an exacting fashion. External instrumentation can monitor execution intervals and log events. By instrumenting the system, the tester can uncover situations that lead to degradation and possible system failure.

Security testing:If your site requires firewalls, encryption, user authentication, financial transactions, or access to databases with sensitive data, you may need to test these and also test your site's overall protection against unauthorized internal or external access.

Exploratory Testing:Often taken to mean a creative, internal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.

Benefits Realization tests: With the increased focus on the value of Business returns obtained from investments in information technology, this type of test or analysis is becoming more critical. The benefits realization test is a test or analysis conducted after an application is moved into production in order to determine whether the application is likely to deliver the original projected benefits. The analysis is usually conducted by the business user or client group who requested the project and results are reported back to executive management.

Mutation Testing:Mutation testing is a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
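A tiny sketch of the idea; the function, the seeded mutation and the test data below are all made up for illustration:

# Two versions of the same hypothetical function: the original and a "mutant"
# in which >= has been deliberately changed to > (a boundary mutation).
def is_adult(age):
    return age >= 18

def is_adult_mutant(age):
    return age > 18          # seeded bug

test_data = [(17, False), (18, True), (30, True)]

def run_tests(fn):
    return all(fn(age) == expected for age, expected in test_data)

print(run_tests(is_adult))          # True  - the original passes
print(run_tests(is_adult_mutant))   # False - the test data "kills" the mutant, so it is useful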

Sanity testing:Typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.

Build Acceptance Tests: Build Acceptance Tests should take less than 2-3 hours to complete (15 minutes is typical). These test cases simply ensure that the application can be built and installed successfully. Other related test cases ensure that Testing received the proper Development Release Document plus other build related information (drop point, etc.). The objective is to determine if further testing is possible. If any Level 1 test case fails, the build is returned to developers un-tested.

Smoke Tests : Smoke Tests should be automated and take less than 2-3 hours (20 minutes is typical). These tests cases verify the major functionality a high level. The objective is to determine if further testing is possible. These test cases should emphasize breadth more than depth. All components should be touched, and every major feature should be tested briefly by the Smoke Test. If any Level 2 test case fails, the build is returned to developers un-tested.

Bug Regression Testing :Every bug that was “Open” during the previous build, but marked as “Fixed, Needs Re-Testing” for the current build under test, will need to be regressed, or re-tested. Once the smoke test is completed, all resolved bugs need to be regressed. It should take between 5 minutes to 1 hour to regress most bugs.

Database Testing: Database testing is done manually in real time; it checks the data flow between the front end and the back end, observing whether operations performed on the front end are reflected in the back end or not. The approach is as follows:
While adding a record through the front end, check in the back end whether the record has actually been added; do the same for delete and update. Other database testing includes checking for mandatory fields, checking the constraints and rules applied on the tables, and sometimes checking stored procedures using a SQL query analyzer.
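A minimal sketch of this kind of back-end check using Python's built-in sqlite3 module; the table, the add_customer helper and the data are assumptions made for illustration, not part of any application described above:

import sqlite3

# Simulate the "front end" adding a record, then verify it in the "back end" with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

def add_customer(name):                       # stands in for the front-end operation
    conn.execute("INSERT INTO customers (name) VALUES (?)", (name,))
    conn.commit()

add_customer("Alice")

# Back-end verification: the inserted row exists and the mandatory field is populated.
row = conn.execute("SELECT name FROM customers WHERE name = ?", ("Alice",)).fetchone()
assert row is not None and row[0] == "Alice"
print("record verified in back end")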


Functional Testing (or) Business Functional Testing:
All the functions in the application should be tested against the requirements document to ensure that the product conforms to what was specified (i.e. that it meets the functional requirements).
This verifies that the crucial business functions are working in the application. Business functions are generally defined in the requirements document. Each business function has certain rules which cannot be broken, whether they apply to the user-interface behaviour or to the data behind the application; both levels need to be verified. Business functions may span several windows (or several menu options), so simply testing that all windows and menus can be used is not enough to verify the business functions. You must verify the business functions as discrete units of your testing.

Functional testing (as a step-by-step procedure; a small sketch of the test-case derivation follows this list):
Study the SRS
Identify the unit functions
For each unit function:
Take each input to the function
Identify its equivalence classes
Form test cases
Form test cases for boundary values
Form test cases for error guessing
Form a unit function vs. test cases cross-reference matrix
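A small sketch of how test cases might be derived for one unit function using equivalence classes, boundary values and error guessing; the withdrawal-amount rule and its limits are hypothetical, chosen only for illustration:

# Illustrative test-case derivation for a hypothetical rule:
# a withdrawal amount must be between 100 and 10000.
LOWER, UPPER = 100, 10000

equivalence_classes = [50, 5000, 20000]            # below, inside and above the valid range
boundary_values = [LOWER - 1, LOWER, LOWER + 1,
                   UPPER - 1, UPPER, UPPER + 1]
error_guessing = [0, -100, None]                   # "likely to break it" guesses

def is_valid_amount(amount):                       # hypothetical rule under test
    return amount is not None and LOWER <= amount <= UPPER

for value in equivalence_classes + boundary_values + error_guessing:
    print(value, is_valid_amount(value))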

User Interface Testing (or) Structural Testing: This verifies whether all the objects meet the user-interface design specifications. It examines the spelling of button text, window titles and label text, checks for consistency or duplication of accelerator-key letters, and examines the positions and alignments of window objects.


Volume Testing:
Testing the applications with voluminous amount of data and see whether the application produces the anticipated results. (Boundary value analysis).

Stress Testing: Testing the applications response when there is a scarcity for system resources.

Load Testing: It verifies the performance of the server under stress of many clients requesting data at the same time.

Installation testing: The tester should install the system to determine whether the installation process is viable or not, based on the installation guide.

Configuration Testing: The system should be tested to determine whether it works correctly with the appropriate software and hardware configurations.

Compatibility Testing: The system should be tested to determine whether it is compatible with other systems (applications) that it needs to interface with.

Documentation Testing: It is performed to verify the accuracy and completeness of the user documentation.
This testing is done to verify whether the documented functionality matches the software functionality.
The documentation should be easy to follow, comprehensive and well edited.
If the application under test has context-sensitive help, it must be verified as part of documentation testing.