Wednesday, December 29, 2010
Web Testing Checklist
1) Functionality testing
2) Usability testing
3) Interface testing
4) Compatibility testing
5) Performance testing
6) Security testing
Tuesday, December 28, 2010
Difference between web-based and Windows-based applications
1. A Windows-based application is essentially a computer-based (desktop) application, run locally through an OS (Windows, Linux, Mac OS X); a web-based application runs on a server and is accessed through a browser.
2. A web-based application can be used by different users accessing the same program, so mobility is greater. A Windows-based application has to be accessed from the particular computer on which it is installed.
3. Speed is slower for a web-based application; it is comparatively faster for a Windows-based application.
4. Database handling plays a major part in web-based applications; the role of the database is smaller or absent in desktop applications.
5. Reach is wider for web applications, which work over the World Wide Web using protocols such as HTTP, SMTP, and SNMP; a Windows-based application is limited to a particular computer.
6. A web-based application depends on network communication between client and server; a Windows-based application runs entirely on the local machine.
What are the fields contained in a test plan?
1) Test Plan ID: a unique identifier for the test plan.
2) Introduction: about the project.
3) Test Items: Names of all modules in the project.
4) Features to be tested: the functionality that is in scope for this test effort.
5) Features not to be tested: the functionality that is out of scope.
6) Testing approach: finalized TRM, selected testing techniques
7) Entry Criteria : When testing can be started
8) Exit Criteria : when testing can be stopped
9) Pass/Fail criteria: the criteria used to decide whether a test item has passed or failed.
10) Test Environment : The environment where the test is to be carried out.
11) Test Deliverables: the documents to be delivered, such as test cases and defect reports.
12) Staff and training
13) Roles and Responsibilities
14) Project start and end dates
15) Risks and mitigation
Wednesday, December 22, 2010
What is a test case and a use case?
A use case describes how an end user interacts with the application to accomplish a task. For example, if the user enters a valid user ID and password and clicks the Login button, the system should display the home page.
Here the user is the ACTOR, the operations performed by him are the ACTIONS, and the system's display of the home page is the RESPONSE.
If use cases are provided in the SRS, we write our test cases from them.
A test case is a set of test inputs, execution conditions, and expected results developed to validate a particular functionality of the AUT (application under test).
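To make the definition concrete, here is a minimal sketch of such a test case in pytest form, based on the login example above; the `login` function and its behaviour are hypothetical, invented only for this illustration.

```python
# A minimal sketch of a test case for the login example above.
# login() is a hypothetical stand-in for the AUT: it is assumed to
# return "home" for valid credentials and raise ValueError otherwise.
import pytest

def login(user_id: str, password: str) -> str:
    """Hypothetical application code under test."""
    if user_id == "alice" and password == "s3cret":
        return "home"
    raise ValueError("invalid credentials")

def test_valid_login_displays_home_page():
    # Test inputs: a valid user id and password.
    # Expected result: the home page is displayed.
    assert login("alice", "s3cret") == "home"

def test_invalid_login_is_rejected():
    with pytest.raises(ValueError):
        login("alice", "wrong-password")
```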
What is the difference between deliverables and release notes?
Deliverables are provided at the end of testing: the test plan, test cases, defect reports, documented defects that were not fixed, and so on. Release notes, by contrast, accompany each build and summarize what has changed in it, such as new features, fixed defects, and known issues.
Tuesday, December 21, 2010
Incremental Integration testing
For this type of testing we need to know the parent/child relationship between the modules, i.e., which module is the parent or child of which.
V Model
The development team applies "do-procedures" to achieve the goals, and the testing team applies "check-procedures" to verify them. It is a parallel process that finally arrives at a product with almost no bugs or errors.
The traditional waterfall model does not allow testing and coding to proceed in parallel. The V-model allows testing and coding to run as parallel activities in the SDLC, which lets changes be handled more dynamically.
Bug Leak
A bug leak occurs when a defect is missed by the testing team and is discovered by the end user or customer, or in a later testing phase.
Sunday, December 19, 2010
What are the software testing methodologies?
- Waterfall model
- V model
- Spiral model
- RUP
- Agile model
- RAD
Friday, December 17, 2010
Difference between smoke testing and sanity testing
Sanity testing: after receiving a build with minor changes in code or functionality, a subset of regression test cases is executed to check that the reported bugs have been fixed and that no new bugs were introduced by the changes. Sometimes, when multiple cycles of regression testing are executed, sanity testing can be done in later cycles, after thorough regression cycles. When a build is moved from the staging/testing server to the production server, sanity testing checks whether the build is sane enough to move further to production.
Difference between Smoke & Sanity Software Testing:
- Smoke testing is a wide approach in which all areas of the application are tested without going into too much depth. Sanity testing, however, is a narrow regression test focused on one or a small set of areas of functionality.
- The test cases for smoke testing can be either manual or automated. A sanity test is generally run without test scripts or test cases.
- Smoke testing is done to ensure that the main functions of the application work; we do not go into finer details. Sanity testing is a cursory check, done whenever a quick round of testing can show that the application is functioning according to the business/functional requirements.
- Smoke testing is done to check whether the build can be accepted for thorough testing. Sanity testing is done to check whether the requirements are met.
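In automated suites, one common way to keep a wide-but-shallow smoke pass separate from deeper tests is a pytest marker. A minimal sketch follows, where the `smoke` marker name and the `create_app` helper are assumed conventions invented for the example (custom markers should be registered in pytest.ini):

```python
# Sketch: run only the broad, shallow smoke suite with  pytest -m smoke
import pytest

def create_app() -> dict:
    """Hypothetical stand-in for application start-up."""
    return {"status": "up", "version": "1.0"}

@pytest.mark.smoke
def test_app_starts():
    # Wide but shallow: only checks that the application comes up.
    assert create_app()["status"] == "up"

def test_version_format_in_detail():
    # Deeper functional check, excluded when running with -m smoke.
    major, minor = create_app()["version"].split(".")
    assert major.isdigit() and minor.isdigit()
```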
White Box Testing
Pilot Testing
Independent Verification & Validation
Unit Test
Ad Hoc Testing
Automated Testing
Black Box Testing
Beta Testing
Alpha Testing
Testing done at the developer's site is known as alpha testing.
Smoke Testing
Conformance Testing
Compatibility testing
Security testing
Recovery testing
Usability testing
System testing
Sanity testing
Acceptance testing
Performance testing
Load testing
Stress Testing
Wednesday, September 29, 2010
Test Scenario for an ATM Machine
1. Insert the card. Check whether the card is read properly; a card inserted in reverse should be rejected.
2. If the card is inserted correctly, the machine asks for a language: English or Hindi. Choose one.
3. It asks for the PIN. If the PIN is correct, it shows options such as Withdrawal, Balance, and so on. If the PIN is wrong, it asks again.
4. Choose your option. If your choice is Withdrawal, click on Withdrawal and it asks for an amount.
5. It checks the entered amount against your balance. If the amount is greater than the balance, it shows an error message.
6. If the amount is within the balance, it asks "Do you want a receipt for this transaction? Yes or No". If you choose Yes, you get a receipt; if No, there is no receipt.
7. Collect your cash.
8. It then asks Continue or Exit; the choice is yours. If you choose Exit, the inserted card comes out and you also get a receipt.
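The withdrawal branches of this scenario can be sketched as a small function whose return values map onto the manual steps; the amounts and messages here are assumptions made up for illustration.

```python
# Sketch of the withdrawal decision logic described above; each branch
# corresponds to one step of the manual scenario. Values are illustrative.

def withdraw(balance: float, amount: float) -> str:
    if amount > balance:
        return "error: insufficient balance"  # step 5: amount exceeds balance
    return "dispense cash"                    # steps 6-7: within balance

assert withdraw(balance=100.0, amount=500.0) == "error: insufficient balance"
assert withdraw(balance=100.0, amount=50.0) == "dispense cash"
```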
Monday, July 26, 2010
Change Control
Change control is the formal process of requesting, evaluating, approving, and tracking changes to the software or its documentation.
Tuesday, July 6, 2010
Difference between ad hoc testing, monkey testing, and exploratory testing
Monkey testing: monkey testing is testing that runs with no specific test in mind. The monkey in this case is the producer of any input data, whether file data or input-device data.
Keep pressing keys randomly and check whether the software fails or not.
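A minimal monkey-testing sketch in Python: feed random input to the component under test and treat only crashes (unexpected exceptions) as failures. The `parse_amount` function is hypothetical, invented for the example.

```python
# Monkey testing: random input, no specific test in mind; only a crash
# or hang counts as a failure.
import random
import string

def parse_amount(text: str) -> float:
    """Hypothetical AUT: parse a currency string, or raise ValueError."""
    return float(text.replace(",", ""))

def monkey_test(runs: int = 1000) -> None:
    for _ in range(runs):
        junk = "".join(random.choices(string.printable, k=10))
        try:
            parse_amount(junk)
        except ValueError:
            pass  # rejecting garbage input is acceptable behaviour

monkey_test()  # any other exception here would signal a defect
```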
Exploratory testing is simultaneous learning, test design, and test execution. It is a type of ad hoc testing, but here the tester does not have much prior knowledge of the application; he explores the system, learning it and testing it at the same time.
Monday, June 28, 2010
Prototyping Model
The prototyping model assumes that you do not have clear requirements at the beginning of the project. Often, customers have a vague idea of the requirements in the form of objectives that they want the system to address. With the prototyping model, you build a simplified version of the system and seek feedback from the parties who have a stake in the project. The next iteration incorporates the feedback and improves on the requirements specification. The prototypes that are built during the iterations can be any of the following:
- A simple user interface without any actual data processing logic
- A few subsystems with functionality that is partially or completely implemented
- Existing components that demonstrate the functionality that will be incorporated into the system
The prototyping model consists of the following steps.
- Capture requirements. This step involves collecting the requirements over a period of time as they become available.
- Design the system. After capturing the requirements, a new design is made or an existing one is modified to address the new requirements.
- Create or modify the prototype. A prototype is created or an existing prototype is modified based on the design from the previous step.
- Assess based on feedback. The prototype is sent to the stakeholders for review. Based on their feedback, an impact analysis is conducted for the requirements, the design, and the prototype. The role of testing at this step is to ensure that customer feedback is incorporated in the next version of the prototype.
- Refine the prototype. The prototype is refined based on the impact analysis conducted in the previous step.
- Implement the system. After the requirements are understood, the system is rewritten either from scratch or by reusing the prototypes. The testing effort consists of the following:
- Ensuring that the system meets the refined requirements
- Code review
- Unit testing
- System testing
The main advantage of the prototyping model is that it allows you to start with requirements that are not clearly defined.
The main disadvantage of the prototyping model is that it can lead to poorly designed systems. The prototypes are usually built without regard to how they might be used later, so attempts to reuse them may result in inefficient systems. This model emphasizes refining the requirements based on customer feedback, rather than ensuring a better product through quick change based on test feedback.
Incremental or Iterative Development
The incremental, or iterative, development model breaks the project into small parts. Each part is subjected to multiple iterations of the waterfall model. At the end of each iteration, a new module is completed or an existing one is improved on, the module is integrated into the structure, and the structure is then tested as a whole.
For example, using the iterative development model, a project can be divided into 12 one- to four-week iterations. The system is tested at the end of each iteration, and the test feedback is immediately incorporated at the end of each test cycle. The time required for successive iterations can be reduced based on the experience gained from past iterations. The system grows by adding new functions during the development portion of each iteration. Each cycle tackles a relatively small set of requirements; therefore, testing evolves as the system evolves. In contrast, in a classic waterfall life cycle, each phase (requirement analysis, system design, and so on) occurs once in the development cycle for the entire set of system requirements.
The main advantage of the iterative development model is that corrective actions can be taken at the end of each iteration. The corrective actions can be changes to the specification because of incorrect interpretation of the requirements, changes to the requirements themselves, and other design or code-related changes based on the system testing conducted at the end of each cycle.
The main disadvantages of the iterative development model are as follows:
- The communication overhead for the project team is significant, because each iteration involves giving feedback about deliverables, effort, timelines, and so on.
- It is difficult to freeze requirements, and they may continue to change in later iterations because of increasing customer demands. As a result, more iterations may be added to the project, leading to project delays and cost overruns.
- The project requires a very efficient change control mechanism to manage changes made to the system during each iteration.
Waterfall Model
The waterfall model is one of the earliest structured models for software development. It consists of the following sequential phases through which the development life cycle progresses:
- System feasibility. In this phase, you consider the various aspects of the targeted business process, find out which aspects are worth incorporating into a system, and evaluate various approaches to building the required software.
- Requirement analysis. In this phase, you capture software requirements in such a way that they can be translated into actual use cases for the system. The requirements can derive from use cases, performance goals, target deployment, and so on.
- System design. In this phase, you identify the interacting components that make up the system. You define the exposed interfaces, the communication between the interfaces, key algorithms used, and the sequence of interaction. An architecture and design review is conducted at the end of this phase to ensure that the design conforms to the previously defined requirements.
- Coding and unit testing. In this phase, you write code for the modules that make up the system. You also review the code and individually test the functionality of each module.
- Integration and system testing. In this phase, you integrate all of the modules in the system and test them as a single system for all of the use cases, making sure that the modules meet the requirements.
- Deployment and maintenance. In this phase, you deploy the software system in the production environment. You then correct any errors that are identified in this phase, and add or modify functionality based on the updated requirements.
The waterfall model has the following advantages:
- It allows you to compartmentalize the life cycle into various phases, which allows you to plan the resources and effort required through the development process.
- It enforces testing in every stage in the form of reviews and unit testing. You conduct design reviews, code reviews, unit testing, and integration testing during the stages of the life cycle.
- It allows you to set expectations for deliverables after each phase.
The waterfall model has the following disadvantages:
- You do not see a working version of the software until late in the life cycle. For this reason, you can fail to detect problems until the system testing phase. Problems may be more costly to fix in this phase than they would have been earlier in the life cycle.
- When an application is in the system testing phase, it is difficult to change something that was not carefully considered in the system design phase. The emphasis on early planning tends to delay or restrict the amount of change that the testing effort can instigate, which is not the case when a working model is tested for immediate feedback.
- For a phase to begin, the preceding phase must be complete; for example, the system design phase cannot begin until the requirement analysis phase is complete and the requirements are frozen. As a result, the waterfall model is not able to accommodate uncertainties that may persist after a phase is completed. These uncertainties may lead to delays and extended project schedules.
Agile Methodology
Most software development life cycle methodologies are either iterative or follow a sequential model (as the waterfall model does). As software development becomes more complex, these models cannot efficiently adapt to the continuous and numerous changes that occur. Agile methodology was developed to respond to changes quickly and smoothly. Although the iterative methodologies tend to remove the disadvantage of sequential models, they still are based on the traditional waterfall approach. Agile methodology is a collection of values, principles, and practices that incorporates iterative development, test, and feedback into a new style of development. For an overview of agile methodology, see the Agile Modeling site at http://www.agilemodeling.com/.
The key differences between agile and traditional methodologies are as follows:
- Development is incremental rather than sequential. Software is developed in incremental, rapid cycles. This results in small, incremental releases, with each release building on previous functionality. Each release is thoroughly tested, which ensures that all issues are addressed in the next iteration.
- People and interactions are emphasized, rather than processes and tools. Customers, developers, and testers constantly interact with each other. This interaction ensures that the tester is aware of the requirements for the features being developed during a particular iteration and can easily identify any discrepancy between the system and the requirements.
- Working software is the priority rather than detailed documentation. Agile methodologies rely on face-to-face communication and collaboration, with people working in pairs. Because of the extensive communication with customers and among team members, the project does not need a comprehensive requirements document.
- Customer collaboration is used, rather than contract negotiation. All agile projects include customers as a part of the team. When developers have questions about a requirement, they immediately get clarification from customers.
- Responding to change is emphasized, rather than extensive planning. Extreme Programming does not preclude planning your project. However, it suggests changing the plan to accommodate any changes in assumptions for the plan, rather than stubbornly trying to follow the original plan.
Agile methodology has various derivative approaches, such as Extreme Programming, Dynamic Systems Development Method (DSDM), and Scrum. Extreme Programming is one of the most widely used approaches.
Sunday, June 27, 2010
Difference between equivalence partitioning and boundary value analysis?
Equivalence partitioning determines the number of test cases needed for a given scenario.
Equivalence partitioning is a black box testing technique with the following goal:
1. To reduce the number of test cases to a necessary minimum.
2. To select the right test cases to cover all possible scenarios.
EP is applied to the inputs of a tested component. The equivalence partitions are usually derived from the specification of the component's behaviour. An input has certain ranges which are valid and other ranges which are invalid. This is best explained by the example of a function which takes a "month" parameter of a date. The valid range for the month is 1 to 12, standing for January to December. This valid range is called a partition. In this example there are two further partitions of invalid ranges: the first invalid partition is <= 0 and the second invalid partition is >= 13. (The input is checked for both valid and invalid conditions.)
... -2 -1 0 | 1 .............. 12 | 13 14 15 ...
invalid partition 1 | valid partition | invalid partition 2
The testing theory related to equivalence partitioning says that only one test case of each partition is needed to evaluate the behaviour of the program for the related partition. In other words it is sufficient to select one test case out of each partition to check the behaviour of the program.
To use more or even all test cases of a partition will not find new faults in the program. The values within one partition are considered to be "equivalent". Thus the number of test cases can be reduced considerably.
Equivalence partitioning is not a stand-alone method to determine test cases. It has to be supplemented by boundary value analysis. Having determined the partitions of possible inputs, the method of boundary value analysis has to be applied to select the most effective test cases out of these partitions.
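A sketch of the month example as parameterized tests, one representative value per partition; pytest is assumed, and `is_valid_month` is a hypothetical validator standing in for the tested component.

```python
# One representative test value per equivalence partition of "month".
import pytest

def is_valid_month(month: int) -> bool:
    """Hypothetical component under test."""
    return 1 <= month <= 12

@pytest.mark.parametrize("month,expected", [
    (-5, False),  # invalid partition 1: month <= 0
    (6, True),    # valid partition: 1..12
    (20, False),  # invalid partition 2: month >= 13
])
def test_month_partitions(month, expected):
    assert is_valid_month(month) == expected
```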
Boundary Value Analysis:
Boundary value analysis determines the effectiveness of test cases for a given scenario. To set up boundary value analysis test cases, the tester first has to determine which boundaries exist at the interface of a software component. This is done by applying the equivalence partitioning technique; boundary value analysis and equivalence partitioning are inevitably linked together.
For the example of the month, the partitions are:
... -2 -1 0 | 1 .............. 12 | 13 14 15 ...
invalid partition 1 | valid partition | invalid partition 2
By applying boundary value analysis we select a test case on each side of the boundary between two partitions. In the example above this would be 0 and 1 for the lower boundary, and 12 and 13 for the upper boundary. Each of these pairs consists of a "clean" and a "dirty" test case. A "clean" test case should give a valid operation result of the program. A "dirty" test case should lead to a correct and specified input-error treatment, such as limiting the value, using a substitute value, or, in a program with a user interface, a warning and a request to enter correct data.
The boundary value analysis can have 6 test cases: n, n-1, and n+1 for the upper limit; and n, n-1, and n+1 for the lower limit.
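The same month example with the boundary-value cases around the limits 1 and 12; as above, pytest is assumed and `is_valid_month` is hypothetical.

```python
# Boundary-value cases at each side of the month boundaries.
import pytest

def is_valid_month(month: int) -> bool:
    """Hypothetical component under test (same as the EP sketch above)."""
    return 1 <= month <= 12

@pytest.mark.parametrize("month,expected", [
    (0, False), (1, True), (2, True),     # lower boundary: n-1, n, n+1
    (11, True), (12, True), (13, False),  # upper boundary: n-1, n, n+1
])
def test_month_boundaries(month, expected):
    assert is_valid_month(month) == expected
```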
Thursday, April 8, 2010
What is CMM?
The Capability Maturity Model (CMM) is a five-level framework for judging the maturity of an organization's software processes: Initial, Repeatable, Defined, Managed, and Optimizing.
What is ad hoc testing?
Ad hoc testing is a commonly used term for software testing performed without planning and documentation.
The tests are intended to be run only once, unless a defect is discovered.
Ad hoc testing is a testing approach; it is the least formal testing approach.
What is boundary value analysis?
What is quality assurance?
Rob Davis' QA service depends on the customers and projects. Much will depend on team leads or managers, feedback to developers, and communication among customers, managers, developers, test engineers, and testers.
What is software quality assurance?
What can be done if requirements are changing continuously?
· Ensure the code is well commented and well documented; this makes changes easier for the developers.
· Use rapid prototyping whenever possible; this will help customers feel sure of their requirements and minimize changes.
· In the project's initial schedule, allow extra time commensurate with probable changes.
· Move new requirements to a 'Phase 2' version of an application and use the original requirements for the 'Phase 1' version.
· Negotiate to allow only easily implemented new requirements into the project; move more difficult, new requirements into future versions of the application.
· Ensure customers and management understand scheduling impacts, inherent risks and costs of significant requirements changes. Then let management or the customers decide if the changes are warranted; after all, that’s their job.
· Balance the effort put into setting up automated testing with the expected effort required to redo them to deal with changes.
· Design some flexibility into automated test scripts;
· Focus initial automated testing on application aspects that are most likely to remain unchanged;
· Devote appropriate effort to risk analysis of changes, in order to minimize regression-testing needs;
· Design some flexibility into test cases; this is not easily done; the best bet is to minimize the detail in the test cases, or set up only higher-level generic-type test plans;
· Focus less on detailed test plans and test cases and more on ad-hoc testing with an understanding of the added risk this entails.
How do you know when to stop testing?
Deadlines, e.g. release deadlines, testing deadlines;
Test cases completed with certain percentage passed;
Test budget has been depleted;
Coverage of code, functionality, or requirements reaches a specified point;
Bug rate falls below a certain level; or
Beta or alpha testing period ends.
What if the software is so buggy it can’t be tested at all?
What is configuration management?
What should be done after a bug is found?
What is a test plan?
What is the role of documentation in QA?
What makes a good QA engineer?
What makes a good test engineer?
Has a “test to break” attitude,
Takes the point of view of the customer,
Has a strong desire for quality,
Has attention to detail,
Is tactful and diplomatic,
Has good communication skills, both oral and written, and
Has previous software development experience.
Give me five solutions to problems that occur during software development.
Ensure the requirements are solid, clear, complete, detailed, cohesive, attainable and testable. All players should agree to requirements. Use prototypes to help nail down requirements.
Have schedules that are realistic. Allow adequate time for planning, design, testing, bug fixing, re-testing, changes and documentation. Personnel should be able to complete the project without burning out.
Do testing that is adequate. Start testing early on, re-test after fixes or changes, and plan for sufficient time for both testing and bug fixing.
Avoid new features. Stick to initial requirements as much as possible. Once development has begun, be prepared to defend the design against changes and additions, and to explain the consequences. If changes are necessary, ensure they're adequately reflected in related schedule changes. Use prototypes early on so customers' expectations are clarified and customers can see what to expect; this will minimize changes later on.
Communicate. Require walkthroughs and inspections when appropriate; make extensive use of e-mail, networked bug-tracking tools, and change management tools. Ensure documentation is available and up to date. Use documentation that is electronic, not paper. Promote teamwork and cooperation.
Do automated testing tools make testing easier?
Give me five common problems that occur during software development.
Requirements are poor if they are unclear, incomplete, too general, or not testable.
The schedule is unrealistic if too much work is crammed into too little time.
Software testing is inadequate if no one knows whether or not the software is any good until customers complain or the system crashes.
It’s extremely common that new features are added after development is underway.
Miscommunication either means the developers don't know what is needed or customers have unrealistic expectations; either way, problems are guaranteed.
How do you introduce a new software QA process?
Why are there so many software bugs?
There are unclear software requirements because there is miscommunication as to what the software should or shouldn’t do.
Software complexity. All of the following contribute to the exponential growth in software and system complexity: windowed interfaces, client-server and distributed applications, data communications, enormous relational databases, and the sheer size of applications.
Programming errors occur because programmers and software engineers, like everyone else, can make mistakes.
As to changing requirements: in some fast-changing business environments, continuously modified requirements are a fact of life. Sometimes customers do not understand the effects of changes, or understand them but request them anyway. The changes require redesign of the software and rescheduling of resources; some of the work already completed may have to be redone or discarded, and hardware requirements can be affected, too.
Bug tracking: the complexity of keeping track of many changes can itself result in errors.
Time pressures can cause problems because scheduling software projects is not easy; it often requires a lot of guesswork, and when deadlines loom and the crunch comes, mistakes are made.
Code documentation is tough to maintain, and it is also tough to modify code that is poorly documented. The result is bugs. Sometimes there is no incentive for programmers and software engineers to write clearly documented, understandable code. Sometimes developers get kudos for quickly turning out code, or feel their job security depends on code nobody else can understand, or believe that if the code was hard to write, it should be hard to read.
Software development tools, including visual tools, class libraries, compilers, and scripting tools, can introduce their own bugs. Other times the tools are poorly documented, which can create additional bugs.
What is software life cycle?
What is good design?
What is good code?
What is quality?
What is an inspection?
What is a walkthrough?
What is validation?
What is verification?
Wednesday, April 7, 2010
What is a test case?
A test case is a set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. Alternatively, it is the documentation specifying inputs, predicted results, and a set of execution conditions for a test item.
Monday, March 29, 2010
White Box Testing
Types of testing under White/Glass Box Testing Strategy:
Unit Testing:
The developer carries out unit testing to check whether a particular module or unit of code is working correctly. Unit testing comes at the most basic level, as it is carried out as and when a unit of code is developed or a particular piece of functionality is built.
Static and dynamic Analysis:
Static analysis involves going through the code in order to find out any possible defect in the code. Dynamic analysis involves executing the code and analyzing the output.
Statement Coverage:
In this type of testing the code is executed in such a manner that every statement of the application is executed at least once. It helps in assuring that all the statements execute without any side effect.
Branch Coverage:
No software application is written as one continuous sequence of statements; at some point the code must branch to perform particular functionality. Branch coverage testing validates all the branches in the code, making sure that no branch leads to abnormal behavior of the application.
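A small sketch of why branch coverage is stronger than statement coverage; the function is invented for the example.

```python
# add_if_positive(1, 1) executes every statement (100% statement
# coverage), yet the false outcome of the if is never exercised.
# Branch coverage additionally requires a call such as add_if_positive(-1, 1).

def add_if_positive(a: int, b: int) -> int:
    result = 0
    if a > 0 and b > 0:  # branch point: both true and false outcomes matter
        result = a + b
    return result

assert add_if_positive(1, 1) == 2    # covers all statements
assert add_if_positive(-1, 1) == 0   # needed for full branch coverage
```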
Security Testing:
Security testing is carried out to find out how well the system can protect itself from unauthorized access, hacking, cracking, and any damage to the application's code. This type of testing needs sophisticated testing techniques.
Mutation Testing:
A kind of testing in which the application's code is deliberately modified in small ways (mutants) and the existing tests are run again to check whether they detect the change. It helps in finding out how effective the test cases are.
Methods of Black Box Testing
1. Boundary Value Analysis
2. Equivalence Partitioning
3. Error Guessing
Boundary Value Analysis (BVA) is a functional testing technique in which extreme boundary values are chosen. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values.
- Extends equivalence partitioning
- Tests both sides of each boundary
- Looks at output boundaries for test cases too
- Tests min, min-1, max, max+1, and typical values
Equivalence Partitioning:
Equivalence partitioning is a black box testing method that divides the input domain of a program into classes of data from which test cases can be derived.
How is this partitioning performed while testing:
1. If an input condition specifies a range, one valid and two invalid classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class is defined.
4. If an input condition is Boolean, one valid and one invalid class is defined.
Error Guessing:
This is purely based on the previous experience and judgment of the tester. Error guessing is the art of guessing where errors may be hidden. There are no specific tools or rules for this technique; the tester writes test cases that cover the application paths where errors are likely.
Comparison Testing:
Different independent versions of the same software are compared to each other for testing in this method.
Graph Based Testing Methods:
Each and every application is built up of some objects. All such objects are identified and a graph is prepared. From this object graph, each object relationship is identified and test cases are written accordingly to discover the errors.
Black Box Testing
Also known as functional testing, black box testing is a software testing technique in which the internal workings of the item being tested are not known to the tester. For example, in a black box test on a software design, the tester only knows the inputs and what the expected outcomes should be, not how the program arrives at those outputs. The tester never examines the programming code and needs no further knowledge of the program beyond its specifications.
The advantages of this type of testing include:
- The test is unbiased because the designer and the tester are independent of each other.
- The tester does not need knowledge of any specific programming languages.
- The test is done from the point of view of the user, not the designer.
- Test cases can be designed as soon as the specifications are complete.
The disadvantages of this type of testing include:
- The test can be redundant if the software designer has already run a test case.
- The test cases are difficult to design.
- Testing every possible input stream is unrealistic because it would take an inordinate amount of time; therefore, many program paths will go untested.
Types of software Testing
Software Testing Types:
Black box testing - Internal system design is not considered in this type of testing. Tests are based on requirements and functionality.
White box testing - This testing is based on knowledge of the internal logic of an application’s code. Also known as Glass box Testing. Internal software and code working should be known for this type of testing. Tests are based on coverage of code statements, branches, paths, conditions.
Unit testing - Testing of individual software components or modules. Typically done by the programmer rather than by testers, as it requires detailed knowledge of the internal program design and code. It may require developing test driver modules or test harnesses.
Incremental integration testing - A bottom-up approach to testing, i.e., continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to test separately. Done by programmers or by testers.
Integration testing - Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
Functional testing - This type of testing ignores the internal parts and focuses on whether the output is as per requirements. Black-box testing geared to the functional requirements of an application.
System testing - Entire system is tested as per the requirements. Black-box type testing that is based on overall requirements specifications, covers all combined parts of a system.
End-to-end testing - Similar to system testing, involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Sanity testing - Testing to determine whether a new software version is performing well enough to accept it for a major testing effort. If the application crashes on initial use, the system is not stable enough for further testing, and the build is sent back to be fixed.
Regression testing - Testing the application as a whole after a modification to any module or functionality. It is difficult to cover the whole system in regression testing, so automation tools are typically used.
Acceptance testing - Normally done to verify that the system meets the customer-specified requirements. The user or customer does this testing to determine whether to accept the application.
Load testing - A performance test to check system behavior under load. Testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.
Stress testing - The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, such as putting large numbers beyond storage capacity, complex database queries, or continuous input to the system or database.
Performance testing - A term often used interchangeably with 'stress' and 'load' testing; checks whether the system meets performance requirements. Different performance and load tools are used for this.
Usability testing - A user-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented wherever the user gets stuck? Basically, system navigation is checked in this testing.
Install/uninstall testing - Tests full, partial, or upgrade install/uninstall processes on different operating systems under different hardware and software environments.
Recovery testing - Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Security testing - Can the system be penetrated by any hacking technique? Tests how well the system protects against unauthorized internal or external access and checks whether the system and database are safe from external attacks.
Compatibility testing - Tests how well the software performs in a particular hardware/software/operating-system/network environment, and in different combinations of the above.
Comparison testing - Comparison of product strengths and weaknesses with previous versions or other similar products.
Alpha testing - An in-house virtual user environment can be created for this type of testing. It is done near the end of development; minor design changes may still be made as a result of such testing.
Beta testing - Testing typically done by end-users or others. Final testing before releasing application for commercial purpose.
Bug Life Cycle
1) New: when QA files a new bug.
2) Deferred: if the bug is not related to the current build, cannot be fixed in this release, or is not important enough to fix immediately, the project manager can set the bug status as Deferred.
3) Assigned: the 'Assigned to' field is set by the project lead or manager, who assigns the bug to a developer.
4) Resolved/Fixed: when the developer makes the necessary code changes and verifies them, he/she can mark the bug status as 'Fixed', and the bug is passed to the testing team.
5) Could not reproduce: if the developer is not able to reproduce the bug by the steps given in the bug report, he/she can mark the bug as 'CNR'. QA then needs to check whether the bug is reproducible and can assign it back to the developer with detailed reproduction steps.
6) Need more information: if the developer is not clear about the reproduction steps provided by QA, he/she can mark it as 'Need more information'. In this case QA adds detailed reproduction steps and assigns the bug back to the developer for a fix.
7) Reopen: if QA is not satisfied with the fix and the bug is still reproducible, QA can mark it as 'Reopen' so that the developer can take appropriate action.
8) Closed: if the bug is verified by the QA team, the fix is OK, and the problem is solved, QA can mark the bug as 'Closed'.
9) Rejected/Invalid: sometimes the developer or team lead can mark a bug as Rejected or Invalid if the system is working according to specifications and the bug is just due to some misinterpretation.
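The life cycle above can be sketched as a simple state machine; the transition table below is one plausible reading of these statuses, not any particular bug tracker's workflow.

```python
# Sketch of the bug life cycle as a state machine. The allowed
# transitions are one plausible reading of the statuses above.
from enum import Enum

class BugStatus(Enum):
    NEW = "new"
    DEFERRED = "deferred"
    ASSIGNED = "assigned"
    FIXED = "fixed"
    COULD_NOT_REPRODUCE = "cnr"
    NEED_MORE_INFO = "nmi"
    REOPENED = "reopened"
    CLOSED = "closed"
    REJECTED = "rejected"

TRANSITIONS = {
    BugStatus.NEW: {BugStatus.ASSIGNED, BugStatus.DEFERRED, BugStatus.REJECTED},
    BugStatus.ASSIGNED: {BugStatus.FIXED, BugStatus.COULD_NOT_REPRODUCE,
                         BugStatus.NEED_MORE_INFO},
    BugStatus.FIXED: {BugStatus.CLOSED, BugStatus.REOPENED},
    BugStatus.REOPENED: {BugStatus.ASSIGNED},
    BugStatus.COULD_NOT_REPRODUCE: {BugStatus.ASSIGNED},
    BugStatus.NEED_MORE_INFO: {BugStatus.ASSIGNED},
}

def move(current: BugStatus, target: BugStatus) -> BugStatus:
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target

status = move(BugStatus.NEW, BugStatus.ASSIGNED)  # ok
```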
Severity and Priority of Bugs
Urgent/Show-stopper: e.g., a system crash or an error message forcing the window to close; the system stops working totally or partially. A major area of the user's system is affected by the incident, and it is significant to business processes.
Medium/Workaround: a problem that concerns something required by the specs, but the tester can go on with testing. It affects a more isolated piece of functionality, occurs only at one or two customers, or is intermittent.
Low: failures that are unlikely to occur in normal use; problems that do not impact use of the product in any substantive way and have no or very low impact on business processes. (When reporting a bug at any severity, state the exact error messages.)
PRIORITY means something deserves prior attention. It represents the importance of a bug from the customer's point of view: the precedence established by urgency, associated with scheduling the bug. Priority levels can be defined as follows:
High: This has a major impact on the customer. This must be fixed immediately.
Medium: This has a major impact on the customer. The problem should be fixed before release of the current version in development or a patch must be issued if possible.
Low: This has a minor impact on the customer. The flaw should be fixed if there is time, but it can be deferred until the next release.
Software Testing Key Concepts
* Input Combinations and Preconditions: Testing all combinations of inputs and initial states (preconditions) is not feasible, which means that finding the large number of infrequent defects is difficult (a quick computation after this list shows the scale).
* Static and Dynamic Analysis: Static testing does not require execution of the code for finding defects, whereas in dynamic testing, software code is executed to demonstrate the results of running tests.
* Verification and Validation: Software testing is done considering these two factors.
1. Verification: This verifies whether the product is built according to the specification.
2. Validation: This checks whether the product meets the customer's requirements.
* Software Quality Assurance: Software testing is an important part of the software quality assurance. Quality assurance is an activity, which proves the suitability of the product by taking care of the quality of a product and ensuring that the customer requirements are met.
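As a back-of-the-envelope illustration of why exhaustive input-combination testing is infeasible, the sketch below multiplies out the input space of a small form; the field sizes are invented for the example.

```python
# The number of input combinations is the product of each field's
# possible values; even a tiny form explodes quickly.
fields = {
    "username": 10_000,  # distinct strings worth trying (illustrative)
    "age": 130,          # 0..129
    "country": 200,
    "newsletter": 2,     # boolean
}

total = 1
for size in fields.values():
    total *= size

print(f"{total:,} input combinations")  # 520,000,000
```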
Sunday, March 28, 2010
Test Life Cycle
1. Requirements stage
2. Test Plan
3. Test Design.
4. Design Reviews
5. Code Reviews
6. Test Cases preparation.
7. Test Execution
8. Test Reports.
9. Bugs Reporting
10. Reworking on patches.
11. Release to production.
Requirements Stage
Normally in many companies, developers themselves take part in the requirements stage. Especially in product-based companies, a tester should also be involved at this stage, since a tester thinks from the user's side whereas a developer can't. A separate panel should be formed for each module, comprising a developer, a tester, and a user. Panel meetings should be scheduled to gather everyone's views. All the requirements should be documented properly for further use; this document is called the "Software Requirements Specification".
Test Plan
Without a good plan, no work is a success; successful work always starts from a good plan. The testing process for software likewise requires a good plan. The test plan document is the most important document and brings in a process-oriented approach. A test plan document should be prepared after the requirements of the project are confirmed. The test plan document must contain the following information:
• Total number of features to be tested.
• Testing approaches to be followed.
• The testing methodologies
• Number of man-hours required.
• Resources required for the whole testing process.
• The testing tools that are to be used.
• The test cases, etc
Test Design
Test design is done based on the requirements of the project, and it has to account for whether testing will be manual or automated. For automation testing, the different paths for testing are identified first. An end-to-end checklist has to be prepared covering all the features of the project.
The test design is represented pictographically. The test design involves various stages. These stages can be summarized as follows:
• The different modules of the software are identified first.
• Next, the paths connecting all the modules are identified.
Then the design is drawn. The test design is the most critical stage, as it decides the test case preparation; the test design thus determines the quality of the testing process.
Test Cases Preparation
Test cases should be prepared based on the following scenarios:
• Positive scenarios
• Negative scenarios
• Boundary conditions and
• Real World scenarios
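As a sketch of how these four categories translate into concrete cases, here is a hypothetical example for an age field that must accept values 18 to 99; pytest is assumed, and `register_age` is invented for the illustration.

```python
# One test (or parameter set) per scenario category for a hypothetical
# age field that must accept integers 18..99.
import pytest

def register_age(age: int) -> bool:
    """Hypothetical AUT: accept ages 18..99."""
    return 18 <= age <= 99

def test_positive_scenario():
    assert register_age(30)      # typical valid input

def test_negative_scenario():
    assert not register_age(-5)  # clearly invalid input

@pytest.mark.parametrize("age,expected",
                         [(17, False), (18, True), (99, True), (100, False)])
def test_boundary_conditions(age, expected):
    assert register_age(age) == expected

def test_real_world_scenario():
    assert register_age(42)      # a value an actual user would enter
```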
Design Reviews
The software design is done in a systematic manner or using the UML modeling language. The tester can review the design and suggest ideas and needed modifications.
Code Reviews
Code reviews are similar to unit testing. Once the code is ready for release, the tester should be ready to do unit testing on it and must have his own unit test cases. Though the developer does the unit testing, a tester must also do it; the developer may overlook some minute mistakes in the code which a tester may find.
Test Execution and Bugs Reporting
Once the unit testing is completed and the code is released to QA, the functional testing is done. A top-level testing is done at the beginning of the testing to find out the top-level failures. If any top-level failures occur, the bugs should be reported to the developer immediately to get the required workaround.
The test reports should be documented properly and the bugs have to be reported to the developer after the testing is completed.
Once the bugs are fixed, another release is given to the QA with the modified changes. Regression testing is executed. Once the QA assures the software, the software is released to production. Before releasing to production, another round of top-level testing is done.
The testing process is an iterative process. Once the bugs are fixed, the testing has to be done repeatedly. Thus the testing process is an unending process.
What makes a good Software QA engineer?
Verification and Validation
* Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications.
* The determination of consistency, correctness & completeness of a program at each stage.
Validation:
* Validation typically involves actual testing and takes place after verifications are completed.
* The determination of correctness of a final program with respect to its requirements.
Introduction to Software Testing
* Error: Error or mistake is a human action that produces wrong or incorrect result.
* Defect (Bug, Fault): A flaw in the system or a product that can cause a component to fail or malfunction.
* Failure: It is the variance between the actual and expected result.
* Risk: Risk is a factor that could result in negativity or a chance of loss or damage.
Thus software testing is the process of finding defects/bugs in the system that occur due to an error in the application and could lead to failure of the resulting product and an increased probability of high risk. In short, software testing has different goals and objectives, which often include:
1. finding defects;
2. gaining confidence in and providing information about the level of quality;
3. preventing defects.
Software Quality Assurance
* Software Quality Assurance involves reviewing and auditing the software products and activities to verify that they comply with the applicable procedures and standards and providing the software project and other appropriate managers with the results of these reviews and audits.