Wednesday, December 29, 2010

Web Testing - Checklist

1) Functionality Testing
2) Usability testing
3) Interface testing
4) Compatibility testing
5) Performance testing
6) Security testing

Development phases and Types of testing in V model

Tuesday, December 28, 2010

Difference between web based and windows based applications

1. A web-based application runs on a web server and is accessed through a browser.
   A Windows-based application is, more precisely, a computer-based application that runs on an OS (Windows, Linux, Mac OS X).
2. A web-based application can be used by different users accessing the same program, hence mobility is greater.
   A Windows-based application has to be accessed from the particular computer on which it is installed.
3. Speed is slower for a web-based application.
   Speed is comparatively faster for a Windows-based application.
4. Database handling plays a major part in a web-based application.
   The role of the database is smaller, or absent, in a desktop application.
5. The range is wider for web applications - HTTP, WWW, SMTP, SNMP.
   A Windows-based application is limited to a particular computer.
6. A web-based application uses the network layer.
   A Windows-based application uses the application layer.

What are the fields contained in a test plan?

1) Test Plan ID: Unique number or name.
2) Introduction: About the project.
3) Test Items: Names of all modules in the project.
4) Features to be tested
5) Features not to be tested
6) Testing Approach: Finalized TRM and the selected testing techniques.
7) Entry Criteria: When testing can be started.
8) Exit Criteria: When testing can be stopped.
9) Pass/Fail Criteria
10) Test Environment: The environment in which testing is to be carried out.
11) Test Deliverables
12) Staffing and Training
13) Roles and Responsibilities
14) Project Start and End Dates
15) Risks and Mitigation

Wednesday, December 22, 2010

What is a test case and a use case?

A use case is a description of a particular feature of an application in terms of actors, actions and responses.
For example, when the user enters a valid user ID and password and clicks the Login button, the system should display the home page.
Here the user is the ACTOR, the operations performed are the ACTIONS, and the system's display of the home page is the RESPONSE.
If use cases are provided in the SRS, we write our test cases from them.
A test case is a set of test inputs, execution conditions and expected results developed to validate a particular functionality of the AUT.

What is the difference between deliverables and release notes?

Release notes are provided with every new release of the build with fixed bugs. They list which bugs have been fixed and which bugs are still pending.
Deliverables are provided at the end of testing. The test plan, test cases, defect reports, documented defects that were not fixed, etc., come under deliverables.

Tuesday, December 21, 2010

Incremental Integration testing

In this type of testing we add the interfaces incrementally and check the data flow between modules.

For this type of testing we should know the parent/child relationship between the modules, i.e., which module is the parent or the child of which module.
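
As a rough sketch (the modules and the pricing stub below are hypothetical, shown with Python's unittest), the parent module is first integrated and tested against a stub of its child; in the next increment the stub is replaced by the real child module and the data flow between them is re-checked:

import unittest

def calculate_total(items, pricing_lookup):
    """Parent module: sums the prices returned by the child (pricing) module."""
    return sum(pricing_lookup(item) for item in items)

def pricing_stub(item):
    """Stub standing in for the real pricing module, which is not yet integrated."""
    return {"pen": 10, "book": 50}.get(item, 0)

class TestIncrementalIntegration(unittest.TestCase):
    def test_parent_with_stubbed_child(self):
        # First increment: the parent is exercised against the stub.
        self.assertEqual(calculate_total(["pen", "book"], pricing_stub), 60)
    # In the next increment, pricing_stub is replaced by the real pricing
    # module and the same data flow is verified again.

if __name__ == "__main__":
    unittest.main()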

V Model

The V model is a classic software development model. It encapsulates the verification and validation steps for each step in the SDLC: for each phase, the subsequent phase becomes the verification (QA) phase, and the corresponding testing phase in the other arm of the V becomes the validation (testing) phase.


The development team applies "do" procedures to achieve the goals, and the testing team applies "check" procedures to verify them. It is a parallel process that finally arrives at a product with almost no bugs or errors.


The traditional waterfall model does not allow testing and coding to proceed in parallel. The V model in the SDLC allows testing and coding to be parallel activities, which lets changes be accommodated more dynamically.

Bug Leak

A defect that the end user or client can find and reproduce, but that we (the test engineers) were unable to find at the time of testing. This is also known as defect leakage.
 

Sunday, December 19, 2010

What are the software testing methodologies?

  • Waterfall model
  • V model
  • Spiral model
  • RUP
  • Agile model
  • RAD

Friday, December 17, 2010

Difference between smoke test and sanity testing

Smoke Testing: Software testing done to determine whether the build can be accepted for thorough software testing. Basically, it is done to check the stability of the build received for testing.

Sanity testing: After receiving a build with minor changes in the code or functionality, a subset of regression test cases is executed to check whether the reported bugs have been rectified and no other bug has been introduced by the changes. Sometimes, when multiple cycles of regression testing are executed, sanity testing can be done at later cycles, after the thorough regression cycles. When a build is moved from the staging/testing server to the production server, sanity testing can be done to check whether the build is sane enough to move further to the production server.

Difference between Smoke & Sanity Software Testing:

  • Smoke testing is a wide approach in which all areas of the software application are tested without going into too much depth. Sanity testing, however, is a narrow regression test focused on one or a small set of areas of functionality of the application.
  • The test cases for smoke testing can be either manual or automated. A sanity test, however, is generally carried out without test scripts or test cases.
  • Smoke testing is done to check whether the main functions of the application are working; during smoke testing we do not go into finer details. Sanity testing, however, is a cursory type of testing, done whenever a quick round of testing can show that the application is functioning according to the business/functional requirements.
  • Smoke testing is done to check whether the build can be accepted for thorough software testing. Sanity testing is done to check whether the requirements are met or not.

White Box Testing

Testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose.

Pilot Testing

Testing that involves users just before the actual release, to ensure that users become familiar with the release contents and ultimately accept it. It is often considered a move-to-production activity for ERP releases, or a beta test for commercial products. It typically involves many users, is conducted over a short period of time and is tightly controlled (see beta testing).

Independent Verification & Validation

The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner. The individual or group doing this work is not part of the group or organization that developed the software. A term often applied to government work or where the government regulates the products, as in medical devices.

Unit Test

The first test in the development process is the unit test. The source code is normally divided into modules, which in turn are divided into smaller pieces called units. These units have specific behavior, and the testing done on these units of code is called unit testing. Unit testing depends on the language in which the project is developed. Unit tests ensure that each unique path of the project performs accurately to the documented specifications and has clearly defined inputs and expected results.
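
As a minimal sketch (the add function below is a hypothetical unit, shown with Python's unittest), a unit test pairs a clearly defined input with an expected result:

import unittest

def add(a, b):
    """Hypothetical unit under test with a specific, documented behavior."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        # Defined input (2, 3) and expected result (5).
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()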

Ad Hoc Testing

Testing without a formal test plan or outside of a test plan. With some projects this type of testing is carried out as an adjunct to formal testing. If carried out by a skilled tester, it can often find problems that are not caught in regular testing. Sometimes, if testing occurs very late in the development cycle, this will be the only kind of testing that can be performed. Sometimes ad hoc testing is referred to as exploratory testing.

Automated Testing

Software testing that uses a variety of tools to automate the testing process, reducing the need for a person to test manually. Automated testing still requires a skilled quality assurance professional, with knowledge of the automation tool and of the software being tested, to set up the tests.

Black Box Testing

Testing software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, as most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document.

Beta Testing

Testing after the product is code complete. Betas are often widely distributed or even distributed to the public at large in hopes that they will buy the final product when it is released.

Alpha Testing

Testing after code is mostly complete or contains most of the functionality, and prior to users being involved. Sometimes a select group of users is involved. More often this testing will be performed in-house or by an outside testing firm in close cooperation with the software engineering department.

Testing done at the developer's site is known as alpha testing.

SMOKE TESTING

A quick-and-dirty test that the major functions of a piece of software work without bothering with finer details. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

CONFORMANCE TESTING

Verifying implementation conformance to industry standards. Producing tests for the behavior of an implementation to be sure it provides the portability, interoperability, and/or compatibility a standard defines.

Compatibility testing

Testing how well software performs in a particular hardware/software/operating system/network environment, and in different combinations of the above.

Security testing

Checks whether the system can be penetrated by any form of hacking. Tests how well the system protects against unauthorized internal or external access, and checks whether the system and database are safe from external attacks.

Recovery testing

Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Usability testing

User-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented wherever the user might get stuck? Basically, system navigation is checked in this testing.

System testing

Entire system is tested as per the requirements. Black-box type testing that is based on overall requirements specifications, covers all combined parts of a system.

Sanity testing

Testing to determine whether a new software version is performing well enough to accept it for a major testing effort. If the application is crashing on initial use, the system is not stable enough for further testing, and the build or application is sent back to be fixed.

Acceptance testing

Normally this type of testing is done to verify whether the system meets the customer-specified requirements. The user or customer does this testing to determine whether to accept the application.

Performance testing

A term often used interchangeably with 'stress' and 'load' testing. It checks whether the system meets the performance requirements. Different performance and load tools are used to do this.

Load testing

A performance test that checks system behavior under load: testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.

Stress Testing

The system is stressed beyond its specifications to check how and when it fails. It is performed under heavy load, such as putting in data beyond the storage capacity, complex database queries, or continuous input to the system or database.
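
A minimal load-test sketch in Python (the target URL and user count below are placeholders; a real load test would use a dedicated load-generation tool) fires concurrent requests and reports how response time and failures behave as load is applied:

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://example.com/"   # placeholder target
CONCURRENT_USERS = 20                # simulated load level

def timed_request(_):
    # Measure one request; count it as a failure on any error or timeout.
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
            resp.read()
        ok = True
    except Exception:
        ok = False
    return ok, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = list(pool.map(timed_request, range(CONCURRENT_USERS)))

failures = sum(1 for ok, _ in results if not ok)
average = sum(duration for _, duration in results) / len(results)
print(f"average response {average:.2f}s, failures {failures}/{len(results)}")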

Wednesday, September 29, 2010

Test Scenario for ATM Machine

1. Insert the card.
2. Check whether the card is inserted properly or not.
3. If the card is inserted the wrong way round, it is rejected.
4. If the card is inserted correctly, the machine asks for a language: English or Hindi.
5. Choose one, English or Hindi.
6. It asks for the PIN number.
7. If the PIN is correct, it shows options such as Withdrawal, Balance and so on.
8. If the PIN is wrong, it asks again.
9. Choose your option; if your choice is withdrawal, select Withdrawal.
10. It asks for the amount.
11. It checks the entered amount against your balance.
12. If the amount is greater than your balance, it shows an error message.
13. If the amount is less than your balance, it asks "Do you want a receipt for this transaction? Yes or No".
14. If you choose Yes, you get a receipt; if you choose No, there is no receipt.
15. Now you can collect your cash.
16. It then asks whether you want to continue or exit; the choice is yours.
17. If you choose Exit, your card comes out and you also get a receipt.
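
If a few of these scenario steps were automated, they could look like the following sketch (the ATM class below is purely hypothetical and greatly simplified, used only to show how the PIN and balance checks above translate into test cases):

import unittest

class ATM:
    """Hypothetical, simplified ATM model for illustration only."""
    def __init__(self, correct_pin, balance):
        self.correct_pin = correct_pin
        self.balance = balance
        self.authenticated = False

    def enter_pin(self, pin):
        self.authenticated = (pin == self.correct_pin)
        return self.authenticated

    def withdraw(self, amount):
        if not self.authenticated:
            return "error: not authenticated"
        if amount > self.balance:
            return "error: insufficient balance"
        self.balance -= amount
        return "dispensed"

class TestATMScenario(unittest.TestCase):
    def test_wrong_pin_is_rejected(self):
        atm = ATM(correct_pin="1234", balance=1000)
        self.assertFalse(atm.enter_pin("0000"))

    def test_withdrawal_over_balance_shows_error(self):
        atm = ATM(correct_pin="1234", balance=1000)
        atm.enter_pin("1234")
        self.assertEqual(atm.withdraw(5000), "error: insufficient balance")

    def test_successful_withdrawal_reduces_balance(self):
        atm = ATM(correct_pin="1234", balance=1000)
        atm.enter_pin("1234")
        self.assertEqual(atm.withdraw(400), "dispensed")
        self.assertEqual(atm.balance, 600)

if __name__ == "__main__":
    unittest.main()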

Monday, July 26, 2010

Change Control

Change control is a formal process used to ensure that changes to a product or system are introduced in a controlled and coordinated manner. It reduces the possibility that unnecessary changes will be introduced to a system without forethought, introducing faults into the system or undoing changes made by other users of the software. The goals of a change control procedure usually include minimal disruption to services, reduction in back-out activities and cost-effective utilization of the resources involved in implementing the change.

Tuesday, July 6, 2010

Difference between ad hoc testing, monkey testing and exploratory testing

Ad hoc testing: This kind of testing has no process, test cases or test scenarios defined or preplanned. It involves simultaneous test design and test execution.

Monkey testing: Monkey testing is testing that runs with no specific test in mind. The monkey in this case is the producer of any input data (whether file data or input-device data).
Keep pressing keys randomly and check whether the software fails or not.

Exploratory testing is simultaneous learning, test design and test execution. It is a type of ad hoc testing, but here the tester does not have much idea about the application; he explores the system in an attempt to learn the application and simultaneously test it.

Monday, June 28, 2010

Prototyping Model

The prototyping model assumes that you do not have clear requirements at the beginning of the project. Often, customers have a vague idea of the requirements in the form of objectives that they want the system to address. With the prototyping model, you build a simplified version of the system and seek feedback from the parties who have a stake in the project. The next iteration incorporates the feedback and improves on the requirements specification. The prototypes that are built during the iterations can be any of the following:

  • A simple user interface without any actual data processing logic
  • A few subsystems with functionality that is partially or completely implemented
  • Existing components that demonstrate the functionality that will be incorporated into the system

The prototyping model consists of the following steps.

  1. Capture requirements. This step involves collecting the requirements over a period of time as they become available.
  2. Design the system. After capturing the requirements, a new design is made or an existing one is modified to address the new requirements.
  3. Create or modify the prototype. A prototype is created or an existing prototype is modified based on the design from the previous step.
  4. Assess based on feedback. The prototype is sent to the stakeholders for review. Based on their feedback, an impact analysis is conducted for the requirements, the design, and the prototype. The role of testing at this step is to ensure that customer feedback is incorporated in the next version of the prototype.
  5. Refine the prototype. The prototype is refined based on the impact analysis conducted in the previous step.
  6. Implement the system. After the requirements are understood, the system is rewritten either from scratch or by reusing the prototypes. The testing effort consists of the following:
    • Ensuring that the system meets the refined requirements
    • Code review
    • Unit testing
    • System testing

The main advantage of the prototyping model is that it allows you to start with requirements that are not clearly defined.

The main disadvantage of the prototyping model is that it can lead to poorly designed systems. The prototypes are usually built without regard to how they might be used later, so attempts to reuse them may result in inefficient systems. This model emphasizes refining the requirements based on customer feedback, rather than ensuring a better product through quick change based on test feedback.

Incremental or Iterative Development

The incremental, or iterative, development model breaks the project into small parts. Each part is subjected to multiple iterations of the waterfall model. At the end of each iteration, a new module is completed or an existing one is improved on, the module is integrated into the structure, and the structure is then tested as a whole.

For example, using the iterative development model, a project can be divided into 12 one- to four-week iterations. The system is tested at the end of each iteration, and the test feedback is immediately incorporated at the end of each test cycle. The time required for successive iterations can be reduced based on the experience gained from past iterations. The system grows by adding new functions during the development portion of each iteration. Each cycle tackles a relatively small set of requirements; therefore, testing evolves as the system evolves. In contrast, in a classic waterfall life cycle, each phase (requirement analysis, system design, and so on) occurs once in the development cycle for the entire set of system requirements.

The main advantage of the iterative development model is that corrective actions can be taken at the end of each iteration. The corrective actions can be changes to the specification because of incorrect interpretation of the requirements, changes to the requirements themselves, and other design or code-related changes based on the system testing conducted at the end of each cycle.

The main disadvantages of the iterative development model are as follows:

  • The communication overhead for the project team is significant, because each iteration involves giving feedback about deliverables, effort, timelines, and so on.
  • It is difficult to freeze requirements, and they may continue to change in later iterations because of increasing customer demands. As a result, more iterations may be added to the project, leading to project delays and cost overruns.
  • The project requires a very efficient change control mechanism to manage changes made to the system during each iteration.

Waterfall Model

The waterfall model is one of the earliest structured models for software development. It consists of the following sequential phases through which the development life cycle progresses:

  • System feasibility. In this phase, you consider the various aspects of the targeted business process, find out which aspects are worth incorporating into a system, and evaluate various approaches to building the required software.
  • Requirement analysis. In this phase, you capture software requirements in such a way that they can be translated into actual use cases for the system. The requirements can derive from use cases, performance goals, target deployment, and so on.
  • System design. In this phase, you identify the interacting components that make up the system. You define the exposed interfaces, the communication between the interfaces, key algorithms used, and the sequence of interaction. An architecture and design review is conducted at the end of this phase to ensure that the design conforms to the previously defined requirements.
  • Coding and unit testing. In this phase, you write code for the modules that make up the system. You also review the code and individually test the functionality of each module.
  • Integration and system testing. In this phase, you integrate all of the modules in the system and test them as a single system for all of the use cases, making sure that the modules meet the requirements.
  • Deployment and maintenance. In this phase, you deploy the software system in the production environment. You then correct any errors that are identified in this phase, and add or modify functionality based on the updated requirements.

The waterfall model has the following advantages:

  • It allows you to compartmentalize the life cycle into various phases, which allows you to plan the resources and effort required through the development process.
  • It enforces testing in every stage in the form of reviews and unit testing. You conduct design reviews, code reviews, unit testing, and integration testing during the stages of the life cycle.
  • It allows you to set expectations for deliverables after each phase.

The waterfall model has the following disadvantages:

  • You do not see a working version of the software until late in the life cycle. For this reason, you can fail to detect problems until the system testing phase. Problems may be more costly to fix in this phase than they would have been earlier in the life cycle.
  • When an application is in the system testing phase, it is difficult to change something that was not carefully considered in the system design phase. The emphasis on early planning tends to delay or restrict the amount of change that the testing effort can instigate, which is not the case when a working model is tested for immediate feedback.
  • For a phase to begin, the preceding phase must be complete; for example, the system design phase cannot begin until the requirement analysis phase is complete and the requirements are frozen. As a result, the waterfall model is not able to accommodate uncertainties that may persist after a phase is completed. These uncertainties may lead to delays and extended project schedules.

Agile Methodology

Most software development life cycle methodologies are either iterative or follow a sequential model (as the waterfall model does). As software development becomes more complex, these models cannot efficiently adapt to the continuous and numerous changes that occur. Agile methodology was developed to respond to changes quickly and smoothly. Although the iterative methodologies tend to remove the disadvantage of sequential models, they still are based on the traditional waterfall approach. Agile methodology is a collection of values, principles, and practices that incorporates iterative development, test, and feedback into a new style of development. For an overview of agile methodology, see the Agile Modeling site at http://www.agilemodeling.com/.

The key differences between agile and traditional methodologies are as follows:

  • Development is incremental rather than sequential. Software is developed in incremental, rapid cycles. This results in small, incremental releases, with each release building on previous functionality. Each release is thoroughly tested, which ensures that all issues are addressed in the next iteration.
  • People and interactions are emphasized, rather than processes and tools. Customers, developers, and testers constantly interact with each other. This interaction ensures that the tester is aware of the requirements for the features being developed during a particular iteration and can easily identify any discrepancy between the system and the requirements.
  • Working software is the priority rather than detailed documentation. Agile methodologies rely on face-to-face communication and collaboration, with people working in pairs. Because of the extensive communication with customers and among team members, the project does not need a comprehensive requirements document.
  • Customer collaboration is used, rather than contract negotiation. All agile projects include customers as a part of the team. When developers have questions about a requirement, they immediately get clarification from customers.
  • Responding to change is emphasized, rather than extensive planning. Extreme Programming does not preclude planning your project. However, it suggests changing the plan to accommodate any changes in assumptions for the plan, rather than stubbornly trying to follow the original plan.

Agile methodology has various derivative approaches, such as Extreme Programming, Dynamic Systems Development Method (DSDM) and SCRUM. Extreme Programming is one of the most widely used approaches.

Sunday, June 27, 2010

Difference between equivalence partitioning and boundary value analysis?

The main difference between EP and BVA is that EP determines the number of test cases to be generated for a given scenario, whereas BVA determines the effectiveness of those generated test cases.

Equivalence partitioning: Equivalence partitioning determines the number of test cases for a given scenario.
Equivalence partitioning is a black box testing technique with the following goals:
1. To reduce the number of test cases to a necessary minimum.
2. To select the right test cases to cover all possible scenarios.

EP is applied to the inputs of a tested component. The equivalence partitions are usually derived from the specification of the component's behaviour. An input has certain ranges which are valid and other ranges which are invalid. This is best explained by the example of a function which has the parameter "month" of a date. The valid range for the month is 1 to 12, standing for January to December. This valid range is called a partition. In this example there are two further partitions of invalid ranges. The first invalid partition would be <= 0 and the second invalid partition would be >= 13. (The input is checked for valid and invalid conditions.)

... -2  -1  0 | 1 .............. 12 | 13  14  15 ...
invalid partition 1 | valid partition | invalid partition 2

The testing theory related to equivalence partitioning says that only one test case of each partition is needed to evaluate the behaviour of the program for the related partition. In other words it is sufficient to select one test case out of each partition to check the behaviour of the program.

Using more or even all of the test cases of a partition will not find new faults in the program. The values within one partition are considered to be "equivalent". Thus the number of test cases can be reduced considerably.

Equivalence partitioning is not a stand-alone method to determine test cases. It has to be supplemented by boundary value analysis. Having determined the partitions of possible inputs, the method of boundary value analysis has to be applied to select the most effective test cases out of these partitions.


Boundary Value Analysis:

Boundary value analysis determines the effectiveness of test cases for a given scenario. To set up boundary value analysis test cases, the tester first has to determine which boundaries are at the interface of a software component. This has to be done by applying the equivalence partitioning technique. Boundary value analysis and equivalence partitioning are inevitably linked together.

For the month example, a date would have the following partitions:

... -2  -1  0 | 1 .............. 12 | 13  14  15 ...
invalid partition 1 | valid partition | invalid partition 2

By applying boundary value analysis we can select a test case on each side of the boundary between two partitions. In the above example this would be 0 and 1 for the lower boundary, as well as 12 and 13 for the upper boundary. Each of these pairs consists of a "clean" and a "dirty" test case. A "clean" test case should give a valid operation result of the program. A "dirty" test case should lead to a correct and specified input-error treatment such as the limiting of values, the usage of a substitute value or, in the case of a program with a user interface, a warning and a request to enter correct data.
The boundary value analysis can have 6 test cases: n, n-1, and n+1 for the upper limit; and n, n-1, and n+1 for the lower limit.
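
Putting the two techniques together for the month example, a small sketch (the is_valid_month function is hypothetical, shown with Python's unittest) picks one value per partition and then the values on each side of the two boundaries:

import unittest

def is_valid_month(month):
    """Hypothetical validator for the month example: 1..12 is the valid partition."""
    return 1 <= month <= 12

class TestMonthPartitions(unittest.TestCase):
    def test_equivalence_partitions(self):
        # EP: one representative value per partition is enough.
        self.assertFalse(is_valid_month(-5))   # invalid partition 1 (<= 0)
        self.assertTrue(is_valid_month(6))     # valid partition (1..12)
        self.assertFalse(is_valid_month(20))   # invalid partition 2 (>= 13)

    def test_boundary_values(self):
        # BVA: the values on each side of every boundary.
        self.assertFalse(is_valid_month(0))    # just below the lower boundary ("dirty")
        self.assertTrue(is_valid_month(1))     # lower boundary ("clean")
        self.assertTrue(is_valid_month(12))    # upper boundary ("clean")
        self.assertFalse(is_valid_month(13))   # just above the upper boundary ("dirty")

if __name__ == "__main__":
    unittest.main()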

Thursday, April 8, 2010

What is CMM?

The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.

What is ad hoc testing?




Ad hoc testing is a commonly used term for software testing performed without planning and documentation.
The tests are intended to be run only once, unless a defect is discovered.


Ad hoc testing is a testing approach; it is the least formal testing approach. 

What is boundary value analysis?

Boundary value analysis is a technique for test data selection. A test engineer chooses values that lie along data extremes. Boundary values include maximum, minimum, just inside boundaries, just outside boundaries, typical values, and error values. The expectation is that, if a system works correctly for these extreme or special values, then it will work correctly for all values in between. An effective way to test code is to exercise it at its natural boundaries.

What is quality assurance?

Quality Assurance ensures that all parties concerned with the project adhere to the processes and procedures, standards and templates, and test readiness reviews.

Rob Davis' QA service depends on the customers and projects. A lot will depend on team leads or managers, feedback to developers and communications among customers, managers, developers, test engineers and testers.

What is software quality assurance?

Software Quality Assurance, when Rob Davis does it, is oriented to *prevention*. It involves the entire software development process. Prevention is monitoring and improving the process, making sure any agreed-upon standards and procedures are followed and ensuring problems are found and dealt with. Software Testing, when performed by Rob Davis, is oriented to *detection*. Testing involves the operation of a system or application under controlled conditions and evaluating the results. Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they're the combined responsibility of one group or individual. Also common are project teams that include a mix of test engineers, testers and developers who work closely together, with overall QA processes monitored by project managers. It depends on what best fits your organization's size and business structure. Rob Davis can provide QA and/or Software QA. This document details some aspects of how he can provide software testing/QA service. For more information, e-mail rob@robdavispe.com

What can be done if requirements are changing continuously?

Work with management early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance. It is helpful if the application’s initial design allows for some adaptability, so that later changes do not require redoing the application from scratch. Additionally, try to…

· Ensure the code is well commented and well documented; this makes changes easier for the developers.

· Use rapid prototyping whenever possible; this will help customers feel sure of their requirements and minimize changes.

· In the project's initial schedule, allow some extra time commensurate with probable changes.

· Move new requirements to a 'Phase 2' version of an application and use the original requirements for the 'Phase 1' version.

· Negotiate to allow only easily implemented new requirements into the project; move more difficult, new requirements into future versions of the application.

· Ensure customers and management understand scheduling impacts, inherent risks and costs of significant requirements changes. Then let management or the customers decide if the changes are warranted; after all, that’s their job.

· Balance the effort put into setting up automated testing with the expected effort required to redo them to deal with changes.

· Design some flexibility into automated test scripts;

· Focus initial automated testing on application aspects that are most likely to remain unchanged;

· Devote appropriate effort to risk analysis of changes, in order to minimize regression-testing needs;

· Design some flexibility into test cases; this is not easily done; the best bet is to minimize the detail in the test cases, or set up only higher-level generic-type test plans;

· Focus less on detailed test plans and test cases and more on ad-hoc testing with an understanding of the added risk this entails.

How do you know when to stop testing?

This can be difficult to determine. Many modern software applications are so complex and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are…

Deadlines, e.g. release deadlines, testing deadlines;

Test cases completed with certain percentage passed;

Test budget has been depleted;

Coverage of code, functionality, or requirements reaches a specified point;

Bug rate falls below a certain level; or

Beta or alpha testing period ends.

What if the software is so buggy it can’t be tested at all?

In this situation the best bet is to have test engineers go through the process of reporting whatever bugs or problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules and indicates deeper problems in the software development process, such as insufficient unit testing, insufficient integration testing, poor design, improper build or release procedures, managers should be notified and provided with some documentation as evidence of the problem

What is configuration management?

Configuration management (CM) covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries, patches, changes made to them and who makes the changes. Rob Davis has had experience with a full range of CM tools and concepts. Rob Davis can easily adapt to your software tool and process needs.

What should be done after a bug is found?

When a bug is found, it needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested. Additionally, determinations should be made regarding requirements, software, hardware, safety impact, etc., for regression testing to check the fixes didn’t create other problems elsewhere. If a problem-tracking system is in place, it should encapsulate these determinations. A variety of commercial, problem-tracking/management software tools are available. These tools, with the detailed input of software test engineers, will give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it and fix it.

What is a test plan?

A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the why and how of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will be able to read it.

What is the role of documentation in QA?

Documentation plays a critical role in QA. QA practices should be documented so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports and user manuals should all be documented. Ideally, there should be a system for easily finding and obtaining documents and for determining which document will have a particular piece of information. Use documentation change management, if possible.

What makes a good QA engineer?

The same qualities a good test engineer has are useful for a QA engineer. Additionally, Rob Davis understands the entire software development process and how it fits into the business approach and the goals of the organization, and his communication skills and ability to understand various sides of issues are important. Good QA engineers in general share these qualities.

What makes a good test engineer?

Rob Davis is a good test engineer because he

Has a “test to break” attitude,

Takes the point of view of the customer,

Has a strong desire for quality,

Has attention to detail. He's also

Tactful and diplomatic and

Has good communication skills, both oral and written. And he

Has previous software development experience, too.

Give me five solutions to problems that occur during software development.

Solid requirements, realistic schedules, adequate testing, firm requirements and good communication.

Ensure the requirements are solid, clear, complete, detailed, cohesive, attainable and testable. All players should agree to requirements. Use prototypes to help nail down requirements.

Have schedules that are realistic. Allow adequate time for planning, design, testing, bug fixing, re-testing, changes and documentation. Personnel should be able to complete the project without burning out.

Do testing that is adequate. Start testing early on, re-test after fixes or changes, and plan for sufficient time for both testing and bug fixing.

Avoid new features. Stick to initial requirements as much as possible. Be prepared to defend the design against changes and additions once development has begun, and be prepared to explain the consequences. If changes are necessary, ensure they're adequately reflected in related schedule changes. Use prototypes early on so customers' expectations are clarified and customers can see what to expect; this will minimize changes later on.

Communicate. Require walkthroughs and inspections when appropriate; make extensive use of e-mail, networked bug-tracking tools, tools of change management. Ensure documentation is available and up-to-date. Use documentation that is electronic, not paper. Promote teamwork and cooperation.

Do automated testing tools make testing easier?

Yes and no. For larger projects, or ongoing long-term projects, they can be valuable. But for small projects, the time needed to learn and implement them is usually not worthwhile. A common type of automated tool is the record/playback type. For example, a test engineer clicks through all combinations of menu choices, dialog box choices, buttons, etc. in a GUI and has an automated testing tool record and log the results. The recording is typically in the form of text, based on a scripting language that the testing tool can interpret. If a change is made (e.g. new buttons are added, or some underlying code in the application is changed), the application is then re-tested by just playing back the recorded actions and comparing them to the logged results in order to check the effects of the change. One problem with such tools is that if there are continual changes to the product being tested, the recordings have to be changed so often that it becomes a very time-consuming task to continuously update the scripts. Another problem with such tools is the interpretation of the results (screens, data, logs, etc.), which can be a time-consuming task. You can learn to use automated testing tools with little or no outside help.

Give me five common problems that occur during software development.

Poorly written requirements, unrealistic schedules, inadequate testing, adding new features after development is underway and poor communication. Requirements are poorly written when they are unclear, incomplete, too general, or not testable; therefore there will be problems.

The schedule is unrealistic if too much work is crammed in too little time.

Software testing is inadequate if no one knows whether or not the software is any good until customers complain or the system crashes.

It’s extremely common that new features are added after development is underway.

Miscommunication means either that the developers don't know what is needed or that customers have unrealistic expectations; in either case, problems are guaranteed.

How do you introduce a new software QA process?

It depends on the size of the organization and the risks involved. For large organizations with high-risk projects, serious management buy-in is required and a formalized QA process is necessary. For medium-size organizations with lower-risk projects, management and organizational buy-in and a slower, step-by-step process are required. Generally speaking, QA processes should be balanced with productivity, in order to keep any bureaucracy from getting out of hand. For smaller groups or projects, an ad-hoc process is more appropriate. A lot depends on team leads and managers and on feedback to developers; good communication is essential among customers, managers, developers, test engineers and testers. Regardless of the size of the company, the greatest value for effort is in managing requirement processes, where the goal is requirements that are clear, complete and testable.

Why are there so many software bugs?

Generally speaking, there are bugs in software because of unclear requirements, software complexity, programming errors, changes in requirements, errors made in bug tracking, time pressure, poorly documented code and/or bugs in tools used in software development.

There are unclear software requirements because there is miscommunication as to what the software should or shouldn’t do.

Software complexity. All of the following contribute to the exponential growth in software and system complexity: Windows interfaces, client-server and distributed applications, data communications, enormous relational databases and the sheer size of applications.

Programming errors occur because programmers and software engineers, like everyone else, can make mistakes.

As to changing requirements, in some fast-changing business environments continuously modified requirements are a fact of life. Sometimes customers do not understand the effects of changes, or understand them but request them anyway. The changes require redesign of the software and rescheduling of resources, some of the work already completed has to be redone or discarded, and hardware requirements can be affected, too.

Bug tracking can itself result in errors, because keeping track of a large number of changes is complex.

Time pressures can cause problems, because scheduling software projects is not easy and often requires a lot of guesswork; when deadlines loom and the crunch comes, mistakes will be made.

Code documentation is tough to maintain and it is also tough to modify code that is poorly documented. The result is bugs. Sometimes there is no incentive for programmers and software engineers to document their code and write clearly documented, understandable code. Sometimes developers get kudos for quickly turning out code, or programmers and software engineers feel they cannot have job security if everyone can understand the code they write, or they believe if the code was hard to write, it should be hard to read.

Software development tools, including visual tools, class libraries, compilers and scripting tools, can introduce their own bugs. Other times the tools are poorly documented, which can create additional bugs.

What is software life cycle?

Software life cycle begins when a software product is first conceived and ends when it is no longer in use. It includes phases like initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, re-testing and phase-out.

What is good design?

Design could mean many things, but often refers to functional design or internal design. Good functional design is indicated when the software's functionality can be traced back to customer and end-user requirements. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable and maintainable; is robust, with sufficient error handling and status logging capability; and works correctly when implemented.

What is good code?

Good code is code that works, is free of bugs and is readable and maintainable. Organizations usually have coding standards all developers should adhere to, but every programmer and software engineer has different ideas about what is best and about what are too many or too few rules. We need to keep in mind that excessive use of rules can stifle both productivity and creativity. Peer reviews and code analysis tools can be used to check for problems and enforce standards.

What is quality?

Quality software is software that is reasonably bug-free, delivered on time and within budget, meets requirements and expectations and is maintainable. However, quality is a subjective term. Quality depends on who the customer is and their overall influence in the scheme of things. Customers of a software development project include end-users, customer acceptance test engineers, testers, customer contract officers, customer management, the development organization’s management, test engineers, testers, salespeople, software engineers, stockholders and accountants. Each type of customer will have his or her own slant on quality. The accounting department might define quality in terms of profits, while an end-user might define quality as user friendly and bug free.

What is an inspection?

An inspection is a formal meeting, more formalized than a walkthrough, and typically consists of 3-10 people including a moderator, a reader and a recorder (to make notes). The subject of the inspection is typically a document, such as a requirements document or a test plan. The purpose of an inspection is to find problems and see what is missing, not to fix anything. The result of the meeting should be documented in a written report. Attendees should prepare for this type of meeting by reading through the document before the meeting starts; most problems are found during this preparation. Preparation for inspections is difficult, but it is one of the most cost-effective methods of ensuring quality, since bug prevention is more cost-effective than bug detection.

What is a walkthrough?

A walkthrough is an informal meeting for evaluation or informational purposes. A walkthrough is also a process at an abstract level. It’s the process of inspecting software code by following paths through the code (as determined by input conditions and choices made along the way). The purpose of code walkthroughs is to ensure the code fits the purpose. Walkthroughs also offer opportunities to assess an individual’s or team’s competency.

What is validation?

Validation ensures that functionality, as defined in requirements, is the intended behavior of the product; validation typically involves actual testing and takes place after verifications are completed.

What is verification?

Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, walkthroughs and inspection meetings. You can learn to do verification with little or no outside help.

Wednesday, April 7, 2010

What is a test case?

A test case is a set of test inputs, execution conditions and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

Or: it is the documentation specifying inputs, predicted results and a set of execution conditions for a test item.
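
As an illustration only (the login function below is hypothetical), the same three parts, inputs, execution conditions and expected results, are visible when a test case is written as code:

import unittest

def login(user_id, password):
    """Hypothetical component under test: a hard-coded credential check."""
    return user_id == "admin" and password == "secret"

class TestLogin(unittest.TestCase):
    def test_valid_credentials_are_accepted(self):
        # Test inputs: a valid user ID and password.
        # Execution condition: the account "admin" exists with password "secret".
        # Expected result: login succeeds.
        self.assertTrue(login("admin", "secret"))

    def test_invalid_password_is_rejected(self):
        # Expected result: login fails for a wrong password.
        self.assertFalse(login("admin", "wrong"))

if __name__ == "__main__":
    unittest.main()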

Monday, March 29, 2010

White Box Testing

The white box testing strategy deals with the internal logic and structure of the code. White box testing is also called glass, structural, open box or clear box testing.

Types of testing under White/Glass Box Testing Strategy:

Unit Testing:
The developer carries out unit testing in order to check whether a particular module or unit of code is working fine. Unit testing comes at the very basic level, as it is carried out as and when a unit of the code is developed or a particular functionality is built.

Static and dynamic Analysis:
Static analysis involves going through the code in order to find out any possible defect in the code. Dynamic analysis involves executing the code and analyzing the output.

Statement Coverage:
In this type of testing the code is executed in such a manner that every statement of the application is executed at least once. It helps in assuring that all the statements execute without any side effect.

Branch Coverage:
No software application can be written in a continuous mode of coding; at some point we need to branch the code in order to perform a particular functionality. Branch coverage testing helps in validating all the branches in the code and making sure that no branch leads to abnormal behavior of the application.
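
To make the statement/branch distinction concrete, here is a small hypothetical sketch: a single test can execute every statement of apply_discount, yet still leave one branch of the if untested, which is exactly what branch coverage adds:

def apply_discount(price, is_member):
    # One if without an else: statement coverage can reach 100% from a single
    # test, but branch coverage needs both outcomes of the condition.
    if is_member:
        price = price - 10          # flat discount for members
    return price

# apply_discount(100, True) executes every statement (full statement coverage)
# but never takes the False branch of the if.
assert apply_discount(100, True) == 90
# This second call exercises the False branch, which branch coverage requires.
assert apply_discount(100, False) == 100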

Security Testing:
Security testing is carried out in order to find out how well the system can protect itself from unauthorized access, hacking, cracking, code damage, etc., at the level of the application's code. This type of testing needs sophisticated testing techniques.

Mutation Testing:
A kind of testing in which small changes (mutations) are deliberately made to the application's code, and the existing tests are run to check whether they detect the change. It also helps in finding out which code and which coding strategy can help in developing the functionality effectively.
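
As a rough sketch of the idea (the function and the hand-made mutant below are hypothetical), a mutation is a small deliberate change to the code; a strong test suite should fail against the mutant, i.e. "kill" it:

def max_of(a, b):
    """Original implementation."""
    return a if a >= b else b

def max_of_mutant(a, b):
    """Mutant: the '>=' operator is deliberately changed to '<='."""
    return a if a <= b else b

def test_suite(impl):
    # Returns True only if all the tests pass against the given implementation.
    return impl(3, 5) == 5 and impl(7, 2) == 7

assert test_suite(max_of) is True          # the tests pass on the original code
assert test_suite(max_of_mutant) is False  # the tests detect (kill) the mutant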

Methods of Black Box Testing

1.Boundary value analysis
2.Equivalence Partitioning
3.Error Guessing

Boundary Value Analysis (BVA) is a functional testing technique in which the extreme boundary values are chosen. Boundary values include maximum, minimum, just inside/outside boundaries, typical values and error values.

Extends equivalence partitioning
Test both sides of each boundary
Look at output boundaries for test cases too
Test min, min-1, max, max+1, typical values

Equivalence Partitioning:
Equivalence partitioning is a black box testing method that divides the input domain of a program into classes of data from which test cases can be derived.

How is this partitioning performed while testing (a short sketch follows this list):

1. If an input condition specifies a range, one valid and two invalid classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class is defined.
4. If an input condition is Boolean, one valid and one invalid class is defined.
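
For instance, applying these rules to a hypothetical input specification (a quantity that must lie in 1..100, a country that must belong to a fixed set, and a Boolean express flag), the equivalence classes could be sketched like this:

# Hypothetical input specification, used only to illustrate the rules above.
quantity_classes = {               # rule 1: a range (1..100)
    "valid": [50],                 # any value inside the range
    "invalid": [0, 101],           # one class below the range, one above
}
country_classes = {                # rule 3: a member of a set
    "valid": ["IN"],               # any member of {"IN", "US", "UK"}
    "invalid": ["XX"],             # any non-member
}
express_classes = {                # rule 4: a Boolean
    "valid": [True],
    "invalid": ["yes"],            # not a Boolean value
}

# One test case per class is then enough to cover each partition.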

Error Guessing:
Error guessing is purely based on the previous experience and judgment of the tester. It is the art of guessing where errors may be hidden. There are no specific tools for this technique; the tester writes test cases based on experience of where the application is likely to fail.

Comparison Testing:
Different, independent versions of the same software are compared to each other in this method of testing.

Graph Based Testing Methods:
Each and every application is built up of some objects. All such objects are identified and a graph is prepared. From this object graph, each object relationship is identified and test cases are written accordingly to discover the errors.

Black Box Testing

Testing software based on output requirements and without any knowledge of the internal structure or coding in the program.

Also known as functional testing. Black box testing is a software testing technique whereby the internal workings of the item being tested are not known by the tester. For example, in a black box test on a software design the tester only knows the inputs and what the expected outcomes should be and not how the program arrives at those outputs. The tester does not ever examine the programming code and does not need any further knowledge of the program other than its specifications.

The advantages of this type of testing include:

  • The test is unbiased because the designer and the tester are independent of each other.
  • The tester does not need knowledge of any specific programming languages.
  • The test is done from the point of view of the user, not the designer.
  • Test cases can be designed as soon as the specifications are complete.

The disadvantages of this type of testing include:

  • The test can be redundant if the software designer has already run a test case.
  • The test cases are difficult to design.
  • Testing every possible input stream is unrealistic because it would take an inordinate amount of time; therefore, many program paths will go untested.

Types of software Testing

Software Testing Types:

Black box testing - Internal system design is not considered in this type of testing. Tests are based on requirements and functionality.

White box testing - This testing is based on knowledge of the internal logic of an application’s code. Also known as Glass box Testing. Internal software and code working should be known for this type of testing. Tests are based on coverage of code statements, branches, paths, conditions.

Unit testing - Testing of individual software components or modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. It may require developing test driver modules or test harnesses.

Incremental integration testing - A bottom-up approach to testing, i.e., continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to test separately. Done by programmers or by testers.

Integration testing - Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Functional testing - This type of testing ignores the internal parts and focuses on whether the output is as per the requirements. Black-box type testing geared to the functional requirements of an application.

System testing - Entire system is tested as per the requirements. Black-box type testing that is based on overall requirements specifications, covers all combined parts of a system.

End-to-end testing - Similar to system testing, involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Sanity testing - Testing to determine whether a new software version is performing well enough to accept it for a major testing effort. If the application is crashing on initial use, the system is not stable enough for further testing, and the build or application is sent back to be fixed.

Regression testing - Testing the application as a whole after a modification to any module or functionality. It is difficult to cover the whole system in regression testing, so automation tools are typically used for this type of testing.

Acceptance testing - Normally this type of testing is done to verify whether the system meets the customer-specified requirements. The user or customer does this testing to determine whether to accept the application.

Load testing - A performance test to check system behavior under load: testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.

Stress testing - The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, such as putting in data beyond the storage capacity, complex database queries, or continuous input to the system or database.

Performance testing - A term often used interchangeably with 'stress' and 'load' testing. It checks whether the system meets the performance requirements. Different performance and load tools are used to do this.

Usability testing - A user-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented wherever the user gets stuck? Basically, system navigation is checked in this testing.

Install/uninstall testing - Testing of full, partial, or upgrade install/uninstall processes on different operating systems under different hardware and software environments.

Recovery testing - Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Security testing - Can the system be penetrated by any hacking technique? Testing how well the system protects against unauthorized internal or external access, and checking whether the system and database are safe from external attacks.

Compatibility testing - Testing how well software performs in a particular hardware/software/operating system/network environment and in different combinations of the above.

Comparison testing - Comparison of product strengths and weaknesses with previous versions or other similar products.

Alpha testing - An in-house virtual user environment can be created for this type of testing. Testing is done at the end of development; minor design changes may still be made as a result of such testing.

Beta testing - Testing typically done by end-users or others. It is the final testing before releasing the application for commercial purposes.

Bug Life Cycle

1) New: When QA files a new bug.

2) Deferred: If the bug is not related to the current build, cannot be fixed in this release, or is not important enough to fix immediately, the project manager can set the bug status to deferred.

3) Assigned: The ‘Assigned to’ field is set by the project lead or manager, who assigns the bug to a developer.

4) Resolved/Fixed: When the developer makes the necessary code changes and verifies them, he/she can set the bug status to ‘Fixed’ and the bug is passed back to the testing team.

5) Could not reproduce: If the developer is not able to reproduce the bug from the steps given in the bug report by QA, he/she can mark the bug as ‘CNR’. QA then needs to check whether the bug is still reproducible and can reassign it to the developer with detailed reproduction steps.

6) Need more information: If the developer is not clear about the reproduction steps provided by QA, he/she can mark the bug as ‘Need more information’. In this case QA needs to add detailed reproduction steps and assign the bug back to the developer for a fix.

7) Reopen: If QA is not satisfied with the fix, or the bug is still reproducible after the fix, QA can mark it as ‘Reopen’ so that the developer can take appropriate action.

8) Closed: If the bug is verified by the QA team, the fix is fine, and the problem is solved, QA can mark the bug as ‘Closed’.

9) Rejected/Invalid: Sometimes the developer or team lead can mark a bug as ‘Rejected’ or ‘Invalid’ if the system is working according to the specifications and the bug is just due to some misinterpretation.
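The states above can be viewed as a small state machine. A minimal sketch of the allowed transitions (the exact state names and transitions will vary from one bug tracker to another):

# Allowed bug-status transitions, mirroring the life cycle described above.
TRANSITIONS = {
    "New": {"Assigned", "Deferred", "Rejected"},
    "Assigned": {"Fixed", "Could not reproduce", "Need more information", "Deferred"},
    "Fixed": {"Reopen", "Closed"},
    "Could not reproduce": {"Assigned"},
    "Need more information": {"Assigned"},
    "Reopen": {"Assigned"},
    "Deferred": {"Assigned"},
    "Closed": set(),
    "Rejected": set(),
}

def move(current, new):
    # Reject any status change the workflow does not allow.
    if new not in TRANSITIONS[current]:
        raise ValueError("invalid transition: %s -> %s" % (current, new))
    return new

# Typical happy path: New -> Assigned -> Fixed -> Closed.
status = "New"
for nxt in ("Assigned", "Fixed", "Closed"):
    status = move(status, nxt)
print("final status:", status)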

Severity and Priority of Bugs

SEVERITY is the state or quality of being severe, and implies adherence to rigorous standards or high principles. Severity tells us HOW BAD the bug is: it defines the importance of the bug from a FUNCTIONALITY point of view. Severity levels can be defined as follows:

Urgent/Show-stopper: For example, a system crash or an error message forcing the user to close the window; the system stops working totally or partially. A major area of the user’s system is affected by the incident and it is significant to business processes.


Medium/Workaround: When something required by the specs does not work but the tester can go on with testing. It affects a more isolated piece of functionality, occurs only at one or two customers, or is intermittent.


Low: Failures that are unlikely to occur in normal use; problems that do not impact use of the product in any substantive way and have no or very low impact on business processes. When reporting such failures, state the exact error messages.


PRIORITY means something deserves prior attention. It represents the importance of a bug from the customer’s point of view: it voices the precedence established by urgency and is associated with scheduling the bug fix. Priority levels can be defined as follows:


High: This has a major impact on the customer. This must be fixed immediately.


Medium: This has a major impact on the customer. The problem should be fixed before release of the current version in development or a patch must be issued if possible.


Low: This has a minor impact on the customer. The flaw should be fixed if there is time, but it can be deferred until the next release.

Software Testing Key Concepts

* Defects and Failures: As we discussed earlier, defects are not caused only by coding errors; most commonly they arise from requirement gaps in the non-functional requirements, such as usability, testability, scalability, maintainability, performance and security. A failure is the deviation between an actual and an expected result, but not all defects result in failures. A defect can turn into a failure due to a change in the environment or in the configuration of the system.
* Input Combination and Preconditions: Testing all combinations of inputs and initial states (preconditions) is not feasible, which means that finding a large number of infrequent defects is difficult (a small sketch of this combinatorial explosion follows the list).
* Static and Dynamic Analysis: Static testing does not require execution of the code for finding defects, whereas in dynamic testing, software code is executed to demonstrate the results of running tests.
* Verification and Validation: Software testing is done considering these two factors.
1. Verification: checks whether the product is being built according to the specification (“Are we building the product right?”).
2. Validation: checks whether the product meets the customer’s requirements (“Are we building the right product?”).
* Software Quality Assurance: Software testing is an important part of software quality assurance. Quality assurance is an activity that proves the suitability of the product by taking care of the quality of the product and ensuring that the customer requirements are met.
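To see why exhaustive testing of input combinations and preconditions is infeasible, note that even a tiny compatibility matrix explodes combinatorially. A small sketch with made-up factor values:

from itertools import product

# Three small, made-up factors already produce 4 x 3 x 5 = 60 combinations;
# real applications have far more factors, each with far more values.
browsers = ["Chrome", "Firefox", "IE", "Safari"]
operating_systems = ["Windows", "Linux", "Mac OS X"]
locales = ["en", "fr", "de", "hi", "zh"]

combinations = list(product(browsers, operating_systems, locales))
print("combinations for just three factors:", len(combinations))  # 60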

Sunday, March 28, 2010

Test Life Cycle

1. Requirements stage
2. Test Plan
3. Test Design.
4. Design Reviews
5. Code Reviews
6. Test Cases preparation.
7. Test Execution
8. Test Reports.
9. Bugs Reporting
10. Reworking on patches.
11. Release to production.

Requirements Stage

In many companies, only developers take part in the requirements stage. Especially for product-based companies, a tester should also be involved at this stage, since a tester thinks from the user’s side whereas a developer may not. A separate panel should be formed for each module, comprising a developer, a tester and a user, and panel meetings should be scheduled in order to gather everyone’s views. All the requirements should be documented properly for further use; this document is called the “Software Requirements Specification”.


Test Plan

Without a good plan, no work is a success; a successful piece of work always starts from a good plan. The software testing process likewise requires a good plan. The test plan document is the most important document for bringing in a process-oriented approach, and it should be prepared after the requirements of the project are confirmed. The test plan document must contain the following information:

• Total number of features to be tested.
• Testing approaches to be followed.
• The testing methodologies
• Number of man-hours required.
• Resources required for the whole testing process.
• The testing tools that are to be used.
• The test cases, etc


Test Design


Test design is done based on the requirements of the project. Tests have to be designed based on whether manual or automated testing will be done. For automation testing, the different paths to be tested are identified first, and an end-to-end checklist has to be prepared covering all the features of the project.

The test design is represented diagrammatically. The test design involves various stages, which can be summarized as follows:

• The different modules of the software are identified first.
• Next, the paths connecting all the modules are identified.

Then the design is drawn. The test design is the most critical stage, since it decides the test case preparation; the test design therefore determines the quality of the testing process.


Test Cases Preparation

Test cases should be prepared based on the following scenarios; a small boundary-value sketch follows the list:

• Positive scenarios
• Negative scenarios
• Boundary conditions and
• Real World scenarios
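Boundary conditions in particular are exercised at, just below, and just above each limit. A minimal sketch, assuming a hypothetical rule that an age field must accept values from 18 to 60 inclusive:

# Hypothetical validation rule: age must be between 18 and 60 inclusive.
def is_valid_age(age):
    return 18 <= age <= 60

# Boundary-value test data: each limit, plus one step below and above it.
boundary_cases = [
    (17, False),  # just below the lower boundary (negative scenario)
    (18, True),   # lower boundary (positive scenario)
    (60, True),   # upper boundary (positive scenario)
    (61, False),  # just above the upper boundary (negative scenario)
]

for value, expected in boundary_cases:
    assert is_valid_age(value) == expected, value
print("all boundary cases passed")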


Design Reviews


The software design is done in a systematic manner, often using UML. The tester can review the design and suggest ideas and modifications as needed.



Code Reviews

Code reviews are similar to unit testing. Once the code is ready for release, the tester should be ready to do unit testing on it, with his or her own unit test cases prepared. Though a developer does the unit testing, a tester should also do it: developers may overlook some of the minute mistakes in the code which a tester may find.


Test Execution and Bugs Reporting

Once unit testing is completed and the code is released to QA, functional testing is done. Top-level testing is done at the beginning of the test cycle to find top-level failures. If any top-level failures occur, the bugs should be reported to the developer immediately to get the required workaround.

The test reports should be documented properly and the bugs have to be reported to the developer after the testing is completed.

Release to Production

Once the bugs are fixed, another release is given to QA with the modified changes and regression testing is executed. Once QA assures the quality of the software, it is released to production; before that release, another round of top-level testing is done.

The testing process is iterative: once the bugs are fixed, testing has to be done again. Thus the testing process is, in practice, never-ending.

What makes a good Software QA engineer?

The same qualities a good tester has are useful for a QA engineer. Additionally, they must be able to understand the entire software development process and how it can fit into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems as well as to see 'what's missing' is important for inspections and reviews.

Verification and Validation

Verification:
* Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications.
* The determination of consistency, correctness & completeness of a program at each stage.

Validation:
* Validation typically involves actual testing and takes place after verifications are completed
* The determination of correctness of a final program with respect to its requirements.

Introduction to Software Testing

Before moving further towards introduction to software testing, we need to know a few concepts that will simplify the definition of software testing.

* Error: An error or mistake is a human action that produces a wrong or incorrect result.
* Defect (Bug, Fault): A flaw in the system or product that can cause a component to fail or malfunction.
* Failure: It is the variance between the actual and expected result.
* Risk: Risk is a factor that could result in negativity or a chance of loss or damage.

Thus software testing is the process of finding defects/bugs in the system that occur due to errors in the application, which could lead to failure of the resultant product and an increased probability of high risk. In short, software testing has different goals and objectives, which often include:

1. finding defects;
2. gaining confidence in and providing information about the level of quality;
3. preventing defects.

Software Quality Assurance

* The purpose of Software Quality Assurance is to provide management with appropriate visibility into the processes being used by the software project and into the products being built.
* Software Quality Assurance involves reviewing and auditing the software products and activities to verify that they comply with the applicable procedures and standards and providing the software project and other appropriate managers with the results of these reviews and audits.