Smoke Testing
A software smoke test determines whether the program launches and whether its interfaces are accessible and responsive (for example, the responsiveness of a web page or an input button). Smoke testing is widely regarded as one of the most cost-effective methods for identifying and fixing defects in software; some even believe that it is the most effective of all.
Ex: After receiving a new build, verify that the application launches, the main screen loads, and its buttons respond to input, before handing the build over for deeper testing.
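A minimal smoke check is easy to automate. The sketch below, in Python, uses the requests library; the base URL and paths are hypothetical placeholders, not from any particular application:

```python
# Smoke-test sketch: reject the build early if the application does not
# even respond. BASE_URL and the paths are hypothetical placeholders.
import requests

BASE_URL = "http://localhost:8000"

def test_application_is_up():
    response = requests.get(BASE_URL + "/", timeout=5)
    assert response.status_code == 200  # the program "launches"

def test_login_page_responds():
    response = requests.get(BASE_URL + "/login", timeout=5)
    assert response.status_code == 200  # a key interface is accessible
```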
Sanity testing
A subset of all defined/planned test cases that cover the main functionality of a component or system, executed to ascertain that the most crucial functions of a program work, without bothering with finer details. Rather than going into depth, we cover the breadth of the system under test (SUT). A daily build and sanity test is among industry best practices. It tests the core functionality of the SUT.
If the smoke test fails, it is impossible to conduct a sanity test. In contrast, the ideal sanity test exercises the smallest subset of application functions needed to determine whether the application logic is generally functional and correct. If the sanity test fails, it is not reasonable to attempt more rigorous testing.
Both sanity tests and smoke tests are ways to avoid wasting time and effort by quickly determining whether an application is too flawed to merit any rigorous testing. Many companies run sanity tests on a weekly build as part of their development process.
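One common way to keep such a breadth-first subset runnable on every daily build is to tag it. A sketch using pytest markers and a hypothetical calculator module (one possible setup, not a prescribed one):

```python
# Sanity-test sketch: one shallow check per crucial function, breadth over
# depth. Run the subset alone with:  pytest -m sanity
# (the "sanity" marker should be registered in pytest.ini to avoid warnings;
# calculator is a hypothetical module under test)
import pytest
from calculator import add, subtract

@pytest.mark.sanity
def test_addition_works_at_all():
    assert add(2, 3) == 5        # one crucial case, no boundary analysis

@pytest.mark.sanity
def test_subtraction_works_at_all():
    assert subtract(5, 3) == 2
```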
Retesting
Retesting the same module of the software to confirm that the original defect has been successfully removed after it was fixed. The same failed test cases are re-executed. Also known as confirmation testing.
Ex: Calculator Software Application:
Suppose we found a defect in the addition module and reported it. When the calculator comes back after the defect is fixed, we test only the addition module; we won't test any other module or the whole system.
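In code, retesting means re-executing exactly the case that failed. A sketch continuing the hypothetical calculator example (the defect ID is made up):

```python
# Confirmation-test sketch: only the test case that originally failed is
# re-executed after the fix (calculator and defect 101 are hypothetical).
from calculator import add

def test_defect_101_addition_module():
    # This exact input exposed the defect before the fix; rerunning it
    # confirms the defect has been removed.
    assert add(2, 3) == 5
```

With pytest, the same idea is available out of the box: pytest --lf re-executes only the tests that failed in the previous run.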
Regression testing
Testing of a previously tested program following modification, to ensure that defects have not been introduced or uncovered in unchanged areas of the software as a result of the changes made. It is performed whenever the software or its environment changes, to check the impact of the defect fixes on the rest of the code: whenever application code changes, there is a possibility of new errors.
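Continuing the hypothetical calculator example, a regression run re-executes the previously passing tests for the unchanged modules after the addition fix:

```python
# Regression-test sketch: unchanged modules are exercised again after the
# addition fix, to catch unintended side effects (calculator is hypothetical).
from calculator import add, subtract, multiply, divide

def test_addition_after_fix():
    assert add(2, 3) == 5

def test_subtraction_unaffected():
    assert subtract(9, 4) == 5

def test_multiplication_unaffected():
    assert multiply(3, 4) == 12

def test_division_unaffected():
    assert divide(8, 2) == 4
```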
Usability testing
Testing to determine the extent to which the software product is understood, easy to learn, easy to operate, and attractive to the users under specified conditions. In short, ease of use.
Checking from the customer's perspective.
Ex: Email Service:
A button without a label.
The color scheme of an email application. Ease of operability.
- Efficiency -- How long does it take people to complete basic tasks? (For example, find something, create a new account, and email a person.)
- Recall -- How much does the person remember afterwards, or after periods of non-use?
- Emotional response -- Color etc.; how does the person feel about the tasks completed?
User Interface/GUI testing
Testing how the application's interface and the user interact with each other. This includes how the application handles keyboard and mouse input and how it displays screen text, images, buttons, menus, dialog boxes, icons, toolbars, and more. For web applications only, it also includes testing the product to ensure it meets the written specifications laid down by the World Wide Web Consortium (W3C). Checking from the application's perspective: does the UI follow the defined design, placement, and color scheme?
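Checks like "every button carries its specified label" can be scripted. The sketch below uses Selenium WebDriver; the page URL and element IDs are made-up placeholders:

```python
# GUI-test sketch: verify controls match the defined UI design.
# The URL and element IDs are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("http://localhost:8000/compose")
    send_button = driver.find_element(By.ID, "send-button")
    assert send_button.is_displayed()   # placement: the button is visible
    assert send_button.text == "Send"   # the button carries its label
finally:
    driver.quit()
```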
Compatibility testing
The process of testing to determine the interoperability of the software product, i.e. the capability of the software product to interact with two or more components or systems.
Ex:
A website works with different browsers and operating systems.
Checking whether developed software works on different operating systems such as Windows 98, XP, and Vista.
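A common way to automate the browser example is to run the same test once per browser. A sketch with pytest parametrization (the browser list, URL, and expected title are illustrative only):

```python
# Compatibility-test sketch: one functional check, repeated across browsers.
import pytest
from selenium import webdriver

@pytest.fixture(params=["chrome", "firefox"])
def driver(request):
    # Each test using this fixture runs once per browser in params.
    drv = webdriver.Chrome() if request.param == "chrome" else webdriver.Firefox()
    yield drv
    drv.quit()

def test_homepage_renders(driver):
    driver.get("http://localhost:8000/")
    assert "Welcome" in driver.title  # assumed page title
```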
Configuration testing
The process of testing a system with each of the supported configurations of software and hardware. Also known as portability testing. (Configuration: the composition of a component or system as defined by the number, nature, and interconnections of its constituent parts.) It checks that the developed software is compatible with the hardware, e.g. Airtel games running on different mobile handsets.
Ex: We will test the software on each supported hardware configuration (RAM, processor, display) to confirm it runs correctly on every one.
Integration testing
Testing performed to expose defects in the interactions between integrated components or systems.
Ex: Calculator Software Application
Let's say we developed addition, subtraction, multiplication, and division individually as different components, and these modules were working fine individually. Now all the modules should also work along with each other, so we integrate all of them and test the interactions.
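A sketch of such an integration test for the hypothetical calculator, where the output of one component feeds the next:

```python
# Integration-test sketch: the interaction between modules, not each module
# in isolation, is what is being exercised (calculator is hypothetical).
from calculator import add, multiply, divide

def test_modules_work_together():
    total = add(2, 3)                 # addition component
    product = multiply(total, 4)      # its output feeds multiplication
    assert divide(product, 2) == 10   # and division closes the chain
```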
System testing
The process of testing an integrated system to verify that it meets specified requirements. (System: A collection of components organized to accomplish a specific function or set of functions.)
Acceptance testing
Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.
Exploratory Testing
An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.
Exploratory testing is simultaneous learning, test design, and test execution. While the software is being tested, the tester learns things that together with experience and creativity generate new good tests to run.
Ad hoc testing
Testing carried out informally: no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results, and unpredictability guides the test execution activity. The emphasis is on creativity and spontaneity. Also known as monkey testing.
Sometimes it so happens that the designed test cases may not give you the whole coverage of the software under test (SUT). Ad hoc testing can find holes in your test strategy, and can expose relationships between subsystems that would otherwise not be visible. Defects found while doing ad hoc testing are often examples of entire classes of forgotten test cases. A key strength of ad hoc testing is the ability of the tester to do unexpected operations, and then to make a value judgment about the correctness of the results.
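A crude automated form of monkey testing throws random, unplanned inputs at the system and lets an oracle judge the results. A sketch against the hypothetical calculator, using Python's built-in + as the trusted oracle:

```python
# Monkey-test sketch: unpredictable inputs, judged against a trusted oracle.
# (calculator is a hypothetical module under test)
import random
from calculator import add

for _ in range(1000):
    a = random.randint(-10**6, 10**6)
    b = random.randint(-10**6, 10**6)
    assert add(a, b) == a + b, f"add({a}, {b}) gave a wrong result"
```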
Performance
The goal of performance testing is to identify performance defects, or rather performance bottlenecks, in the application with respect to time. To conduct performance testing is to engage in a carefully controlled process of measurement and analysis. Ideally, the software under test is already stable enough that this process can proceed smoothly; performance testing begins once the functionality is in place and measurable parameters have been defined. Response time is the major criterion in terms of software, and the performance criteria are predefined.
Ex: for a banking web site, monitoring whether the response time stays within a predefined limit of 5, 10, or 15 seconds.
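Measuring against such a predefined criterion can be as simple as timing a request. A sketch where the URL and the 5-second criterion are illustrative:

```python
# Performance-test sketch: response time measured against a predefined
# criterion. The URL and threshold are hypothetical.
import time
import requests

start = time.perf_counter()
response = requests.get("http://localhost:8000/account/summary", timeout=30)
elapsed = time.perf_counter() - start

assert response.status_code == 200
assert elapsed <= 5.0, f"Response took {elapsed:.2f}s; criterion is 5s"
```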
Load
Measuring the behavior of a component or system with increasing load, e.g. number of parallel users and/or numbers of transactions, to determine what load can be handled by the component or system.
The goals of load testing are to:
1.) Expose defects that do not surface in cursory testing, such as memory management defects, memory leaks, buffer overflows, etc.
2.) Ensure that the application meets the performance baseline established during performance testing.
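A minimal way to generate "parallel users" from a test script is a thread pool. In the sketch below, the user and transaction counts and the URL are illustrative; dedicated tools such as JMeter or Locust are the usual choice in practice:

```python
# Load-test sketch: a thread pool simulates parallel users and checks that
# every transaction still succeeds under the target load.
from concurrent.futures import ThreadPoolExecutor
import requests

def transaction(_):
    return requests.get("http://localhost:8000/", timeout=30).status_code

with ThreadPoolExecutor(max_workers=50) as pool:        # 50 parallel users
    statuses = list(pool.map(transaction, range(500)))  # 500 transactions

assert all(code == 200 for code in statuses)
```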
Stress Testing
Stress testing is done to determine the maximum load-bearing ability of the system: testing conducted to evaluate a system or component at or beyond the limits of its specified requirements. Stress testing tries to break the system under test by depleting its resources (in which case it is sometimes called negative testing). The main purpose behind this madness is to make sure that the system fails and recovers gracefully; this quality is known as recoverability.
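One way to probe the breaking point is to keep ramping the load up until requests start failing. A rough sketch, with all numbers and the URL illustrative; a real stress test would also verify graceful recovery afterwards:

```python
# Stress-test sketch: double the number of parallel users until failures
# appear, then record the approximate breaking point.
from concurrent.futures import ThreadPoolExecutor
import requests

def hit(_):
    try:
        return requests.get("http://localhost:8000/", timeout=10).ok
    except requests.RequestException:
        return False  # a failed/refused request counts as a failure

users = 10
while users <= 5000:
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(hit, range(users)))
    if not all(results):
        print(f"Failures began at roughly {users} parallel users")
        break
    users *= 2
```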
Volume Testing
Testing the software application with a huge amount of data. A volume test checks whether there are any problems when running the system under test with a realistic amount of data, or with the maximum amount of data.
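A volume test can be simulated by pushing the maximum expected number of records through the system. A self-contained sketch using SQLite in place of the real data store (the record count is illustrative):

```python
# Volume-test sketch: load the maximum expected amount of data and confirm
# the system still behaves. SQLite stands in for the real store here.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emails (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany(
    "INSERT INTO emails (body) VALUES (?)",
    ((f"message {i}",) for i in range(1_000_000)),  # maximum expected volume
)
(count,) = conn.execute("SELECT COUNT(*) FROM emails").fetchone()
assert count == 1_000_000
conn.close()
```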