Software Test Process

QUALITY ASSURANCE
Software quality assurance is an integral part of the software development lifecycle. Its aim is to verify the quality of a program based upon the provided software specifications. It is the responsibility of the software testing team to carry out the tests necessary to ensure a project meets business requirements from a functional and user perspective. QA is also encouraged to provide input regarding overall usability, flow and even aesthetics.

Software Testing Types

Integration Testing

In order to properly perform integration testing, testers must possess substantial knowledge of the project's software architecture. Unlike unit testing, which focuses solely on the functionality of individual units, integration testing concentrates on the interaction and communication between different components within the software.

The goal here is to verify that data or information is being passed and processed between modules in a system according to previously defined specifications. Examples include testing the interaction between a client-side web application and its database backend, or a function calling a credit card processing system.
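
As a hedged illustration, the sketch below shows an integration test between a hypothetical order-saving function and a SQLite database backend; the function name, table schema and values are assumptions made for the example.

import sqlite3
import unittest

def save_order(conn, customer, amount):
    # Hypothetical component under test: writes an order record to the backend.
    conn.execute("INSERT INTO orders (customer, amount) VALUES (?, ?)", (customer, amount))
    conn.commit()

class OrderDatabaseIntegrationTest(unittest.TestCase):
    def setUp(self):
        # An in-memory database keeps the example self-contained.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")

    def test_order_is_persisted_to_backend(self):
        save_order(self.conn, "alice", 19.99)
        row = self.conn.execute("SELECT customer, amount FROM orders").fetchone()
        # The test passes only if the data crossed the component boundary intact.
        self.assertEqual(row, ("alice", 19.99))

if __name__ == "__main__":
    unittest.main()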

Smoke Testing

Smoke testing is the initial testing process exercised to check whether the software under test is ready and stable enough for further testing.

The term 'Smoke Testing' comes from hardware testing, where an initial pass is done to check that the device does not catch fire or smoke when it is first switched on.

Before smoke testing begins, a small set of test cases needs to be created once and reused for each smoke test. These test cases are executed before the actual testing starts to check that the critical functionalities of the program are working. The set is written in such a way that all major functionality is touched, but not in depth. The objective is not to perform exhaustive testing; the tester simply checks navigation and basic operations, asking simple questions such as "Can the tester access the software application?", "Can the user navigate from one window to another?", and "Is the GUI (Graphical User Interface) responsive?".
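
As a minimal sketch, a shallow smoke suite for a hypothetical web application might look like the following; the base URL and the pages checked are assumptions for illustration.

import unittest
import urllib.request

BASE_URL = "http://localhost:8000"  # assumed local deployment of the application under test

class SmokeTest(unittest.TestCase):
    """Shallow checks run before deeper testing begins."""

    def test_application_is_reachable(self):
        # "Can the tester access the software application?"
        with urllib.request.urlopen(BASE_URL + "/") as response:
            self.assertEqual(response.status, 200)

    def test_login_page_loads(self):
        # Spot-check one key page rather than testing navigation in depth.
        with urllib.request.urlopen(BASE_URL + "/login") as response:
            self.assertEqual(response.status, 200)

if __name__ == "__main__":
    unittest.main()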

System Testing

System testing of software or hardware is conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic.

Regression Testing

Software development is not a static process. Throughout the testing phases, developers make modifications and additions which alter the way in which a program is structured. In these situations, regression testing is required to verify that the software is functioning as expected after alterations have been made.

In order to perform proper regression testing, take note of which components have been modified and test the functionality of these components in addition to any interactions they have with other unaltered components. This will ensure that the modified code is being used to its full capacity and has not adversely affected the functionality of the overall program.
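
A hedged illustration: if a hypothetical discount calculation were modified, a regression suite would re-run tests against both the altered component and an unaltered component that interacts with it.

import unittest

def apply_discount(price, rate):
    # Recently modified component: now rounds results to two decimal places.
    return round(price * (1 - rate), 2)

def invoice_total(prices, rate):
    # Unaltered component that interacts with the modified one.
    return sum(apply_discount(p, rate) for p in prices)

class RegressionTests(unittest.TestCase):
    def test_modified_component_still_meets_spec(self):
        self.assertEqual(apply_discount(10.00, 0.25), 7.50)

    def test_dependent_component_unaffected(self):
        # Verifies the change did not ripple into callers of the modified function.
        self.assertEqual(invoice_total([10.00, 20.00], 0.25), 22.50)

if __name__ == "__main__":
    unittest.main()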

User Acceptance Testing (UAT)

Often performed by a small group of actual users or other related stakeholders, user acceptance testing (UAT) usually does not take place until the standard quality assurance process has been completed and a product free from major defects has been delivered. UAT assesses the software from a user perspective, looking at flow, functionality and usability in the field.

Performance and Load Testing (stress testing)

Oftentimes, software is developed with the intention of having multiple users, which can put strain on a system. Performance and load testing is meant to simulate an environment in which the software is actively engaged by a realistic number of users. In doing so, any problems related to loading, performance and speed should come to light.

From a testing perspective, it is important to understand the difference between client-side and server-side processing, which refers to where most of the processing for the software takes place: either on the client's device or with the software provider (the server).

Testing the overall speed and performance of a project on a number of different client devices will provide insight into the range and capabilities of the software which may expose weaknesses or simply provide useful information for inclusion in user manuals or guides.

Load testing typically tests the limits of a server, as overall performance tends to decrease as the number of concurrent users increases.

There are a number of programs available in the market aimed at simulating load for the purposes of testing.
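
As a rough sketch of the idea (not a substitute for a dedicated load testing tool), the snippet below simulates a number of concurrent users hitting an assumed local endpoint and reports response times; the URL and user count are assumptions.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/"  # assumed endpoint of the system under test
CONCURRENT_USERS = 50           # simulated number of simultaneous users

def one_request(_):
    # Time a single simulated user's request.
    start = time.perf_counter()
    with urllib.request.urlopen(URL) as response:
        response.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        timings = list(pool.map(one_request, range(CONCURRENT_USERS)))
    print(f"average response: {sum(timings) / len(timings):.3f}s")
    print(f"slowest response: {max(timings):.3f}s")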



SOFTWARE TEST PROCESS

Testing is a process rather than a single activity. This process starts with test planning, then moves on to designing test cases, preparing for execution and evaluating status until test closure. Let's review the steps of this process.

Understand Requirements

In order to create a proper test plan and perform the subsequent testing of a software project, it is crucial that all parties involved in the project are aware of its goals and requirements. Business Stakeholders, Software Developers and Software Testers should collaborate at this stage to understand and gather the requirements which define the project.

Regardless of which software development methodology is in place, the software testing team should be kept informed and up to date regarding projects in the planning and development stages.

Failure to do so will compromise the testing team's ability to properly perform their testing.

Create Test Plan

After the requirements of the project are clear, the next step is to create a test plan which includes an overview of the testing approaches to be taken in order to thoroughly test the software.

Examples include incorporating a mix of black box and white box testing, which focus on the software's functionality and code structure respectively.
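
To make the distinction concrete, here is a small, assumed example: the same hypothetical shipping_fee function tested first as a black box (from the specification, inputs and expected outputs only) and then as a white box (one test per branch in the code).

import unittest

def shipping_fee(order_total):
    # Function under test: free shipping at or above a threshold, otherwise a flat fee.
    if order_total >= 50:
        return 0.0
    return 4.99

class BlackBoxTests(unittest.TestCase):
    # Written purely from the stated requirement, without reading the code.
    def test_large_order_ships_free(self):
        self.assertEqual(shipping_fee(80), 0.0)

class WhiteBoxTests(unittest.TestCase):
    # Written from the code structure: each branch of the if statement is forced to run.
    def test_threshold_branch(self):
        self.assertEqual(shipping_fee(50), 0.0)

    def test_below_threshold_branch(self):
        self.assertEqual(shipping_fee(49.99), 4.99)

if __name__ == "__main__":
    unittest.main()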

The test plan should identify the key end requirements of the software from a user perspective and cover a range of user types and/or user scenarios with the intention of covering the program's complete range of functionality.

All parties involved in the software development process including Business Experts, Software Developers and Software QA should ideally review the test plan and provide their input.

This will ensure that, from a business perspective, all user requirements are addressed and, from a software perspective, proper code coverage is achieved. Once the test plan is agreed upon, the software quality assurance team can move towards developing specific test cases.

Develop Test Cases

The test plan is a broad overview of the testing requirements for a given project. In order to carry out testing, a more specific set of test cases must be developed based on the test plan. Test cases should be written in a manner which allows users with little background information to follow the instructions without having to refer to any supplementary documentation.

Within the suite of test cases, include a range of tests aimed at covering as much code branching as possible. This requires looking at the structure of the code and ensuring that test cases include conditions which force each function, loop and case to be triggered. Further test the program's structure by adding boundary and edge testing, which examines the limits of a program based on its documented requirements.

For instance, if a field is intended for a numeric value between 1 and 10, ensure that the field accepts both 1 and 10, and test numbers outside of the range to ensure that the error is handled correctly.
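
As a brief, assumed sketch of such boundary cases, the hypothetical validate_rating function below stands in for the 1-to-10 field described above; the tests exercise both limits and both branches of the range check.

import unittest

def validate_rating(value):
    # Hypothetical field validator for a numeric value between 1 and 10 inclusive.
    if not 1 <= value <= 10:
        raise ValueError("rating must be between 1 and 10")
    return value

class BoundaryTests(unittest.TestCase):
    def test_lower_and_upper_bounds_accepted(self):
        self.assertEqual(validate_rating(1), 1)
        self.assertEqual(validate_rating(10), 10)

    def test_values_outside_range_rejected(self):
        with self.assertRaises(ValueError):
            validate_rating(0)
        with self.assertRaises(ValueError):
            validate_rating(11)

if __name__ == "__main__":
    unittest.main()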


Including positive test cases and their corresponding negative test cases is also key in testing a program's structure and robustness. While most test cases tend to be positive, in that they test a program's behavior based on valid input, it is also important to take note of how a program handles invalid input (negative test cases). An example of a negative test case would be entering characters in a numerical field.
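
Continuing the same assumed example, a hypothetical parse_rating handler for that numerical field could be given a negative case (characters where a number is expected) alongside its matching positive case.

import unittest

def parse_rating(text):
    # Hypothetical input handler: converts a text field entry into a validated rating.
    number = int(text)  # raises ValueError for non-numeric input such as "abc"
    if not 1 <= number <= 10:
        raise ValueError("rating must be between 1 and 10")
    return number

class NegativeTests(unittest.TestCase):
    def test_characters_in_numeric_field_are_rejected(self):
        # Negative case: characters supplied where a number is expected.
        with self.assertRaises(ValueError):
            parse_rating("abc")

    def test_valid_input_still_accepted(self):
        # The corresponding positive case for the same field.
        self.assertEqual(parse_rating("7"), 7)

if __name__ == "__main__":
    unittest.main()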

Test Execution

Once the test cases are written, it is up to an individual or team to execute the cases as specified, marking each as having passed or failed. For failed test cases in particular, it is key that testers record at which step in the test case the failure occurred, along with information regarding any pertinent variables and/or conditions involved.

Report

Following the execution of the test cases, it is important to compile all the findings into one report which provides descriptions of the defects found, the scope and implications of the defects, and the steps required to recreate them. This can be organized in a variety of different ways, including grouping by functional units or modules, by user scenarios, or simply by referring to the previously discussed test cases.

It is also useful to include an assessment within the report stating the relative priorities of each defect or cluster of defects, in addition to notes on overall usability and functionality. This will assist the rest of the software development team in gaining a complete view of the project, its defects and which issues to resolve in order to move forward.

Reports of defects found during testing can also provide deeper insight into the way in which the software was developed. For instance, a cluster of failed test cases surrounding a module or functional unit may call attention to significant structural issues within that unit. Defects which persist and are repeated throughout the project will shed light on amendments to development practices to prevent these issues from occurring in future projects or iterations.


Test Defect Fixes

Once the Software Developers have reviewed the report and corrected the valid defects, Software Testers should re-test the failed test cases to ensure that they now meet the specified requirements.

Regression Test

After all the intended fixes are in place and have been tested and passed individually, it is crucial to perform regression testing to ensure that other portions of the software have not been adversely affected by the recent modifications. Software testers must identify the components that were altered, and which other components interact with them, in order to determine the breadth of their testing.

Important
Software Quality Assurance is an integral part of the software development life cycle and should not be thought of as an end-cap to the development process. The earlier QA is involved in a development project, the more thorough the testing can be. However, exhaustive testing and 100% coverage of a software project is never possible. Taking time and resource constraints into consideration, a software quality assurance team must prioritize testing based on a project's functionality and use.

Maintaining a balanced approach to testing is key to developing high-quality software. Going beyond functional, user-perspective test cases ensures that a program is not only viable, but robust as well. Do not ignore error handling through negative and boundary testing, as this is a very important component of a program.

When writing test plans, test cases and reports, make it a point to be direct and specific. Keep note of environmental conditions (operating system, browser, etc.) and variables during testing to ensure that failed test cases can be recreated and understood. Finally, work closely with the rest of the software development team and stay on top of any software modifications or changes in requirements.

Error Handling

Error handling is an important aspect of computer programming. This not only includes adding the necessary logic to test for and handle errors, but also involves making error messages meaningful.

The required time should be spent writing code to detect errors.

Error messages should be meaningful. When possible, they should indicate what the problem is, where the problem occurred, and when the problem occurred.

Error messages should be stored in a way that makes them easy to review (i.e. a database table, a file in the filesystem or, even better, a secure database). For non-interactive applications, such as a program which runs as part of a scheduled job, error messages should be logged to a file. Interactive applications can send error messages to a log file, standard output, or standard error, and can also use popup windows to display error messages.
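
As a minimal sketch using Python's standard logging module, the following logs a meaningful error message to a file, recording what the problem is, where it occurred and when; the log filename and the division example are assumptions.

import logging

# Log to a file so non-interactive runs (e.g. scheduled jobs) can be reviewed later.
logging.basicConfig(
    filename="application_errors.log",
    level=logging.ERROR,
    # The format captures when (asctime) and where (module, line number) each error occurred.
    format="%(asctime)s %(levelname)s %(module)s:%(lineno)d %(message)s",
)

def safe_divide(numerator, denominator):
    try:
        return numerator / denominator
    except ZeroDivisionError:
        # A meaningful message: states what the problem is and the values involved.
        logging.error("division failed: denominator is zero (numerator=%s)", numerator)
        return None

if __name__ == "__main__":
    safe_divide(10, 0)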
