Lesson 5...Guide


Published on December 6, 2008

Author: bhushan4qtp

Source: slideshare.net

Description

For Queries mail to : bhushan4qtp@gmail.com or
bhushan4qtp@yahoo.com

Aditi Technologies Basic Concepts of Software Testing Software Testing Basic Concepts and Industry awareness Page 1 of 60

Table of Contents

1. Introduction
2. Types of Testing
   2.1. White Box Testing
   2.2. Black Box Testing
   2.3. Unit Testing
      2.3.1. Benefits
      2.3.2. Encourages change
      2.3.3. Simplifies Integration
      2.3.4. Documents the code
      2.3.5. Separation of Interface from Implementation
      2.3.6. Limitations
   2.4. Integration testing
      2.4.1. Purpose
   2.5. Performance Testing
      2.5.1. Technology
      2.5.2. Performance specifications
      2.5.3. Tasks to undertake
   2.6. Stress Testing
   2.7. Security Testing
      2.7.1. Security Testing Techniques
   2.8. Usability Testing
   2.9. Stability Testing
   2.10. Acceptance Testing
   2.11. Installation Testing
   2.12. Alpha Testing
   2.13. Beta Testing
   2.14. Product Testing
   2.15. System Testing
   2.16. Regression Testing
   2.17. Compatibility Testing
   2.18. Test Cases, Suites, Scripts and Scenarios
   2.19. Defect Tracking
   2.20. Formal Verification
      2.20.1. Validation and Verification
   2.21. Fuzz Testing
      2.21.1. Uses
      2.21.2. Fuzz testing methods
      2.21.3. Event-driven fuzz
      2.21.4. Character-driven fuzz
      2.21.5. Database fuzz
3. Manual Testing
   3.1. Facts
   3.2. Software Crisis
   3.3. Software Myths
      3.3.1. Management Myths
      3.3.2. Developers Myths
      3.3.3. Customer's Myth
      3.3.4. What do we do?
   3.4. Software Quality Assurance
      3.4.1. Verification
      3.4.2. Validation
   3.5. Software Life Cycle Models
   3.6. What makes a good Software QA engineer?
   3.7. Testing
      3.7.1. Why Testing?
   3.8. Test Life Cycle
   3.9. Testing Techniques
   3.10. Test Plan
      3.10.1. Test Specification
4. Testing Procedure
   4.1. Bug Tracking
5. Testing Tools and Software
   5.1. Load and Performance Test Tools
   5.2. Java Test Tools
   5.3. Link Checking Tools
   5.4. Perl Testing Tools
   5.5. Web Functional and Regression Testing Tools
   5.6. Web Site Security Test Tools
   5.7. Web Site Management Tools
   5.8. Other Web Testing Tools
6. Testing FAQ

1. Introduction

Testing is a process used to help identify the correctness, completeness and quality of developed computer software. With that in mind, testing can never completely establish the correctness of computer software. In other words, testing is nothing but criticism or comparison: comparing the actual value with the expected one. There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation, not merely a matter of creating and following rote procedure. One definition of testing is "the process of questioning a product in order to evaluate it", where the "questions" are things the tester tries to do with the product, and the product answers with its behavior in reaction to the probing of the tester. Although most of the intellectual processes of testing are nearly identical to those of review or inspection, the word testing connotes the dynamic analysis of the product: putting the product through its paces. The quality of the application can and normally does vary widely from system to system, but some common quality attributes include reliability, stability, portability, maintainability and usability. Refer to the ISO standard ISO 9126 for a more complete list of attributes and criteria.

2. Types of Testing

2.1. White Box Testing

White box testing is also known as glass box, structural, clear box and open box testing. It is a software testing technique in which explicit knowledge of the internal workings of the item being tested is used to select the test data. Unlike black box testing, white box testing uses specific knowledge of the programming code to examine outputs. The test is accurate only if the tester knows what the program is supposed to do; he or she can then see if the program diverges from its intended goal.
White box testing does not account for errors caused by omission, and all visible code must also be readable.

2.2. Black Box Testing

Black box testing is testing of a function without knowing the internal structure of the program. Black-box and white-box are test design methods. Black-box test design treats the system as a "black box", so it doesn't explicitly use knowledge of the internal structure; it is usually described as focusing on testing functional requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box. White-box test design allows one to peek inside the "box", and it focuses specifically on using internal knowledge of the software to guide the selection of test data. Synonyms for white-box include: structural, glass-box and clear-box.
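To make the distinction concrete, here is a minimal black-box sketch in Python. The function `is_leap_year` and its test cases are hypothetical, not from the text; the point is that every case is derived from the written specification rather than from the code's internal structure.

```python
# Black-box sketch: test a function purely from its specification,
# without looking at how it is implemented.

def is_leap_year(year: int) -> bool:
    """Spec: divisible by 4, except centuries, which must be divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def test_black_box():
    # Each expected value comes from the written spec, not the code above.
    cases = {2000: True, 1900: False, 2004: True, 2023: False}
    for year, expected in cases.items():
        assert is_leap_year(year) == expected, year

test_black_box()
print("all black-box cases passed")
```

A white-box tester, by contrast, would look at the code and deliberately pick inputs that exercise each branch of the boolean expression.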

While black-box and white-box are terms that are still in popular use, many people prefer the terms "behavioral" and "structural". Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn't strictly forbidden, but it's still discouraged. In practice, it hasn't proven useful to use a single test design method; one has to use a mixture of different methods so that testing isn't hindered by the limitations of any particular one. Some call this "gray-box" or "translucent-box" test design, but others wish we'd stop talking about boxes altogether. It is important to understand that these methods are used during the test design phase, and their influence is hard to see in the tests once they're implemented. Note that any level of testing (unit testing, system testing, etc.) can use any test design method. Unit testing is usually associated with structural test design, but that is because testers usually don't have well-defined requirements at the unit level to validate.

2.3. Unit Testing

In computer programming, a unit test is a method of testing the correctness of a particular module of source code. The idea is to write test cases for every non-trivial function or method in the module, so that each test case is separate from the others if possible. This type of testing is mostly done by the developers.

2.3.1. Benefits

The goal of unit testing is to isolate each part of the program and show that the individual parts are correct. It provides a written contract that the piece must satisfy. This isolated testing provides four main benefits:

2.3.2. Encourages change

Unit testing allows the programmer to refactor code at a later date and make sure the module still works correctly (regression testing).
This provides the benefit of encouraging programmers to make changes to the code, since it is easy for the programmer to check that the piece still works properly.

2.3.3. Simplifies Integration

Unit testing helps eliminate uncertainty in the pieces themselves and can be used in a bottom-up testing approach. Testing the parts of a program first, and then testing the sum of its parts, makes integration testing easier.

2.3.4. Documents the code

Unit testing provides a sort of "living document" for the class being tested. Clients looking to learn how to use the class can look at the unit tests to determine how to use the class to fit their needs.
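A minimal unit-test sketch of these ideas, using Python's standard unittest module. The `word_count` function is a hypothetical unit under test, chosen only for illustration; note how each test case stands alone and how the tests double as usage documentation.

```python
import unittest

def word_count(text: str) -> int:
    """Hypothetical unit under test: count whitespace-separated words."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    # Each test case is independent of the others, per the text above.
    def test_simple_sentence(self):
        self.assertEqual(word_count("unit tests document the code"), 5)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_extra_whitespace(self):
        self.assertEqual(word_count("  two   words  "), 2)

# Run the suite programmatically so the result can be inspected.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all unit tests passed:", result.wasSuccessful())
```

If `word_count` is later refactored, re-running this suite is exactly the cheap regression check the text describes.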

2.3.5. Separation of Interface from Implementation

Because some classes may have references to other classes, testing a class can frequently spill over into testing another class. A common example is a class that depends on a database: in order to test the class, the tester finds herself writing code that interacts with the database. This is a mistake, because a unit test should never go outside of its own class boundary. Instead, the software developer abstracts an interface around the database connection and then implements that interface with a mock object. This results in loosely coupled code, minimizing dependencies in the system.

2.3.6. Limitations

It is important to realize that unit testing will not catch every error in the program. By definition, it only tests the functionality of the units themselves; therefore, it will not catch integration errors, performance problems or other system-wide issues. In addition, it may not be trivial to anticipate all the special cases of input the unit under study may receive in reality. Unit testing is only effective when used in conjunction with other software testing activities.

2.4. Integration testing

Integration testing is the phase of software testing in which individual software modules are combined and tested as a group. It follows unit testing and precedes system testing. Integration testing takes as its input modules that have been checked out by unit testing, groups them in larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its output the integrated system ready for system testing.

2.4.1. Purpose

The purpose of integration testing is to verify the functional, performance and reliability requirements placed on major design items. These "design items", i.e.
assemblages (or groups of units), are exercised through their interfaces using black box testing, with success and error cases simulated via appropriate parameter and data inputs. Simulated usage of shared data areas and inter-process communication is tested; individual subsystems are exercised through their input interfaces. All test cases are constructed to test that all components within assemblages interact correctly, for example across procedure calls or process activations. The overall idea is a "building block" approach, in which verified assemblages are added to a verified base which is then used to support the integration testing of further assemblages.
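The interface-plus-mock-object pattern described in section 2.3.5 can be sketched as follows. The names `UserStore`, `Mailer` and `MockUserStore` are invented for illustration, not part of any real library; the point is that the unit test of `Mailer` never crosses its class boundary into a real database.

```python
# Sketch of 2.3.5: abstract an interface around the database connection,
# then substitute a mock so the unit test stays inside the class under test.

class UserStore:
    """Interface the production code depends on."""
    def find_email(self, user_id: int) -> str:
        raise NotImplementedError

class Mailer:
    """Class under test; it only knows the UserStore interface."""
    def __init__(self, store: UserStore):
        self.store = store

    def greeting_address(self, user_id: int) -> str:
        return "mailto:" + self.store.find_email(user_id)

class MockUserStore(UserStore):
    """Mock object: canned data, no real database involved."""
    def find_email(self, user_id: int) -> str:
        return "user%d@example.com" % user_id

def test_mailer_without_database():
    mailer = Mailer(MockUserStore())
    assert mailer.greeting_address(7) == "mailto:user7@example.com"

test_mailer_without_database()
print("mailer tested without touching a database")
```

In production, the same `Mailer` would be constructed with a real `UserStore` implementation backed by the database, which is exactly the loose coupling the text describes.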

2.5. Performance Testing

In software engineering, performance testing is testing performed to determine how fast some aspect of a system performs under a particular workload. Performance testing can serve different purposes: it can demonstrate that the system meets performance criteria, it can compare two systems to find which performs better, or it can measure which parts of the system or workload cause the system to perform badly. In the diagnostic case, software engineers use tools such as profilers to measure which parts of a device or software contribute most to the poor performance, or to establish throughput levels (and thresholds) for maintaining acceptable response time. In performance testing, it is often crucial (and often difficult to arrange) for the test conditions to be similar to the expected actual use.

2.5.1. Technology

Performance testing technology employs one or more PCs to act as injectors, each emulating the presence of numbers of users and each running an automated sequence of interactions (recorded as a script, or as a series of scripts to emulate different types of user interaction) with the host whose performance is being tested. Usually, a separate PC acts as a test conductor, coordinating and gathering metrics from each of the injectors and collating performance data for reporting purposes. The usual sequence is to ramp up the load, starting with a small number of virtual users and increasing the number over a period to some maximum. The test result shows how the performance varies with the load, given as number of users vs. response time. Various tools, including Compuware Corporation's QACenter Performance Edition, are available to perform such tests. Tools in this category usually execute a suite of tests which emulate real users against the system.
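As a rough illustration of the injector idea, the sketch below ramps up "virtual users" as Python threads, each timing one scripted interaction. A real tool would drive an actual host over the network; the stand-in function here is an assumption made purely so the sketch is self-contained.

```python
# Toy injector: ramp up virtual users and report response time vs. load.
import threading
import time
import statistics

def scripted_interaction():
    """Stand-in for one recorded user interaction with the host under test."""
    time.sleep(0.001)  # pretend the host took about 1 ms to respond

def run_load(virtual_users: int) -> float:
    """Run N concurrent virtual users; return the mean response time."""
    timings = []
    lock = threading.Lock()

    def one_user():
        start = time.perf_counter()
        scripted_interaction()
        with lock:  # timings list is shared across threads
            timings.append(time.perf_counter() - start)

    threads = [threading.Thread(target=one_user) for _ in range(virtual_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return statistics.mean(timings)

# Ramp the load, as the text describes: small to larger user counts.
for users in (1, 10, 50):
    print("%3d users -> mean response %.4f s" % (users, run_load(users)))
```

Plotting the printed pairs gives exactly the "number of users vs. response time" curve the text mentions.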
Sometimes the results can reveal oddities, e.g. that while the average response time might be acceptable, there are outliers of a few key transactions that take considerably longer to complete, something that might be caused by inefficient database queries, etc. Performance testing can be combined with stress testing, in order to see what happens when an acceptable load is exceeded: does the system crash? How long does it take to recover if a large load is reduced? Does it fail in a way that causes collateral damage?

2.5.2. Performance specifications

Performance testing is frequently not performed against a specification; i.e. no one will have expressed what the maximum acceptable response time for a given population of users is. However, performance testing is frequently used as part of the process of performance profile tuning. The idea is to identify the "weakest link": there is inevitably a part of the system which, if it is made to respond faster, will result in the overall system running faster. It is sometimes a difficult task to identify which part of the system represents this critical path, and some test tools come provided with (or can have add-ons that provide) instrumentation that runs on the server and reports transaction times, database access times, network overhead, etc., which can be analyzed together with the raw performance statistics. Without such instrumentation one might have to have someone crouched over Windows Task Manager at the server to see how much CPU load the performance tests are generating. There is an apocryphal story of a company that spent a large amount optimizing their software without having performed a proper analysis of the problem. They ended up rewriting the system's 'idle loop', where they had found the system spent most of its time, but even having the most efficient idle loop in the world obviously didn't improve overall performance one iota! Performance testing almost invariably identifies parts of the software (rather than hardware) as contributing most to delays in processing users' requests. Performance testing can be performed across the web, and even in different parts of the country, since it is known that the response times of the internet itself vary regionally. It can also be done in-house, although routers would then need to be configured to introduce the lag that would typically occur on public networks. It is always helpful to have a statement of the likely peak number of users that might be expected to use the system at peak times. If there can also be a statement of what constitutes the maximum allowable 95th percentile response time, then an injector configuration could be used to test whether the proposed system meets that specification.

2.5.3. Tasks to undertake

Tasks to perform such a test would include:

- Analysis of the types of interaction that should be emulated, and the production of scripts to do those emulations
- Decision whether to use internal or external resources to perform the tests
- Set-up of a configuration of injectors and controller
- Set-up of the test configuration (ideally identical hardware to the production platform), router configuration, a quiet network (we don't want results upset by other users), and deployment of server instrumentation
- Running the tests, probably repeatedly, in order to see whether any unaccounted-for factor might affect the results
- Analyzing the results: either pass/fail, or investigation of the critical path and recommendation of corrective action

2.6. Stress Testing

Stress testing is a form of testing used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. For example, a web server may be stress tested

using scripts, bots, and various denial-of-service tools to observe the performance of a web site during peak loads. Stress testing is a subset of load testing. See also: software testing, performance testing.

2.7. Security Testing

Application vulnerabilities leave your system open to attacks, downtime, data theft, data corruption and application defacement. Security within an application or web service is crucial to avoid such vulnerabilities and new threats. While automated tools can help to eliminate many generic security issues, the detection of application vulnerabilities requires independent evaluation of your specific application's features and functions by experts. An external security vulnerability review by Third Eye Testing will give you the best possible confidence that your application is as secure as possible.

2.7.1. Security Testing Techniques

- Vulnerability Scanning
- Network Scanning
- Password Cracking
- Log Review
- Virus Detection
- Penetration Testing
- File Integrity Checkers
- War Dialing

2.8. Usability Testing

Usability testing is a means for measuring how well people can use some human-made object (such as a web page, a computer interface, a document, or a device) for its intended purpose; i.e. usability testing measures the usability of the object. Usability testing focuses on a particular object or a small set of objects, whereas general human-computer interaction studies attempt to formulate universal principles. If usability testing uncovers difficulties, such as people having difficulty understanding instructions, manipulating parts, or interpreting feedback, then developers should improve the design and test it again. During usability testing, the aim is to observe people using the product in as realistic a situation as possible, to discover errors and areas of improvement.
Designers commonly focus excessively on creating designs that look "cool", compromising usability and functionality. This is often caused by pressure from the people in charge, forcing designers to develop systems based on management expectations instead of people's needs. A designer's primary function should be more than appearance; it includes making things work with people. "Caution: simply gathering opinions is not usability testing -- you must arrange an experiment that measures a subject's ability to use your document."

Rather than showing users a rough draft and asking, "Do you understand this?", usability testing involves watching people trying to use something for its intended purpose. For example, when testing instructions for assembling a toy, the test subjects should be given the instructions and a box of parts. Instruction phrasing, illustration quality, and the toy's design all affect the assembly process. Setting up a usability test involves carefully creating a scenario, or realistic situation, wherein the person performs a list of tasks using the product being tested while observers watch and take notes. Several other test instruments, such as scripted instructions, paper prototypes, and pre- and post-test questionnaires, are also used to gather feedback on the product being tested. For example, to test the attachment function of an e-mail program, a scenario would describe a situation where a person needs to send an e-mail attachment, and ask him or her to undertake this task. The aim is to observe how people function in a realistic manner, so that developers can see problem areas and what people like. The technique popularly used to gather data during a usability test is called a think-aloud protocol.

2.9. Stability Testing

In software testing, stability testing is an attempt to determine whether an application will crash. (In the pharmaceutical field, the term refers to a period of time during which a multi-dose product retains its quality after the container is opened.)

2.10. Acceptance Testing

User acceptance testing (UAT) is one of the final stages of a software project and will often occur before the customer accepts a new system. Users of the system will perform these tests, which, ideally, developers have derived from the User Requirements Specification, to which the system should conform. Test designers will draw up a formal test plan and devise a range of severity levels.
The focus in this type of testing is less on simple problems (spelling mistakes, cosmetic problems) and show-stoppers (major problems like the software crashing or not running at all); developers should have worked out these issues during unit testing and integration testing. Rather, the focus is on a final verification of the required business function and flow of the system. The test scripts emulate real-world usage of the system: the idea is that if the software works as intended and without issues during a simulation of normal use, it will work just the same in production. Results of these tests will allow both the customers and the developers to be confident that the system will work as intended.
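An acceptance-style script can be sketched as a sequence of steps derived from a requirement. The toy application and the requirement used below ("a registered user can log in and see their dashboard") are hypothetical, standing in for a real system and a real User Requirements Specification.

```python
# UAT-style sketch: each test walks through a business flow step by step,
# checking the outcome the requirement promises, not internal details.

class ToyApp:
    """Stand-in for the system under acceptance test."""
    def __init__(self):
        self.users = {"alice": "secret"}
        self.page = "login"

    def log_in(self, name: str, password: str) -> str:
        if self.users.get(name) == password:
            self.page = "dashboard"
        return self.page

def test_uat_login_flow():
    app = ToyApp()                        # Step 1: open the application
    page = app.log_in("alice", "secret")  # Step 2: submit valid credentials
    assert page == "dashboard"            # Expected result per the requirement

def test_uat_rejects_bad_password():
    app = ToyApp()
    assert app.log_in("alice", "wrong") == "login"  # stays on the login page

test_uat_login_flow()
test_uat_rejects_bad_password()
print("acceptance scenarios passed")
```

Note that both tests exercise the business flow end to end, which is the "final verification of function and flow" the section describes, rather than re-testing individual units.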

2.11. Installation Testing

Installation testing (in software engineering) can simply be defined as any testing that occurs outside of the development environment. Such testing will frequently occur on the computer system on which the software product will eventually be installed. Whilst the ideal installation might simply appear to be to run a setup program, the generation of that setup program itself, and its efficacy in a variety of machine and operating system environments, can require extensive testing before it can be used with confidence. In distributed systems, particularly where software is to be released into an already live target environment (such as an operational web site), installation (or deployment, as it is sometimes called) can involve database schema changes as well as the installation of new software. Deployment plans in such circumstances may include back-out procedures whose use is intended to roll the target environment back in the event that the deployment is unsuccessful. Ideally, the deployment plan itself should be tested in an environment that is a replica of the live environment. A factor that can increase the organizational requirements of such an exercise is the need to synchronize the data in the test deployment environment with that in the live environment with minimum disruption to live operation.

2.12. Alpha Testing

In software development, testing is usually required before release to the general public. In-house developers often test the software in what is known as alpha testing, which is often performed under a debugger or with hardware-assisted debugging to catch bugs quickly. The software can then be handed over to testing staff for additional inspection in an environment similar to how it was intended to be used. This black box testing stage is often known as the second stage of alpha testing.
2.13. Beta Testing

Often the software is released to a limited audience, drawn from those who will ultimately be its end users, to use it, test it and come back with feedback or bugs. This process helps determine whether the final software meets its intended purpose and whether the end users will accept it. The product handed out as a beta release is not bug-free; however, no serious or critical bugs should exist. A beta release is very close to the final release.

2.14. Product Testing

Software product development companies face unique challenges in testing. Only a suitably organized and executed test process can contribute to the success of a software product.

Aditi Technologies Basic Concepts of Software Testing

Product testing experts design the test process to take advantage of the economies of scope and scale that are present in a software product. These activities are sequenced and scheduled so that a test activity occurs immediately after the construction activity whose output the test is intended to validate.

2.15. System Testing

According to the IEEE Standard Computer Dictionary, system testing is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing and, as such, should require no knowledge of the inner design of the code or logic (IEEE. IEEE Standard Computer Dictionary: A Compilation of IEEE Standard Computer Glossaries. New York, NY. 1990.). Alpha testing and Beta testing are sub-categories of system testing. As a rule, system testing takes as its input all of the "integrated" software components that have successfully passed integration testing, together with the software system itself integrated with any applicable hardware system(s). Whereas the purpose of integration testing is to detect inconsistencies between the integrated software units (called assemblages), or between any of the assemblages and the hardware, system testing is a more limited type of testing: it seeks to detect defects both within the "inter-assemblages" and in the system as a whole.

2.16. Regression Testing

Regression testing is typically carried out at the end of the development cycle. During this testing, every bug previously identified and fixed is retested, along with its impacted areas, to confirm the fix and to detect any side effects. According to the IEEE Standard Computer Dictionary, regression testing is the selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements. Regression testing falls within the scope of black box testing and, as such, should require no knowledge of the inner design of the code or logic. As a rule, regression testing takes as its input the previously tested software, the fixes applied to it, and the test cases covering both the fixed defects and the areas those fixes may have affected.

2.17. Compatibility Testing

One of the challenges of software development is ensuring that the application works properly on the different platforms and operating systems on the market, and also with the applications and devices in its environment. A compatibility testing service aims at locating application problems by running the application in real environments, thus ensuring that it is compatible with various hardware, operating system, and browser versions.

2.18. Test Cases, Suites, Scripts and Scenarios

Black box testers usually write test cases for the majority of their testing activities. A test case is usually a single step, and its expected result, along with various additional pieces of information. It can occasionally be a series of steps but with one expected result or expected outcome. The optional fields are a test case ID, test step or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database or other common repository. In a database system, you may also be able to see past test results, who generated the results, and the system configuration used to generate them. These past results would usually be stored in a separate table. The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases. It definitely contains a section where the tester identifies the system configuration used during testing.
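The test case and test suite fields listed above can be sketched as simple records. This is only an illustrative structure under the assumption that a flat record per case is enough; the field names mirror the list above and are not a prescribed format.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TestCase:
    """One test case: a single step (or short series of steps) with one expected result."""
    case_id: str                                   # test case ID
    title: str
    steps: List[str]                               # order of execution
    expected_result: str
    related_requirements: List[str] = field(default_factory=list)
    category: str = ""                             # test category, e.g. "functional"
    author: str = ""
    automatable: bool = False                      # could this test be automated?
    automated: bool = False                        # has it been automated?
    prerequisites: List[str] = field(default_factory=list)
    actual_result: Optional[str] = None            # filled in during execution

@dataclass
class TestSuite:
    """A collection of test cases, plus the system configuration used during testing."""
    name: str
    system_configuration: str
    cases: List[TestCase] = field(default_factory=list)
```

In a database-backed repository, past actual results would live in a separate table keyed by `case_id`, rather than on the record itself.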
A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests. Collections of test cases are sometimes incorrectly termed a test plan. They may also be called a test script, or even a test scenario. Most white box testers write and use test scripts in unit, system, and regression testing. Test scripts should be written for the modules with the highest risk of failure and the highest impact if the risk becomes an issue. Most companies that use automated testing will call the code that is used their test scripts. A scenario test is a test based on a hypothetical story used to help a person think through a complex problem or system. It can be as simple as a diagram of a testing environment or a description written in prose. The ideal scenario test has five key characteristics: it is (a) a story that is (b) motivating, (c) credible, (d) complex, and (e) easy to evaluate. Scenarios usually differ from test cases in that test cases are single steps while scenarios cover a number of steps. Test suites and scenarios can be used in concert for complete system tests. Scenario testing is similar to, but not the same as, session-based testing, which is more closely related to exploratory testing; the two concepts can be used in conjunction.

2.19. Defect Tracking

In engineering, defect tracking is the process of finding defects in a product (by inspection, testing, or recording feedback from customers) and tracking them to closure. Defect tracking is important in software engineering, as complex software systems typically have tens or hundreds of thousands of defects: managing, evaluating and prioritizing these defects is a difficult task. Defect tracking systems are computer database systems that store defects and help people to manage them.

2.20. Formal Verification

In the context of hardware and software systems, formal verification is the act of proving or disproving the correctness of a system with respect to a certain formal specification or property, using formal methods. System types that are considered in the literature for formal verification include finite state machines (FSM), labeled transition systems (LTS) and their compositions, Petri nets, timed automata and hybrid automata, cryptographic protocols, combinatorial circuits, digital circuits with internal memory, and abstractions of general software components. The properties to be verified are often described in temporal logics, such as linear temporal logic (LTL) or computation tree logic (CTL). Usually formal verification is carried out algorithmically.
The main approaches to implementing formal verification include state space enumeration, symbolic state space enumeration, abstract interpretation, abstraction refinement, process-algebraic methods, and reasoning with the aid of automatic theorem provers such as HOL or Isabelle.

2.20.1. Validation and Verification

Verification is one aspect of testing a product's fitness for purpose. Validation is the complementary aspect. Often one refers to the overall checking process as V & V. Validation: "Are we building the right product?", i.e., does the product do what the user really requires? Verification: "Are we building the product right?", i.e., does the product conform to the specifications?

The verification process consists of static and dynamic parts. For example, for a software product one can inspect the source code (static) and run it against specific test cases (dynamic). Validation usually can only be done dynamically, i.e., the product is tested by putting it through typical and atypical usages ("Can we break it?").

2.21. Fuzz Testing

Fuzz testing is a software testing technique. The basic idea is to attach the inputs of a program to a source of random data. If the program fails (for example, by crashing, or by failing built-in code assertions), then there are defects to correct. The great advantage of fuzz testing is that the test design is extremely simple and free of preconceptions about system behavior.

2.21.1. Uses

Fuzz testing is often used in large software development projects that perform black box testing. These usually have a budget to develop test tools, and fuzz testing is one of the techniques which offer a high benefit-to-cost ratio. Fuzz testing is also used as a gross measurement of a large software system's quality. The advantage here is that the cost of generating the tests is relatively low. For example, third-party testers have used fuzz testing to evaluate the relative merits of different operating systems and application programs. Fuzz testing is thought to enhance software security and software safety because it often finds odd oversights and defects which human testers would fail to find, and for which even careful human test designers would fail to create tests. However, fuzz testing is not a substitute for exhaustive testing or formal methods: it can only provide a random sample of the system's behavior, and in many cases passing a fuzz test may only demonstrate that a piece of software handles exceptions without crashing, rather than behaving correctly.
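The basic loop described above, random input fed to a program and any failure recorded, can be sketched in a few lines. The target function below is hypothetical (a stand-in for the software under test), and the parameters are illustrative; this is not any particular fuzzing tool. Note that each input is recorded before the call, so a failing case survives even if the process dies.

```python
import random

def target_parse(data: bytes) -> int:
    """Hypothetical program under test: parses a length-prefixed record.
    Rejects bad input cleanly by raising ValueError."""
    if len(data) < 1:
        raise ValueError("empty input")
    length = data[0]
    if length > len(data) - 1:
        raise ValueError("declared length exceeds payload")
    return length

def fuzz(target, runs: int = 1000, seed: int = 0):
    """Feed random byte strings to `target`. A clean rejection (ValueError)
    is expected behavior; any other exception is recorded as a defect."""
    rng = random.Random(seed)          # seeded, so failures are reproducible
    failures = []
    for _ in range(runs):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(32)))
        # In practice the input would be logged to disk *before* the call,
        # so the test data is preserved if the program crashes the process.
        try:
            target(data)
        except ValueError:
            pass                       # rejected cleanly: not a defect
        except Exception:
            failures.append(data)      # crash-equivalent: a defect to examine
    return failures
```

A run with an empty failure list only shows that the sampled inputs were survived, not that they were handled correctly, which is the limitation discussed above.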
Thus, fuzz testing can only be regarded as a proxy for program correctness rather than a direct measure, with fuzz test failures actually being more useful as a bug-finding tool than fuzz test passes are as an assurance of quality.

2.21.2. Fuzz testing methods

As a practical matter, developers need to reproduce errors in order to fix them. For this reason, almost all fuzz testing makes a record of the data it manufactures, usually before applying it to the software, so that if the computer fails dramatically, the test data is preserved. Modern software has several different types of inputs:

• Event-driven inputs are usually from a graphical user interface, or possibly from a mechanism in an embedded system.
• Character-driven inputs are from files or data streams.
• Database inputs are from tabular data, such as relational databases.

There are at least two different forms of fuzz testing:

• Valid fuzz attempts to assure that the random input is reasonable, or conforms to actual production data.
• Simple fuzz usually uses a pseudo-random number generator to provide input.
• A combined approach uses valid test data with some proportion of totally random input injected.

By using all of these techniques in combination, fuzz-generated randomness can test the un-designed behavior surrounding a wider range of designed system states. Fuzz testing may use tools to simulate all of these domains.

2.21.3. Event-driven fuzz

Normally this is provided as a queue of data structures. The queue is filled with data structures that have random values. The most common problem with an event-driven program is that it will often simply use the data in the queue, without even crude validation. To succeed in a fuzz-tested environment, software must validate all fields of every queue entry, decode every possible binary value, and then ignore impossible requests. One of the more interesting issues with real-time event handling is that if error reporting is too verbose, simply providing error status can cause resource problems or a crash. Robust error detection systems will report only the most significant or most recent error over a period of time.

2.21.4. Character-driven fuzz

Normally this is provided as a stream of random data. The classic source in UNIX is the random data generator. One common problem with a character-driven program is a buffer overrun, when the character data exceeds the available buffer space.
This problem tends to recur in every instance in which a string or number is parsed from the data stream and placed in a limited-size area. Another is that decode tables or logic may be incomplete, not handling every possible binary value.

2.21.5. Database fuzz

The standard database scheme is usually filled with fuzz that is random data of random sizes. Some IT shops use software tools to migrate and manipulate

such databases. Often the same schema descriptions can be used to automatically generate fuzz databases. Database fuzz is controversial, because input and comparison constraints reduce the invalid data in a database. However, the database is often more tolerant of odd data than its client software, and a general-purpose interface is available to users. As major customer and enterprise management software starts to be open-source, database-based security attacks are becoming more credible. A common problem with fuzz databases is buffer overrun. A common data dictionary, with some form of automated enforcement, is quite helpful and entirely possible. To enforce this, normally all the database clients need to be recompiled and retested at the same time. Another common problem is that database clients may not understand the binary possibilities of the database field type, or legacy software might have been ported to a new database system with different possible binary values. A normal, inexpensive solution is to have each program validate database inputs in the same fashion as user inputs. The normal way to achieve this is to periodically "clean" production databases with automated verifiers.

3. Manual Testing

3.1. Facts

• In India itself, software industry growth has been phenomenal.
• The IT field has grown enormously in the past 50 years.
• The IT industry in India is expected to touch 10,000 crores, of which the software share is dramatically increasing.

3.2. Software Crisis

• Software cost/schedule estimates are grossly inaccurate. Cost overruns of several times and schedule slippages by months, or even years, are common.
• Productivity of people has not kept pace with demand. Added to it is the shortage of skilled people.

3.3. Software Myths

3.3.1. Management Myths

• Software management is different.
• Why change our approach to development?
• We have provided the state-of-the-art hardware.
• Problems are technical.
• If the project is late, add more engineers.
• We need better people.

3.3.2. Developers' Myths

• We must start with firm requirements.

• Why bother about software engineering techniques? I will go to the terminal and code it.
• Once coding is complete, my job is done.
• How can you measure the quality? It is so intangible.

3.3.3. Customer's Myths

• A general statement of objectives is good enough to produce software.
• Anyway, software is "flex-ware"; it can accommodate my changing needs.

3.3.4. What do we do?

• Use software engineering techniques/processes.
• Institutionalize them and make them part of your development culture.
• Adopt quality assurance frameworks: ISO, CMM.
• Choose the one that meets your requirements and adapt where necessary.

3.4. Software Quality Assurance

The purpose of Software Quality Assurance is to provide management with appropriate visibility into the process being used by the software project and into the products being built.

• Software Quality Assurance involves reviewing and auditing the software products and activities to verify that they comply with the applicable procedures and standards, and providing the software project and other appropriate managers with the results of these reviews and audits.

3.4.1. Verification

• Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications.
• It is the determination of the consistency, correctness and completeness of a program at each stage.

3.4.2. Validation

• Validation typically involves actual testing and takes place after verification is completed.
• It is the determination of the correctness of a final program with respect to its requirements.

3.5. Software Life Cycle Models

• Prototyping Model
• Waterfall Model - Sequential
• Spiral Model
• V Model - Sequential

3.6. What makes a good Software QA engineer?

The same qualities a good tester has are useful for a QA engineer. Additionally, a QA engineer must be able to understand the entire software development process and how it fits into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems, as well as to see what's missing, is important for inspections and reviews.

3.7. Testing

• An examination of the behavior of a program by executing it on sample data sets.
• Testing comprises a set of activities to detect defects in a produced material.

3.7.1. Why Testing?

• To unearth and correct defects.
• To detect defects early and to reduce the cost of defect fixing.
• To ensure that the product works as the user expected it to.
• To avoid the user detecting problems.

3.8. Test Life Cycle

• Identify Test Candidates
• Test Plan
• Design Test Cases
• Execute Tests
• Evaluate Results
• Document Test Results
• Causal Analysis / Preparation of Validation Reports
• Regression Testing / Follow-up on reported bugs

3.9. Testing Techniques

• Black Box Testing
• White Box Testing
• Regression Testing

These principles and techniques can be applied to any type of testing.
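As a small illustration of the black box technique, the sketch below tests a hypothetical `discount` function purely through its inputs and outputs, exercising the boundaries of each equivalence class (invalid, minor, adult, senior) without any reference to the code's internals. The function, its age bands, and all names are invented for the example.

```python
import unittest

def discount(age: int) -> float:
    """Hypothetical function under test: concession discount by age band."""
    if age < 0:
        raise ValueError("age cannot be negative")
    if age < 18 or age >= 65:
        return 0.5   # half price for minors and seniors
    return 0.0       # full price for adults

class DiscountBoundaryTests(unittest.TestCase):
    """Black box tests: probe each side of every equivalence-class boundary."""

    def test_invalid_boundary(self):
        with self.assertRaises(ValueError):
            discount(-1)
        self.assertEqual(discount(0), 0.5)

    def test_minor_adult_boundary(self):
        self.assertEqual(discount(17), 0.5)
        self.assertEqual(discount(18), 0.0)

    def test_adult_senior_boundary(self):
        self.assertEqual(discount(64), 0.0)
        self.assertEqual(discount(65), 0.5)

# run with: python -m unittest <this_module>
```

A white box test of the same function would instead be derived from its branch structure; here the two approaches happen to coincide because the specification and the branches line up.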

3.10. Test Plan

A Test Plan is a detailed project plan for testing, covering the scope of testing, the methodology to be used, the tasks to be performed, resources, schedules, risks, and dependencies. A Test Plan is developed prior to the implementation of a project to provide a well-defined and well-understood project roadmap.

3.10.1. Test Specification

A Test Specification defines exactly what tests will be performed and what their scope and objectives will be. A Test Specification is produced as the first step in implementing a Test Plan, prior to the onset of manual testing and/or automated test suite development. It provides a repeatable, comprehensive definition of a testing campaign.

4. Testing Procedure

The following are some of the steps to consider:

• Obtain requirements, functional design, internal design specifications and other necessary documents.
• Obtain budget and schedule requirements. Determine project-related personnel and their responsibilities, reporting requirements, and required standards and processes (such as release processes, change processes, etc.).
• Identify the application's higher-risk aspects, set priorities, and determine the scope and limitations of tests.
• Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.
• Determine test environment requirements (hardware, software, communications, etc.).
• Determine test-ware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.).
• Determine test input data requirements.
• Identify tasks, those responsible for tasks, and labor requirements.
• Set schedule estimates, timelines, and milestones.
• Determine input equivalence classes, boundary value analyses, and error classes.
• Prepare the test plan document and have needed reviews/approvals.
• Write test cases.
• Have needed reviews/inspections/approvals of test cases.
• Prepare the test environment and test-ware; obtain needed user manuals/reference documents/configuration guides/installation guides; set up test tracking processes; set up logging and archiving processes; set up or obtain test input data.
• Obtain and install software releases.
• Perform tests.
• Evaluate and report results.

• Track problems/bugs and fixes.
• Retest as needed.
• Maintain and update test plans, test cases, the test environment, and test-ware through the life cycle.

4.1. Bug Tracking

What's a 'test case'?

• A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
• Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible.

What should be done after a bug is found?

The bug needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that the fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available (see the 'Tools' section for web resources with listings of such tools). The following are items to consider in the tracking process:

• Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.
• Bug identifier (number, ID, etc.)
• Current bug status (e.g., 'Released for Retest', 'New', etc.)
• The application name or identifier and version
• The function, module, feature, object, screen, etc.
where the bug occurred
• Environment specifics: system, platform, relevant hardware specifics
• Test case name/number/identifier
• One-line bug description
• Full bug description
• Description of steps needed to reproduce the bug, if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool
• Names and/or descriptions of files/data/messages/etc. used in the test
• File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem
• Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
• Was the bug reproducible?
• Tester name
• Test date
• Bug reporting date
• Name of developer/group/organization the problem is assigned to

• Description of problem cause
• Description of fix
• Code section/file/module/class/method that was fixed
• Date of fix
• Application version that contains the fix
• Tester responsible for retest
• Retest date
• Retest results
• Regression testing requirements
• Tester responsible for regression tests
• Regression testing results

A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.

Why does software have bugs?

• Miscommunication or no communication - as to the specifics of what an application should or shouldn't do (the application's requirements).
• Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Windows-type interfaces, client-server and distributed applications, data communications, enormous relational databases, and the sheer size of applications have all contributed to the exponential growth in software/system complexity. And the use of object-oriented techniques can complicate instead of simplify a project unless it is well engineered.
• Programming errors - programmers, like anyone else, can make mistakes.
• Changing requirements - the customer may not understand the effects of changes, or may understand and request them anyway: redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of keeping track of changes may result in errors.
Enthusiasm of engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control.
• Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
• Egos - people prefer to say things like:
  o 'no problem'
  o 'piece of cake'
  o 'I can whip that out in a few hours'
  o 'it should be easy to update that old code'
Instead of:

  o 'that adds a lot of complexity and we could end up making a lot of mistakes'
  o 'we have no idea if we can do that; we'll wing it'
  o 'I can't estimate how long it will take, until I take a close look at it'
  o 'we can't figure out what that old spaghetti code did in the first place'
• If there are too many unrealistic 'no problems', the result is bugs.
• Poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it ('if it was hard to write, it should be hard to read').
• Software development tools - visual tools, class libraries, compilers,
