Software Quality Metrics

Published on February 27, 2014

Author: SruthiBalaji

Source: slideshare.net

Done by B. Shruthi (11109A067)

• Software Quality Metrics
• Types of Software Quality Metrics
• Three Groups of Software Quality Metrics
• Difference Between Errors, Defects, Faults, and Failures
• Lines of Code
• Function Points
• Customer Satisfaction Metrics

• The subset of metrics that focus on quality.
• Software quality metrics can be divided into:
  - End-product quality metrics
  - In-process quality metrics
• The essence of software quality engineering is to investigate the relationships among in-process metrics, project characteristics, and end-product quality, and, based on the findings, engineer improvements in quality to both the process and the product.

• Product metrics – e.g., size, complexity, design features, performance, quality level
• Process metrics – e.g., effectiveness of defect removal, response time of the fix process
• Project metrics – e.g., number of software developers, cost, schedule, productivity

• Product quality
• In-process quality
• Maintenance quality

Product Quality Metrics

• Intrinsic product quality
  - Mean time to failure (MTTF)
  - Defect density
• Customer-related
  - Customer problems
  - Customer satisfaction

• Intrinsic product quality is usually measured by:
  - the number of “bugs” (functional defects) in the software (defect density), or
  - how long the software can run before “crashing” (MTTF, mean time to failure)
• The two metrics are correlated but different.

• An error is a human mistake that results in incorrect software.
• The resulting fault is an accidental condition that causes a unit of the system to fail to function as required.
• A defect is an anomaly in a product.
• A failure occurs when a functional unit of a software-related system can no longer perform its required function or cannot perform it within specified limits.

• This metric is the number of defects over the opportunities for error (OPE) during some specified time frame.
• We can use the number of unique causes of observed failures (failures are just defects materialized) to approximate the number of defects.
• The size of the software, in either lines of code or function points, is used to approximate OPE.
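
To make the ratio concrete, here is a minimal Python sketch of a defect density calculation, assuming defects are counted for a release and OPE is approximated by size in KLOC (all names and figures are illustrative):

    def defect_density(valid_unique_defects, size_kloc):
        # Defects per KLOC: defects found over the opportunities for
        # error (OPE), approximated here by size in thousands of lines.
        return valid_unique_defects / size_kloc

    # Example: 120 valid unique defects found in a 400 KLOC product
    print(defect_density(120, 400))  # 0.3 defects per KLOC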

• Possible variations (a counting sketch follows this list):
  - Count only executable lines
  - Count executable lines plus data definitions
  - Count executable lines, data definitions, and comments
  - Count executable lines, data definitions, comments, and job control language
  - Count lines as physical lines on an input screen
  - Count lines as terminated by logical delimiters
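
As referenced above, a minimal Python sketch of how the chosen counting rules change the result; it assumes a '#' comment syntax and only distinguishes blank, comment, and other lines, so it is far simpler than a real, language-specific LOC counter:

    def count_loc(source_text, include_comments=False):
        # Count physical source lines, skipping blanks, and skipping
        # '#' comment lines unless include_comments is True.
        count = 0
        for line in source_text.splitlines():
            stripped = line.strip()
            if not stripped:
                continue  # blank lines never count
            if stripped.startswith("#") and not include_comments:
                continue  # comment-only lines count only on request
            count += 1
        return count

    code = "x = 1\n# setup comment\n\ny = x + 1\n"
    print(count_loc(code))                          # 2 executable lines
    print(count_loc(code, include_comments=True))   # 3 lines including the comment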

• Other difficulties:
  - LOC measures are language dependent
  - Comparisons cannot be made when different languages or different operational definitions of LOC are used
  - For productivity studies the problems in using LOC are greater, since LOC is negatively correlated with design efficiency
  - Code enhancements and revisions complicate the situation: the defect rate must be calculated against new and changed lines of code only

• Depends on the availability of LOC counts for both the entire product and the new and changed code
• Depends on tracking defects to the release origin (the portion of code that contains the defects) and to the release in which that code was added, changed, or enhanced
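
A minimal sketch of the bookkeeping this implies, assuming each defect has already been traced to the release that introduced the offending code (field names and figures are hypothetical):

    from collections import Counter

    def defect_rate_by_release(defect_origins, new_changed_kloc):
        # Defects per KLOC of new and changed code, per release.
        # defect_origins: one release label per defect, traced to the
        # release in which the code was added or changed.
        # new_changed_kloc: release -> KLOC of new and changed code.
        per_release = Counter(defect_origins)
        return {rel: per_release[rel] / kloc
                for rel, kloc in new_changed_kloc.items()}

    origins = ["R2", "R2", "R3", "R2", "R3"]
    sizes = {"R2": 30.0, "R3": 12.5}
    print(defect_rate_by_release(origins, sizes))  # {'R2': 0.1, 'R3': 0.16}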

• A function can be defined as a collection of executable statements that performs a certain task, together with declarations of the formal parameters and local variables manipulated by those statements.
• In practice, functions are measured indirectly.
• Many of the problems associated with LOC counts are addressed.

• The number of function points is a weighted total of five major components that comprise an application:
  - Number of external inputs x 4
  - Number of external outputs x 5
  - Number of logical internal files x 10
  - Number of external interface files x 7
  - Number of external inquiries x 4

• The function count (FC) is a weighted total of the same five major components, where the weighting factor depends on complexity:
  - Number of external inputs x (3 to 6)
  - Number of external outputs x (4 to 7)
  - Number of logical internal files x (7 to 15)
  - Number of external interface files x (5 to 10)
  - Number of external inquiries x (3 to 6)

• Each count is multiplied by its weighting factor and the results are summed.
• This weighted sum (FC) is further refined by multiplying it by the value adjustment factor (VAF).
• Each of the 14 general system characteristics is assessed on a scale of 0 to 5 according to its impact on (importance to) the application.

1. Data communications
2. Distributed functions
3. Performance
4. Heavily used configuration
5. Transaction rate
6. Online data entry
7. End-user efficiency
8. Online update
9. Complex processing
10. Reusability
11. Installation ease
12. Operational ease
13. Multiple sites
14. Facilitation of change

• The VAF is the sum of these 14 characteristic ratings divided by 100, plus 0.65.
• Notice that if an average rating (2.5 on the 0-to-5 scale) is given to each of the 14 factors, their sum is 35 and therefore VAF = 1.
• The final function point total is then the function count multiplied by the VAF:

  FP = FC x VAF
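
Putting the pieces together, a worked Python sketch of the whole calculation; the component counts and characteristic ratings are invented for illustration, and the weights are those of the basic scheme above:

    # Weights from the basic function point scheme above
    WEIGHTS = {
        "external inputs": 4,
        "external outputs": 5,
        "logical internal files": 10,
        "external interface files": 7,
        "external inquiries": 4,
    }

    def function_points(counts, gsc_ratings):
        # counts: component name -> number of occurrences.
        # gsc_ratings: the 14 general system characteristics, each 0..5.
        fc = sum(WEIGHTS[name] * n for name, n in counts.items())
        vaf = 0.65 + sum(gsc_ratings) / 100.0
        return fc, vaf, fc * vaf

    counts = {
        "external inputs": 20,
        "external outputs": 15,
        "logical internal files": 10,
        "external interface files": 4,
        "external inquiries": 12,
    }
    ratings = [3] * 14  # a rating of 3 on every characteristic
    fc, vaf, fp = function_points(counts, ratings)
    print(fc, vaf, fp)  # FC = 331, VAF = 1.07, FP ≈ 354.2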

• Customer problems are all the difficulties customers encounter when using the product. They include:
  - Valid defects
  - Usability problems
  - Unclear documentation or information
  - Duplicates of valid defects (problems already fixed but not known to the customer)
  - User errors
• The problem metric is usually expressed in terms of problems per user-month (PUM).

• PUM = Total problems that customers reported for a time period / Total number of license-months of the software during the period

  where Number of license-months = Number of installed licenses of the software x Number of months in the calculation period
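
A worked sketch of the PUM formula in Python (the figures are invented for illustration):

    def pum(total_problems, installed_licenses, months):
        # Problems per user-month: problems reported in the period
        # divided by license-months (installed licenses x months).
        return total_problems / (installed_licenses * months)

    # Example: 250 problems over 3 months against 5,000 installed licenses
    print(pum(250, 5000, 3))  # ~0.017 problems per user-month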

• Improve the development process and reduce the product defects.
• Reduce the non-defect-oriented problems by improving all aspects of the product (e.g., usability, documentation), customer education, and support.
• Increase sales (the number of installed licenses) of the product.

Defect rate versus problems per user-month (PUM):

• Numerator
  - Defect rate: valid and unique product defects
  - PUM: all customer problems (defects and non-defects, first time and repeated)
• Denominator
  - Defect rate: size of product (KLOC or function points)
  - PUM: customer usage of the product (user-months)
• Measurement perspective
  - Defect rate: producer (the software development organization)
  - PUM: customer
• Scope
  - Defect rate: intrinsic product quality
  - PUM: intrinsic product quality plus other factors

(Figure: the three scopes nest; defects are a subset of customer problems, which are a subset of customer satisfaction issues.)

• Customer satisfaction is often measured by customer survey data on a five-point scale:
  - Very satisfied
  - Satisfied
  - Neutral
  - Dissatisfied
  - Very dissatisfied

• CUPRIMDSO:
  - Capability (functionality)
  - Usability
  - Performance
  - Reliability
  - Installability
  - Maintainability
  - Documentation
  - Service
  - Overall

• FURPS:
  - Functionality
  - Usability
  - Reliability
  - Performance
  - Service

1. Percent of completely satisfied customers
2. Percent of satisfied customers (satisfied and completely satisfied)
3. Percent of dissatisfied customers (dissatisfied and completely dissatisfied)
4. Percent of nonsatisfied customers (neutral, dissatisfied, and completely dissatisfied)
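
A minimal Python sketch computing these four percentages from raw survey responses (the response counts are invented for illustration):

    from collections import Counter

    def satisfaction_metrics(responses):
        # Compute the four percentages from five-point survey responses;
        # "completely satisfied" here maps to "very satisfied" on the scale.
        n = len(responses)
        c = Counter(responses)
        pct = lambda *levels: 100.0 * sum(c[l] for l in levels) / n
        return {
            "completely satisfied": pct("very satisfied"),
            "satisfied": pct("very satisfied", "satisfied"),
            "dissatisfied": pct("dissatisfied", "very dissatisfied"),
            "nonsatisfied": pct("neutral", "dissatisfied", "very dissatisfied"),
        }

    responses = (["very satisfied"] * 30 + ["satisfied"] * 45 +
                 ["neutral"] * 10 + ["dissatisfied"] * 10 +
                 ["very dissatisfied"] * 5)
    print(satisfaction_metrics(responses))
    # {'completely satisfied': 30.0, 'satisfied': 75.0,
    #  'dissatisfied': 15.0, 'nonsatisfied': 25.0}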

• Defect density during machine testing
• Defect arrival pattern during machine testing
• Phase-based defect removal pattern
• Defect removal effectiveness

• Defect rate during formal machine testing (testing after code is integrated into the system library) is usually positively correlated with the defect rate in the field.
• The simple metric of defects per KLOC or function point is a good indicator of quality while the product is still being tested.

• Scenarios for judging release quality:
  - If the defect rate during testing is the same as or lower than that of the previous release, ask: Did the testing for the current release deteriorate?
    - If the answer is no, the quality perspective is positive.
    - If the answer is yes, extra testing is needed.

• Scenarios for judging release quality (cont’d):
  - If the defect rate during testing is substantially higher than that of the previous release, ask: Did we plan for and actually improve testing effectiveness?
    - If the answer is no, the quality perspective is negative.
    - If the answer is yes, the quality perspective is the same or positive.
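
The two scenarios amount to a small decision rule; here is a hedged Python sketch, where the two boolean inputs stand in for the team's judgment calls about testing effectiveness:

    def release_quality_outlook(current_rate, previous_rate,
                                testing_deteriorated, testing_improved):
        # Encode the two scenarios above; the booleans are judgment
        # calls the team must supply, not measured quantities.
        if current_rate <= previous_rate:
            # Same or lower test defect rate than the previous release
            return "do extra testing" if testing_deteriorated else "positive"
        # Substantially higher test defect rate than the previous release
        return "same or positive" if testing_improved else "negative"

    print(release_quality_outlook(0.4, 0.5,
                                  testing_deteriorated=False,
                                  testing_improved=False))  # positive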

• The pattern of defect arrivals gives more information than defect density during testing.
• The objective is to look for defect arrivals that stabilize at a very low level, or times between failures that are far apart, before ending the testing effort and releasing the software.

• The defect arrivals during the testing phase by time interval (e.g., week). These are raw arrivals, not all of which are valid.
• The pattern of valid defect arrivals, obtained when problem determination is done on the reported problems. This is the true defect pattern.
• The pattern of defect backlog over time. This is needed because development organizations cannot investigate and fix all reported problems immediately. This metric is a workload statement as well as a quality statement.
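
A minimal Python sketch of this bookkeeping, tallying raw and valid arrivals by week and carrying the backlog forward (the report data and fix capacities are invented for illustration):

    from collections import Counter

    def arrival_and_backlog(reports, fixes_per_week):
        # reports: (week, is_valid) tuples for reported problems.
        # fixes_per_week: week -> problems closed that week.
        # Returns weekly raw arrivals, valid arrivals, and running backlog.
        raw = Counter(week for week, _ in reports)
        valid = Counter(week for week, ok in reports if ok)
        backlog, backlog_by_week = 0, {}
        for week in sorted(raw):
            backlog += raw[week] - fixes_per_week.get(week, 0)
            backlog_by_week[week] = backlog
        return raw, valid, backlog_by_week

    reports = [(1, True), (1, False), (1, True), (2, True),
               (2, True), (3, True), (3, False), (4, True)]
    raw, valid, backlog = arrival_and_backlog(reports, {1: 1, 2: 2, 3: 2, 4: 2})
    print(dict(raw))    # {1: 3, 2: 2, 3: 2, 4: 1}
    print(dict(valid))  # {1: 2, 2: 2, 3: 2, 4: 1}
    print(backlog)      # {1: 2, 2: 2, 3: 2, 4: 1}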

• This is an extension of the test defect density metric.
• It requires tracking defects in all phases of the development cycle.
• The pattern of phase-based defect removal reflects the overall defect removal ability of the development process.

• DRE = (Defects removed during a development phase / Defects latent in the product) x 100%
• The denominator can only be approximated. It is usually estimated as:

  Defects removed during the phase + Defects found later
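
A worked sketch of the DRE estimate in Python (the defect counts are invented for illustration):

    def removal_effectiveness(removed_in_phase, found_later):
        # DRE for a phase: defects removed during the phase over the
        # estimated latent defects (removed in phase + found later).
        latent = removed_in_phase + found_later
        return 100.0 * removed_in_phase / latent

    # Example: 90 defects removed at design review, 30 escapes found later
    print(removal_effectiveness(90, 30))  # 75.0 (percent)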

• When calculated for the front end of the process (before code integration), it is called early defect removal effectiveness.
• When calculated for a specific phase, it is called phase effectiveness.

• The goal during maintenance is to fix defects as soon as possible, with excellent fix quality.
• The following metrics are important:
  - Fix backlog and backlog management index
  - Fix response time and fix responsiveness
  - Percent delinquent fixes
  - Fix quality
