Volume 4, Issue 1


Examining Risk Priority Numbers in FMEA

Software Used


→ Xfmea

[Editor's Note: This article has been updated since its original publication to reflect a more recent version of the software interface.]

The Risk Priority Number (RPN) methodology is a technique for analyzing the risk associated with potential problems identified during a Failure Mode and Effects Analysis (FMEA). This article presents a brief overview of the basic RPN method and then examines some additional and alternative ways to use RPN ratings to evaluate the risk associated with a product or process design and to prioritize problems for corrective action. Note that this article discusses RPNs calculated at the level of the potential causes of failure (Severity x Occurrence x Detection). However, there is a great deal of variation among FMEA practitioners as to the specific analysis procedure and some analyses may include alternative calculation methods. 

Overview of Risk Priority Numbers

An FMEA can be performed to identify the potential failure modes for a product or process. The RPN method then requires the analysis team to use past experience and engineering judgment to rate each potential problem according to three rating scales:

  • Severity, which rates the severity of the potential effect of the failure.
  • Occurrence, which rates the likelihood that the failure will occur.
  • Detection, which rates the likelihood that the problem will be detected before it reaches the end-user/customer.

Rating scales usually range from 1 to 5 or from 1 to 10, with the higher number representing the greater seriousness or risk. For example, on a ten-point Occurrence scale, 10 indicates that the failure is very likely to occur and is worse than 1, which indicates that the failure is very unlikely to occur. The specific rating descriptions and criteria are defined by the organization or the analysis team to fit the products or processes that are being analyzed. As an example, Figure 1 shows a generic five-point scale for Severity [Stamatis, 445].

Figure 1: Generic five-point Severity scale

After the ratings have been assigned, the RPN for each issue is calculated by multiplying Severity x Occurrence x Detection.

RPN = Severity x Occurrence x Detection

The RPN value for each potential problem can then be used to compare the issues identified within the analysis. Typically, if the RPN falls within a pre-determined range, corrective action may be recommended or required to reduce the risk (i.e., to reduce the likelihood of occurrence, increase the likelihood of prior detection or, if possible, reduce the severity of the failure effect). When using this risk assessment technique, it is important to remember that RPN ratings are relative to a particular analysis (performed with a common set of rating scales and an analysis team that strives to make consistent rating assignments for all issues identified within the analysis). Therefore, an RPN in one analysis is comparable to other RPNs in the same analysis but it may not be comparable to RPNs in another analysis.
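As a minimal sketch, the RPN calculation and a threshold-based screening rule might be expressed as follows. The threshold value of 100 is assumed purely for illustration; each organization sets its own action range.

```python
def rpn(severity, occurrence, detection):
    """RPN = Severity x Occurrence x Detection."""
    return severity * occurrence * detection

def needs_action(severity, occurrence, detection, threshold=100):
    """Flag an issue for corrective action when its RPN falls in the
    organization's pre-determined action range (threshold assumed here)."""
    return rpn(severity, occurrence, detection) >= threshold
```

Because RPNs are only comparable within a single analysis, a threshold like this should be set per analysis, not reused across unrelated FMEAs.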

The rest of this article discusses related techniques that can be used in addition to or instead of the basic RPN method described here.

Revised RPNs and Percent Reduction in RPN

In some cases, it may be appropriate to revise the initial risk assessment based on the assumption (or the fact) that the recommended actions have been completed. This provides an indication of the effectiveness of corrective actions and can also be used to evaluate the value to the organization of performing the FMEA. To calculate revised RPNs, the analysis team assigns a second set of Severity, Occurrence and Detection ratings for each issue (using the same rating scales) and multiplies the revised ratings to calculate the revised RPNs. If both initial and revised RPNs have been assigned, the percent reduction in RPN can also be calculated as follows:

Percent Reduction in RPN = ((Initial RPN - Revised RPN) / Initial RPN) x 100%

For example, if the initial ratings for a potential problem are S = 7, O = 8 and D = 5 and the revised ratings are S = 7, O = 6 and D = 4, then the percent reduction in RPN from initial to revised is (280-168)/280, or 40%. This indicates that the organization was able to reduce the risk associated with the issue by 40% through the performance of the FMEA and the implementation of corrective actions.

Rating        Initial    Revised
Severity      7          7
Occurrence    8          6
Detection     5          4
RPN           280        168
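The percent reduction calculation above can be sketched as a small helper function:

```python
def percent_reduction(initial_rpn, revised_rpn):
    """Percent reduction in RPN = (initial - revised) / initial x 100."""
    return (initial_rpn - revised_rpn) / initial_rpn * 100
```

For the example in the text, `percent_reduction(280, 168)` yields 40, matching the 40% risk reduction attributed to the FMEA and its corrective actions.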

Occurrence/Severity Matrix

Because the RPN is the product of three ratings, different circumstances can produce similar or identical RPNs. For example, an RPN of 100 can occur when S = 10, O = 2 and D = 5; when S = 1, O = 10 and D = 10; when S = 4, O = 5 and D = 5, etc. In addition, it may not be appropriate to give equal weight to the three ratings that comprise the RPN. For example, an organization may consider issues with high severity and/or high occurrence ratings to represent a higher risk than issues with high detection ratings. Therefore, basing decisions solely on the RPN (considered in isolation) may result in inefficiency and/or increased risk.

The Occurrence/Severity matrix provides an additional or alternative way to use the rating scales to prioritize potential problems. This matrix displays the Occurrence scale vertically and the Severity scale horizontally. The points represent potential causes of failure and they are marked at the location where the Severity and Occurrence ratings intersect. The analysis team can then establish boundaries on the matrix to identify high, medium and low priorities. Figure 2 displays a matrix chart generated with ReliaSoft's Xfmea software. In this example, the Occurrence and Severity ratings were assigned on ten-point scales; high priority issues are identified with a red triangle (up), medium priority issues with a yellow circle and low priority issues with a green triangle (down). Within the software, clicking a point in the matrix displays the description of the potential problem. For presentation in other documents, a text legend can accompany the matrix graphic.
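The zone boundaries on such a matrix are drawn by the analysis team; as one illustrative rule (the thresholds below are assumptions, not taken from Figure 2), a classification function might look like:

```python
def matrix_priority(severity, occurrence):
    """Classify an issue on the Occurrence/Severity matrix into high,
    medium or low priority. Boundary values are illustrative only;
    each analysis team establishes its own zones."""
    if severity >= 7 or (severity >= 4 and occurrence >= 7):
        return "high"
    if severity <= 3 and occurrence <= 3:
        return "low"
    return "medium"
```

A rule like this encodes the idea that high severity alone can warrant high priority, which the plain RPN product does not guarantee.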

Figure 2: Occurrence/Severity Matrix generated with Xfmea's Plot Viewer

Rank Issues by Severity, Occurrence or Detection

Ranking issues according to their individual Severity, Occurrence or Detection ratings is another way to analyze potential problems. For example, the organization may determine that corrective action is required for any issue with an RPN that falls within a specified range and also for any issue with a high severity rating. In this case, a potential problem may have an RPN of 40 (Severity = 10, Occurrence = 2 and Detection = 2). This may not be high enough to trigger corrective action based on RPN but the analysis team may decide to initiate a corrective action anyway because of the very high severity of the potential effect of the failure.

Figure 3 presents a graphical view of failure causes ranked by likelihood of occurrence in a Pareto (bar) chart generated by Xfmea. In the software, clicking a bar displays the issue description, and a detailed legend can be generated for print-ready output. Xfmea also provides this information in a print-ready tabular format and generates similar charts and reports for the Severity and Detection ratings.
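The underlying ranking is a simple sort on a single rating. As a sketch with hypothetical issue records (descriptions and ratings invented for illustration):

```python
# Each record is (description, Severity, Occurrence, Detection) -- made-up data.
issues = [
    ("Seal wear",      3, 8, 4),
    ("Corrosion",      6, 5, 3),
    ("Fatigue crack",  9, 2, 6),
]

# Rank by Occurrence, highest first, as in a Pareto chart of likelihood.
ranked = sorted(issues, key=lambda issue: issue[2], reverse=True)
```

Sorting on index 1 or 3 instead would produce the equivalent Severity or Detection rankings.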

Figure 3: Charts of causes ranked by Occurrence rating generated with Xfmea

Risk Ranking Tables

In addition to, or instead of, the other risk assessment tools described here, the organization may choose to develop risk ranking tables to assist the decision-making process. These tables will typically identify whether corrective action is required based on some combination of Severity, Occurrence, Detection and/or RPN values. As an example, the table in Figure 4 places Severity horizontally and Occurrence vertically [McCollin, 39].

Figure 4: Sample risk ranking table

The letters and numbers inside the table indicate whether a corrective action is required for each case.

  • N = No corrective action needed.
  • C = Corrective action needed.
  • # = Corrective action needed if the Detection rating is equal to or greater than the given number.

For example, according to the risk ranking table in Figure 4, if Severity = 6 and Occurrence = 5, then corrective action is required if Detection = 4 or higher. If Severity = 9 or 10, then corrective action is always required. If Occurrence = 1 and Severity = 8 or lower, then corrective action is never required, and so on.
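This decision logic translates directly into a table lookup. The cells below are a small hypothetical excerpt consistent with the examples in the text, not a reproduction of Figure 4:

```python
def action_required(severity, occurrence, detection, table):
    """Look up the (Occurrence, Severity) cell: 'N' = no action needed,
    'C' = action always needed, an integer = action needed if the
    Detection rating is equal to or greater than that number."""
    cell = table[(occurrence, severity)]
    if cell == "N":
        return False
    if cell == "C":
        return True
    return detection >= cell

# Illustrative cells only, keyed by (Occurrence, Severity).
sample_table = {
    (5, 6): 4,    # S = 6, O = 5: action if D >= 4
    (1, 8): "N",  # O = 1, S <= 8: no action
    (5, 9): "C",  # S = 9: action always required
}
```

A full implementation would populate one cell for every (Occurrence, Severity) pair on the organization's chosen scales.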

Other variations of this decision-making table are possible and the appropriate table will be determined by the organization or analysis team based on the characteristics of the product or process being analyzed and other organizational factors, such as budget, customer requirements, applicable legal regulations, etc.

Conclusion

As this article demonstrates, the Risk Priority Number (RPN) methodology can be used to assess the risk associated with potential problems in a product or process design and to prioritize issues for corrective action. A particular analysis team may choose to supplement or replace the basic RPN methodology with other related techniques, such as revised RPNs, the Occurrence/Severity matrix, ranking lists and/or risk ranking tables. All of these techniques rely heavily on engineering judgment and must be customized to fit the product or process that is being analyzed and the particular needs/priorities of the organization. ReliaSoft's Xfmea software facilitates analysis, data management and reporting for all types of FMEA, with features to support most of the RPN techniques described here. On the web at http://www.reliasoft.com/xfmea

References

The following references relate directly to the examples presented in this article. Numerous other resources are available on FMEA techniques and styles.

Crowe, Dana and Alec Feinberg, Design for Reliability, Chapter 12 "Failure Modes and Effects Analysis." CRC Press, Boca Raton, FL, 2001.

McCollin, Chris, "Working Around Failure." Manufacturing Engineer, February 1999. Pages 37-40. 

Stamatis, D.H., Failure Mode and Effect Analysis: FMEA from Theory to Execution. American Society for Quality (ASQ), Milwaukee, Wisconsin, 1995.

