In terms of specifications, they are satisfactory. The initial test frequency is once a year. I am hesitant to rely on trial and error to determine the frequency if better alternatives are available.
Is it possible to use the failure-finding formula (as outlined in Vee's book) in this scenario: test interval T = -ln(2A - 1)/lambda? With a required availability of 97.5% for pressure protection, how can I determine lambda? Can MTBF be estimated by adding the operating times of the 23 pressure switches and dividing by the number of failures, which is 8 in this instance, with lambda then taken as 1/MTBF? Alternatively, should I obtain the lambda value from external sources such as OREDA?
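As a sketch of the calculation being asked about, assuming the pooled-data approach (total operating time of the 23 switches divided by the 8 failures, with lambda = 1/MTBF) and an illustrative figure of 3 years of operation per switch, since the actual operating times are not given:

```python
import math

# Hypothetical figures for illustration: 23 pressure switches,
# each assumed to have operated for 3 years, with 8 failures found.
n_switches = 23
years_each = 3.0          # assumed, not from the original post
failures = 8

total_time = n_switches * years_each   # pooled operating time, unit-years
mtbf = total_time / failures           # mean time between failures
lam = 1.0 / mtbf                       # failure rate, per year

# Failure-finding interval from the formula quoted above:
# T = -ln(2A - 1) / lambda, with A the required availability.
A = 0.975
T = -math.log(2 * A - 1) / lam
print(f"lambda = {lam:.4f}/yr, MTBF = {mtbf:.1f} yr, test interval T = {T:.2f} yr")
```

Under these assumed numbers the interval works out to roughly 0.44 years, i.e. a test about every five months; with real operating times the result will differ.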
When setting up a new instrument, we conduct a thorough evaluation to establish the appropriate calibration frequency. This includes reviewing manufacturer recommendations from manuals or other technical references. If no manufacturer recommendation is available, we assess if a similar instrument is already installed at the facility and refer to its calibration history to determine the frequency. In the absence of other sources, we rely on a general rule of thumb based on instrument criticality: every non-critical instrument is calibrated annually, while critical instruments are calibrated every six months. Periodically, we review these frequencies to see if adjustments are needed. Instruments that consistently fall within tolerance during calibration may have their frequency reduced from six months to twelve months. Conversely, instruments that repeatedly show deviations during calibration may see an increase in frequency from six months to three months.
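The rule of thumb and adjustment logic described above can be sketched as a small decision function. The function names, the three-result review window, and the halving/doubling steps are illustrative assumptions; only the 3/6/12-month values come from the post:

```python
def initial_interval_months(critical: bool) -> int:
    """Rule of thumb when no manufacturer data or site history exists:
    critical instruments every 6 months, non-critical annually."""
    return 6 if critical else 12

def adjusted_interval_months(current: int, recent_results: list[bool]) -> int:
    """Periodic review: recent_results holds recent calibration outcomes,
    True = within tolerance, False = deviation found.
    Thresholds here are assumptions for illustration."""
    if recent_results and all(recent_results):
        return min(current * 2, 12)    # consistently in tolerance: 6 -> 12 months
    if recent_results.count(False) >= 2:
        return max(current // 2, 3)    # repeated deviations: 6 -> 3 months
    return current                     # mixed results: keep current interval
```

For example, a critical instrument starting at 6 months that passes three consecutive calibrations would move to 12 months, while one with two deviations would move to 3 months.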
Eugene, we can adjust the frequency using the trial and error method or age-exploration, as you mentioned. But is there a more efficient way to determine the frequency after only one calibration exercise? Will the results from subsequent calibrations differ significantly?
I believe it is important to maintain consistent calibration frequency, even if a single result is off. By conducting multiple calibration checks, we can determine if the current frequency is adequate or if adjustments are necessary based on observed trends.
When making decisions about instrument calibration programs, it is important to rely on historical or statistical data. Can the performance of a calibration program be accurately assessed based on a single calibration order? It is crucial to determine the amount of data required to make an informed decision. For instance, if an instrument fails calibration inspection, should the calibration interval be shortened, and conversely, should it be lengthened if the next inspection is successful? A guideline should be established for when a series of failures should prompt a change in calibration frequency. Similarly, if an instrument consistently meets calibration standards, is it advisable to extend the calibration interval based on past performance? It is recommended that the calibration history be reviewed annually and adjustments made to frequencies as necessary.
In order to analyze samples effectively, it is important that the items are as identical as possible in make, model, size, and rating, that they operate in the same context, and that they do so independently; that is, one item in the sample should not affect the performance of the others. While pooling failure data to calculate MTBF or failure rate (lambda) is ideal under these conditions, in reality it can be challenging to find samples that meet such strict criteria. Therefore, it is often acceptable to use largely similar items in similar duty conditions. For example, a group of 100 pumps may not all be exactly the same in make, model, size, rating, or operating context.
Despite these approximations, it is still possible to calculate MTBF data and determine test intervals based on the available information. However, it is important to note that having a small sample size can impact the confidence level of the results. It is advisable to be cautious and adjust the test intervals upwards, especially when dealing with limited data. Additionally, it can be helpful to consider age exploration methods when working with small sample sizes.
When considering manufacturer recommendations, it is important to take them with a grain of salt. Manufacturers tend to be conservative in their recommendations as they may not be aware of the specific operating context. It is always best to adjust these recommendations based on your own knowledge and experience to ensure the best results.
Thank you, Vee. I can enhance the data gathered by exploring additional fields to boost the confidence level. What is the ideal confidence level to aim for? Is a 90% confidence level sufficient, and how can we calculate it? How can we determine the amount of data needed to achieve a reliable confidence level?
For reliable statistical analysis of overall equipment effectiveness, refer to Robert Hansen's comprehensive book on the subject; pages 216-218 cover 90% confidence limits. For further insight into the statistics, consider books that cover the Chi-Square test method. The book is published by Industrial Press, New York, ISBN 0-8311-3138-1.
I have always preferred using a confidence level of 90% or higher, typically sticking to the ranges of 90% to 95%.
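As a sketch of how 90% confidence limits can be put on an MTBF estimate using the Chi-Square method mentioned above (assuming a time-terminated observation period; the failure count and operating time are illustrative, and the chi-square critical values are standard table entries, which in practice you would obtain from `scipy.stats.chi2.ppf` or a printed table):

```python
# Two-sided 90% confidence bounds on MTBF from a time-terminated test,
# using the standard chi-square method. Illustrative figures:
# 8 failures observed over 69 unit-years of pooled operation.
failures = 8
total_time = 69.0                      # unit-years (assumed)
mtbf_point = total_time / failures     # point estimate, ~8.6 years

# Chi-square critical values from standard tables:
chi2_upper = 28.869   # chi-square at p = 0.95, df = 2*failures + 2 = 18
chi2_lower = 7.962    # chi-square at p = 0.05, df = 2*failures = 16

mtbf_low = 2 * total_time / chi2_upper    # lower 90% confidence bound
mtbf_high = 2 * total_time / chi2_lower   # upper 90% confidence bound
print(f"MTBF point {mtbf_point:.1f} yr, "
      f"90% bounds [{mtbf_low:.1f}, {mtbf_high:.1f}] yr")
```

Note how wide the interval is with only 8 failures: the true MTBF could plausibly lie anywhere between roughly 5 and 17 years, which illustrates the earlier point about small samples and the cost of chasing higher confidence.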
When collecting field data, volume is a key challenge. The quest for higher confidence requires a larger dataset. While scientists in a controlled laboratory environment may find this easy, it poses difficulties in the field. For instance, when measuring fail-to-danger rates of pressure relief valves (PRVs), determining the composition of your sample can be complex. Should it include all PRVs in the plant, only steam PRVs, or perhaps only specific types of steam PRVs? The options for narrowing down the sample are endless, and it ultimately comes down to a compromise between confidence levels and statistical rigor. It is important to balance the desire for high confidence against the practical limitations of data collection, including the cost of testing larger samples.
One frequent issue I come across is insufficient data or inadequate sample size for validation. Although a Weibull analysis can reduce the need for large sample sizes in many scenarios, I wonder whether people (specifically those in manufacturing facilities) would be open to contributing to a repository of equipment failures across different categories. Would it be viable to compile this data from various sources, Vee? And what reservations might others have about contributing to such a database?
It is indeed possible to gather failure data from multiple facilities for analysis. In the offshore Oil & Gas industry this is done through OREDA, and the Chemical Manufacturers Association has a similar process. The National Power surveys, as well as those from Edison, serve as valuable pooled sources of information in the power industry. IEEE and the RAC in Rome, New York take a comparable approach; the RAC publishes the Nonelectronic Parts Reliability Data (NPRD) handbook. It is important to note that confidence does not depend solely on statistical sample size; we must also ensure that the items being analyzed are essentially the same and operate in similar conditions. A taxonomy, such as ISO 14224 (based on OREDA), can be used to achieve this.

While industry groups can collaborate to pool data, doing so requires significant organization and coordination, since challenges arise from differences in CMMS configurations and in the definitions used. Even within a single company, discrepancies in definitions and protocols can make data collection and pooling difficult, especially when operating contexts vary from site to site. Some have successfully navigated these obstacles, while others are still struggling; it is worth noting that the top performers are the ones who effectively collect and actually use the data they gather.
Joe, it would be greatly beneficial if you could provide us with the links to the sources referenced by Vee promptly. By leveraging existing efforts, we can expedite our progress rather than reinventing the wheel. This proactive approach will help us achieve our goals efficiently.