Burn-in is a useful test for assessing the performance of an accelerometer under extreme conditions, and it helps characterize the device's Mean Time Between Failures (MTBF). For everyday use, the MTBF under typical conditions is sufficient to ensure reliability. To prolong the lifespan of the accelerometer, handle it carefully, avoid severe shocks and high temperatures, and pay special attention to the cables: cables are a common point of failure for accelerometers, so proper care is essential. Regards.
- 09-07-2024
- Yvonne Mitchell
Thank you for your input, but we have not reached our desired MTBF figure yet. Our company supplies sensors to external clients, and we are unsure where the suggested MTBF value of 50,000 hours originated. To meet our customers' requests, we are looking to adjust the MTBF figure accordingly. I am considering an 8- or 24-hour burn-in period to potentially achieve a 10% improvement. Any suggestions or calculations on this matter would be greatly appreciated. Best regards, Terry D.
Can Burn-In Testing Impact the MTBF of Electronic Components?
In simple terms, it is not possible to predict the exact impact of burn-in testing on the Mean Time Between Failures (MTBF) of electronic components. The MTBF is typically calculated by averaging the time to failure of multiple parts in a sample. For example, if a manufacturer's sample of 100 parts all failed between 30,000 and 50,000 hours, with an MTBF of 40,000 hours, the effect of ageing the components by 5,000 hours of burn-in depends entirely on the make-up of the population.
One scenario could be that the manufacturer's sample consisted of two sub-populations: one with failure times under 5,000 hours (sub-population A) and the other with failure times between 30,000 and 80,000 hours (sub-population B), still producing an overall MTBF of 40,000 hours. If a burn-in test is performed on a new set of components representative of this population, the surviving components may have a longer MTBF than the unscreened set.
However, it is crucial to consider factors such as quality control during manufacturing, as poor quality control can lead to defects that cause premature failures (infant mortality). Without additional data from the manufacturer or your own operation, it is challenging to accurately predict how burn-in testing will influence the MTBF of electronic components.
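Sean's two-sub-population scenario can be sketched numerically. The mixture weight below is hypothetical, chosen only so the blended mean lands near the quoted 40,000-hour MTBF; real proportions would have to come from manufacturer data:

```python
import random

random.seed(42)

# Hypothetical mixture (illustration only): sub-population A fails under
# 5,000 h, sub-population B between 30,000 and 80,000 h. The 2/7 weight
# for A is chosen purely so the blended mean lands near 40,000 h.
P_WEAK = 2.0 / 7.0

def sample_lifetime():
    if random.random() < P_WEAK:
        return random.uniform(0, 5000)          # sub-population A
    return random.uniform(30000, 80000)         # sub-population B

population = [sample_lifetime() for _ in range(200000)]
overall_mtbf = sum(population) / len(population)

# A 5,000-hour burn-in screens out sub-population A; survivors are
# credited with their remaining life after burn-in.
survivors = [t - 5000 for t in population if t > 5000]
post_burn_in_mtbf = sum(survivors) / len(survivors)
```

With this split, the survivors of a 5,000-hour burn-in are drawn entirely from the healthy sub-population, so their mean remaining life (around 50,000 hours) is well above the blended 40,000-hour figure.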
I agree with Sean's response, as determining the Mean Time Between Failures (MTBF) requires conducting your own testing or having access to detailed data. The issue with MTBF for end users, as Sean highlighted, is that a published MTBF of 40,000 hours gives no insight into failures occurring at 5,000 or 30,000 hours. This lack of specificity can have a significant business impact: a unit that fails at 5,000 hours may be replaced by another unit prone to failing around the same 5,000-hour mark. It is essential to account for this uncertainty when planning maintenance and reliability.
I understand your perspective as a consumer and the importance of selecting a reliable sensor. It would make sense to provide a Mean Time To Failure (MTTF) rather than MTBF. How can you reassure customers about meeting the minimum MTBF requirement without conducting tests and providing data to validate it? Manufacturers may not include mixed failure modes in their MTBF calculation, but it is possible. Adjusting the quoted figure with data from a representative sample could be a solution. Overall, I agree with Sean's point of view.
We truly appreciate all the help you've provided. Thank you so much, guys! Your assistance means a great deal to us.
In discussing the reliability of sensors, Wally mentions that during the first 5 years their history is very dependable; between years 5-9 the reliability begins to decrease, and after 9 years it becomes significantly less reliable. The quoted Mean Time Between Failures (MTBF) is 47,000 hours, equivalent to approximately 5 years. Under a constant-failure-rate assumption, this translates to a 63% chance of the item failing by the time the MTBF is reached, leaving a 37% probability of it still functioning. In other words, the item is more likely than not to have failed before the 5-year mark.
Quote: The specified Mean Time Between Failures (MTBF) is 47,000 hours, equivalent to approximately 5 years. To clarify, there is a 63% chance that the item has failed by the 5-year mark, leaving a 37% chance that it is still operational. Contrary to popular belief, it is likely that the item has already failed well before the 5-year threshold.

Assuming a constant failure rate, the item maintains the same reliability over time, regardless of whether it is 2, 5, or even 100 years old: as long as it remains functional, replacing it with a new component will not enhance its reliability.

According to Wally, however, there is a decline in reliability between years 5-9, hinting at components that become more prone to failure as they age. If we consider an increasing failure rate, such as a Weibull distribution with a shape parameter significantly above 1, a lower percentage of items may have failed by the 5-year point.

This highlights the drawback of relying solely on MTBF figures from suppliers: from the supplied data alone, we cannot tell whether the failure rate increases with age.
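The 63%/37% figures above follow directly from a constant failure rate, i.e. an exponential time-to-failure model with R(t) = e^(-t/MTBF). A quick check, assuming the quoted 47,000-hour MTBF:

```python
import math

MTBF = 47000.0           # hours, as quoted in the thread
HOURS_PER_YEAR = 8760.0

def reliability(t_hours, mtbf=MTBF):
    # Survival probability under a constant failure rate (exponential model)
    return math.exp(-t_hours / mtbf)

r_at_mtbf = reliability(MTBF)                         # e^-1, about 37% surviving
f_at_5_years = 1.0 - reliability(5 * HOURS_PER_YEAR)  # about 61% failed
```

At exactly 5 years (43,800 hours) roughly 61% have failed; the familiar 63.2% figure applies at the MTBF itself (about 5.4 years).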
Sean's examples illustrate how Mean Time Between Failures (MTBF) is calculated: sum the time to failure for the parts and divide by the number of parts in the sample. For instance, if a manufacturer tests 100 parts and all fail between 30,000 and 50,000 hours, the resulting MTBF is 40,000 hours. Another scenario could involve sub-populations within the sample, where one group fails under 5,000 hours and another between 30,000 and 80,000 hours, yet still yields an overall MTBF of 40,000 hours. This variation may be due to quality-control issues during manufacturing that produce defects in certain components (sub-population A). Both scenarios have the same MTBF of 40,000 hours, but in the second, the risk of failures at 5,000 hours is present. In such cases, how would you decide which population to choose based solely on MTBF data? Making informed decisions with limited information is crucial in these situations.
In discussions with Sean, it was noted that using an MTBF value is most appropriate when assuming a constant hazard rate, i.e. a Weibull shape factor of 1. For other shape factors, it is more effective to work with the Weibull scale factor. For instance, with an increasing failure rate, such as a Weibull distribution with a shape parameter greater than 1, only a small percentage may have failed within 5 years. It is important to understand that regardless of the Weibull shape factor, the survival probability curves all intersect at the time equal to the scale factor (or MTBF). This concept is illustrated in Figure 3.17 on page 43 of the book "Effective Maintenance Management." Ultimately, approximately 63% (about 2/3) of the population is expected to fail by the time the average life or scale factor (MTBF) value is reached. Performing maintenance only after about 10-20% of the MTBF value has elapsed is likely to mean failures have already occurred at unacceptable levels.
In summary, regardless of the shape factor, approximately two thirds (63%) of the population will have failed by the time the scale factor value is reached. But although the scale factor may be the same, the MTBF is a different metric altogether. In a Weibull distribution, the MTBF depends on both the scale and shape factors, so two Weibull distributions with identical scale factors but different shape factors will have different MTBF values. Consequently, a Weibull distribution may exhibit a failure percentage different from 63% at the MTBF point. The main takeaway is that the MTBF only indicates the average time before a component fails; it says nothing about burn-in procedures or replacement timing based on failure rates. Essentially, it does not differentiate between a constant-failure-rate scenario, where burn-in and component replacement are ineffective, and an age-dependent failure-rate scenario, where the impact of burn-in and replacement is uncertain. MTBF alone cannot tell you which situation you are in.
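This distinction is easy to verify with the closed-form Weibull mean, MTBF = scale x Gamma(1 + 1/shape). A minimal sketch (the 47,000-hour scale value is simply carried over from the earlier example):

```python
import math

def weibull_mtbf(shape, scale):
    # Mean of a two-parameter Weibull: scale * Gamma(1 + 1/shape)
    return scale * math.gamma(1.0 + 1.0 / shape)

def fraction_failed(t, shape, scale):
    # Cumulative failure probability F(t) for a two-parameter Weibull
    return 1.0 - math.exp(-((t / scale) ** shape))

SCALE = 47000.0
for shape in (1.0, 2.0, 3.5):
    mtbf = weibull_mtbf(shape, SCALE)
    at_scale = fraction_failed(SCALE, shape, SCALE)  # 63.2% for every shape
    at_mtbf = fraction_failed(mtbf, shape, SCALE)    # varies with shape
```

For shape 1 the MTBF equals the scale and 63.2% have failed at the MTBF; for shape 3.5 the MTBF falls below the scale and only about 50% have failed at the MTBF, even though 63.2% have always failed by the scale value.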
Sean's point in response to the original poster emphasizes that MTBF does not offer relevant information. I agree with Sean and also support points (a) and (b) made. Many believe that MTBF indicates when maintenance should be performed, but the shape of the failure distribution curve is actually the deciding factor. It is crucial for any maintenance program to ensure that maintenance intervals do not exceed 20% of the MTBF value.
The key factor influencing maintenance costs and availability is the coefficient r, calculated by dividing the number of corrective maintenance (CM) tasks by the sum of CM and preventive maintenance (PM) tasks over a prolonged period. The Mean Time To Failure (MTTF) does not directly correlate with the coefficient r. Therefore, Vee's statement about setting the maintenance interval not to exceed 20% of the Mean Time Between Failures (MTBF) may not be applicable. It is essential to determine the maintenance interval from the statistical distribution that represents the predominant failure mode of the specific system. For further insights, I recommend MIL-HDBK-781A, paragraph 5.10.8, a valuable resource on reliability demonstration tests for MTBF. While it may not provide a straightforward solution to the complexities raised by Terry, it offers valuable clues to address the issue effectively.
A follow-up question asks Rui to clarify the logic behind the formula for the ratio of CM tasks to the total of CM and PM tasks. Specifically, what tasks fall under the umbrella of CM in this context: does it cover corrective actions stemming from PdM analysis, inspections, and tests? Do the PM tasks in the formula refer solely to what is typically found in a CMMS, or are other types included? Ultimately, does the sum of PM and CM tasks represent the total tasks completed within a specific time frame?
I didn't see details on how you intended to conduct the "BURN IN" process. Based on my experience, accelerometers commonly fail due to misalignment, damage, and wiring issues. Simply burning them in on a test bench may not accurately predict their failure rate in real-world conditions. It appears that the manufacturer may not test and calibrate the accelerometers. In this case, I would consider choosing a different brand rather than conducting the testing on my own.
You may be correct in your assessment, however, it is unlikely that the original equipment manufacturer or supplier would include installation errors in their MTBF calculation. These errors are considered external failure modes that can be addressed through proper training and inspection processes, rather than being inherent to the product itself.
Richard, you are correct once again. Those sensors were some of the most temperamental I have ever worked with. The most challenging one was an ultra-precise flowmeter used to measure the exact amount of water added to meat during the mixing process. Let's just say it was USDA required!
The coefficient r discussed in my previous post represents the cumulative probability of failure F in mathematical terms. Sometimes, when determining a replacement interval for a part with no critical consequences, the coefficient r (or F) can be set arbitrarily. This decision may lead you to question how many failures (requiring corrective maintenance tasks) you are willing to tolerate for every 100 maintenance tasks (combining corrective maintenance and preventive maintenance) in the future.
It is important to note that this is essentially the same as the cumulative probability of failure. Setting F = 0 or values close to 1 is not logical. For instance, consider a part with Weibull parameters such as location at 750 hours, shape at 2.8, and scale at 2,200 hours. The minimum planning interval is 150 hours, with costs of €2,000 per corrective maintenance task and €1,000 per preventive maintenance task.
Upon analysis, the mean time to failure (MTTF) is calculated as 2,709 hours, with a minimum maintenance hourly cost of €60.66 achieved when the replacement period (RP) is set at 2,250 hours or 83% of MTTF. The determined coefficient r at t = 2,250 hours is 0.29, indicating 29 corrective maintenance tasks for every 100 maintenance tasks (combining corrective and preventive maintenance).
For different RP scenarios, setting RP at 0.2 x 2,709 = 542 hours results in a cost of €184.57 per hour, approximately three times the minimum hourly cost; setting RP at 0.1 x 2,709 = 271 hours yields €369.14 per hour, roughly six times the minimum.
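Rui's worked example can be reproduced approximately with the standard age-replacement cost model, C(T) = (c_CM * F(T) + c_PM * R(T)) / (mean uptime per replacement cycle). The sketch below uses the quoted parameters (location 750 h, shape 2.8, scale 2,200 h, €2,000 per CM task, €1,000 per PM task). It is not Rui's actual spreadsheet, and the absolute cost rate depends on the cost units assumed, but the MTTF of about 2,709 h, the coefficient r of about 0.29 at 2,250 h, and an optimum replacement period near 2,250 h all fall out of the model:

```python
import math

BETA, ETA, LOC = 2.8, 2200.0, 750.0   # Weibull shape, scale, location (hours)
C_CM, C_PM = 2000.0, 1000.0           # cost per CM / PM task, as quoted

def R(t):
    # Three-parameter Weibull survival probability
    return 1.0 if t <= LOC else math.exp(-(((t - LOC) / ETA) ** BETA))

def cost_rate(T, steps=2000):
    # Age-replacement model: expected cost per operating hour when the part
    # is replaced preventively at age T or correctively on failure.
    dt = T / steps
    uptime = sum(R(i * dt) for i in range(steps)) * dt  # mean uptime per cycle
    F = 1.0 - R(T)
    return (C_CM * F + C_PM * (1.0 - F)) / uptime

mttf = LOC + ETA * math.gamma(1.0 + 1.0 / BETA)   # about 2,709 hours
best_T = min(range(900, 4001, 10), key=cost_rate)  # optimum near 2,250 hours
r_coeff = 1.0 - R(2250)                            # about 0.29
```

The coefficient r at the optimum is simply the cumulative failure probability F at the replacement age, which is why roughly 29 of every 100 interventions end up being corrective.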
Best regards,
For more insights on MTTF, I highly suggest checking out this informative article on calculating MTTF: http://www.reliasoft.com/newsletter/2Q2000/mttf.htm Thank you.
Thank you, Rui, for elaborating on some earlier comments made in the discussion. This further emphasizes the points discussed in the thread.
Let me wrap up by discussing the significance of the reliability indicators MTTF/MTBF. In reality, MTTF (at the component level) or MTBF (at the equipment/system level) is crucial from a managerial perspective. It is essential to regularly recalculate MTTF/MTBF within a set time frame and monitor its trend over time. If there is a decrease, identifying and resolving the causes is imperative. Conversely, if there is an increase, investigating the reasons and strengthening them to enhance reliability is necessary. Additionally, the raw data used to determine MTTF/MTBF should be regularly analyzed to update the statistical distribution parameters and make appropriate adjustments to the replacement period in PM policy or the inspection calendar in PdM policy. Regards,
Rui, I appreciate your clarification. As you stated, the minimum maintenance hourly cost is €60.66 per hour when the replacement period (RP) is set at 2,250 hours, or 83% of the Mean Time To Failure (MTTF) of 2,709 hours. Could you please provide the survival probabilities at both 2,709 hours and 2,250 hours in the given example? Thank you.
Hi Vee,
I wanted to remind you about the Weibull probability distribution which is commonly used in reliability engineering. With parameters set at a location of 750 hours, a shape of 2.8, and a scale of 2,200 hours, this distribution can yield some interesting results. For example, the reliability at 2,250 hours is 0.710211, while at 2,709 hours it drops to 0.485480.
Best regards,
Rui, did you provide estimates for F(t) or R(t) statistics?
Hello Vee, the figures I quoted are reliability R(t), that is, survival probability. The calculation is R = e^(-(((2250 - 750)/2200)^2.8)) = 0.710211 at 2,250 hours, and likewise R = e^(-(((2709 - 750)/2200)^2.8)) = 0.485480 at 2,709 hours. If you have any further questions, please feel free to reach out. Thank you!
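For anyone wanting to reproduce these figures, a minimal check of the three-parameter Weibull survival function with Rui's parameters (shape 2.8, scale 2,200 h, location 750 h):

```python
import math

def weibull_R(t, shape=2.8, scale=2200.0, loc=750.0):
    # Three-parameter Weibull survival probability R(t)
    if t <= loc:
        return 1.0
    return math.exp(-(((t - loc) / scale) ** shape))

r_2250 = weibull_R(2250)   # matches Rui's 0.710211
r_2709 = weibull_R(2709)   # matches Rui's 0.485480
```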
Rui, I appreciate you clarifying the maintenance intervention thresholds. From my understanding, you suggest conducting maintenance when the survival probability reaches 71% at 2,250 hrs or 48% at 2,709 hrs. In the former scenario the item is about 29% at risk, and in the latter about 52%. Such high levels of risk may raise concerns in various industries. Could you provide insight into the rationale behind selecting these specific thresholds?
Dear Terry, Attached you will find a document with some insights on burn-in tests, in response to your initial inquiry. Please review the information and let me know if it is helpful. Best regards, Attachment: Terry_D.doc (25 KB) - Version 1
Attached, please find the updated document outlining burn-in test procedures. Kindly disregard the document I shared in my previous post, as it was created late at night. Thank you. Attachment: Terry_D_1.doc (29 KB, 1 version)
Rui, it seems like you have a lot on your plate. A few days ago, I mentioned that I find it unacceptable to have such high mortality rates in any industry. I would appreciate it if you could elaborate on how these numbers were determined. I hope you can provide some insight into my concerns when you have a moment.
- 10-07-2024
- Jessica Freeman
Hello Vee, I apologize for my prolonged silence. Family issues have kept me occupied, but everything has now returned to normal.

In response to your comment about "unacceptable high mortalities in any industry," I want to emphasize that the reliability figures were a result of the specific data used in the calculations. You may be familiar with the concept of optimal time intervals in preventive maintenance (PM); I shared a document on this topic in a previous discussion on December 13th. I have encountered similar situations before, and the outcomes often align with initial predictions, although they may sometimes seem to favor failures. Adjustments are necessary as more data is gathered to understand how components fail over time.

I recommend reading a case study on component reliability and cost analysis by Nicholas A. J. Hastings, one of many insightful cases in the book "Case Studies in Reliability and Maintenance." The study discusses an optimal age-based preventive replacement policy, with preventive replacements accounting for 60% of all replacements; it focuses on axle bushes in ore loaders used in underground mining.

To reduce maintenance costs, it is essential to choose the most suitable maintenance policy and inspection intervals (for predictive maintenance) or replacement intervals (for preventive maintenance) to strike a balance between corrective and preventive maintenance costs. This approach is advocated in many textbooks on maintenance. Thank you, Rui.
I will review your provided links and documents at a later time due to time constraints. However, there is a critical issue at hand - when the maintenance interval exceeds 10-20% of the scale factor, achieving a reasonable survival probability during maintenance becomes unfeasible. It is essential to determine the desired survival probability for different scenarios, such as >95% for process safety risks and >85% for normal production risks. This target should be the focus, rather than simply balancing maintenance and downtime costs. I believe in prioritizing achieving the desired survival probability over cutting maintenance costs.
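Vee's target-survival approach translates directly into a maintenance interval by inverting the survival function. A sketch assuming the three-parameter Weibull from Rui's earlier example (location 750 h, shape 2.8, scale 2,200 h), with the 95% and 85% targets proposed above:

```python
import math

SHAPE, SCALE, LOC = 2.8, 2200.0, 750.0   # Rui's example parameters (hours)

def weibull_R(t):
    # Three-parameter Weibull survival probability
    return 1.0 if t <= LOC else math.exp(-(((t - LOC) / SCALE) ** SHAPE))

def interval_for_target(target_R):
    # Invert R(T) = target_R for the maintenance interval T
    return LOC + SCALE * (-math.log(target_R)) ** (1.0 / SHAPE)

t_safety = interval_for_target(0.95)       # >95% survival target
t_production = interval_for_target(0.85)   # >85% survival target
```

With these parameters the >95% target is met only up to roughly 1,500 hours and the >85% target up to roughly 1,900 hours, both well short of the cost-optimal 2,250-hour replacement period quoted earlier in the thread.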
In the world of management, decisions are typically influenced by specific criteria that hold importance in a given situation. Cost effectiveness often plays a significant role in decision-making, leading to a balance between failure costs and preventive maintenance costs as outlined in standard textbooks. This balance is not subjective, but rather the outcome of precise calculations applied to the operational process. While empirical rules should not be the norm, they may be necessary in the absence of data for estimation purposes. The choice of specific numbers, such as those above 85% and 95%, may raise questions about the reasoning behind such limits and why other figures were not considered. It is crucial to understand the rationale behind these decisions.
In the realm of maintenance strategies, achieving a balance between failure costs and preventive maintenance costs is crucial. However, this simplistic approach overlooks key reliability factors. Imagine boarding an aircraft that followed a cost-based maintenance policy - it's simply unthinkable. Notably, equipment downtime can lead to substantial losses in production and safety, which aren't always directly proportional to the duration of downtime. While one hour of downtime may have minimal impact, extended periods of downtime can trigger significant repercussions. Similarly, the likelihood of an incident occurring due to a malfunctioning safety system increases with prolonged outages, showcasing a non-linear relationship. Maintenance costs, on the other hand, typically increase steadily with time. Understanding the dynamics of maintenance scheduling is essential, as a high failure rate during preventive maintenance implies a growing risk of component failures. This underscores the importance of adhering to threshold values, such as the 5-15% mortality range, to mitigate potential risks effectively.
One crucial consideration to keep in mind is that maintenance costs are significantly lower when a piece of equipment is still operational during preventive maintenance. Should the equipment fail prior to the scheduled maintenance, the resulting maintenance expenses can be double or even triple the usual costs. It is important to note that maintaining a failure rate of over 15% is not a sustainable strategy.
In essence, we share the same perspective, Vee. It is logical to establish a predetermined failure rate for maintenance tasks when costs are not a factor and no safety concerns are present. This applies to various auxiliary equipment in factories, temperature control systems, and elevator systems in buildings. While it may cause inconvenience, the economic impact is minimal. I have long followed the 10% rule in such situations, although the percentage can vary between 10% and 15%. However, if safety is a concern and predictive maintenance is not an option, a zero percent failure rate should be chosen to prevent accidents unless mandated by regulations. When economic costs are a factor, the time intervals for preventive maintenance can be determined by considering all relevant costs to minimize overall expenses. This is where mathematics comes into play, providing a clear framework for decision-making. Ultimately, it is up to management to approve the chosen criteria. I acknowledge that this analysis may lead to a shift from preventive maintenance to predictive maintenance after evaluating the equipment's remaining lifespan. Whether cost-driven or not, I adjust the maintenance frequency over time as more data becomes available, ensuring efficiency and peace of mind. I hope you agree with my approach this time, Vee. Rui
In our discussion, there are points where we agree but also areas where we have differing opinions. It is important to note that when it comes to HSE, acceptable failure rates are not determined empirically but rather by ALARP considerations. Obtaining data on the cost-effectiveness of Corrective Maintenance (CM) versus Preventive Maintenance (PM) policies is challenging due to the lack of control cases, making it impractical to determine an optimum approach based on field data alone. This is why I propose focusing on maintaining a reliability level of at least 85% during PM activities to minimize breakdowns which can cost significantly more than planned work.
When safety is a priority and Predictive Maintenance (PdM) is not an option, it is crucial to aim for a 0% Cumulative Distribution Function (CDF) to prevent accidents, although achieving 0% CDF is only possible at t=0. Moreover, when considering the economic costs associated with PM and CM tasks, as well as opportunity costs that may vary nonlinearly over time, finding the optimal PM intervals involves balancing all relevant costs to minimize overall expenses. However, obtaining field data to plot these cost curves remains a challenge.
It is worth exploring the possibility of transitioning from a PM policy to a PdM policy after conducting an economic evaluation based on the remaining equipment life. The choice between PM and PdM strategies is heavily influenced by the shape parameter (Beta) of the probability density function curve. When Beta is close to 1, PM may not be the most effective approach, while a Beta value significantly larger than 1 indicates that PM can be beneficial, although PdM could also be considered in such cases. In essence, our decision-making process should be guided more by the shape parameter Beta rather than just costs considerations.
- 10-07-2024
- Quentin Foster
In response to your first quote: Your comment seems to overlook the context in which I mentioned setting an acceptable percentage of failures for CM tasks based on empirical data. I also specified that this should be done when maintenance costs are not relevant and there are no safety implications. You mentioned that HSE considerations should be taken into account, which aligns with my point although worded differently. You also questioned why I suggested that breakdown costs are 2 times higher than planned work costs, emphasizing the need to empirically set proportions when actual costs can easily be determined. In reality, this proportion can vary significantly depending on the specific case.
Moving on to your second quote: you mentioned that a 0% failure probability exists only at time t=0, but in certain failure probability distributions the location parameter (representing the minimum life before failure) is greater than zero. Degradation-type failure modes such as erosion or fatigue typically behave this way; only purely random failures would have a location parameter of 0.
Regarding your third quote: You inquired about the feasibility of plotting failure curves with field data, to which I can attest that I have done so numerous times. Field data is meticulously checked and statistically analyzed to determine the best-fit Weibull distribution. Additionally, costs for both corrective maintenance and preventive maintenance tasks are calculated to estimate an appropriate time interval for initiating maintenance activities, which may need adjustments as more field data becomes available.
Finally, in response to your fourth quote: While your observation about the beta parameter is valid, the selection between PM and PdM policies also plays a crucial role. In the realm of RCM, the decision-making process involves assessing the technical feasibility and cost-effectiveness of on-condition tasks before considering restoration or scheduled discard tasks. Therefore, the choice of maintenance strategy is influenced by both beta values and costs, contrary to your assertion that beta values hold more significance. Regards, Rui Assis
Great argument, Rui. Your English proficiency is impressive, it's hard to believe it's not your first language.
I believe that setting preventive maintenance (PM) time intervals should be based on failure modes rather than solely focusing on cost optimization. While the cost implications may vary, it is crucial to prioritize PM based on the risk associated with failure modes to ensure that it is As Low As Reasonably Practicable (ALARP) – technically acceptable and justifiable.
- 10-07-2024
- Vanessa Carter
In my experience, I often encounter situations where the consequences of failure can be quantified in financial terms. However, there are also instances where failures can result in significant hazards beyond just economic impacts. In such cases, it is crucial to prioritize safety measures and risk reduction strategies through the application of various analysis methods like PHA, SHA, IHA, SSHA, O&SHA, HAZOP/HAZID, FMEA, and FTA. These tools are essential for ensuring that the risks associated with equipment failures are minimized to As Low As Reasonably Practicable (ALARP) levels.
Recently, I encountered a scenario involving a heavily used office equipment under a maintenance contract that imposed fines for breakdowns. Despite the absence of immediate hazards, the financial implications of frequent repairs were significant. By analyzing the data and conducting a cost-benefit evaluation, we were able to determine an optimal preventive maintenance schedule that minimized overall maintenance costs. In such cases, it is essential to prioritize economic considerations when setting maintenance intervals, rather than solely relying on ALARP principles.
In conclusion, economic factors should play a key role in determining preventive maintenance schedules, especially when hazards are not a primary concern. It is important to strike a balance between safety and cost-effectiveness in maintenance decisions.
- 10-07-2024
- Rebecca Murphy
Rui, by narrowing down your approach to focus on simpler cases involving only costs (and omitting risky scenarios such as fatalities), it appears more reasonable. By the way, I assume you are aware of the importance of clearly outlining one's assumptions before conducting calculations. This is a common practice found in educational materials.
Rui, can you provide an analysis of the office equipment breakdown? I'm interested in the calculations. Out of 100 interventions, there are 4 Corrective Maintenance (CM) incidents and 96 Preventive Maintenance (PM) incidents. Can you explain how that split between CM and PM was arrived at?
Attached is a document regarding the office case we discussed earlier. In terms of ALARP principles versus financial considerations, my stance has remained consistent throughout our conversations. I apologize if there was any confusion in my previous communication. Thank you for your inquiry. Best regards, Rui Assis. Attachment: Example_of_OTI_in_PM_3.pdf (273 KB) - 1 version.
- 10-07-2024
- Gregory Hughes
Rui mentioned the importance of field data, which should be thoroughly checked and traced back to their origins before more data is collected. This data is not for plotting Weibull curves, as Rui already has experience with analyzing a large number of maintenance records and creating curves. The focus now is on data for the cost optimization curve, specifically looking at the costs associated with allowing items to fail versus maintaining them preventatively. This approach differs from traditional methods and emphasizes the importance of considering costs when making maintenance decisions.
The goal is to determine the feasibility and value of on-condition tasks before considering restoration or scheduled discard tasks. The emphasis on costs is crucial, as the chosen maintenance strategy should align with the level of risk associated with each item. For example, safety-critical items require a lower probability of failure on demand (PFD) compared to production-critical items.
The maintenance approach should be based on managing risks effectively, considering factors such as safety, environment, production loss, maintenance costs, and asset loss. While costs play a significant role, other risks must also be taken into account to make informed decisions. This approach aligns with industry standards and methodologies, such as the SIL methodology and RCM logic, which prioritize age-related tasks for age-related failures.
Ultimately, the key to effective maintenance decision-making lies in following a logical and universally applicable approach. This approach is not based on guesswork but on sound reasoning and data-driven insights.
When it comes to decision-making in Process Reliability or Maintenance Management, textbooks discuss the cost-optimal preventive maintenance time in a similar fashion. Articles from ReliaSoft, for instance, emphasize the savings from replacing components preventively before they fail. Factors like safety, environmental risk, production loss, maintenance expense, and asset loss are the crucial considerations for companies when making these decisions.

It is essential for practitioners to understand concepts like RCM (Reliability Centered Maintenance) and the common failure-rate behaviors over time. Historical data and mathematical tools can aid in interpreting and processing this information effectively; for those interested in delving deeper, the ReliaSoft article linked earlier in this thread offers valuable insights.

The field of Reliability and Maintainability continues to evolve, building on the knowledge accumulated over the years. Analyzing cost optimization curves and data is vital, and the column values provided in the tables can aid in understanding the costs associated with preventive and corrective maintenance. Overall, making informed decisions based on data and analysis is crucial to efficient maintenance practice.
I'm curious how your calculations stack up against the manufacturer's recommended maintenance intervals for real-life equipment. While relying on actual failure data is ideal, starting with a baseline is necessary. Deciding on the initial frequency for preventive maintenance (PM) can be based on mathematical calculations, manufacturer suggestions, past experience with similar equipment, or the WAG method. It's worth exploring how these different methods may result in varied maintenance intervals for a specific type of equipment in practice.
Hello JW, I apologize for the delay, but I will only be able to respond to your question tomorrow. Thank you in advance for your understanding. Regards, Rui
In the discussion about "time intervals or economic life limits in preventive maintenance (PM)", it is recommended to refer to points 3.5.6.3 and 3.5.6.4 along with Appendix B (B-7) of the document NAVAIR 00-25-403 from 01 July 2005, titled "GUIDELINES FOR THE NAVAL AVIATION RELIABILITY-CENTERED MAINTENANCE PROCESS". This document provides insights into the economic approach and conditions for adoption. Many organizations, including NAVAIR, follow a similar economic approach to PM in situations where safety and environmental regulations do not apply.
Regarding reliability information sources, accessing public databases can be costly and not always justifiable unless you are a consultant using them frequently. Manufacturers and system history records are primary sources of data due to the limited availability of failure data. Collaborative efforts like the OREDA database aim to pool data sources while maintaining source anonymity.
In the absence of historical data, experience and scientific knowledge play crucial roles in developing maintenance programs. Statistical methods can be applied where historical data exist, while the elicitation methods outlined in publications such as ASME CRTD-Vol. 41 can help in situations where data are scarce. Where neither data nor previous experience is available, consulting an institution with the relevant scientific expertise can provide valuable insight.
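Where at least a handful of historical failure ages exist, a common first statistical step is a median-rank-regression Weibull fit, which works on small samples and can be done by hand or in a few lines of code. A minimal sketch, assuming complete (uncensored) failure data and using Bernard's approximation for the median ranks; the function name and the sample data are illustrative:

```python
import math

def fit_weibull_mrr(failure_times):
    """Fit a 2-parameter Weibull (shape beta, scale eta) to complete
    failure data by median-rank regression: regress
    ln(-ln(1 - F_i)) on ln(t_i), where F_i is Bernard's median rank."""
    times = sorted(failure_times)
    n = len(times)
    xs, ys = [], []
    for i, t in enumerate(times, start=1):
        f = (i - 0.3) / (n + 0.4)              # Bernard's median-rank estimate
        xs.append(math.log(t))
        ys.append(math.log(-math.log(1.0 - f)))
    # Ordinary least squares: y = beta * x + c, with c = -beta * ln(eta)
    mx = sum(xs) / n
    my = sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    eta = math.exp(mx - my / beta)
    return beta, eta

# Hypothetical failure ages (hours) for six similar pumps:
beta, eta = fit_weibull_mrr([1200, 1900, 2400, 3100, 3700, 4500])
# beta > 1 would indicate wear-out, supporting an age-based PM task;
# beta close to 1 would argue against one.
```

The fitted shape parameter is what matters most for the PM decision, since it tells you whether the failure mode is age-related at all.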
Working with an engineering institution specializing in structural integrity of systems, particularly in process plants, is a common practice in situations requiring scientific knowledge. This collaborative approach helps in making informed decisions and ensuring the reliability of maintenance programs.
When introducing a new piece of equipment, I recommend following the manufacturer's preventive maintenance (PM) program, at least during the warranty period, before extending the intervals based on your own expertise and familiarity with similar equipment. Manufacturers tend to be conservative, since equipment malfunctions would damage their reputation, so it falls to the user to converge on the most effective maintenance strategy as experience accumulates. Regards, Rui.
Hello everyone, I am seeking advice on calculating the Mean Time Between Failures (MTBF) for a portable fire extinguisher and a household freezer. As a newcomer to the field of Reliability, I would appreciate any guidance on this topic. Thank you.
Are there specific regulations governing fire extinguisher inspection in your case? And can a meaningful Mean Time Between Failures (MTBF) even be determined for a fire extinguisher? In either case, the manufacturer's guidelines are the place to start. As for domestic freezers, what exactly do you mean by the term? We have freezers fitted with battery-backed temperature monitoring. How do you perform preventive maintenance on your freezer at home? One common task is cleaning dust and lint off the cooling coils; beyond that, condition monitoring and timely corrective maintenance are the key to keeping a freezer performing well. What are your thoughts on freezer maintenance practice? Once again, I recommend consulting the manufacturer's guidelines.
Hello Wally Gator, thank you for your response. I was considering the fact that both the freezer and the portable fire extinguisher are repairable systems, which indicates that it may be possible to calculate their Mean Time Between Failures (MTBF). How can the MTBF be accurately measured for these systems?
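For a repairable system, the usual point estimate, under the common assumption of a roughly constant failure rate between repairs, is simply cumulative operating time divided by the number of observed failures. A minimal sketch with hypothetical freezer numbers (the figures are made up for illustration):

```python
def mtbf_repairable(total_operating_hours, n_failures):
    """Point estimate of MTBF for a repairable system, assuming a
    constant failure rate: cumulative operating time / failure count."""
    if n_failures == 0:
        raise ValueError("No failures observed; MTBF is undefined "
                         "(only a lower confidence bound can be stated).")
    return total_operating_hours / n_failures

# Hypothetical log: 5 freezers run one year (8760 h) each,
# with 3 compressor failures recorded across the fleet.
mtbf = mtbf_repairable(total_operating_hours=5 * 8760, n_failures=3)
# mtbf == 14600.0 hours
```

The practical catch, as the earlier replies suggest, is collecting that operating-time and failure history in the first place; for a rarely demanded item like a fire extinguisher, failures may only surface at inspection, so the estimate is really per inspection interval rather than per operating hour.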