I work closely with the Maintenance/Engineering department in a food and beverage production facility, focusing on PLCs and their functionality. We are currently facing a perplexing issue that our Maintenance manager has been unable to pinpoint, and I am reaching out for insights and suggestions.

Some background: Click PLCs are used across various packaging equipment to handle data processing. The data is transmitted to Streamsheets, where I am developing trending and reporting tools. Most of the data consists of TRUE/FALSE equipment-running statuses, plus calculations for finished-goods production rate (per hour) and total finished goods produced. Data is collected every 500 ms, with values displayed in a table.

The problem: at irregular intervals (roughly every 10-30 seconds), the production rate and total produced values spike to 2x, 3x, or 4x their actual values before reverting to the correct numbers. The spikes are always exact multiples of the correct value and often occur in clusters; 2x is the most common, followed by 4x. Despite various attempts to troubleshoot by adjusting the data reporting timing, the errors appear only in Streamsheets and never in the PLC programming software. We have also observed that the discrepancies affect only the active production shift; the values for the inactive shift remain unaffected.

The issue persists without a clear explanation, and we welcome any suggestions on why these spikes occur and how to address them effectively. Your insights and advice would be greatly appreciated. Thank you. -BDKPLCASAP
Just experienced a rare 5x event after not seeing one all day.
Issues like this can be hard to diagnose without a troubleshooting or monitoring tool. With the right software watching the data as it arrives, it becomes much easier to pin down where the bad values are being introduced.
A possible cause is that the data scrape from your PLC is catching the calculation halfway through. The PLC programming software always shows a consistent value because, by the time it displays the register, the calculation for that scan has finished; an external read, however, is asynchronous to the scan and can land in the middle of the math. To resolve this, perform the calculation in a separate working register and only transfer the finished result to the register accessed by the reporting system. Try that and see if the spikes go away.
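To make the failure mode concrete, here is a minimal Python sketch (not Click or Streamsheets code; the tag name, the lane count, and the "sum the lanes, then divide" calculation are invented purely for illustration) of an asynchronous poller racing a calculation that briefly leaves an exact multiple of the final value in the published register:

    import random
    import threading
    import time

    published_rate = 0.0      # the "tag" the reporting system polls
    TRUE_RATE = 120.0         # finished goods per hour, per lane (assumed value)
    LANES = 3                 # hypothetical: rate summed over 3 identical lanes, then averaged

    def plc_scan():
        global published_rate
        while True:
            total = 0.0
            for _ in range(LANES):
                total += TRUE_RATE
                published_rate = total          # intermediate multiple is visible here
                time.sleep(0.001)               # other "rungs" run before the divide
            published_rate = total / LANES      # correct final value
            time.sleep(0.002)                   # next scan

    def poller():
        # Stands in for the 500 ms Streamsheets read, sped up so the demo is quick.
        for _ in range(1000):
            value = published_rate
            if value > TRUE_RATE * 1.5:
                print(f"spike: read {value:.0f} = {value / TRUE_RATE:.0f}x the true rate")
            time.sleep(random.uniform(0.0005, 0.0015))

    threading.Thread(target=plc_scan, daemon=True).start()
    poller()

The fix described above amounts to accumulating in a local variable and writing published_rate exactly once, after the divide, so the poller can only ever read a finished number.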
BDKPLCASAP replied that they will give ASF's suggestion a shot and appreciated it: the scrape may be catching the calculation mid-way through, so storing the calculation in a separate register before transferring it to the reporting register is worth testing. They also asked whether adjusting the reporting rate of the PLC would affect how often the issue occurs; it seems plausible that slowing down the reporting rate would reduce the likelihood of catching a calculation in progress, but further testing is needed to confirm.
To BDKPLCASAP's question about the reporting rate: slowing the read frequency may make the bad values show up less often, but the underlying problem is that the read messages arrive asynchronously to the PLC scan. The error occurs when the tag changes while the calculation is underway, for example while a rollover is being handled or while an intermediate value is sitting in the tag. The recommendation is to modify the PLC so that the reported tag only ever receives a final value, written in a single computational block.
In the case of data scraping from your PLC, it's possible that the calculations are getting caught halfway through. The aim should be that the PLC never updates the reported value until the calculation is complete. One way to do that is to perform the calculation in a separate working register and then transfer only the result to the register read by the reporting system; in STL, for example, the intermediate math stays in the accumulators, so the tag read by SCADA is written just once, with the finished value:

    L   VariableThatWillBeReadBySCADA   // load the current value into the accumulator
    L   ConversionFactor                // load the scaling factor
    *R                                  // real multiply; the result stays in the accumulator
    T   VariableThatWillBeReadBySCADA   // a single transfer writes the finished value back
BDKPLCASAP said they would try the suggestion and asked again whether slowing the PLC reporting rate would reduce the likelihood of the errors, admitting limited expertise in this area. ASF answered with a personal experience: as a new programmer building an HMI for a forming line in a food factory, they used several rungs of code to update a single integer tag that the HMI read to display line status. They assumed that the "last rung wins" in determining the final value of the tag, yet once the line was running the status indicator kept showing wrong values; the HMI was catching the tag between rungs far more often than expected. The lesson is to finish the data before handing it to other systems or users, much as a café finishes making a sandwich before serving it, rather than presenting each ingredient as it goes on. As for adjusting the PLC data read rate, reading less often will make the errors less frequent, but the root problem in the PLC is what actually needs to be fixed.
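The "last rung wins" trap can be sketched in Python as well (the status codes and input names are made up; in a PLC this would be rungs or statement list, but the race is the same):

    # Hypothetical status codes and conditions, just to illustrate the pattern.
    RUNNING, STARVED, FAULTED = 1, 2, 3

    def scan_status_last_rung_wins(tag, inputs):
        """Several 'rungs' each write the shared tag; the last true one wins,
        but an HMI read between writes can see a transient status."""
        if inputs["motor_on"]:
            tag["line_status"] = RUNNING      # HMI may catch this...
        if inputs["infeed_empty"]:
            tag["line_status"] = STARVED      # ...or this...
        if inputs["fault"]:
            tag["line_status"] = FAULTED      # ...before the scan finishes.

    def scan_status_buffered(tag, inputs):
        """Build the status in a local variable and publish it once at the end,
        so an asynchronous read only ever sees a finished value."""
        status = RUNNING if inputs["motor_on"] else 0
        if inputs["infeed_empty"]:
            status = STARVED
        if inputs["fault"]:
            status = FAULTED
        tag["line_status"] = status           # single write of the final result

    tag = {"line_status": 0}
    scan_status_buffered(tag, {"motor_on": True, "infeed_empty": False, "fault": False})
    print(tag["line_status"])                 # 1 (RUNNING)

The buffered version is the same advice given later in the thread: work the value in a temporary tag and move it to the HMI-read tag with a single write at the end of the routine.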
ASF added that decreasing the speed at which data is read from the PLC will reduce how often you receive incorrect data, but only because you are reading less: halving the read rate cuts the number of potential bad reads in half, and it also halves the amount of correct data you receive. Buffering the data that is to be transmitted asynchronously is the best practice here.
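A rough back-of-the-envelope check makes that trade-off explicit (the scan time and the size of the vulnerable window below are assumptions, not measurements from this system):

    # Assumed numbers, purely illustrative.
    scan_time_s = 0.010          # 10 ms PLC scan
    bad_window_s = 0.0005        # 0.5 ms per scan where the tag holds a partial value
    p_bad_per_read = bad_window_s / scan_time_s   # chance any one read lands in the window

    for poll_interval_s in (0.5, 1.0, 2.0):
        reads_per_hour = 3600 / poll_interval_s
        expected_bad = reads_per_hour * p_bad_per_read
        expected_good = reads_per_hour - expected_bad
        print(f"poll every {poll_interval_s} s: "
              f"{expected_bad:6.0f} bad and {expected_good:6.0f} good reads per hour")

    # Slower polling shrinks both columns by the same factor; only fixing the
    # PLC side (publish a finished value once) drives p_bad_per_read to zero.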
Are you performing the computations as single operations or as a series of steps? If it's a series, is the result of each step stored back into the same tag? When I chain multiple math operations, I keep the intermediate results in temporary tags and only store the final computation in the designated tag. I take the same approach with BOOL tags: if a bit is examined (XIC) in one place and then written (OTE) across several subsequent rungs, often spanning multiple steps, I work it in a temporary bit first.
Furthermore, the same class of problem shows up at the protocol level. Profibus, for example, provides dedicated functions for retrieving "consistent data" precisely because half of the bytes of a word (or half of a multi-word value) can change while it is being read, leaving the receiver with a nonsensical mixture of old and new data.
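A short Python sketch of that kind of torn read, assuming a 32-bit total transferred as two 16-bit words (the values are arbitrary):

    def split_words(value_32):
        """Split a 32-bit unsigned value into (high word, low word)."""
        return (value_32 >> 16) & 0xFFFF, value_32 & 0xFFFF

    old_total = 0x0001FFFF          # 131071 finished goods
    new_total = 0x00020000          # 131072, one count later

    old_hi, old_lo = split_words(old_total)
    new_hi, new_lo = split_words(new_total)

    # If the reader grabs the low word after the update but the high word
    # before it, it reconstructs a value that never existed in the PLC:
    torn = (old_hi << 16) | new_lo
    print(f"old={old_total}, new={new_total}, torn read={torn}")   # torn read=65536

Consistent-data services avoid this by latching and transferring the whole value as one unit.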
ASF added that their program had many rungs and fairly intricate AOIs, so they had assumed the odds of the HMI catching a tag in the wrong place were negligible; yet as soon as the line started, the status indicator flashed wrong values like a lit Christmas tree, and the frequency of bad catches was astonishing. Returning to the sandwich analogy, the point is that PLC programming is as much about when things happen as about what happens.

Other posters, responding to the original question about the spiking values, asked how the quantities and rates are actually calculated, whether a FIFO is involved, and whether production runs in batches. They recommended walking through the production process and sharing the code or a flowchart for further examination, and again proposed separating intermediate calculations from the final results in the PLC as the likely fix.