Li Yi mentioned that repeating a test currently means manually clicking several buttons in sequence. They choose not to script the test, either because they need to observe the data step by step or because writing the test program would take longer than operating the machine manually. In such cases a unit-test style product such as Siemens' Professional PLCSIM can be beneficial: the user writes out the sequence of events and tells the simulated processor to advance cycle by cycle or by a given time span. However, Li Yi also pointed out that Professional PLCSIM might not offer control over timing: conditional step sequences can be generated, but timing cannot be controlled.
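Purely as a conceptual illustration of that kind of scripted test, the sketch below uses a hypothetical sim object rather than any real PLCSIM API; the point is only that the manual button clicks become a scripted sequence that advances the simulation one scan at a time and then checks the outputs.

    def run_test(sim):
        # 'sim' is a hypothetical simulator handle, not the actual PLCSIM API.
        sim.write_input("Start_Button", True)    # step 1: press start
        sim.advance_cycles(1)                    # execute exactly one scan cycle
        sim.write_input("Start_Button", False)

        sim.write_input("Part_Present", True)    # step 2: simulate a part arriving
        sim.advance_cycles(50)                   # run 50 scans (or a fixed time span)

        # check the expected reaction instead of watching it manually
        assert sim.read_output("Clamp_Close"), "clamp should close after part arrives"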
Additionally, Li Yi highlighted the challenge of recreating strange failures and the importance of collaboration in programming. They emphasized the significance of two individuals working together, with one writing the testing procedures and the other writing the program. This approach ensures that both parties align when it's time to run the software.
Regarding the limited support for signals/tags and data length in TIA Portal Trace, Li Yi suggested using ServiceLab or setting traps as workarounds. They also recommended utilizing instructions that write to the processor's event log for more advanced debugging.
In conclusion, Li Yi expressed a desire to gather insights and suggestions to enhance future Siemens products. They emphasized the importance of familiarizing oneself with all available products from Siemens before diving into development.
AlfredoQuintero (20-08-2024) recalled that before the advent of trace functionality, it was standard practice to capture rare events in software by setting up "traps". This method was crucial in diagnosing a profibus data consistency error that occurred infrequently; understanding the intricacies of the system and process was key to identifying and resolving the issue. Kudos to Manglemender for mastering this challenging task!
Li Yi expressed his plan to develop an enhanced version of Trace that continuously records the values of all tags at the end of each PLC cycle and tracks which subroutines were invoked. The goal is to use the record file in TIA Portal PLCSIM for troubleshooting and debugging. However, approval from his boss is needed before the idea can be implemented.
The proposed system would essentially capture a snapshot of every tag and DB value each cycle and save the data on the PLC until it is downloaded, similar to data logs. For example, a 1211 CPU is currently used to gather data from multiple temperature controllers on a network. With an average cycle time of 10 ms and over 500 tags in use, and assuming they are all UINT, each tag takes roughly 2 bytes stored directly as binary, or 4-5 bytes as CSV text. At that rate the program would need about 6 MB of storage per minute.
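For context, the arithmetic behind that 6 MB-per-minute figure can be reproduced in a few lines. This is a minimal sketch using only the numbers quoted above (500 UINT tags, 2 bytes each, 10 ms scan); the real result will vary with the actual tag count and cycle time.

    # Storage estimate for logging every tag once per scan cycle
    TAGS = 500              # number of UINT tags quoted in the thread
    BYTES_PER_TAG = 2       # a UINT is 2 bytes when stored as raw binary
    CYCLE_TIME_S = 0.010    # 10 ms average scan cycle

    bytes_per_cycle = TAGS * BYTES_PER_TAG        # 1000 bytes per scan
    cycles_per_minute = 60 / CYCLE_TIME_S         # 6000 scans per minute
    mb_per_minute = bytes_per_cycle * cycles_per_minute / 1e6

    print(f"{mb_per_minute:.1f} MB per minute")   # ~6.0 MB/min; CSV text would be 2-2.5x more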
Puddle mentioned that the program would require 6 MB of storage per minute, which is why the manager decided not to proceed with it. With the advancement of computing, the fundamental principle of treating memory as a limited resource has largely been disregarded, simply because it is so easy to obtain more memory or a larger hard drive; web browsers and communication platforms exhibit exactly this behavior.
Li Yi suggested that errors are often caused by "corner cases" or "unforeseen user operations." To address this, it is crucial to interview operators about the error scenario and instruct them to capture photos or videos of the machine and HMI immediately following the error occurrence. By gathering this information, one can simulate and recreate the scenario to identify and resolve the issue. It is also recommended to explore various types of user errors during this process. Additionally, operators should be asked if they can replicate the error scenario for further investigation. Another helpful practice is to log all user inputs on the HMI, which can provide valuable data without the need to log every machine IO with milliseconds resolution.
AlfredoQuintero praised Manglemender for their impressive accomplishment, which required deep insight and understanding of data consistency in the system. Data consistency can be challenging to maintain, especially when communicating with a DP slave. It is crucial to close the data consistency window by reading/writing to the last byte defined in the communication structure.
I have been working with profibus since its inception during the S5 era, and data consistency issues have only occurred twice. The first instance necessitated Siemens' support to resolve an intermittent problem with an analog input. The second occurrence involved sending data to a Bosch Rexroth Motion controller, where a discrepancy between the GSD-defined 10 bytes and the engineer's recommendation of 9 bytes caused occasional duplication of parts.
After conducting thorough analysis and comparing sent data with received data, the root cause was identified as sending 10 bytes instead of 9. This discovery finally resolved the issue that had been causing occasional part duplication.
Manglemender highlighted the importance of data consistency, emphasizing that failing to properly close the data consistency window can lead to unexpected issues. I faced challenges with data consistency in the past, experiencing odd glitches in values. After consulting the manual, I discovered the significance of using the read consistent data function to ensure data integrity. It's crucial to properly manage data consistency when communicating with a DP slave to avoid errors.
Cardosocea inquired, out of curiosity, why it is necessary to learn about all Siemens products and their functionalities. Even as a Siemens employee, it can be challenging to be familiar with every Siemens and partner product. Seeking help from senior expert colleagues has provided some solutions, although not all problems have been fully resolved. Debugging with Trace requires all the input signals, which are not available, making it difficult to simulate the process. Additionally, no automatic method was found to populate measured values into the IO forcing table.
Puddle inquired about the feasibility of creating a snapshot of every tag and DB value per cycle, stored on the PLC until downloaded, similar to data logs. With a 1211 CPU tasked with collecting data from numerous temperature controllers, an average cycle of 10 ms, and over 500 UINT tags in use, roughly 2 bytes per tag would be stored directly, or 4-5 bytes as CSV. That works out to about 6 MB of storage per minute, which is not efficient.
The idea raised with cardosocea was not being pursued because of the manual operations required and the potential stops or pauses involved. The evolution of computing has shifted attention away from managing memory carefully, as browsers and Teams constantly consuming memory demonstrate.
To reduce the amount of data stored, only changed values would be saved, each with a "cycle stamp", in binary form rather than as CSV or txt. Even so, this method would require about 1 MB/s for 1024 4-byte signals sampled at 1024 Hz, assuming only a quarter of the signals change per cycle. While that is still too large, PC-based PLCs are gaining market share: companies like Beckhoff and Keyence offer PLCs with fast cycles and playback functions aimed at experienced automation engineers. Collaboration with seasoned engineers is essential to test and refine these ideas.
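As a rough illustration of the change-only approach, the sketch below packs one binary record per changed signal per cycle. The record layout (4-byte cycle stamp, 2-byte signal index, 4-byte value) is an assumption, and it shows that the stamp and index actually add overhead beyond the raw 4-byte values used in the estimate above.

    import struct

    def log_changes(log_file, cycle, current, previous):
        # Append one record per signal whose value changed this cycle.
        # Assumed layout: 4-byte cycle stamp, 2-byte index, 4-byte value (10 bytes/record).
        # 'log_file' must be opened in binary append mode.
        for index, value in enumerate(current):
            if previous is None or value != previous[index]:
                log_file.write(struct.pack("<IHi", cycle, index, value))

    # Bandwidth check with the figures quoted in the post:
    signals, rate_hz, change_ratio, value_bytes = 1024, 1024, 0.25, 4
    print(signals * change_ratio * value_bytes * rate_hz / 1e6, "MB/s")   # ~1.05 MB/s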
JesperMP suggested verifying with the operators to see if they can replicate the error scenario. Additionally, consider logging all user inputs on the HMI, which may be more efficient than logging all machine IO with millisecond resolution. However, despite these efforts, the error scenario could not be reproduced.
Manglemender again highlighted the importance of data consistency when communicating with a DP slave: it is crucial to close the data consistency window by accessing the last byte defined in the communication structure. Li Yi responded that this is one reason why keeping detailed records for an extended period is essential; by importing the recorded input values into a program for simulation, the data becomes easier to monitor and analyze.
Li Yi mentioned that one of the reasons he wants to document everything so extensively is to analyze the data over a long period of time; he prefers to import the recorded input values into a program for simulation rather than observe them manually. However, it is important to consider whether to record the process image or the actual peripheral state, as they may not be the same at any given moment. Signal propagation delays in the hardware, the timing of mechanical components, and the choice of inputs with slow conversion times in PID loops all have to be taken into account. The processing power and volume of data involved may make the proposal impractical without a separate hardware device connected via the backplane bus.
Furthermore, implementing such a system could dynamically affect the performance of any connected system and potentially mask certain issues. Local data poses its own challenges as dynamically allocated memory contents may not be reliable, leading to problems if a value is used without prior assignment. Changes in the order of execution can result in different values allocated to local data within a function, causing issues that may not have been apparent before. It is crucial to carefully navigate these potential pitfalls when considering a comprehensive data recording system.
Li Yi mentioned a way to reduce the amount of data: save only the changed values with a "cycle stamp", since not all data changes between two consecutive cycles. While saving all digital I/O with a timestamp is feasible, cyclic saving without high resolution may suffice for analog I/O. This is achievable and could potentially be handled by existing commercial products, such as one designed for S7-300/400 systems. The real challenge lies in the program's internal DB memory, since that data can change multiple times within a single cycle. It may not be practical to reload the logged data into the PLC for simulation purposes; instead, the logged data has to be analyzed using knowledge of the program. In a similar incident, a PLC was debugged by interpreting a video showing the I/O LEDs on the main PLC rack of a small machine. (*The commercial product referenced used additional blocks in the PLC program.)
JesperMP shared a fascinating story about debugging a PLC by using a video of the I/O LEDs on the main PLC rack of a small machine. This method, although requiring some effort, proved to be effective. When working with older machines, this unconventional approach can be a valuable tool. It may take some time to get the old laptop up and running, but the end results are worth the effort.
In discussing the recording process, Manglemender raised a valid point about capturing the actual peripheral state versus the process image, and noted that signal propagation delays in the hardware can also affect the data. However, the main goal remains obtaining a timeseries data file documenting the PLC's initial values and incoming values. While the complexities of signal propagation are acknowledged, the focus is on logic operations rather than intricate process control, with a varying number of I/Os involved.
Regarding the challenge of saving data within the PLC's internal DB memory, JesperMP highlighted the difficulty of capturing every change reliably due to potential data fluctuations within a single cycle. While it may seem unachievable to load logged data back into the PLC for simulation purposes, a solution involving a high-performance CPU and specialized runtime programming could potentially address this issue. This approach would involve inserting additional instructions during program compilation and possibly utilizing spare capacity within the PLC.
In considering potential solutions, rewriting the PLC program in Visual Studio and using SPS or IBA recorder data may offer a viable path forward. Despite the organizational challenges involved in this approach, recording every variable change is possible with the right tools and programming expertise. Ultimately, the focus is on finding a practical way to capture and analyze the data, even if it involves unconventional methods. Appreciation was expressed for the valuable insights and suggestions provided in the discussion.
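Purely as an illustration of that replay idea, a sketch on the PC side might look like the following. The CSV column names, the rewritten logic function, and the consistency check are all hypothetical; a real version would re-implement the relevant PLC logic from the SCL/LAD source and check it against the recorded behaviour.

    import csv

    def station_logic(inputs, state):
        # Hypothetical re-implementation of one small piece of the PLC logic.
        state["start"] = inputs["part_present"] and inputs["clamp_closed"]
        return state

    def replay(log_path):
        state = {"start": False}
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                # one row = the recorded inputs of one scan cycle
                inputs = {k: v == "1" for k, v in row.items() if k != "cycle"}
                state = station_logic(inputs, state)
                # flag cycles where the rewritten logic misbehaves
                if state["start"] and not inputs["safety_ok"]:
                    print(f"cycle {row['cycle']}: start issued without safety_ok")

    # replay("recorded_inputs.csv")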
What are the common issues in the testing process? To revisit the initial post: to run a test again, a series of buttons must be clicked manually. Scripting the test is not always worthwhile, because the data has to be monitored in real time, so manual operation can be more efficient. It is important to understand that testing a machine, identifying errors, and making adjustments take more time than writing the PLC code itself.
Strange failures may occur and can be difficult to reproduce. By logging alarms, state machine states and transitions, essential process values, operator inputs, and operator statements, it becomes possible to identify and address the errors that arise. Even with the TIA Portal Trace function available, its limits on supported signals/tags and data length make debugging challenging.
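As a minimal sketch of that kind of structured logging (the field names and the CSV backend are assumptions; on a real system this would more likely feed a database or the HMI's own logging), each event is written as one timestamped row:

    import csv, time

    class EventLog:
        def __init__(self, path):
            self._file = open(path, "a", newline="")
            self._writer = csv.writer(self._file)

        def record(self, source, event, detail=""):
            # one row per event: wall-clock time, source, event name, free-form detail
            self._writer.writerow([time.strftime("%Y-%m-%d %H:%M:%S"), source, event, detail])
            self._file.flush()

    log = EventLog("machine_events.csv")
    log.record("state_machine", "transition", "FILLING -> CAPPING")
    log.record("alarm", "raised", "E102 vacuum low")
    log.record("operator", "hmi_input", "reset button pressed")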
Using the various methods and tools available, such as trace functions and debugging techniques, together with robust programming practices, can make troubleshooting machine errors much more effective. While rewriting the PLC program in Visual Studio may be considered, it is essential to acknowledge the fundamental differences between PC programming and PLC programming. Ultimately, the programmer's confidence in a given language should dictate the approach taken to address the issues that arise.
Li Yi explained that when it comes to debugging, not having every input signal can make it challenging to simulate the process effectively. Analyzing every signal can lead to data overload and can be done in steps, especially when dealing with PLCs and complex issues. Sometimes it takes time to find the root cause of a problem, and even then, not all bugs can be fixed, especially in machines that are focused on profitability rather than being tested extensively.
As a Siemens employee, Li Yi admitted that it can be difficult to be fully knowledgeable about every Siemens and partner product. It's common not to know everything, especially considering the wide range of products related to electricity that Siemens offers.
Li Yi also mentioned that reducing the amount of data collected can be beneficial. Not all data changes within two cycles, so saving only the changed values with a "cycle stamp" helps manage the data more efficiently. However, adding too much logic, and therefore cycle time, just to store this data can sometimes create more problems than it solves.
When it comes to PC-based PLCs, Li Yi expressed some skepticism because of their licensing, reliance on Windows, and overall complexity compared with traditional "plug and play" PLCs. In Li Yi's view, PC-based PLCs have been around in the industry for some 20 years without gaining much traction.
In conclusion, Li Yi highlighted the importance of logging the state of certain databases when errors occur. This can help troubleshoot the issue effectively, especially if the operator's actions change the machine's state before a technician arrives. Having the ability to access these states quickly can be invaluable in resolving problems efficiently.
JesperMP raised the question of how such issues are troubleshot, comparing it to the complexities of working with PLCs. The discussion highlighted the role of testbenches in automatic signal measurement and in IT programming, and emphasized that the PLC program is central to system operation. Errors in PLCs often stem from program logic, requiring a systematic approach to debugging. The challenges of data overload and the need for step-by-step analysis were also discussed, showing how difficult diagnosing issues in PLCs can be. The conversation further touched on the limitations of simulating real-world data in TIA Portal PLCSIM and the frustration of troubleshooting machines under heavy production duty. The insights shared by cardosocea, an automation veteran, were appreciated for their valuable perspective on PC-based PLCs.
I understand that the machine has a short cycle time and sometimes encounters error situations. To resolve these errors, it is important to identify the root cause. In addition to the basic debugging techniques previously mentioned, consider implementing cyclic I/O logging in the PLC program. This logging can be set to stop when an error occurs, either by manual input from the operator or by detecting the error situation within the code. You can use the same code to reset the sequence as you described earlier, ensuring that the machine continues to function while capturing error information. Furthermore, a notification can be sent to a central SCADA system alerting the programmer to investigate the error promptly.
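A minimal Python model of that cyclic logging idea is sketched below; on the real machine this would of course live in the PLC program itself (for example an SCL block writing into a DB), and the buffer depth and snapshot contents here are assumptions.

    from collections import deque

    class CyclicIOLogger:
        def __init__(self, depth=2000):          # e.g. 2000 scans ~ 20 s at a 10 ms cycle
            self.buffer = deque(maxlen=depth)    # ring buffer: oldest scans are overwritten
            self.frozen = False

        def scan(self, cycle, io_snapshot, error_detected):
            if self.frozen:
                return                           # keep the pre-fault history intact
            self.buffer.append((cycle, dict(io_snapshot)))
            if error_detected:                   # detected in code or via a manual operator input
                self.frozen = True               # here one could also notify a central SCADA system

    logger = CyclicIOLogger()
    logger.scan(1, {"I0.0": True, "Q0.1": False}, error_detected=False)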
Li Yi discussed his experience working with testbenches, programming test procedures, and measuring signals, as well as with pure IT programming. Testbenches are automated systems that can measure hundreds of signals at rates above 1024 Hz. In IT programming there are "automatic testing tools" that exercise software, particularly application software, automatically. Li Yi envisions testing machines such as assembly stations in a similar way: start the procedure before 5 PM, leave the computer on overnight, and check the report the next morning. The concept can be likened to using the PLC in a DAQ-style fashion.
Regarding PLC error logs, Siemens offers various tools, but errors can also stem from issues in the program logic or from unintended signals from devices like proximity switches. Li Yi emphasized the importance of the PLC program, as it is the core of the system. With experience using Visual Studio and VB/C# to automatically feed in timeseries sensor data, Li Yi is able to write control logic, including adaptive, model-predictive, and fuzzy controls.
In a situation constrained by business-related limitations, Li Yi hinted at a high-production machine that automatically resets and continues operation when it encounters errors, leading to frustrating scenarios. To address this, capturing the state of critical data blocks at the time of failure could prove beneficial. Li Yi also suggested exploring logging devices from NI (National Instruments) that can capture data at high rates from S7-319 systems.
Li Yi indicated that the insights on PC-based PLCs are highly valued and that he plans to discuss them internally. In reply, it was stressed that the opinions shared are subjective and not necessarily backed by concrete documentation. As an experienced automation professional nearing retirement, the poster acknowledged the resistance to adopting better systems and standards in industrial automation; the goal is to avoid becoming stagnant and to embrace positive changes in the field, and they wished Li Yi good luck in navigating these challenges.
I am truly grateful for the abundance of valuable suggestions provided. As part of my role, I strive not only to resolve issues but also to proactively prevent them and improve processes. Through interacting with seasoned automation engineers, I have gained insight into effective problem-solving methods and ways to optimize our existing portfolio. This learning experience has been invaluable, particularly for navigating uncommon failures and moving ahead. For now, I will wrap up this discussion, but I plan to stay engaged with this forum, regularly sharing insights and exchanging viewpoints in the future.