I am facing an issue at my workplace, a large industrial plant with numerous control cabinets. One cabinet, housing a 5580 (1756-L83E), two 1756-L73 controllers, and seven 1756-EN2T modules in a 16-slot rack, is logging thousands of minor faults per second. The root cause of the repetitive fault (T:04 C:51) has been identified as data being pushed into an array beyond its bounds.

I am investigating whether these faults could be causing the primary EN2T module, which connects to a cloud service, to drop communications at random. This has produced significant downtime about five times in the last five months, with each incident lasting from one hour to a few hours. The logic changes made six months ago likely introduced the array overflow. We plan to rectify the logic, and there is also a suggestion to replace the EN2T module, but I intend to postpone the replacement until the logic is fixed so we can isolate the root cause of the communication disruptions.

IT inspections have confirmed that the network infrastructure is in good condition, with no issues detected in the cables, switches, or other devices. This leads me to wonder whether the ongoing faults, combined with the controller's heavy processing workload, could be contributing to the network problems.
Minor Fault T:04 C:51 (Program Fault: LEN value exceeds DATA limit) may not be the root cause of your communication issues. It's possible that the controller is transitioning to a fault routine and resetting, or that the array-related instruction is not writing the expected data. Unless you're also seeing watchdog errors, high execution times, or low memory, these two issues may not be connected. While the local wiring tested by IT appears to be in good condition, their diagnostics may have stopped at the cabinet door or at the Internet router. If this machine were mine, I would install a Raspberry Pi (or an OnLogic / Kunbus equivalent) near the switch to run a Node-RED instance that continuously exercises the path to the cloud service via PING, TCP keepalive, or login, while also polling the ControlLogix for tag values. Logging both successes and failures is key to this kind of troubleshooting.
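The same monitoring idea can be sketched without Node-RED: a small Python probe running near the switch that attempts a TCP connection on a schedule and logs each success or failure. This is a minimal sketch, not the poster's setup; the host, port, interval, and cycle count below are placeholders.

```python
import socket
import time
from datetime import datetime, timezone

def check_tcp(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a TCP connection; True on success, False on refusal/timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def monitor(host: str, port: int, interval: float, cycles: int):
    """Poll the endpoint `cycles` times and return timestamped results."""
    results = []
    for _ in range(cycles):
        ok = check_tcp(host, port)
        stamp = datetime.now(timezone.utc).isoformat()
        results.append((stamp, ok))
        print(f"{stamp}  {'UP' if ok else 'DOWN'}  {host}:{port}")
        time.sleep(interval)
    return results

if __name__ == "__main__":
    # Hypothetical cloud-gateway endpoint; replace with the real one.
    monitor("192.0.2.10", 502, interval=5.0, cycles=3)
```

A log of both UP and DOWN results, timestamped from an independent device, lets you line the drop-outs up against the controller's fault history instead of guessing.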
Errors from string manipulation, such as exceeding the string size or referencing elements that do not exist, are common with instructions like MID, INSERT, and DELETE. I have seen this issue in older Modbus TCP AOIs, but it does not occur in newer versions of the software.
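For illustration: a Logix STRING tag is a fixed-size DATA array plus a LEN member, and the T:04 C:51 fault corresponds to LEN being set larger than the DATA size. Below is a toy Python analogue of that failure mode and of a clamped write that avoids it. Only the 82-character default DATA size comes from the platform; everything else here is hypothetical.

```python
DATA_SIZE = 82  # default DATA array size of a Logix STRING

class StringTag:
    """Toy model of a Logix STRING: fixed-size DATA plus a LEN member."""
    def __init__(self, size: int = DATA_SIZE):
        self.size = size
        self.data = bytearray(size)
        self.len = 0

def unchecked_write(tag: StringTag, text: bytes) -> None:
    """Mimics logic that sets LEN without checking the DATA size --
    the condition that trips the Type 04 Code 51 minor fault."""
    tag.len = len(text)  # LEN may now exceed the DATA size
    tag.data[: tag.size] = text[: tag.size].ljust(tag.size, b"\x00")

def checked_write(tag: StringTag, text: bytes) -> bool:
    """Clamp the copy so LEN can never exceed the DATA size."""
    n = min(len(text), tag.size)
    tag.data[:n] = text[:n]
    tag.len = n
    return n == len(text)  # False signals truncation

tag = StringTag()
unchecked_write(tag, b"x" * 100)
print(tag.len > tag.size)  # → True: the fault condition is present

tag2 = StringTag()
complete = checked_write(tag2, b"x" * 100)
print(tag2.len, complete)  # → 82 False: clamped, truncation flagged
```

The same guard in ladder or structured text (compare the source length against the destination's DATA size before the MID/INSERT) is what removes the fault at its source, rather than suppressing it in a fault routine.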
I hadn't realized that integrating this large automation system with the "cloud provider" could involve the Modbus/TCP protocol.
Thank you for the insightful responses. There are four separate networks, each linked to its own EN2T through an SST Profibus ETH module. I will dig into this shortly, but I am grateful for the valuable feedback received thus far. Thank you!
If you are encountering issues with the Modbus TCP AOI, you may want to refer to technote BF28611 (Access Levels: TechConnect). Even if that technote is not directly relevant to your situation, the root of the problem is likely string handling.
It sounds like you've done a thorough job identifying the likely root cause of your problem. While it's reasonable to suspect that the persistent minor faults and heavy processing load are behind your network issues, I wouldn't rule out the possibility that the EN2T module needs replacement. I agree that tackling the most likely cause first, the logic change that introduced the array overflow, is the right move, but don't dismiss a hardware problem if the communication losses persist after the logic is corrected. Until then, I would monitor performance and incident data closely and try to correlate specific activities or events with the communication losses.
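The correlation step above can be as simple as overlaying the two timelines: fault-burst timestamps on one side, communication-loss windows on the other. A minimal sketch with entirely hypothetical timestamps (none of these values come from this thread):

```python
from datetime import datetime, timedelta

def events_in_window(events, start, end):
    """Return the events that fall inside [start, end]."""
    return [e for e in events if start <= e <= end]

def correlate(fault_bursts, outages, lead=timedelta(minutes=10)):
    """For each outage window, count fault-burst events inside the window
    plus a lead time before it, to see whether faults precede the drops."""
    report = []
    for (start, end) in outages:
        hits = events_in_window(fault_bursts, start - lead, end)
        report.append((start, end, len(hits)))
    return report

# Hypothetical data: fault-burst timestamps and one comm-loss window.
t0 = datetime(2024, 1, 1, 8, 0)
faults = [t0 + timedelta(minutes=m) for m in (5, 55, 58, 120)]
outages = [(t0 + timedelta(hours=1), t0 + timedelta(hours=2))]

for start, end, n in correlate(faults, outages):
    print(f"outage {start:%H:%M}-{end:%H:%M}: {n} fault burst(s) nearby")
```

If the outages consistently show fault activity in the lead window and quiet periods do not, that strengthens the logic-fault hypothesis; if they don't line up, the EN2T hardware moves back up the suspect list.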