I'm running Logix 5000 v32 on a 1769-L16ERI PLC and used the FBC instruction to scan an array of DINTs for true bits. The instruction works as intended, but one thing about it seems illogical. The Result array stores the bit position of every true bit found in the source array, so it can overflow if all bits are true. That means the Result array has to be far larger than the source array: a 100-DINT source array holds 3,200 bits, so it needs a 3,200-DINT Result array to guarantee no overflow and no PLC fault. Is there a specific reason for this design, or am I misunderstanding how to use the instruction effectively? In practice it would be more sensible to size the Result array for the maximum number of true bits you actually expect, to avoid problems down the line.
The .LEN attribute in the CONTROL structure (the first FBC Length parameter) defines the number of bits to compare. The .LEN attribute in the RESULT structure (the second FBC Length parameter) determines the maximum number of non-matching bits that can be recorded. The instruction stops without error when result.POS equals result.LEN, even if the number of bits tested (control.POS) is still less than the number of bits specified for testing (control.LEN). In other words, you can size the result array for the number of hits you expect; the instruction simply stops recording when the result array fills rather than overflowing.
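FBC is ladder-only, so here's a minimal Python sketch of that behavior as I understand it, just to make the two Length parameters concrete. The function name fbc_model and its signature are my own invention, not Rockwell's implementation; it models comparing source against reference bit by bit, bounded by control.LEN, and stopping early once result.LEN mismatch positions have been stored.

```python
def fbc_model(source, reference, control_len, result_len):
    """Model of FBC semantics (an assumption, not Rockwell's code):
    compare source and reference bit-by-bit over control_len bits,
    recording mismatched bit positions until the result array is full."""
    result = []          # bit positions of mismatches (the result array)
    control_pos = 0      # bits compared so far (control.POS)
    for pos in range(control_len):
        word, bit = divmod(pos, 32)           # DINTs are 32 bits wide
        src_bit = (source[word] >> bit) & 1
        ref_bit = (reference[word] >> bit) & 1
        control_pos = pos + 1
        if src_bit != ref_bit:
            result.append(pos)                # store the bit position
            if len(result) == result_len:     # result.POS == result.LEN
                break                         # stop early, no fault
    return result, control_pos

# A 2-DINT source with four true bits, compared against zeros,
# but a result array sized for only 3 hits:
hits, tested = fbc_model([0b1011, 0], [0, 0], control_len=64, result_len=3)
print(hits, tested)   # [0, 1, 3] 4 -- stops once the result array fills
```

The early stop is the key point: undersizing the result array drops the extra hits but never faults the controller.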
Yes, that solved it. It may seem inefficient to need the extra array, but it does serve a purpose. One caveat I noticed: the result array is not populated sequentially, and it is not cleared when all the bits return to zero. Thanks for the help.
From my experience with PLCs, I believe the design choice you mention is intentional: the arrays behave this way because of how Logix 5000 manages memory. That said, the overflow concern is real, and it comes down to planning. Either size the result array for the maximum number of true bits you could ever see, or add conditional checks in your code to stop before the result array overflows. As you've pointed out, practical applications demand that kind of foresight.
You've certainly done your homework, and you make an excellent point: in a real-world application the result array size should ideally reflect the maximum expected number of true bits. That just isn't how FBC is designed. The likely reason is how the instruction stores its results: rather than merely flagging that true bits exist, it records a position for every hit so you get a complete picture of the array, hence the large result array. You can work around the constraint with extra logic that monitors the number of true bits and intervenes before overflow, something like the sketch below. Not elegant, but it addresses the practical issue you raised.
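For illustration only, here's a rough Python sketch of that guard idea. The helper names (count_true_bits, safe_scan) and the RESULT_LEN threshold are hypothetical, not anything built into Logix; the point is just to count the set bits first and refuse (or alarm) if the result array couldn't hold them.

```python
def count_true_bits(words):
    """Count set bits across an array of 32-bit DINT values.
    (In ladder you'd loop and test bits; this is just a model.)"""
    return sum(bin(w & 0xFFFFFFFF).count("1") for w in words)

RESULT_LEN = 64   # hypothetical size of the result array

def safe_scan(source):
    """Guarded scan: bail out if the result array would overflow."""
    hits = count_true_bits(source)
    if hits > RESULT_LEN:
        # On the PLC you might latch an alarm bit here instead.
        raise OverflowError(f"{hits} true bits exceed result size {RESULT_LEN}")
    # Safe to run the FBC-equivalent scan now.
    return [pos for pos in range(len(source) * 32)
            if (source[pos // 32] >> (pos % 32)) & 1]

print(safe_scan([0b1011, 1 << 31]))   # [0, 1, 3, 63]
```

Given the behavior described above (FBC stops cleanly once result.POS reaches result.LEN), this guard is arguably belt-and-suspenders, but it lets you raise an alarm instead of silently dropping hits.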
That's a really interesting point you've raised! It does seem excessive for the Result array to accommodate every possible true bit, especially when most of us are trying to economize on memory. The larger size is probably a design decision to cover every application, but a more flexible approach where you define the size based on expected results would be welcome. Have you thought about adding some pre-check logic before the FBC to estimate how many true bits you'll find? That could help keep the array sizes manageable.
Great observation! It does seem counterintuitive that the result array must be so much larger than most practical applications will ever need. The design presumably has to handle the worst case where every bit is true, but I agree it leads to inefficiency. It would be friendlier if the instruction could size the result array from the actual input, or at least ship with guidance or defaults for typical use cases. Have you considered a check that limits the number of true bits found before they can overflow? That might balance performance with safety.
That's a really interesting point! The FBC design probably favors flexibility over efficiency, which can be a double-edged sword. It does seem counterintuitive that the Result array has to be so large when you expect only a fraction of the bits to be true. Sizing the Result array by expected true bits, as you suggest, would streamline things and reduce overflow risk; it might be worth raising with Rockwell for a future update, since a more intuitive approach would benefit a lot of users. Have you thought about wrapping this in a custom routine, an Add-On Instruction perhaps, to manage it more cleanly?