RetiQlum2 recommended keeping all programs in the same thread to prevent interruptions of important processes such as PID loops. A PID loop that runs cyclically, for example every 100 ms, can otherwise be pre-empted mid-calculation and end up working with a mix of old and new data. To avoid this, keep the PID's I/O mapping inside the PID task and the main I/O mapping inside the main cycle. This keeps operation smooth and prevents data inconsistencies.
To keep operation smooth and avoid interruptions, programs within the same thread must be scheduled carefully. If an I/O mapping program is called in the middle of a PID loop, for example, it can corrupt the calculation: tasks with higher priority interrupt tasks with lower priority, especially when the I/O handling routine runs on a time basis while the PID loop runs continuously or on its own time basis. For more on managing tasks within the same thread, see page 9 of Rockwell Automation publication 1756-PM005.
To avoid interruptions during PID loops, plan their scheduling carefully. When an I/O handling routine operates on a time basis while the PID loop runs continuously or on its own time basis, interruptions can result; remember that high-priority tasks always pre-empt low-priority ones. More information is in the Rockwell Automation documentation at https://literature.rockwellautomation.com/idc/groups/literature/documents/pm/1756-pm005_-en-p.pdf (page 9).
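The buffering idea described above can be sketched in ordinary code. This is a minimal illustration, not PLC code: inputs are snapshotted once at the top of the task, and the tag names (`process_value`, `setpoint`, `control_output`) and PI gains are illustrative assumptions, not from the thread.

```python
# Minimal sketch of per-task I/O buffering; tag names and gains are
# illustrative assumptions, not actual PLC tags.

def pid_task(live_io: dict, state: dict) -> float:
    """One 100 ms cycle of a PID-style task."""
    # 1. Buffer: snapshot the live inputs once, at the top of the task,
    #    so the rest of the calculation sees one consistent set of
    #    values even if the live image is updated mid-scan.
    pv = live_io["process_value"]
    sp = live_io["setpoint"]

    # 2. Compute: a bare-bones PI calculation on the snapshot only.
    error = sp - pv
    state["integral"] += error * 0.1               # 0.1 s task period
    output = 0.5 * error + 0.2 * state["integral"]

    # 3. Write the result back in one place, at the end of the task.
    live_io["control_output"] = output
    return output

state = {"integral": 0.0}
io = {"process_value": 40.0, "setpoint": 50.0, "control_output": 0.0}
pid_task(io, state)
print(io["control_output"])  # 5.2 with these made-up gains
```

The point is the shape, not the math: one read phase, one compute phase, one write phase per cycle, so no other task can slip a fresh value in halfway through the calculation.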
To ensure that my I/O mapping program isn't interrupted during critical calculations, I make sure all programs run in the same thread. The User Interrupt Disable/Enable (UID and UIE) instructions can also prevent interruptions while important logic rungs execute; they have worked well for me when performing vital calculations across multiple rungs of code.
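In a general-purpose language, the closest analogue to bracketing critical rungs with UID/UIE is holding a lock around a multi-step update. A rough sketch, assuming illustrative names (`position`, `velocity`) that are not from the thread:

```python
import threading

# While the lock is held, no other thread can enter the protected
# section, so the multi-step update is seen either entirely old or
# entirely new -- loosely analogous to UID ... UIE around rungs.

_lock = threading.Lock()
shared = {"position": 0.0, "velocity": 0.0}

def update_motion(pos: float, vel: float) -> None:
    with _lock:                    # "UID": block interruption
        shared["position"] = pos   # several related writes...
        shared["velocity"] = vel   # ...become one atomic unit
                                   # "UIE": lock released on exit

def read_motion() -> tuple:
    with _lock:                    # readers take the same lock
        return (shared["position"], shared["velocity"])

update_motion(10.0, 2.5)
print(read_motion())
```

The analogy is loose: UID/UIE disables interruption outright, while a lock only excludes threads that also take the lock, but the intent is the same.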
True multithreading involves tasks running simultaneously without blocking each other. This parallel execution can pose challenges like managing access to shared resources and variables.
RetiQlum2 inquired about the multi-threading capabilities of the L8X series. Specifically, they asked if individual tasks involving multiple programs are multi-threaded and whether the entire task is executed in one scan or if it is performed on a program-by-program basis. Understanding this is crucial to avoid any problems arising from asynchronous data transfers during task execution. The goal is for a task to efficiently run its list of programs in the order they are scheduled, all in a single scan.
It is often argued that tasks which halt other tasks during their execution are not true multithreading, which implies simultaneous execution and brings challenges such as coordinating access to shared resources and variables. Note, however, that multithreading can still occur on a single processor, as seen in real-time operating systems (RTOS).
Synchronized access to resources is a multithreading issue and is not necessarily tied to the number of processor cores. It stems from the need to keep the processor busy: one task is paused so that others can run, and if several tasks contend for the same resource across those pauses, conflicts arise.
Using multiple CPUs for multiprocessing could work well, unless the different cores are already dedicated to specific tasks such as the backplane, Ethernet, HMI, and ladder logic. Lately I have been exploring multiprocessing with Python, or possibly Julia.
- 27-08-2024
- Peter Nachtwey
Cardosocea pointed out that the definition offered may not be entirely accurate: many real-time operating systems (RTOS) are multithreaded even on a single processor. Synchronized resource access is a multithreading concern, not directly tied to the number of cores, because optimizing processor usage requires a thread in certain states to yield the processor so other threads can run; if one of those threads needs the same resource, problems follow. The mapping of threads onto cores is handled by the hardware and operating system and remains transparent to the programmer.
In Windows and Android, applications can launch many parallel threads without needing a matching number of processor cores. In a Programmable Logic Controller (PLC), stability takes priority, so a single thread is preferable. The "pre-emptive single-threaded" description Cheeseface used in the second post fits how a PLC works well.
While discussing the association of cores with threads, it was mentioned that the hardware and operating system already handle this aspect, making it transparent for programmers. However, it was also noted that tasks that halt other tasks during execution may not qualify as true multithreading. This brings up the question of how tasks are defined in this context. Can a thread be considered a task?
There is a single thread that navigates between various code components, also known as tasks, based on their respective priorities. This seamless transition allows for efficient multitasking and prioritization within the program.
When it comes down to it, a single thread moves between the various pieces of code, known as tasks, according to their priorities. Some may call this a matter of semantics, but is it not essentially the same whether a single thread switches between pieces of code or a processor switches between threads, each executing its own instructions?
Let's consider two scenarios: A) Representing the entire PLC program as a single thread with internal priority management; B) Treating each Task in the Logix as an individual thread, where the PLC's scheduler interrupts a low-priority thread to run a high-priority one. In the end, both approaches yield similar outcomes. Despite not being truly "multithreaded" in the traditional sense of simultaneous execution, it all boils down to the terminology we use to describe these distinct processes.
My understanding of multitasking and multithreading has been challenged by differing definitions. I used to believe multitasking meant a CPU juggling various tasks to give the illusion of doing many things at once; further reading showed that multithreading can also span multiple processors or cores, allowing truly simultaneous operation. The distinction can be confusing, but resources such as TechTarget and Wikipedia helped shed light on it. It appears that simple multithreading is really a form of multitasking, while using multiple CPUs or cores to execute multiple instruction streams at once is referred to as "simultaneous multithreading." This has deepened my interest in the subject, and I look forward to exploring it further.
lfe noted that a single thread switches between different sections of code, known as tasks, based on their priorities, and that this applies to operating systems as well. Does that imply that no computers are truly multithreaded?
Responding to cardosocea's question of whether any computers are truly multithreaded: PLCs are not, but modern operating systems such as Windows, Linux, and Android support both multithreading and multiprocessing. On a Windows computer you can check the number of cores and threads in the processor, which gives some insight into the CPU's capabilities; the total number of threads running at any moment typically reaches into the thousands.
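For instance, Python can give a rough picture without opening Task Manager. A small sketch; note that `os.cpu_count()` reports logical processors, which on SMT ("hyper-threaded") hardware is typically twice the physical core count:

```python
import os
import threading

# Logical processors the OS exposes (cores, or 2x cores with SMT).
logical_cpus = os.cpu_count() or 1

# Threads alive in *this* process right now -- one, unless the
# interpreter or a library has started background threads.
live_threads = threading.active_count()

print(f"logical CPUs: {logical_cpus}, threads in this process: {live_threads}")
```

System-wide thread counts (the "thousands" figure) come from the OS, e.g. Task Manager's Performance tab on Windows, not from any single process.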
According to lfe, PLCs were traditionally not considered multithreaded, but that belief needs updating: modern PLCs run operating systems that support threading. Users still get only a single thread for their programs, because dealing with race conditions and shared resources is complex and troublesome, but the hardware and internal OS of the PLC do use multiple threads.
Cardosocea remarked that race conditions are hard to explain, and that troubleshooting and managing shared resources or variables may not be worth the effort. I had a problem with my C/C++ program and tried to address it by adding threads; now I have a new problem.
I have found great pleasure in utilizing multithreading in both C++ and Java, such as achieving a smooth user interface while running other resource-intensive tasks in the background. However, for optimal stability and simplicity, nothing compares to using a single thread.
The L80 ControlLogix processor has a Communications coprocessor, which allows I/O to change mid-scan and prevents an HMI from slowing down the process scan, unlike earlier processor models; the effect is most noticeable in large HMI/PLC projects. Many users therefore "buffer" their I/O at a specific point in the program. I personally prefer consolidating all valve-related information into one routine: mapping DI feedback, setting interlocks, handling common/HMI logic with an AOI, and mapping the output to the DO. Alternatively, one mapping routine per card can be useful for simulation purposes.
The processor's logic is best described as "pre-emptive single-threaded": when a time-driven task begins, the currently scanned/executed logic is paused and the new task's logic runs in its place. Multiple timed tasks with different periods create challenges similar to asynchronous I/O scanning, giving the perception of multithreading. Treat separate tasks as separate PLCs and use semaphore "messages" to communicate between them, even though they share the same processor. The L80 may seem multithreaded, but it operates on an interrupt basis, not true multithreading.
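The "separate PLCs passing messages" discipline maps naturally onto a producer/consumer queue. A minimal sketch in Python, with illustrative tag names; the point is that the fast task never pokes at the slow task's variables directly, it posts a complete message that the consumer takes whole:

```python
import queue
import threading

# Sketch of message-passing between "tasks": the producer posts one
# self-consistent snapshot; the consumer receives it whole, so it can
# never observe a half-written set of tags.

mailbox = queue.Queue()

def fast_task() -> None:
    # Post one complete, self-consistent message.
    mailbox.put({"count": 42, "fault": False})

def slow_task(results: list) -> None:
    msg = mailbox.get()   # blocks until a whole message is available
    results.append(msg)

results = []
t1 = threading.Thread(target=fast_task)
t2 = threading.Thread(target=slow_task, args=(results,))
t2.start()
t1.start()
t1.join()
t2.join()
print(results)
```

On a PLC the same idea is done by hand with handshake bits or semaphore tags rather than a library queue, but the rule is identical: exchange whole messages, not individually shared variables.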
User drbitboy said he could relate to having a "new problem." Aardwizz discussed tasks running at different intervals and the potential for logic scans to overlap; Rockwell's PlantPAx blocks address this with an internal latch and feedback arrangement, unlike older process blocks in PLC firmware. Aardwizz also clarified that while the L80 processor may appear multithreaded, it is actually just interrupted; the confusion comes from the distinction between what the processor actually does and what the user has access to.
Restating Aardwizz's advice: centralize I/O mapping in a single routine (or one routine per card, for simulation), treat separate tasks as independent PLCs that communicate by semaphore messages, and remember that the L80 runs on an interrupt basis rather than true multithreading. One addition: avoid global oneshots, and keep each oneshot specific to its designated task, for clarity and efficiency.
A common issue in multitasking and interrupt handling is the read-modify-write conflict: a thread reads a variable, is interrupted, another thread writes the variable, and then the original thread resumes and overwrites that change with its stale value, producing data inconsistencies and errors. The solution is mutual exclusion, usually implemented as a mutex. A mutex is essential whenever multiple tasks access a shared resource such as a queue, FIFO, or shared memory: each task locks the resource, uses it, and releases it, so conflicts cannot occur.
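The hazard and the fix are easy to demonstrate. In the Python sketch below, `counter += 1` is really read, add, write; the lock plays the mutex role, making those three steps one unit. This is a generic illustration, not PLC code:

```python
import threading

# Read-modify-write demonstration: four workers each increment a
# shared counter 10,000 times. Without the lock, two threads can
# interleave their read/add/write steps and lose updates; with it,
# the increment is effectively atomic and the total is exact.

counter = 0
lock = threading.Lock()

def worker(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:        # acquire ... release, i.e. the mutex
            counter += 1  # read-modify-write, now protected

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 -- exact, because the lock is held
```

Removing the `with lock:` line makes the lost-update failure possible (and, on a free-threaded runtime, likely), which is precisely the conflict described above.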
I have extensive experience developing multitasking kernels, all of which included functions such as flag_set, flag_wait, delay, dec_delay, mx_access, and mx_release. These are crucial for handling interrupts, mutual exclusion, and timing within a multitasking system. PLCs may have similar functionality internally, but they present what appears to the user to be a single task. By contrast, systems like RMC motion controllers run multiple communication tasks in the background, synchronized seamlessly with the main loop.
Overall, the goal is to provide a seamless user experience where all tasks appear synchronous, even though background processes are ensuring proper coordination. This approach mirrors the functionality of PLCs, where complex multitasking operations are hidden from the user for simplicity and efficiency.
- 27-08-2024
- Peter Nachtwey
Multithreading and interrupts lead to read-modify-write hazards: a thread reads a variable, is interrupted, another thread writes the same variable, and the original thread resumes and writes over the change. The remedy is a mutual-exclusion call, known as a mutex. When multitasking and accessing shared resources such as queues, FIFOs, or shared memory, a mutex number is used to lock and release access; the mutex call relies on a special atomic instruction to guarantee that only one task holds the resource at a time. The problem is commonly seen with HMIs, where changing a single bit can overwrite the entire word if access is not managed properly. Understanding and implementing mutexes is essential to avoid conflicts and ensure smooth operation in multitasking environments.
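The HMI case follows the same pattern: "writing one bit" is really a read-modify-write of the whole word, so concurrent writers changing different bits can clobber each other unless the word is protected. A small sketch with a 16-bit word (the bit numbers are arbitrary examples):

```python
import threading

# "Setting a bit" is read word, OR in the mask, write word back --
# three steps. If two writers interleave those steps on different
# bits, one writer's change is lost. The lock makes each
# read-modify-write of the word indivisible.

word = 0x0000
lock = threading.Lock()

def set_bit(bit: int) -> None:
    global word
    with lock:
        word |= (1 << bit)            # read, OR, write back

def clear_bit(bit: int) -> None:
    global word
    with lock:
        word &= ~(1 << bit) & 0xFFFF  # read, AND with mask, write back

set_bit(0)
set_bit(3)
clear_bit(0)
print(f"{word:#06x}")  # 0x0008 -- only bit 3 remains set
```

An HMI driver that rewrites the whole word from its own cached copy is the unlocked version of this: whichever write lands last wins, silently undoing the other party's bit changes.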