An overview of the relationship between process, threads, and the operating system.
An operating system is the first program loaded on a computer when it boots up. System applications are loaded during boot-up and remain in memory.
When the user starts an application, the operating system creates a new instance of that program in memory.
This instance is referred to as a process.
Each process has a unique PID and its own address space, so the data (heap) and resources used by one process are not available to another process.
Processes have files they access, code they execute, data they need to execute, and at least one thread (the main thread).
Each thread contains its own stack and instruction pointer.
- In a multi-threaded process, there is a separate stack and instruction pointer for each thread, while all other process information, including data, code, and files, is shared among all threads.
At any given instant, each thread is executing different instructions or functions, so it makes sense for a thread to keep its own data (local variables) on its stack and its own instruction pointer, which is simply the address of the next instruction to be executed.
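To make that split concrete, here is a minimal Java sketch (class and variable names are illustrative): both threads write to the same heap-allocated object, while each thread's local variables live on its own private stack.

```java
public class SharedVsLocal {
    // Heap-allocated object: visible to every thread in the process.
    static final StringBuilder shared = new StringBuilder();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            // 'local' lives on the calling thread's own stack;
            // each thread gets an independent copy.
            int local = 0;
            for (int i = 0; i < 1000; i++) {
                local++;
            }
            synchronized (shared) {
                shared.append(Thread.currentThread().getName())
                      .append(" counted ").append(local).append('\n');
            }
        };

        Thread t1 = new Thread(task, "thread-1");
        Thread t2 = new Thread(task, "thread-2");
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.print(shared); // both threads wrote to the same heap object
    }
}
```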
What is a thread?
Threads are lightweight, independent paths of execution within a process.
The process's data is shared across all threads belonging to the same process, which is what makes a thread lightweight compared to a full process.
Threads allow for the concurrent execution of code within a single process. This can improve performance by making better use of the available CPU cores. Additionally, it allows for more fine-grained control over how individual pieces of code are executed.
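As a rough illustration of using threads to spread work across cores, here is a small sketch (the array size and the two-way split are arbitrary choices): two tasks each sum half of an array and can run on different cores at the same time.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSum {
    public static void main(String[] args) throws Exception {
        long[] data = new long[10_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;

        int mid = data.length / 2;
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Each task sums one half of the array; the two tasks can run
        // on different cores at the same time.
        Future<Long> left  = pool.submit(() -> sum(data, 0, mid));
        Future<Long> right = pool.submit(() -> sum(data, mid, data.length));

        System.out.println("total = " + (left.get() + right.get()));
        pool.shutdown();
    }

    static long sum(long[] a, int from, int to) {
        long s = 0;
        for (int i = from; i < to; i++) s += a[i];
        return s;
    }
}
```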
What is the life cycle of a thread?
Every thread goes through different stages during its execution:
- New - As soon as a thread is created, it is in the new state.
- Runnable - The thread is eligible to be scheduled and executed.
- Blocked/Waiting/Timed waiting - The thread was runnable and got scheduled, but was unable to continue because a resource it needs (such as a lock) is not available.
- Terminated - The thread has completed its execution.
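Java exposes these stages directly through Thread.getState(). A minimal sketch (the sleep durations are arbitrary, and the RUNNABLE observation is not guaranteed, since scheduling varies):

```java
public class ThreadStates {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(500); // puts the thread into TIMED_WAITING
            } catch (InterruptedException ignored) {
            }
        });

        System.out.println(t.getState()); // NEW: created but not started
        t.start();
        System.out.println(t.getState()); // RUNNABLE (typically)
        Thread.sleep(100);                // give it time to reach sleep()
        System.out.println(t.getState()); // TIMED_WAITING: sleeping with a timeout
        t.join();
        System.out.println(t.getState()); // TERMINATED: run() has finished
    }
}
```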
What is concurrency?
In a real-life scenario, there are usually more processes than processors/cores. Each process may have one or more threads. These threads compete with each other to be scheduled and executed on the CPU.
We can use the terms "multitasking" and "concurrency" interchangeably in the context of threads.
Let's say two threads are executing concurrently. What the OS does is called context switching: it ensures that every runnable thread, whether from the same process or another, gets a chance to execute. This gives us the impression that execution is smooth and continuous; in reality, the OS switches which thread is executing after a certain amount of time.
A few more points about context switching:
There is a cost associated with context switching when concurrency is managed at the operating-system level. Context switching is costly because the processor must save the state of the current thread, load the state of the new thread, and then resume execution from the point where the new thread left off. This takes a significant amount of processing time, which is costly in terms of both performance and resources.
There are downsides to having too many threads, such as what is known as thrashing: the operating system spends more time managing the threads (context switching) than doing actual work.
Threads of the same process can context switch more efficiently than threads of different processes, because they share the same address space and the OS does not need to switch memory mappings.
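The cost of over-subscribing the CPU can be observed with a toy experiment like the one below (a rough sketch, not a rigorous benchmark: the numbers vary by machine, OS, and JIT warm-up). It divides a fixed amount of busy-work first among one thread per core and then among 100 times as many threads; the extra threads add scheduling and context-switching overhead without adding any computing power.

```java
import java.util.ArrayList;
import java.util.List;

public class OversubscriptionDemo {
    static final long TOTAL_WORK = 400_000_000L;
    static volatile long sink; // consumes results so the loop is not optimized away

    public static void main(String[] args) throws InterruptedException {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println(cores + " threads: " + run(cores) + " ms");
        System.out.println(cores * 100 + " threads: " + run(cores * 100) + " ms");
    }

    // Splits the same total amount of busy-work across 'n' threads and
    // returns the wall-clock time. With far more threads than cores, the
    // OS spends extra time context switching between them.
    static long run(int n) throws InterruptedException {
        long perThread = TOTAL_WORK / n;
        List<Thread> threads = new ArrayList<>();
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            Thread t = new Thread(() -> {
                long x = 0;
                for (long j = 0; j < perThread; j++) x += j;
                sink = x;
            });
            threads.add(t);
            t.start();
        }
        for (Thread t : threads) t.join();
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```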
Scheduling:
When context switching, how will the operating system allocate CPU time to threads? Is there a specific order that the OS follows?
Epoch: operating systems use the epoch as a basic unit of time for scheduling threads; its length is defined by the operating system.
Not every thread gets time to run or complete within each epoch.
Threads in an epoch are given different time slices based on priority, which is determined by the operating system.
A thread's priority is determined by the value assigned to it by the program, combined with a points value the OS gives each thread in an epoch. This points value is dynamic and can change over time.
The operating system also takes thread starvation into account when allocating time slices to threads in each epoch.
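In Java, the value a program assigns is set with Thread.setPriority. A small sketch (the one-second spin is an arbitrary choice): note that the priority is only a hint, its mapping to OS priorities is platform-dependent, and on a machine with idle cores both threads may finish with similar counts.

```java
public class PrioritySketch {
    public static void main(String[] args) {
        Runnable spin = () -> {
            // Busy-loop for about one second, counting iterations.
            long count = 0;
            long end = System.currentTimeMillis() + 1000;
            while (System.currentTimeMillis() < end) count++;
            System.out.println(Thread.currentThread().getName() + ": " + count);
        };

        Thread low  = new Thread(spin, "low-priority");
        Thread high = new Thread(spin, "high-priority");
        low.setPriority(Thread.MIN_PRIORITY);   // 1: a hint, not a guarantee
        high.setPriority(Thread.MAX_PRIORITY);  // 10

        low.start();
        high.start();
    }
}
```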
Purpose of threads:
Threads are typically used for one or both of the following:
Responsiveness: a single core is sufficient to achieve responsiveness, because multiple threads can be executed concurrently on it. Context switching is an important aspect of this.
Performance: context switching gives the illusion of true parallelism, but with a single core we cannot truly execute threads in parallel; for a real performance gain, we need multiple cores.
To achieve truly parallel thread execution, as a general rule you should have one thread per core or processor. This helps ensure that your threads run optimally.
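A common way to apply the one-thread-per-core rule in Java is to size a thread pool from Runtime.getRuntime().availableProcessors(); a minimal sketch:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CoreSizedPool {
    public static void main(String[] args) {
        // One thread per core is a common starting point for CPU-bound work.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        for (int i = 0; i < cores; i++) {
            final int id = i;
            pool.submit(() -> System.out.println("task " + id + " on "
                    + Thread.currentThread().getName()));
        }
        pool.shutdown();
    }
}
```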
Debugging multi-threaded programs:
Stepping through the program in a debugger and navigating the code is one way to help, but this might not always be enough. In some cases, the issue does not appear frequently; in other words, it is hard to reproduce.
Due to this, we rely mainly on thread dumps.
Exactly what are thread dumps? How can they be generated?
A thread dump is a collection of stack traces representing the current state of all threads in the process, including daemon threads.
The thread dump of a process can be collected in many ways.
One of the most commonly used tools is jstack, a utility that ships with the JDK and can be found in the bin directory under JAVA_HOME.
Syntax:
jstack <PID> > threaddump.txt
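To try this out, a classic exercise is to take a dump of a deliberately deadlocked program. The sketch below (lock and thread names are illustrative) has two threads acquire the same two locks in opposite order, so it hangs by design:

```java
public class DeadlockDemo {
    static final Object lockA = new Object();
    static final Object lockB = new Object();

    public static void main(String[] args) {
        // Thread 1 takes lockA then wants lockB; thread 2 does the reverse.
        new Thread(() -> grab(lockA, lockB), "thread-1").start();
        new Thread(() -> grab(lockB, lockA), "thread-2").start();
    }

    static void grab(Object first, Object second) {
        synchronized (first) {
            // Sleep briefly so both threads hold their first lock
            // before trying to take the second one.
            try { Thread.sleep(100); } catch (InterruptedException ignored) {}
            synchronized (second) {
                System.out.println("acquired both locks");
            }
        }
    }
}
```

Run it, find its PID (for example with the JDK's jps tool), and then run jstack against that PID; the end of the dump should report the deadlock, naming the threads involved and the locks they are waiting on. Since the program never exits on its own, stop it with Ctrl+C when you are done.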