
An Introduction to Go Scheduler

In Go programming, a scheduler is responsible for distributing jobs in a multiprocessing environment. When the available resources are limited, it is the scheduler's task to manage the pending work as efficiently as possible. In Go, the scheduler is responsible for scheduling goroutines, which is central to Go's concurrency support. Goroutines resemble OS threads, but they are much lighter weight. Goroutines still rely on the underlying OS thread model, yet the scheduler that manages them operates at a much higher level than the OS scheduler. This Go programming tutorial provides a quick look at the concepts behind the Go scheduler.

If you need a refresher, you can learn more about goroutines in our tutorial: An Introduction to Goroutines.

What is a CPU Scheduler in Go?

CPUs today come with multiple cores, and these multicore processors are optimized to handle simultaneous execution, also known as parallel processing. This happens at the hardware level, and it is convenient to have multiprocessing ability built into the processors themselves. But something must manage the incoming jobs and distribute them among the available processors. This is the job of the scheduler, and the process is known as scheduling. A scheduler schedules jobs at the software level and is a core part of the operating system. Being part of the operating system, a scheduler is well aware of the intricacies and working mechanisms of the operating system; it must also be aware of the hardware layout it runs on. This makes the scheduler a complex piece of software. So, in a nutshell:

  • A scheduler’s job is to provide some control over how work is distributed across the available resources.
  • Note that, in this Go tutorial, we have been talking about process schedulers. There can be schedulers for networks and containers as well, but the basic idea of a scheduler remains the same.

Go’s Runtime Scheduler

The Go runtime scheduler schedules goroutines. A goroutine is a lightweight thread that executes on an OS thread, and those OS threads run on one or more available processors. The runtime scheduler distributes goroutines over multiple threads and tracks the state of each goroutine. During its life cycle, a goroutine is in one of three fundamental states: running, runnable, or not runnable (blocked on IO or a system call):

[Figure: goroutine life-cycle states — running, runnable, and not runnable — in the Go scheduler]

Go uses what is called an M:N scheduler, meaning that M goroutines are distributed over N OS threads. OS threads carry much more overhead than goroutines, so Go uses a small number of threads to run a large number of goroutines.

Just as kernel-level threads are managed entirely by the OS, goroutines are user-space threads managed entirely by the Go runtime and its scheduler. This makes goroutines cheaper and more lightweight than kernel threads, and they run with a very small memory footprint: a goroutine starts with an initial stack of about 2 KB, whereas an OS thread's default stack is typically measured in megabytes (for example, 8 MB on Linux).
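To get a feel for how cheap goroutines are, the following sketch starts 10,000 of them over whatever small pool of OS threads the runtime decides to use; the count of 10,000 is just an illustrative number:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	const n = 10000
	done := make(chan struct{})
	for i := 0; i < n; i++ {
		go func() {
			done <- struct{}{} // signal completion back to main
		}()
	}
	// Drain the channel: wait for all n goroutines to finish.
	for i := 0; i < n; i++ {
		<-done
	}
	fmt.Println("ran", n, "goroutines on", runtime.GOMAXPROCS(0), "logical processors")
}
```

Spawning the same number of OS threads directly would be far more expensive; here the runtime multiplexes all 10,000 goroutines over a handful of threads.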

The runnable goroutines (shown in the figure above) are picked from the queue to run on the available OS threads, which, in turn, run on one or more available processors. Goroutines that block are moved to a not-runnable queue. Once unblocked, a goroutine is put back on the runnable queue and waits for its turn on an available OS thread.
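These state transitions can be observed indirectly with a channel: a goroutine that receives from an empty channel is parked as not runnable, and a send from another goroutine makes it runnable again. A minimal sketch:

```go
package main

import "fmt"

func main() {
	ch := make(chan string)
	go func() {
		// Blocks here: the goroutine is parked (not runnable)
		// until another goroutine sends on ch.
		msg := <-ch
		fmt.Println("received:", msg)
		ch <- "done" // signal back to main
	}()
	ch <- "wake up" // unblocks the goroutine; it becomes runnable again
	<-ch            // main now blocks until the goroutine replies
}
```

This prints "received: wake up". While the child goroutine waits on the channel, the scheduler is free to run other goroutines on that OS thread.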

Fork-join Concurrency Model in Go

Go uses a fork-join concurrency model to execute parts of a program in parallel. It allows a Go program to fork a new execution path that runs alongside the main branch; the forked branch can join back at some later point and continue as a single execution path. The fork part of the model means branching off code at designated points, and the join part means reuniting with the caller after execution finishes. Let’s try to understand this with the help of an example. Here is a simple program showing fork-join concurrency in Go:

package main

import (
	"fmt"
	"time"
)

func f1() {
	fmt.Println("func 1")
}

func f2() {
	fmt.Println("func 2")
}

func main() {
	fmt.Println("Start...")
	go f1()
	go f2()
	fmt.Println("End")
	time.Sleep(1 * time.Second)
}

Running this program produces output like the following (the order of the goroutines' lines is not guaranteed, since the scheduler decides when each one runs):

Start...
End
func 2
func 1


The main function begins on a linear execution path, and the designated split point is the function call with the go keyword (go f1()). Similarly, the second call, go f2(), splits off another execution path. Both functions join back to the caller after finishing their execution. The main program waits, courtesy of time.Sleep(1 * time.Second), for a fixed time so that the child branches can finish in the meantime.

Try executing the same code with the line time.Sleep(1 * time.Second) commented out. The output will be as follows:

Start...
End

The output from f1 and f2 is not displayed, because the main goroutine terminates before the child branches can rejoin. The main goroutine must wait until the forked children rejoin their parent, and time.Sleep forces that wait here. A better way to write concurrent code is with the help of WaitGroup from the sync package.

Read: Understanding Garbage Collection in Go

Using WaitGroup in Go

We can rewrite the above Go code using WaitGroup. A WaitGroup is a blocking mechanism used to wait for a set of goroutines to finish their execution — that is, to wait until they rejoin their parent. So, we could rewrite the above code like this:

package main

import (
	"fmt"
	"sync"
)

func f1(wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Println("func 1")
}

func f2(wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Println("func 2")
}

func main() {
	var wg sync.WaitGroup
	fmt.Println("Start...")
	wg.Add(2)
	go f1(&wg)
	go f2(&wg)
	fmt.Println("End")
	wg.Wait()
}

Observe that we call Add to set the number of goroutines to wait for, and each goroutine calls Done when it finishes its execution. The Wait call blocks until every goroutine has called Done, ensuring all execution rejoins its parent before the program finally terminates.

The Fair Scheduling Strategy in Golang

One of the problems with concurrent execution is underutilized processors. Although a fair scheduling strategy tries to spread the execution load across all available processors, it does not always work out, because most distributed tasks depend on other tasks. This makes load sharing among the available processors unequal, and there is always a chance that some processors are busier than others. Moreover, holding a global lock to manage goroutines is expensive, and programs with heavy IO blocking cause constant preemption of OS threads, which is significant overhead. A simple workaround for these problems is work stealing.

With the work-stealing strategy, the Go scheduler looks for underutilized logical processors and lets them steal runnable goroutines from the run queues of busier ones.

Final Thoughts on Golang Scheduler

These are quick glimpses into how the Go scheduler works. Keep in mind that the scheduler is evolving quickly; its developers constantly improve performance through changes both small and considerable, but the core principles remain the same. Here we have only scratched the surface and tried to give a high-level overview of the Go scheduler.

Read more Go and Golang programming tutorials.
