
Intro to Concurrency in Go

Concurrency boosts performance by taking advantage of multiple processing cores. Go's API support helps programmers implement parallel algorithms efficiently. Most mainstream programming languages offer concurrency support as an add-on feature, but Go has concurrency support built in. This article provides an introduction to concurrent programming in Go.

Concurrent Programming in Go

Concurrent programming takes full advantage of the underlying multiple processing cores present in most modern computers. The concept has existed for quite some time, even when there was only one core built into the single processor. Using multiple threads was a common practice to achieve some sort of concurrency in many programming languages, including C/C++, Java, and more.

A single thread basically represents a small set of instructions scheduled to execute independently. You can conceptually visualize it as a small task within a large job. Multiple threads of execution therefore carve up a complex process and run simultaneously, and this cooperation among tasks gives a sense of concurrent execution. Note, however, that limited hardware, such as a single-core processor, can only do so much by scheduling the jobs in a time-shared fashion.

Today, processing hardware is powered by multiple cores, so a language that can use their full potential is always in demand. Mainstream programming languages have gradually recognized this fact and are trying to fold concurrency into their core features. The designers of Go went a step further and asked: why not build a language from the ground up with concurrency at its core? Go is one such language, offering high-level APIs for writing concurrent programs.

Problems with Multithreading

Multithreaded programs are not only hard to write and maintain but also difficult to debug. Moreover, not every algorithm can be split across multiple threads in a way that actually improves performance, and multithreading carries its own overhead. In managed environments, many of the responsibilities, such as inter-thread communication or shared-memory access, are handled by the runtime, leaving developers to focus on the business at hand rather than getting entangled in the nitty-gritty of parallel processing.

Keeping these problems in mind, an alternative is to rely entirely on the operating system for multiprocessing. In that case, the responsibility for handling the intricacies of interprocess communication, and the overhead of shared-memory concurrency, falls on the developer. This technique can be tuned heavily in favor of performance, but it is also easy to make a mess of it.

Go Benefits to Concurrent Programming

Go offers a threefold solution with regard to concurrent programming.

  • High-level support makes concurrency not only simpler to achieve but also easier to maintain.
  • Use of goroutines, which are much more lightweight than threads.
  • Go’s automatic garbage collection handles the complexities of memory management without developers’ intervention.

Handling Concurrency Issues in Go

Goroutines form the basic primitive of concurrency in Go; each executing activity is called a goroutine. Consider a program with two functions that do not call each other. In sequential execution, one function finishes before the other is called. With Go, however, both functions can be active and running at the same time. This is simple to achieve when the functions are unrelated, but problems can occur when functions are intertwined and share one another's execution timelines. Even with Go's high-level support for concurrency, these pitfalls cannot be avoided altogether, especially when the main function finishes before the goroutines that depend on it. Therefore, we must take care to make the main goroutine wait until all the work is done.

Another problem is deadlock, where one goroutine holds a lock on a resource to maintain exclusivity while another tries to acquire the same lock, and neither can proceed. This type of risk is common in concurrent programming, but Go can often avoid locks altogether by using channels. Typically, a channel is created that signals completion as the work is done. Another way is to wait using sync.WaitGroup. In either case, deadlock may still occur and is best avoided by careful design; Go simply provides the tools to get concurrency right.
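The "channel that signals completion" idea can be sketched in a few lines. This is a minimal illustration, with the function name runWorker and the "work" being a simple string assignment chosen for the example; the point is that the receive on done blocks the caller until the worker has finished, which also guarantees the caller sees the worker's writes.

```go
package main

import "fmt"

// runWorker launches a goroutine and blocks until it signals
// completion over the done channel.
func runWorker() string {
	done := make(chan struct{})
	result := ""
	go func() {
		result = "work complete" // stand-in for real work
		done <- struct{}{}       // signal that the work is done
	}()
	<-done // block here until the worker signals
	return result
}

func main() {
	fmt.Println(runWorker())
}
```

Because a channel send happens before the corresponding receive completes, reading result after <-done is safe without any additional locking.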

Goroutine with WaitGroup Example

We can create a goroutine simply by prefixing a function call with the keyword go. The runtime then creates a goroutine containing the call's frame and schedules it to run concurrently, much like a lightweight thread. Like a normal function call, it can access its arguments, globals, and anything else within its scope.

Here is a simple program that checks whether a website is up or down. Afterward, we apply goroutines to the same code; observe how much faster execution becomes once we apply concurrency.

package main
import (
  "fmt"
  "net/http"
  "time"
)
func main() {
  start := time.Now()
  sitelist := []string{
    "https://www.google.com//",
    "https://www.duckduckgo.com/",
    "https://www.developer.com/",
    "https://www.codeguru.com/",
    "https://www.nasa.gov/",
    "https://golang.com/",
  }
  for _, site := range sitelist {
    GetSiteStatus(site)
  }
  fmt.Printf("\n\nTime elapsed since %v\n\n", time.Since(start))

}
func GetSiteStatus(site string) {
  resp, err := http.Get(site)
  if err != nil {
    fmt.Printf("%s is down\n", site)
    return
  }
  resp.Body.Close() // close the body to avoid leaking the connection
  fmt.Printf("%s is up\n", site)
}

The above Go code example will result in the following output:

https://www.google.com// is up
https://www.duckduckgo.com/ is up
https://www.developer.com/ is up
https://www.codeguru.com/ is up
https://www.nasa.gov/ is up
https://golang.com/ is up

Time elapsed since 6.666198944s

Now, if we apply concurrency to the above code with the WaitGroup synchronization mechanism, performance improves manifold.

package main
import (
  "fmt"
  "net/http"
  "sync"
  "time"
)
func main() {
  var wg sync.WaitGroup
  start := time.Now()
  sitelist := []string{
    "https://www.google.com//",
    "https://www.duckduckgo.com/",
    "https://www.developer.com/",
    "https://www.codeguru.com/",
    "https://www.nasa.gov/",
    "https://golang.com/",
  }
  for _, site := range sitelist {
    wg.Add(1) // register before launching, so Wait cannot return early
    go GetSiteStatus(site, &wg)
  }
  wg.Wait()
  fmt.Printf("\n\nTime elapsed since %v\n\n", time.Since(start))
}
func GetSiteStatus(site string, wg *sync.WaitGroup) {
  defer wg.Done()
  resp, err := http.Get(site)
  if err != nil {
    fmt.Printf("%s is down\n", site)
    return
  }
  resp.Body.Close()
  fmt.Printf("%s is up\n", site)
}

Once more, here is the output from our Go program:

https://www.nasa.gov/ is up
https://www.google.com// is up
https://www.developer.com/ is up
https://www.duckduckgo.com/ is up
https://golang.com/ is up
https://www.codeguru.com/ is up

Time elapsed since 1.816887681s

WaitGroup is a synchronization mechanism. Observe that, for every goroutine created, the WaitGroup counter is incremented with wg.Add(1), and on completion of the goroutine it is decremented with wg.Done(). The wg.Wait() call blocks the main goroutine until all the goroutine tasks have completed.

Synchronization with Mutex in Go

Apart from WaitGroup, Go provides other synchronization mechanisms for shared resources, such as Mutex. It uses a locking mechanism: whenever goroutines want simultaneous access to a shared resource, each acquires exclusive access with sync.Mutex's Lock() method and releases it with Unlock().

The following changes to the above code implement the synchronization mechanism with a Mutex.

//...
func main() {
  var wg sync.WaitGroup
  var mut sync.Mutex
  //...
  for _, site := range sitelist {
    wg.Add(1) // register before launching, so Wait cannot return early
    go GetSiteStatus(site, &wg, &mut)
  }
  //...
}
func GetSiteStatus(site string, wg *sync.WaitGroup, mut *sync.Mutex) {
  defer wg.Done()
  if _, err := http.Get(site); err != nil {
    mut.Lock()
    defer mut.Unlock()
    fmt.Printf("%s is down\n", site)
  } else {
    mut.Lock()
    defer mut.Unlock()
    fmt.Printf("%s is up\n", site)
  }
}

Implementing Channels in Golang

Channels are the connections between goroutine activities. They serve as a communication mechanism from one goroutine to another, sending and receiving values much like a UNIX pipe, where we put data in at one end and take it out at the other. A channel thus has two principal operations, send and receive. Unlike a pipe, however, a channel carries values of a particular type, so the element type must be specified when the channel is created. For example, a channel whose element type is int is written as:

cha := make(chan int)

The element type determines the type of value that can pass through the channel. Declaring a channel with the empty interface type lets us pass values of any type and determine the concrete type at the receiving end. Channels of the same type can be compared with the equality operator ==, and an uninitialized channel compares equal to nil. A channel also supports buffering with a configurable buffer size.
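These two points, buffering and empty-interface channels, can be demonstrated briefly. This is a small sketch; the helper describe and the particular values sent are illustrative. A buffered channel accepts sends without a waiting receiver until its capacity is full, and a chan interface{} carries mixed types that the receiver recovers with a type switch.

```go
package main

import "fmt"

// describe uses a type switch to recover the concrete type of a
// value received from an interface{} channel.
func describe(v interface{}) string {
	switch x := v.(type) {
	case int:
		return fmt.Sprintf("int: %d", x)
	case string:
		return fmt.Sprintf("string: %s", x)
	default:
		return "unknown"
	}
}

func main() {
	// Buffered channel: three sends succeed with no receiver ready,
	// because the buffer has capacity 3.
	buf := make(chan int, 3)
	buf <- 1
	buf <- 2
	buf <- 3
	close(buf)
	for v := range buf {
		fmt.Println(v)
	}

	// Empty-interface channel: values of any type may be sent.
	anyc := make(chan interface{}, 2)
	anyc <- 42
	anyc <- "hello"
	close(anyc)
	for v := range anyc {
		fmt.Println(describe(v))
	}
}
```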

Here is a quick example of how to implement a channel in Go:

package main
import "fmt"
func main() {
  odd := make(chan int)
  oddsquared := make(chan int)
  //odd
  go func() {
    for x := 1; x < 10; x++ {
      if x%2 != 0 {
        odd <- x
      }
    }
    close(odd)
  }()
  //oddsquared
  go func() {
    for x := range odd {
      oddsquared <- x * x
    }
    close(oddsquared)
  }()

  for x := range oddsquared {
    fmt.Println(x)
  }
}

The example above is simple: once the odd goroutine has sent every odd number below 10, it closes the odd channel. This causes the oddsquared loop to finish and close the oddsquared channel. Finally, the main goroutine's loop finishes and the program exits.

Final Thoughts on Programming Go Concurrency

Apart from high-level concurrency support, Go also provides low-level functionality in the sync/atomic package. These primitives are not typically used by application programmers; they exist mainly to support advanced work such as thread-safe synchronization and lock-free data-structure implementation. For everyday concurrent programming, the high-level facilities, goroutines and channels, are used most often. The breadth of Go's high-level concurrency APIs is rarely matched by other mainstream languages, which clearly indicates that the designers of Go put solid effort into building concurrency support right into the language's core.
