CHAPTER 24 · CONCURRENCY

Goroutines

Concurrency is Go's superpower. A goroutine is a lightweight unit of concurrent execution, scheduled by the Go runtime. You can spin up thousands, even millions, cheaply, because each starts with just a small stack that grows as needed.

Learning objectives

  • Launch concurrent work with the go keyword.
  • Wait for goroutines to finish with sync.WaitGroup.
  • Protect shared state with sync.Mutex / sync.RWMutex.
  • Use -race to detect data races early.
  • Avoid goroutine leaks.

What's a goroutine?

A goroutine is an independently running function, not an OS thread. The Go runtime multiplexes huge numbers of them onto a small pool of OS threads; GOMAXPROCS sets how many of those threads can execute Go code simultaneously. Key properties:

  • Starts with ~2 KB of stack (grows/shrinks as needed).
  • Switching between them is orders of magnitude cheaper than thread switching.
  • Communication is via channels (Chapter 25) or explicit shared memory with locks.

The go keyword

package main

import (
    "fmt"
    "time"
)

func say(msg string) { fmt.Println(msg) }

func main() {
    go say("hello")         // runs concurrently
    say("main")              // runs on the main goroutine
    time.Sleep(100 * time.Millisecond)  // give the goroutine time to print
}

The time.Sleep is a hack to wait for the goroutine before main exits (exiting main kills all goroutines). Real code uses WaitGroup or channels.

sync.WaitGroup

package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup

    for i := 1; i <= 3; i++ {
        wg.Add(1)                            // register one goroutine
        go func(n int) {
            defer wg.Done()                   // mark it done when finished
            fmt.Println("worker", n)
        }(i)
    }

    wg.Wait()                                // block until counter reaches 0
    fmt.Println("all done")
}

Rules:

  • Call Add before starting the goroutine.
  • Done in a defer so it fires even on panic.
  • Don't pass a WaitGroup by value; all goroutines must share the same counter. Pass a *sync.WaitGroup or close over it with a closure.

sync.Mutex & RWMutex

When multiple goroutines touch the same data, you need a lock:

type SafeCounter struct {
    mu sync.Mutex
    n  int
}

func (c *SafeCounter) Inc() {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.n++
}

sync.RWMutex distinguishes readers from writers: any number of readers can hold the lock at once (RLock), while a writer (Lock) gets exclusive access. Use it when reads vastly outnumber writes.

Data races and the -race flag

A data race occurs when two goroutines access the same memory concurrently without synchronization and at least one of the accesses is a write. The result is undefined behavior.

Go ships with a world-class race detector:

go run -race .
go test -race ./...

The detector instruments your binary to catch races at runtime. Run your tests with -race in CI; it has caught countless real bugs.

Goroutine leaks

A goroutine "leaks" when it's launched but never returns, typically because it's blocked on a channel that will never send or receive. A blocked goroutine burns no CPU, but it pins its stack and everything it references, so leaks add up to real memory growth.

func leaky(work <-chan int) {
    for v := range work {
        // if the caller forgets to close(work), this goroutine lives forever
        fmt.Println(v)
    }
}

Prevention: use context.Context (Chapter 26) to propagate cancellation, and always have an unambiguous termination condition.

sync.Once

Runs a function exactly once, even if called from many goroutines:

var (
    cfg     *Config
    cfgOnce sync.Once
)

func GetConfig() *Config {   // can't be named Config: that would collide with the type
    cfgOnce.Do(func() {
        cfg = loadConfig()   // runs at most once
    })
    return cfg
}

Best practices

  • Prefer channels over locks when you can. "Don't communicate by sharing memory; share memory by communicating."
  • Always know who closes a channel. Usually the sender.
  • Every goroutine needs an exit path. Cancellation via context is the Go-native way.
  • Run with -race in tests. It's almost free for small tests.

Practice exercises

EXERCISE 1

Parallel URL fetcher

Given a slice of URLs, fetch them concurrently and print each URL with its HTTP status code. Use sync.WaitGroup. (We'll use net/http in Chapter 30: for now http.Get is enough.)

Solution
package main

import (
    "fmt"
    "net/http"
    "sync"
)

func main() {
    urls := []string{
        "https://go.dev",
        "https://example.com",
    }

    var wg sync.WaitGroup
    for _, u := range urls {
        wg.Add(1)
        go func(url string) {
            defer wg.Done()
            resp, err := http.Get(url)
            if err != nil {
                fmt.Println(url, "err:", err)
                return
            }
            defer resp.Body.Close()
            fmt.Printf("%-30s %d\n", url, resp.StatusCode)
        }(u)
    }
    wg.Wait()
}

Note that we pass u as an argument to the closure. In Go 1.22+ this isn't strictly needed (loop variables are per-iteration), but it makes intent explicit.

Concurrent Go, unlocked.