TL;DR

  • Some of the wisdom contained in Josh Bloch’s Effective Java book is relevant to Go.
  • panic and recover are best reserved for exceptional circumstances.
  • panic and recover are slow, incur heap allocations, and preclude inlining.
  • Internal handling of failure cases via panic and recover is tolerable and sometimes beneficial.

Abusing Java exceptions for control flow

Even though my Java days are long gone and Go has been my language of predilection for a while, I still occasionally revisit Effective Java, Joshua Bloch’s seminal and award-winning book, and I never fail to rediscover nuggets of wisdom in it. In item 69 (entitled Use exceptions only for exceptional conditions) of the book’s third edition, Bloch presents an example of abusing Java exceptions for control flow. I’m hesitant to quote the content of that section in full here for fear of a copyright strike from Bloch’s publishing company, but it—and, in fact, the whole book—is well worth a read.

Bloch opens with the following code snippet, which demonstrates a rather peculiar way of iterating over an array (named range) of objects of some Mountain class so as to invoke their climb method:

try {
  int i = 0;
  while (true)
    range[i++].climb();
} catch (ArrayIndexOutOfBoundsException e) {
}

Note that variable i eventually gets incremented up to the length of the array, at which point an attempt to access the array at i raises an ArrayIndexOutOfBoundsException, which gets caught and promptly ignored. Of course, a functionally equivalent but far clearer and more idiomatic approach consists in relying on a “for-each” loop, which itself amounts to a classic three-clause loop:

for (int i = 0; i < range.length; i++) {
  range[i].climb();
}

Bloch patiently proceeds to explain why some misguided practitioners may favour the exception-based approach over the more idiomatic one: not only do they perceive the termination test (i < range.length) as costly, but they also deem it superfluous. Why? Because they believe that the Java compiler introduces a bounds check for every array access (range[i]). If memory safety is guaranteed by those systematic bounds checks, they reason, why even bother checking whether the index variable goes out of bounds?

Bloch then debunks this theory via three counterarguments:

  1. Because exceptions are designed for exceptional circumstances, there is little incentive for JVM implementors to make them as fast as explicit tests.
  2. Placing code inside a try-catch block inhibits certain optimizations that JVM implementations might otherwise perform.
  3. The standard idiom for looping through an array doesn’t necessarily result in redundant checks. Many JVM implementations optimize them away.

He follows up with this empirical observation:

[…] the exception-based idiom is far slower than the standard one. On my machine, the exception-based idiom is about twice as slow as the standard one for arrays of one hundred elements.

How is this relevant to Go?

The designers of Go deliberately shied away from equipping the language with an exception system like Java’s:

We believe that coupling exceptions to a control structure, as in the try-catch-finally idiom, results in convoluted code. It also tends to encourage programmers to label too many ordinary errors, such as failing to open a file, as exceptional.

Go takes a different approach. For plain error handling, Go’s multi-value returns make it easy to report an error without overloading the return value. A canonical error type, coupled with Go’s other features, makes error handling pleasant but quite different from that in other languages.

Go also has a couple of built-in functions to signal and recover from truly exceptional conditions. The recovery mechanism is executed only as part of a function’s state being torn down after an error, which is sufficient to handle catastrophe but requires no extra control structures and, when used well, can result in clean error-handling code.

However, some newcomers to Go may, at least at first, struggle to adopt the language’s idiom of communicating anticipated failure cases as values rather than as exceptions; they may be tempted to abuse Go’s panic and recover built-in functions for communicating even benign failure cases.
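
For concreteness, here is a minimal sketch of that idiom (the Lookup function and its details are mine, purely for illustration): the anticipated failure case is reported as an error value, which the caller handles explicitly, with no panic or recover in sight.

package main

import (
  "fmt"
  "log"
)

// Lookup reports its anticipated failure case (a missing key) as an error
// value rather than signalling it with panic.
func Lookup(heights map[string]int, name string) (int, error) {
  h, ok := heights[name]
  if !ok {
    return 0, fmt.Errorf("no height recorded for %q", name)
  }
  return h, nil
}

func main() {
  heights := map[string]int{"Everest": 8849}
  h, err := Lookup(heights, "K2")
  if err != nil {
    log.Println(err) // handle the anticipated failure and move on
    return
  }
  fmt.Println(h)
}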

Go’s ecosystem (language, compiler, runtime, etc.) may be vastly different from Java’s, but transposing Bloch’s experiment from Java to Go is nonetheless an instructive and playful way to discuss the cost of panic and recover, and perhaps stifle newcomers’ urge to unduly rely on that mechanism in their programmes.

Abusing Go’s panic/recover for control flow

In the remainder of this post, I’ll assume Go 1.24 semantics and use the Go compiler (gc) of the same version.

Roughly translated to Go and molded into a self-contained package, Bloch’s code snippet becomes the following programme (available on GitHub):

package main

type Mountain struct {
  climbed bool
}

func (m *Mountain) Climb() {
  m.climbed = true
}

func main() {
  mountains := make([]Mountain, 8)
  ClimbAllPanicRecover(mountains)
}

func ClimbAllPanicRecover(mountains []Mountain) {
  defer func() {
    recover()
  }()
  for i := 0; ; i++ {
    mountains[i].Climb() // panics when i == len(mountains)
  }
}

func ClimbAll(mountains []Mountain) {
  for i := range mountains {
    mountains[i].Climb()
  }
}

(playground)

As its name suggests, function ClimbAllPanicRecover abuses panic and recover for iterating over the input slice, whereas function ClimbAll stands for the more idiomatic reference implementation.

Bloch never reveals what his Mountain class is made of or what its climb method does. To forestall any dead-code elimination by the compiler, I’ve opted to make my (*Mountain).Climb method mutate the climbed field of its receiver.

The overhead of panic and recover is non-negligible

Below are some benchmarks pitting ClimbAllPanicRecover against ClimbAll:

package main

import (
  "fmt"
  "testing"
)

var cases [][]Mountain

func init() {
  for _, size := range []int{0, 1, 1e1, 1e2, 1e3, 1e4, 1e5} {
    s := make([]Mountain, size)
    cases = append(cases, s)
  }
}

func BenchmarkClimbAll(b *testing.B) {
  benchmark(b, "idiomatic", ClimbAll)
  benchmark(b, "panic-recover", ClimbAllPanicRecover)
}

func benchmark(b *testing.B, impl string, climbAll func([]Mountain)) {
  for _, ns := range cases {
    f := func(b *testing.B) {
      for b.Loop() {
        climbAll(ns)
      }
    }
    desc := fmt.Sprintf("impl=%s/size=%d", impl, len(ns))
    b.Run(desc, f)
  }
}

(Incidentally, if you’re not yet familiar with the new (*testing.B).Loop method, do check out the Go 1.24 release notes.)

Let’s run those benchmarks on a relatively idle machine and feed the results to benchstat:

$ go version 
go version go1.24.0 darwin/amd64
$ go test -run '^$' -bench . -count 10 -benchmem > results.txt
$ benchstat -col '/impl@(idiomatic panic-recover)' results.txt
goos: darwin
goarch: amd64
pkg: github.com/jub0bs/panicabused
cpu: Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz
                       │  idiomatic  │              panic-recover              │
                       │   sec/op    │    sec/op      vs base                  │
ClimbAll/size=0-8        2.239n ± 8%   193.900n ± 1%  +8560.12% (p=0.000 n=10)
ClimbAll/size=1-8        2.638n ± 1%   196.400n ± 2%  +7346.45% (p=0.000 n=10)
ClimbAll/size=10-8       5.424n ± 1%   199.300n ± 2%  +3574.41% (p=0.000 n=10)
ClimbAll/size=100-8      44.69n ± 1%    238.65n ± 4%   +434.01% (p=0.000 n=10)
ClimbAll/size=1000-8     371.6n ± 0%     565.8n ± 1%    +52.27% (p=0.000 n=10)
ClimbAll/size=10000-8    3.646µ ± 1%     3.906µ ± 0%     +7.15% (p=0.000 n=10)
ClimbAll/size=100000-8   36.27µ ± 0%     36.54µ ± 1%     +0.73% (p=0.000 n=10)
geomean                  95.10n          759.9n        +699.03%

                       │  idiomatic  │        panic-recover         │
                       │    B/op     │    B/op     vs base          │
ClimbAll/size=0-8        0.00 ± 0%     24.00 ± 0%  ? (p=0.000 n=10)
ClimbAll/size=1-8        0.00 ± 0%     24.00 ± 0%  ? (p=0.000 n=10)
ClimbAll/size=10-8       0.00 ± 0%     24.00 ± 0%  ? (p=0.000 n=10)
ClimbAll/size=100-8      0.00 ± 0%     24.00 ± 0%  ? (p=0.000 n=10)
ClimbAll/size=1000-8     0.00 ± 0%     24.00 ± 0%  ? (p=0.000 n=10)
ClimbAll/size=10000-8    0.00 ± 0%     24.00 ± 0%  ? (p=0.000 n=10)
ClimbAll/size=100000-8   0.00 ± 0%     24.00 ± 0%  ? (p=0.000 n=10)
geomean                            ¹   24.00       ?
¹ summaries must be >0 to compute geomean

                       │  idiomatic   │        panic-recover         │
                       │  allocs/op   │ allocs/op   vs base          │
ClimbAll/size=0-8        0.000 ± 0%     1.000 ± 0%  ? (p=0.000 n=10)
ClimbAll/size=1-8        0.000 ± 0%     1.000 ± 0%  ? (p=0.000 n=10)
ClimbAll/size=10-8       0.000 ± 0%     1.000 ± 0%  ? (p=0.000 n=10)
ClimbAll/size=100-8      0.000 ± 0%     1.000 ± 0%  ? (p=0.000 n=10)
ClimbAll/size=1000-8     0.000 ± 0%     1.000 ± 0%  ? (p=0.000 n=10)
ClimbAll/size=10000-8    0.000 ± 0%     1.000 ± 0%  ? (p=0.000 n=10)
ClimbAll/size=100000-8   0.000 ± 0%     1.000 ± 0%  ? (p=0.000 n=10)
geomean                             ¹   1.000       ?
¹ summaries must be >0 to compute geomean

The results are plain to see: ClimbAllPanicRecover is lumberingly slow in comparison to ClimbAll in the case of small enough input slices, for which the cost of panic and recover appears to dominate execution time. This observation echoes Bloch’s first counterargument: panic and recover, because their use is intended for truly exceptional circumstances, have no reason to be particularly fast.

Moreover, each call to ClimbAllPanicRecover incurs an allocation of 24 bytes (on my 64-bit system, at least); although details are scarce, this heap allocation can be attributed to a runtime.boundsError with which the Go runtime eventually panics when the value of variable i reaches len(mountains). In comparison, ClimbAll never allocates and, therefore, doesn’t exert any unnecessary pressure on the garbage collector.
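
If you're curious about what exactly gets allocated, a quick (admittedly unscientific) way to peek at it is to capture the value returned by recover and print its dynamic type; the standalone sketch below, which indexes a plain int slice rather than a slice of Mountain values, is merely illustrative:

package main

import "fmt"

func main() {
  s := make([]int, 8)
  defer func() {
    if v := recover(); v != nil {
      // The recovered value is of the unexported type runtime.boundsError;
      // this should print something like
      // "runtime.boundsError: index out of range [8] with length 8".
      fmt.Printf("%T: %v\n", v, v)
    }
  }()
  for i := 0; ; i++ {
    s[i] = 1 // panics when i == len(s)
  }
}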

The performance gap between the two implementations only closes as the length of the input slice increases and the cost of panic and recover is drowned out by the rest of the workload.

Recover precludes inlining

At this stage, astute readers may suggest that ClimbAllPanicRecover’s disadvantage can be explained, at least in part, by inlining. Inlining is a compiler strategy that can be roughly described as “replacing a function call with the body of the called function”. In many cases, inlining speeds up execution. However, functions that contain defer statements cannot be inlined, and neither can functions that contain calls to recover. Therefore, contrary to ClimbAll, neither ClimbAllPanicRecover nor the anonymous function whose call it defers can be inlined. Close inspection of the optimisation decisions made by the compiler while building our programme confirms as much:

$ go build -gcflags '-m=2' .
# github.com/jub0bs/panicabused
./main.go:7:6: can inline (*Mountain).Climb with cost 4 as: method(*Mountain) func() { m.climbed = true }
./main.go:17:8: cannot inline ClimbAllPanicRecover.func1: call to recover
./main.go:16:6: cannot inline ClimbAllPanicRecover: unhandled op DEFER
./main.go:11:6: can inline main with cost 66 as: func() { mountains := make([]Mountain, 8); ClimbAllPanicRecover(mountains) }
./main.go:25:6: can inline ClimbAll with cost 14 as: func([]Mountain) { for loop }
-snip-

This observation echoes Bloch’s second counterargument: relying on panic and recover inhibits certain optimisations that the Go compiler might otherwise perform.

Is the lack of inlining to blame for ClimbAllPanicRecover’s lacklustre performance, though? Evidently not: I selectively disabled inlining for ClimbAll by slapping a go:noinline directive on it and re-ran the benchmarks, only to find that ClimbAll still vastly outperformed ClimbAllPanicRecover for all but large input slices. The inability to inline a function can nonetheless noticeably harm performance in more realistic scenarios.
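
For reference, the directive in question is a magic comment that must immediately precede the function declaration; the variant of ClimbAll I benchmarked looked roughly like this:

// Adding the go:noinline directive below prevents the compiler from
// inlining ClimbAll, so that both implementations can be compared without
// the benefit of inlining.
//
//go:noinline
func ClimbAll(mountains []Mountain) {
  for i := range mountains {
    mountains[i].Climb()
  }
}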

No bounds-check elimination for the unidiomatic implementation

Like Java, Go is said to be memory-safe; in particular, per the language specification, implementations must trigger a run-time panic if a slice-indexing operation is ever out of bounds. Such bounds checks are relatively cheap, but they are not free. When the compiler can prove, perhaps via some heuristics, that some slice access cannot be out of bounds, it may omit, for better performance, the corresponding bounds check from the resulting executable. Besides, advanced programming techniques exist for gently nudging the compiler towards more bounds-check elimination.
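
As a taste of such techniques (this sketch is unrelated to our little programme), a single explicit access early in a function can prove to the compiler that later accesses are in bounds, thereby eliminating their checks:

// Sum8 sums the first eight elements of s, which must contain at least
// eight elements.
func Sum8(s []int) int {
  _ = s[7] // this early bounds check lets the compiler prove that the
           // eight accesses below are in bounds and elide their checks
  return s[0] + s[1] + s[2] + s[3] + s[4] + s[5] + s[6] + s[7]
}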

In the specific case of our little programme, the compiler can eliminate the bounds checks in ClimbAll’s loop, but not in ClimbAllPanicRecover’s:

$ go build -gcflags="-d=ssa/check_bce/debug=1"
# github.com/jub0bs/panicabused
./main.go:17:12: Found IsInBounds

This observation echoes Bloch’s third counterargument: the idiomatic approach is more conducive to bounds-check elimination.

What about internal handling of failure cases?

At this stage, my facetious example may have convinced you that abusing panic and recover for control flow is not only unidiomatic but also detrimental to performance. More seriously, though, you may come across open-source projects that rely on panic and recover for handling internal failure cases. In fact, look no further than the standard library: this style is on full display in packages such as text/template, encoding/json, encoding/gob, and regexp/syntax.

Expediency seems to be the primary motivation. Indeed, when the call stack is deep (perhaps on account of numerous recursive calls), relying on panic and recover obviates the need for much boilerplate; the error-handling logic can be centralised further up the stack, at the point of panic recovery, and the happy path can remain in focus.
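
To give a flavour of this style, here is a deliberately simplified sketch of my own (not lifted from any of the packages mentioned above): internal code aborts by panicking with an unexported sentinel type, and the sole exported entry point recovers, converts the sentinel back into an ordinary error, and re-panics on anything else so as not to mask genuine bugs.

package parser

import "fmt"

// parseError is an unexported sentinel type that internal code panics
// with upon encountering malformed input.
type parseError struct{ err error }

// abort lets deeply nested helpers bail out without threading an error
// value up through every call.
func abort(err error) {
  panic(parseError{err})
}

// Parse is the package's only exported entry point; it converts internal
// panics carrying a parseError back into ordinary error values.
func Parse(input string) (result string, err error) {
  defer func() {
    if v := recover(); v != nil {
      pe, ok := v.(parseError)
      if !ok {
        panic(v) // not one of ours: a genuine bug; propagate it
      }
      err = pe.err
    }
  }()
  return parse(input), nil
}

// parse and its (possibly recursive) helpers simply call abort on failure.
func parse(input string) string {
  if len(input) == 0 {
    abort(fmt.Errorf("empty input"))
  }
  // ... actual parsing work would go here ...
  return input
}

Note that, because the sentinel type is unexported and any other panic value is re-raised, no panic of this kind can leak through the package's API.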


Panics should not be recovered too indiscriminately, though; a bug that triggers a panic will remain masked if a call to recover inadvertently swallows that panic:

func ClimbAllPanic(mountains []Mountain) {
  defer func() {
    recover()
  }()
  for i := 0; ; i++ {
    mountains[i-1].Climb() // off-by-one error
  }
}

(playground)

See issue 23012 for an example of such a problem in package encoding/json.


But another, more surprising motivation for such a style is… performance! For instance, Max Hoffman and Raphael Poss separately report impressive speedups (on the happy path of their programmes, at least) thanks to this style. Explanations range from a decreased need for intermediate function results to code that is comparatively friendlier to the CPU’s branch predictor. So it seems that panic and recover can be beneficial to performance in at least some situations.

Should you try to emulate this style? Up to you. If you go down that road, though, do justify your design decision with a clarifying comment and perhaps some benchmark results; if you cannot provide such justification, you’re perhaps being too clever. Also, make sure to keep this design decision an implementation detail of your package; don’t let panics that should remain internal leak through your package’s API, as your clients would then regrettably be forced to deal with them.

Acknowledgements

Thanks to the members of the Gophers Slack workspace who lurk in the #performance channel for an enlightening discussion, which fed into this post.