Bisection Method

by Martin D. Maas, Ph.D

The bisection method is the simplest root-finding algorithm. It starts from two points at which a given continuous function takes values of opposite signs, and iteratively narrows the interval between them to approximate a root.

According to Bolzano's theorem (see Wikipedia), a continuous function has a root in an interval if it takes values of opposite signs at the endpoints of that interval.

Let’s assume that $f(a) > 0$ and $f(b) < 0$. Then there must be a root in $(a,b)$.

Now let $c = \frac{a+b}{2}$. If $f(c) > 0$, then $c$ is an improvement over the left-hand bound, and if $f(c) < 0$, then $c$ is an improvement over the right-hand bound.

This process is repeated until a certain error tolerance is met.
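
Each step halves the bracketing interval, so the error decreases at a guaranteed rate: after $n$ bisections the interval has length $\frac{b-a}{2^n}$, and its midpoint lies within $\frac{b-a}{2^{n+1}}$ of a root. In other words, bisection gains one binary digit of accuracy per iteration.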

function bisection(f, a, b, tol)

    # The endpoints must bracket a root: f(a) and f(b) need opposite signs
    if sign(f(a)) == sign(f(b))
        error("function has the same sign at the given endpoints")
    end

    mid = (a + b) / 2

    # Halve the bracketing interval until the residual is within tolerance
    while abs(f(mid)) > tol

        # Replace the endpoint whose sign matches f(mid), keeping the bracket
        sign(f(mid)) == sign(f(a)) ? (a = mid) : (b = mid)

        mid = (a + b) / 2

    end

    return mid

end

Let’s test our code on a simple example, like solving $x^2 - 2 = 0$.

x = bisection(x -> x^2 - 2, 1, 2, 1e-5)   # approximate sqrt(2) on [1, 2]
err = abs(x - sqrt(2))
print(err)
1.5255175298545254e-6

Final remarks

The code above is a straightforward implementation of the bisection algorithm. For improved performance, however, there are a few things we could do.

For example, we notice that the code evaluates the function $f$ more than once at the same point mid. Precomputing the value once, with something like fmid = f(mid), and then reusing fmid avoids these redundant evaluations; this kind of caching is sometimes called memoization.
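
As a minimal sketch of that idea (the name bisection_cached and the bookkeeping details are just one possible choice, not a canonical implementation):

function bisection_cached(f, a, b, tol)

    fa = f(a)
    if sign(fa) == sign(f(b))
        error("function has the same sign at the given endpoints")
    end

    mid = (a + b) / 2
    fmid = f(mid)   # evaluate f once per iteration and reuse the value

    while abs(fmid) > tol
        if sign(fmid) == sign(fa)
            a, fa = mid, fmid   # midpoint replaces the left endpoint
        else
            b = mid             # midpoint replaces the right endpoint
        end
        mid = (a + b) / 2
        fmid = f(mid)
    end

    return mid

end

Note that f(a) is also cached in fa and updated only when the left endpoint moves, so each iteration costs exactly one function evaluation.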

The bisection algorithm, in spite of being the simplest root-finding algorithm, is very robust: convergence is guaranteed whenever the function is continuous and the starting points bracket a root. For this reason, it is used as a building block of more advanced hybrid algorithms, such as Brent's method, which also converge to a root much faster.
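
To illustrate the idea with a toy sketch (the name hybrid_bisection is hypothetical, and this is not Brent's actual method): a hybrid scheme tries a fast step, here a secant update, and falls back to plain bisection whenever that step leaves the bracketing interval.

function hybrid_bisection(f, a, b, tol; maxiter = 100)

    fa, fb = f(a), f(b)
    if sign(fa) == sign(fb)
        error("function has the same sign at the given endpoints")
    end

    for _ in 1:maxiter
        # Try a secant step; fall back to the midpoint if it leaves (a, b)
        c = b - fb * (b - a) / (fb - fa)
        if !(min(a, b) < c < max(a, b))
            c = (a + b) / 2
        end

        fc = f(c)
        abs(fc) <= tol && return c

        # Keep the half-interval that still brackets the root
        if sign(fc) == sign(fa)
            a, fa = c, fc
        else
            b, fb = c, fc
        end
    end

    return (a + b) / 2

end

The bisection fallback preserves the guaranteed bracket, while the secant step provides the fast convergence near the root.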