Sunday, 27 January 2013

Computing Fibonacci

The Fibonacci sequence is often used to introduce the concept of recursion to new programmers. It has also received a lot of interest because of its relationship to φ (phi), the mathematical constant often observed in nature, for example as a ratio describing the arrangement of pine cones, the spiral of snails' shells and other natural phenomena. Here, I will share an experiment in which I implemented different algorithms computing the Fibonacci sequence in Python and compared them.

The naive recursion

The mathematical definition of Fibonacci is the following:

\[ f(n)=\begin{cases}
0 & \text{if } n=0\\
1 & \text{if } n=1\\
f(n-1)+f(n-2) & \text{if } n>1
\end{cases} \]

The first terms of the sequence are: \(0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ...\)

To compute this beautiful sequence, a programmer will be tempted to write a naive implementation that is very similar to the mathematical definition of the sequence. I wrote a sample implementation of this algorithm in Python:

def fibo_rec(n):
    """Naive implementation using the mathematical definition of
       Fibonacci numbers. This is by far the slowest method.

    """
    if n <= 1:
        return n
    else:
        return fibo_rec(n - 1) + fibo_rec(n - 2)
As you can see, this implementation has the advantage of being really close to its mathematical definition, and the code is an elegant recursion.

EDIT. Some people suggested an implementation similar to the one in the Python tutorial (except for the fact that the one in the tutorial is iterative). According to a comment on Reddit, the "more Pythonic" recursive implementation (see comments) might even be twice as fast. I chose the implementation presented here because it felt more readable to non-Pythonistas. It is also interesting to note that, in theory, the difference in speed is asymptotically negligible, since \( O(2n) = O(n) \).

Unfortunately, the naive recursion is pretty much the worst implementation one could write. Some even argue that it shouldn't be the de facto standard for teaching recursion in programming or computer science classes, as is currently the case [1].
The following figure represents the recursive calls to fibo_rec when fibo_rec(5) is called. It is easy to notice the problem with this solution: the same Fibonacci numbers are computed again and again.
Figure 1. The recursion tree with fibo_rec(5) as the initial call.
In fact, the number of recursive calls for this example is described in Table I.

Table I. Number of recursive calls to fibo_rec(n) during the computation of fibo_rec(5).

    n    Number of calls
    0    3*
    1    5
    2    3
    3    2
    4    1
    5    1

* Note that if f(1) called f(0), as is the case in a similar recursive implementation of this algorithm, we would count a total of 5 + 3 = 8 calls for f(0).

As the careful reader will notice, the number of recursive calls for the different values of n itself follows a Fibonacci sequence. In this case, this results in 9 redundant calls (15 calls in total for only 6 distinct values), but that number quickly grows as we compute larger numbers. Nobody likes redundant calls, so we will look into a better solution.

EDIT. Some people were curious about a recursive implementation that remembers the previously generated numbers to avoid call redundancy. I quickly implemented an algorithm exploiting this idea (note that it carries the two previous numbers as accumulators, i.e. it is tail-recursive, rather than a true memoization):
def fibo_rec_mem(n, i=0, j=1):
    """Recursive version using tail-recursion: i and j accumulate
       the two previously generated Fibonacci numbers.

    """
    if n == 0:
        return i
    else:
        return fibo_rec_mem(n - 1, j, i + j)
This algorithm's performance is probably very similar to the iterative version that will be discussed shortly. The only problem is that it reaches the maximum recursion depth at n = 994, which makes it impractical for generating large Fibonacci numbers. Also note that the Bitbucket repo has been updated with this version.
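For comparison, a genuinely memoized recursive version would cache each computed number instead of carrying accumulators. Here is a minimal sketch (the function name and the dictionary cache are my own choices, not code from the repo); it keeps the expressiveness of the naive recursion while computing each number only once, although it still hits the recursion limit for large n:
```python
def fibo_rec_cached(n, cache={0: 0, 1: 1}):
    """Recursive version with memoization: each Fibonacci number is
       computed at most once and stored in the cache dictionary.
       (The mutable default argument is intentional here: the cache
       persists, and is shared, across calls.)

    """
    if n not in cache:
        cache[n] = fibo_rec_cached(n - 1) + fibo_rec_cached(n - 2)
    return cache[n]
```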

The intuitive iteration

To improve on the previous algorithm's performance, we will keep track of the two previously generated Fibonacci numbers and compute their sum at every iteration, until we end up with the desired number. The algorithm is the following:
def fibo_iter(n):
    i = 1
    j = 0
    for k in xrange(n):
        j = i + j
        i = j - i
    return j
I chose this compactly written implementation even though it can be harder to understand at first glance. The first statement of the loop computes the next Fibonacci number (j) and the second statement recovers the previous number (i). After n iterations, we return j, which is the desired number. This method runs in O(n) time, but no formal proof will be attempted here. We will now show another interesting property that allows O(log n) time complexity.

EDIT. Some comments mentioned the dynamic programming solution. This solution, from what I read about it, fills an array (e.g. a Python list) with the Fibonacci numbers iteratively, using the previously generated values to compute every subsequent number. It is very similar to the one presented here, but instead of using O(n) space to store the values in an array, I store only what is needed for the computations (i and j), which is O(1) space. The dynamic programming version is probably more useful if you need to keep all of the numbers for later use. Jeff Erickson's interesting discussion of the subject is available online [6].
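That array-filling idea can be sketched in a few lines (my illustration of the commenters' suggestion, not code from the post); it trades the O(1) space of the version above for a list holding every number up to index n:
```python
def fibo_dp(n):
    """Bottom-up dynamic programming: fill a list with every
       Fibonacci number up to index n, then return the last one.
       Uses O(n) space, unlike the two-variable iterative version.

    """
    fibs = [0, 1]
    for k in range(2, n + 1):
        fibs.append(fibs[k - 1] + fibs[k - 2])
    return fibs[n]
```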

Fun with matrices

The interesting mathematical property we are all so excited about is the following:
\[ \left(\begin{array}{cc}
1 & 1\\
1 & 0
\end{array}\right)^{n}=\left(\begin{array}{cc}
f_{n+1} & f_{n}\\
f_{n} & f_{n-1}
\end{array}\right) \]
The proof of this identity probably requires mathematical knowledge that is beyond my current capacity, but you can empirically check it for yourself using WolframAlpha, or see the fancy maths in Ly et al. [2].

EDIT. Some people mentioned an easy proof by induction of the aforementioned property. I attempted such a proof:
The basis of induction, for n = 1 is the following:
\[ \left(\begin{array}{cc}
1 & 1\\
1 & 0
\end{array}\right)^{1}=\left(\begin{array}{cc}
f_{1+1} & f_{1}\\
f_{1} & f_{1-1}
\end{array}\right)=\left(\begin{array}{cc}
f_{2} & f_{1}\\
f_{1} & f_{0}
\end{array}\right)
\]
which holds, since \( f_2 = 1 \), \( f_1 = 1 \) and \( f_0 = 0 \).
Now assume that the identity holds for some \( n \geq 1 \), and prove it for n + 1:
\[ \left(\begin{array}{cc}
1 & 1\\
1 & 0
\end{array}\right)^{n+1}=\left(\begin{array}{cc}
1 & 1\\
1 & 0
\end{array}\right)^{n}\left(\begin{array}{cc}
1 & 1\\
1 & 0
\end{array}\right)
=\left(\begin{array}{cc}
f_{n+1} & f_{n}\\
f_{n} & f_{n-1}
\end{array}\right)\left(\begin{array}{cc}
1 & 1\\
1 & 0
\end{array}\right)
\]
by the induction hypothesis
\[ =\left(\begin{array}{cc}
1\cdot f_{n+1}+1\cdot f_{n} & 1\cdot f_{n}+1\cdot f_{n-1}\\
1\cdot f_{n+1}+0\cdot f_{n} & 1\cdot f_{n}+0\cdot f_{n-1}
\end{array}\right)=\left(\begin{array}{cc}
f_{n}+f_{n+1} & f_{n-1}+f_{n}\\
f_{n+1} & f_{n}
\end{array}\right)
\]
by matrix multiplication
\[ =\left(\begin{array}{cc}
f_{n+2} & f_{n+1}\\
f_{n+1} & f_{n}
\end{array}\right)
\] by the definition of the Fibonacci sequence.
Since the final matrix corresponds to what would be expected for n + 1, our induction hypothesis holds and the proof is complete.
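The identity can also be checked numerically in a few lines of Python (a quick sketch; the helper names are mine) by raising the matrix to a small power through repeated multiplication:
```python
def mat_mult(a, b):
    """Product of two 2x2 matrices given as nested lists."""
    return [[a[0][0] * b[0][0] + a[0][1] * b[1][0],
             a[0][0] * b[0][1] + a[0][1] * b[1][1]],
            [a[1][0] * b[0][0] + a[1][1] * b[1][0],
             a[1][0] * b[0][1] + a[1][1] * b[1][1]]]

def mat_pow(m, n):
    """Naive n-fold product; enough for a sanity check."""
    result = [[1, 0], [0, 1]]  # 2x2 identity matrix
    for _ in range(n):
        result = mat_mult(result, m)
    return result

# [[1, 1], [1, 0]]^6 should give [[f7, f6], [f6, f5]] = [[13, 8], [8, 5]]
print(mat_pow([[1, 1], [1, 0]], 6))
```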

For us, this means that a better algorithm can be implemented using this property:
def fibo_log(n):
    """This method uses interesting algorithmic strategies
       and is supposed to be O(log n). 

    """
    i = 1
    j = 0
    k = 0
    h = 1
    while n > 0:
        if n % 2 != 0:
            t = j * h
            j = i * h + j * k + t
            i = i * k + t
        t = h ** 2
        h = 2 * k * h + t
        k = k ** 2 + t
        n //= 2
    return j
I won't go into the details of this method, as it is visibly more complex than the previous solutions, but I will compare the performance of my different implementations empirically.
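As a quick sanity check (not part of the original benchmark), the matrix-based routine can be compared against the first few known Fibonacci numbers:
```python
def fibo_log(n):
    # Same routine as in the article, repeated here so the
    # snippet is self-contained.
    i, j, k, h = 1, 0, 0, 1
    while n > 0:
        if n % 2 != 0:
            t = j * h
            j = i * h + j * k + t
            i = i * k + t
        t = h ** 2
        h = 2 * k * h + t
        k = k ** 2 + t
        n //= 2
    return j

print([fibo_log(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```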

Empirical comparison of those beautiful algorithms

In order to compare the algorithms, I wrote a Python function that uses the cProfile module (available in the standard library). It takes a function as its first argument and forwards the remaining positional and named arguments to that function.
import cProfile

def time_call(f, *args, **kwargs):
    """Function that measures execution time of a given function f
       with args and kwargs as arguments.

    """

    code = "{0}(".format(f.__name__)
    if args:
        for i in xrange(len(args)):
            if i != 0:
                code += ", "
            if type(args[i]) is str:
                code += "\"{0}\"".format(args[i])
            else:
                code += "{0}".format(str(args[i]))

    if kwargs:
        base = ", " if args else ""
        i = 0
        for k, v in kwargs.iteritems():
            if i != 0:
                base = ", "
            if type(v) is str:
                code += "{0}{1}=\"{2}\"".format(base, k, v)
            else:
                code += "{0}{1}={2}".format(base, k, str(v))
            i += 1
            
    code += ")"
    cProfile.run(code)
This can be used to easily (but probably not optimally) compare different implementations. In this case, it allowed me to measure the execution time, denoted T(n), for different Fibonacci numbers fn. The result is shown in Figure 2.
Figure 2. Graph comparing the execution time T(n) in seconds of different implementations of algorithms that compute Fibonacci numbers at different indexes n (notice the log scale).

The observation of Figure 2 shows that the first algorithm described, the naive recursion, is by far the slowest. It becomes unusable (in reasonable time) around n = 30, where T(n) = 15.255 s. The second slowest algorithm is the iterative version, which is still a better choice than the first implementation: it takes around 37 s to compute the 786 432th Fibonacci number. Finally, the best implementation is the one based on the matrix identity; it took 32 s to compute the 9 000 000th Fibonacci number. Note that the tests were run on a virtual machine (using VirtualBox with 2048 MB of RAM and 4 accessible CPU cores).
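For quick one-off measurements of a single implementation, the standard timeit module is a lighter-weight alternative to the cProfile wrapper above. Here is a sketch reusing the iterative implementation (the timing it prints is machine-dependent, of course, and is not one of the figures quoted above):
```python
import timeit

def fibo_iter(n):
    # Iterative implementation from earlier in the article,
    # written with tuple assignment.
    i, j = 1, 0
    for _ in range(n):
        i, j = j, i + j
    return j

# Time 50 repetitions of a mid-sized computation.
elapsed = timeit.timeit(lambda: fibo_iter(2000), number=50)
print("50 calls of fibo_iter(2000): %.4f s" % elapsed)
```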

Finally, I just wanted to briefly mention another method that is really fast, but was too complicated to implement correctly for this brief article. The Fibonacci recurrence has a closed-form solution known as Binet's formula: \[f_n=\frac{1}{\sqrt{5}}(\varphi^n-\varphi'^n)\] where \( \varphi=\frac{1+\sqrt{5}}{2},\ \varphi'=\frac{-1}{\varphi} \).
A variation of this formula can be used to compute Fibonacci numbers:
SQRT5 = 5 ** 0.5
PHI = (1 + SQRT5) * 0.5
def fibo_lin(n):
    """This method uses Binet's formula, which is mathematically
       correct and very fast, but is not easily implemented
       because of overflow errors at the PHI ** n step.

       Also, floating number arithmetic seems to introduce
       errors when n > 70.
       (http://bosker.wordpress.com/2011/07/27/computing-fibonacci-numbers-using-binet%E2%80%99s-formula/)
       
    """
    if n == 0: return n
    return round(PHI ** n / SQRT5)
This is actually very, very fast. The only problem comes from the handling of large floating point numbers, which makes it hard to compute numbers for n ≥ 70 without rounding errors. A solution to this problem exists, as described in Robin Houston's blog [3]; it is probably the best way to compute Fibonacci numbers.
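The gist of that solution is to replace floating point numbers with exact integer arithmetic: a power of φ is represented as a pair of integers (a, b) standing for (a + b√5)/2, so no precision is ever lost. Here is a minimal sketch of the idea (my reconstruction, not Houston's exact code):
```python
def fibo_exact(n):
    """Compute f(n) exactly by raising phi to the n-th power using
       pairs of integers: (a, b) represents the number (a + b*sqrt(5))/2.
       Since phi**n = (L(n) + F(n)*sqrt(5))/2, where L(n) are the Lucas
       numbers, the b component of the result is the answer.

    """
    def mul(x, y):
        # ((a + b*r)/2) * ((c + d*r)/2) with r = sqrt(5); both
        # divisions by 2 are exact because a and b (and c and d)
        # always have the same parity for powers of phi.
        a, b = x
        c, d = y
        return ((a * c + 5 * b * d) // 2, (a * d + b * c) // 2)

    result = (2, 0)  # the pair representing 1
    base = (1, 1)    # the pair representing phi
    while n > 0:     # exponentiation by squaring: O(log n) multiplications
        if n & 1:
            result = mul(result, base)
        base = mul(base, base)
        n >>= 1
    return result[1]
```
Unlike the float-based fibo_lin, this stays exact for arbitrarily large n, at the cost of big-integer multiplications.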

To conclude, I wrote this article to give a real-life perspective on different approaches to the computation of the Fibonacci sequence. If you're interested in reading more about Fibonacci, you can look at the Cited Works section or these excellent Wikipedia articles on the subject [4, 5]. I hope it helped some of you, and I'm open to your feedback or suggestions. Also note that all the code described here is available on my Bitbucket page.

Cited Works


19 comments:

  1. What about the dynamic programming solution as an iteration on the recursive one?

    Replies
    1. Yeah, the dynamic programming solution is missing. Also, I wonder what the speed of a simple while loop would be where the numbers are added together every iteration.

  2. Your Python code could look better if you played with the variables in a more Pythonic way:

    def fibo_iter(n):
        i, j = 1, 0
        for k in xrange(n):
            i, j = j, i + j
        return j

    I haven't tried this but I expect it to work (and yes, it is always a bit surprising the first time).

    Replies
    1. If it's that surprising then it's neither better looking nor more Pythonic.

    2. it is surprising because it does what one wants to do which might not be the case in other programming languages.

      If you want the new value of i to be the old value of j and the new value of j to be the sum of old i and old j, I do find "i,j = j, i+j" pretty good looking.

      Also, this solution is twice as fast as the one you used. Because you might be interested, here's a performance test and the result.

      #!/usr/bin/python
      from profilehooks import profile, coverage, timecall

      # http://atgcio.blogspot.com.au/2013/01/computing-fibonacci.html

      @timecall
      def fibo_iter_sum(n):
          i = 1
          j = 0
          for k in xrange(n):
              j = i + j
              i = j - i
          return j

      @timecall
      def fibo_iter_swap(n):
          i, j = 1, 0
          for k in xrange(n):
              i, j = j, i + j
          return j

      @timecall
      def fibo_iter_tmp(n):
          i = 1
          j = 0
          for k in xrange(n):
              tmp = i
              i = j
              j += tmp
          return j

      n = 1000000
      sum = fibo_iter_sum(n)
      swap = fibo_iter_swap(n)
      tmp = fibo_iter_tmp(n)
      assert(sum == tmp and sum == swap)

      # OUTPUT:
      # fibo_iter_sum (./fibo.py:6):
      # 110.036 seconds
      #
      # fibo_iter_swap (./fibo.py:15):
      # 58.593 seconds
      #
      # fibo_iter_tmp (./fibo.py:22):
      # 58.742 seconds

    3. It is true that this implementation is twice as fast. In my defense, as someone pointed it out on Reddit (http://www.reddit.com/r/programming/comments/17od6o/real_life_comparison_of_algorithms_to_compute/), this algorithm has linear complexity so multiplying the execution time by a constant (2) does not make a significant difference when considering the asymptotic performance. O(n) = O(2n). I do understand the point you are making though, as in a real life implementation, twice as fast is not negligible.

  3. Nice write up. The recursive function call is how we were asked to calculate the numbers in a recent Advanced Programming (class I am in) assignment.

  4. What about a recursive solution with memoization? That's always seemed like a good way to solve the problem to me. You get the expressiveness of being close to the mathematics, with the benefit of only calculating each number once.

  5. Swizec: Asymptotically it should be the same as the iterative solution, in practice it'll be several times slower. A reasonable additive cost for allocating memory, a linear cost to initialize it if you aren't careful, and plenty of branching, function overhead and cache misses while it runs.

  6. "The proof of this identity probably requires mathematical knowledge that is beyond my current capacity,"

    No it does not,

    Only consider matrix multiplication. the matrix [[1,1][1,0]] becomes operation of adding below row to above, and shifting above row to below, just as that. It is trivial, from the definition of fibonacci numbers. Dont go so harsh on yourself. keep it up!

  7. There are several Python versions on Rosetta code here: http://rosettacode.org/wiki/Fibonacci_sequence#Python .

    Versions in other languages are also on the page.

  8. This comment has been removed by the author.

  9. This comment has been removed by the author.

  10. This comment has been removed by the author.

  11. What are the incidents of "Math Processing Error" in the original article and in the comments?

    Replies
    1. The math processing error is probably due to the browser you are using. Because I use Mathjax to render the maths on the page, it might be incompatible to mobile browsers (or there might be a weird bug with Mathjax for any other reason).

  12. Holy shit! I found your blog by searching fibonacci on Twitter to check out mentions. I wrote a VERY similar article yesterday. I actually had the same introduction at first! (And I thought the matrix exponentiation stuff was not famous T_T) Sorry for that.
    http://fulmicoton.com/posts/fibonacci/

    Replies
    1. I just read your post and thought it was really well written. Good job! Also I'm a bit jealous that you got more karma than me on Reddit ;)
      Keep up the good work.

  13. This comment has been removed by the author.
