Questions:

  • Which numerical errors are unavoidable in a Python program?
  • How do I choose the optimum step size $h$ when using the finite difference method?
  • How can I measure the speed of my code?

Objectives:

  • Understand that there are unavoidable rounding errors when working with floats
  • Write code for testing if two floats are equivalent (to within machine accuracy)
  • Calculate the optimum step size $h$ for finite difference methods
  • Measure the length of time a Notebook cell takes to run using the %time magic.

Finite difference methods have two sources of error

The error in a method's solution is defined as the difference between the approximation and the exact analytical solution. The two sources of error in finite difference methods are round-off error, the loss of precision due to the computer rounding of decimal quantities, and truncation error (or discretisation error), the difference between the exact solution of the original differential equation and the exact solution of the finite difference equation, assuming perfect arithmetic (that is, assuming no round-off).

Computers have inherent limitations that lead to rounding errors

  • We have seen how computer programming can be used to model physical systems. However, computers have inherent limitations: they cannot store real numbers with an infinite number of decimal places.
  • In many cases this is not a problem, but it is something to be aware of. For example, take the following piece of code:
def add_numbers(a,b):
    return a+b

def test_add_numbers():
    assert add_numbers(0.1,0.2) == 0.3
  • add_numbers is a function for adding two Python objects a and b.
  • test_add_numbers is a function for testing whether the add_numbers function is working as expected (we will see more on testing later in the course). It will raise an AssertionError if $0.1 + 0.2$ does not equal 0.3.
add_numbers(1,2)
3

The add_numbers function works as expected if we pass it two integers. However, when we run the test function it raises an AssertionError:

test_add_numbers()
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-3-fee3d9bc4a88> in <module>
----> 1 test_add_numbers()

<ipython-input-1-61c3c4878185> in test_add_numbers()
      3 
      4 def test_add_numbers():
----> 5     assert add_numbers(0.1,0.2) == 0.3

AssertionError: 

The assertion fails because of rounding error: $0.1+0.2$ does not equal 0.3 exactly:

0.1+0.2
0.30000000000000004

This is because floating point numbers (floats) are represented on the computer to a finite precision. In Python the standard level of precision is about 16 significant digits.
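We can see this limited precision directly by printing a float to more decimal places than it can actually store:

# Beyond roughly 16 significant digits the printed digits are an
# artefact of the underlying binary representation of 1/3
print(f"{1/3:.20f}")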

Note: The largest value you can give a floating point variable is about $10^{308}$; the most negative is about $-10^{308}$. If you exceed these values (which is unlikely) then the computer will raise an OverflowError. In contrast, Python can represent integers to any precision - limited only by the memory of the machine.
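A minimal demonstration (the exact behaviour depends on the operation; some overflowing operations return inf instead of raising):

print(10.0 ** 300)   # within range: prints 1e+300
print(10 ** 400)     # integers have no such limit: a 401-digit number

try:
    10.0 ** 400      # the float result would exceed ~1e308
except OverflowError as error:
    print("OverflowError:", error)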

Do not test for the equality of two floats

As we have seen in the previous example, we should not test for the equality of two floats. Instead we should ask if they are equal up to a given precision:

def add_numbers(a, b):
    return a + b

# Two floats are considered equal if they differ by less than epsilon
epsilon = 1e-12

def test_add_numbers():
    assert abs(add_numbers(0.1, 0.2) - 0.3) < epsilon

test_add_numbers()
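Note that the standard library already provides this comparison: math.isclose tests whether two floats agree to within a relative (and optionally absolute) tolerance, so we do not have to choose epsilon by hand:

import math

# math.isclose(a, b) is True if a and b agree to within rel_tol
# (the default rel_tol is 1e-09)
assert math.isclose(0.1 + 0.2, 0.3)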

The finite difference discretisation error is associated with step size $h$

  • The finite difference method is only exact in the limit $h \to 0$. We cannot reach this limit on a computer, so the differences we calculate do not give exact derivatives.
  • However, one way of improving the finite-$h$ approximation is to decrease the step size in space (use a larger number of points on our real-space grid).
  • To demonstrate this, consider the Taylor expansion of $f(x)$ about $x$:
\begin{equation} f(x+h) = f(x) + hf'(x) +\frac{1}{2}h^2f''(x) + \ldots \end{equation}
  • Re-arranging gives an expression for the forward difference method:
\begin{equation} f'(x) = \frac{f(x+h) - f(x)}{h} - \frac{1}{2}hf''(x) + \ldots \end{equation}
  • When we calculate the forward difference we ignore all terms containing $f''(x)$ and higher derivatives. The size of the neglected terms gives the approximation error. We can see from the expression above that the leading error term is linear in $h$, so, as we would expect, the approximation generally improves as we decrease the step size $h$ (see the sketch after this list).
  • Note however that when the step size is decreased the program will run more slowly.
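As a minimal sketch of this behaviour (using $f(x) = \sin x$, whose exact derivative is $\cos x$), we can watch the forward difference error shrink roughly linearly with $h$:

import math

def forward_difference(f, x, h):
    # Forward difference approximation to the derivative f'(x)
    return (f(x + h) - f(x)) / h

x = 1.0
exact = math.cos(x)  # d/dx sin(x) = cos(x)

for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    approx = forward_difference(math.sin, x, h)
    print(f"h = {h:.0e}   error = {abs(approx - exact):.2e}")

Each factor-of-ten decrease in $h$ reduces the error by roughly a factor of ten, consistent with the leading $\frac{1}{2}hf''(x)$ term.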

It is possible to make the step size $h$ too small

  • We also need to think about the rounding errors associated with finite differences. Counter-intuitively, these errors can increase as we decrease the step size $h$.

A computer can typically store a number $f(x)$ to an accuracy of 16 significant figures, i.e. with a rounding error of up to $C|f(x)|$, where $C=10^{-16}$. In the worst case this makes the error $\epsilon$ on our forward difference derivative:

\begin{equation} \epsilon = \frac{2C|f(x)|}{h} + \frac{1}{2}h|f''(x)|. \end{equation}

We want to find the value of $h$ which minimises this error, so we differentiate with respect to $h$ and set the result equal to zero:

\begin{equation} -\frac{2C|f(x)|}{h^2} + \frac{1}{2}|f''(x)| = 0 \end{equation}

\begin{equation} h = \sqrt{4C\left\lvert\frac{f(x)}{f''(x)}\right\rvert} \end{equation}

If $f(x)$ and $f''(x)$ are order 1, then $h$ should be order $\sqrt{C}$, or $10^{-8}$.

Similar reasoning applied to the central difference formula suggests that the optimum step size for this method is $10^{-5}$.
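As a rough check of these estimates (the exact position of the minimum depends on $f$ and $x$), we can scan $h$ over many decades for both methods, again using $f(x) = \sin x$:

import math

def forward_error(h, x=1.0):
    # Error in the forward difference estimate of d/dx sin(x)
    approx = (math.sin(x + h) - math.sin(x)) / h
    return abs(approx - math.cos(x))

def central_error(h, x=1.0):
    # Error in the central difference estimate of d/dx sin(x)
    approx = (math.sin(x + h) - math.sin(x - h)) / (2 * h)
    return abs(approx - math.cos(x))

# The errors fall, bottom out, then rise again as rounding dominates:
# near h ~ 1e-8 for the forward method and h ~ 1e-5 for the central one
for exponent in range(1, 13):
    h = 10.0 ** -exponent
    print(f"h = 1e-{exponent:02d}   forward: {forward_error(h):.2e}"
          f"   central: {central_error(h):.2e}")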

Use the %time magic to measure the length of time a Jupyter Notebook cell takes to run

It is often possible to use a range of numerical methods to achieve the same level of accuracy. In this case we may want to consider code speed - which method will run the fastest? This is a particularly important question when writing code that is computationally intensive. To measure the length of time it takes for a Jupyter Notebook cell to run, we can use the %time magic:

def sum_integers(max_integer):
    # Sum the integers from 1 up to and including max_integer
    count = 0
    for i in range(1, max_integer + 1):
        count += i
    return count

%time total = sum_integers(1000000)
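For short-running code a single timing can be noisy; the related %timeit magic runs the statement repeatedly and reports an average, which gives a more reliable measurement:

%timeit sum_integers(1000000)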

Keypoints:

  • Finite difference methods have two sources of error
  • Computers have inherent limitations that lead to rounding errors
  • Do not test for the equality of two floats
  • The finite difference discretisation error is associated with step size $h$
  • It is possible to make the step size $h$ too small
  • Use the %time magic to measure the length of time a Jupyter Notebook cell takes to run