Questions:

  • Which numerical errors are unavoidable in a Python program?
  • How do I choose the optimum step size $h$ when using the finite difference method?
  • What do the terms first-order accurate and second-order accurate mean?
  • How can I measure the speed of my code?

Objectives:

  • Understand that there are unavoidable rounding errors when working with floats
  • Write code for testing if two floats are equivalent (to within machine accuracy)
  • Calculate the optimum step size $h$ for finite difference methods
  • Measure the length of time a Notebook cell takes to run using the %time magic.

Computers have inherent limitations that lead to rounding errors

  • We have seen how computer programming can be used to model physical systems. However, computers have inherent limitations: they cannot store real numbers with an infinite number of decimal places.
  • In many cases this is not a problem, but it is something to be aware of. For example, take the following piece of code:
def add_numbers(a,b):
    return a+b

def test_add_numbers():
    assert add_numbers(0.1,0.2) == 0.3
  • add_numbers is a function for adding two Python objects a and b.
  • test_add_numbers is a function for testing whether the add_numbers function is working as expected (we will see more on testing later in the course). This function will raise an error if $0.1 + 0.2$ does not equal 0.3.
add_numbers(1,2)
3

The add_numbers function works as expected if we pass it two integers. However, when we run the test function an assertion error is raised:

test_add_numbers()
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-5-fee3d9bc4a88> in <module>
----> 1 test_add_numbers()

<ipython-input-3-61c3c4878185> in test_add_numbers()
      3 
      4 def test_add_numbers():
----> 5     assert add_numbers(0.1,0.2) == 0.3

AssertionError: 

The error is raised because, due to rounding, $0.1+0.2$ does not equal 0.3 exactly:

0.1+0.2
0.30000000000000004

This is because floating point numbers (floats) are represented on the computer to a certain precision. In Python the standard level of precision is 16 significant digits.

Note: The largest value you can give a floating point variable is about $10^{308}$; the most negative is about $-10^{308}$. If you exceed these values (which is unlikely) then the computer will return an overflow error. In contrast, Python can represent integers to any precision - limited only by the memory of the machine.
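
We can inspect these limits directly. The short snippet below is illustrative only; the printed values are properties of the IEEE 754 double-precision format used on almost all machines:

import sys

print(sys.float_info.epsilon)   # machine precision, about 2.2e-16
print(sys.float_info.max)       # largest representable float, about 1.8e308

# Integers have no fixed precision - Python stores every digit exactly
print(2**1000)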

Do not test for the equality of two floats

As we have seen in the previous example, we should not test for the equality of two floats. Instead we should ask if they are equal up to a given precision:

def add_numbers(a,b):
    return a+b

epsilon = 1e-12

def test_add_numbers():
    assert abs(add_numbers(0.1,0.2) - 0.3) < epsilon
test_add_numbers()
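
The standard library also provides math.isclose for exactly this kind of comparison. An equivalent version of the test (a small sketch reusing the add_numbers function above) would be:

import math

def test_add_numbers():
    # isclose compares two floats to within a relative and/or absolute tolerance
    assert math.isclose(add_numbers(0.1, 0.2), 0.3, abs_tol=1e-12)

test_add_numbers()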

Finite difference methods have two sources of error

  • There are two sources of error for finite difference methods. The first is the rounding error introduced at the start of this tutorial. The second comes from the approximation that the step size $h$ is small but not zero.
  • One way of improving the finite-$h$ approximation is to decrease the step size in space (use a larger number of points on our real-space grid). However, when the step size is decreased the program will run more slowly.
  • We also need to think about the rounding errors associated with finite differences. Counter-intuitively, these errors can increase as we decrease the step size $h$.

To demonstrate this, consider the Taylor expansion of $f(x+h)$ about $x$:

\begin{equation} f(x+h) = f(x) + hf'(x) +\frac{1}{2}h^2f''(x) + \ldots \end{equation}

Re-arranging gives the forward difference approximation to the derivative:

\begin{equation} f'(x) = \frac{f(x+h)-f(x)}{h} - \frac{1}{2}hf''(x)+\ldots \end{equation}

A computer can typically store a number $f(x)$ to an accuracy of about 16 significant figures, i.e. with a rounding error of up to $C|f(x)|$ where $C=10^{-16}$. In the worst case the rounding errors on $f(x+h)$ and $f(x)$ add, giving a total error $\epsilon$ on our derivative of:

\begin{equation} \epsilon = \frac{2C|f(x)|}{h} + \frac{1}{2}h|f''(x)|. \end{equation}

We want to find the value of $h$ which minimises this error, so we differentiate with respect to $h$ and set the result equal to zero:

\begin{equation} \frac{\mathrm{d}\epsilon}{\mathrm{d}h} = -\frac{2C|f(x)|}{h^2} + \frac{1}{2}|f''(x)| = 0 \end{equation}

\begin{equation} h = \sqrt{4C\left\lvert\frac{f(x)}{f''(x)}\right\rvert} \end{equation}

If $f(x)$ and $f''(x)$ are order 1, then $h$ should be order $\sqrt{C}$, or $10^{-8}$.

Similar reasoning applied to the central difference formula suggests that the optimum step size for this method is $10^{-5}$.
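
We can check this estimate numerically. The sketch below (using only the standard math module) applies the forward difference formula to $f(x)=\sin(x)$ at $x=1$, where the exact derivative is $\cos(1)$, and prints the error for a range of step sizes - the error should be smallest for $h$ of order $10^{-8}$:

import math

def forward_difference(f, x, h):
    # forward difference approximation to the first derivative
    return (f(x + h) - f(x)) / h

exact = math.cos(1.0)   # exact derivative of sin(x) at x = 1

for exponent in range(1, 15):
    h = 10.0**(-exponent)
    error = abs(forward_difference(math.sin, 1.0, h) - exact)
    print(f"h = 1e-{exponent:02d}, error = {error:.1e}")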

Euler's method is a first-order method accurate to order $h$.

  • As we have seen in previous tutorials, numerical methods (such as Euler's method or the Runge-Kutta method) give approximate solutions.
  • Euler's method neglects the terms in $h^2$ and higher: \begin{equation} x(t+h) = x(t)+hf(x,t)+\mathcal{O}(h^2) \end{equation}
  • This tells us that the error introduced on a single step of the method is proportional to $h^2$.
  • However, the cumulative error over many steps is proportional to $h$ - this is why Euler's method is called a first-order method, accurate to order $h$.
  • So to make our error half as large we need to double the number of steps (halve the step size), which doubles the length of the calculation - as demonstrated in the sketch below.
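
As a minimal sketch (not code from an earlier tutorial), we can solve $\mathrm{d}x/\mathrm{d}t = -x$ with $x(0)=1$ using Euler's method and compare the value at $t=1$ with the exact answer $e^{-1}$. Doubling the number of steps should roughly halve the error:

import math

def euler(f, x0, t_end, n_steps):
    # integrate dx/dt = f(x, t) from t = 0 to t_end with Euler's method
    h = t_end / n_steps
    x, t = x0, 0.0
    for _ in range(n_steps):
        x = x + h * f(x, t)
        t = t + h
    return x

exact = math.exp(-1.0)

for n_steps in (100, 200, 400):
    error = abs(euler(lambda x, t: -x, 1.0, 1.0, n_steps) - exact)
    print(f"{n_steps} steps: error = {error:.1e}")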

The second-order Runge-Kutta method is accurate to order $h^2$.

  • The error term for one step of the Runge-Kutta method is $\mathcal{O}(h^3)$ - this makes the method accurate to order $h^2$, which is why it is called the second-order Runge-Kutta method (RK2).
  • With RK2 we can use fewer steps whilst achieving the same accuracy as Euler's method (see the sketch after this list).
  • There are higher order Runge-Kutta methods which increase the accuracy further.
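
For comparison, here is an equivalent sketch using one common form of RK2 (the midpoint method) on the same test problem; halving the step size should now reduce the error by roughly a factor of four:

import math

def rk2(f, x0, t_end, n_steps):
    # second-order Runge-Kutta (midpoint) method for dx/dt = f(x, t)
    h = t_end / n_steps
    x, t = x0, 0.0
    for _ in range(n_steps):
        k1 = h * f(x, t)
        k2 = h * f(x + 0.5 * k1, t + 0.5 * h)
        x = x + k2
        t = t + h
    return x

exact = math.exp(-1.0)

for n_steps in (100, 200, 400):
    error = abs(rk2(lambda x, t: -x, 1.0, 1.0, n_steps) - exact)
    print(f"{n_steps} steps: error = {error:.1e}")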

Use the %time magic to measure the length of time a Jupyter Notebook cell takes to run

def sum_integers(max_integer):
    # add up the integers from 1 to max_integer
    count = 0
    for i in range(max_integer):
        count += i + 1

    return count
        
%time total = sum_integers(1000000)
CPU times: user 100 ms, sys: 3.79 ms, total: 104 ms
Wall time: 110 ms
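
Note that the reported times will vary from run to run and between machines. For short snippets, the related %timeit magic runs the statement several times and reports an average, which gives a more reliable measurement.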

Keypoints:

  • Computers have inherent limitations that lead to rounding errors
  • Do not test for the equality of two floats
  • Finite difference methods have two sources of error
  • The relaxation method for PDEs is limited by the accuracy of the finite difference method.
  • Euler's method is a first-order method accurate to order $h$.
  • The second-order Runge-Kutta method is accurate to order $h^2$.