Scientific Computing Then and Now

This afternoon I was in charge of another Computational Physics laboratory session. This one went better than last week, when we had a lot of teething problems, and I’m glad to say that the students are already writing bits of Python code and getting output – some of it was even correct!

After this afternoon’s session I came back to my office and noticed this little book on my shelf: A Short Introduction to Numerical Analysis, by M. V. Wilkes.

Despite the exorbitant cost, I bought it when I was an undergraduate back in the 1980s, though it was first published in 1966. It’s an interesting little book, notable for the fact that it doesn’t cover any computer programming at all. It focusses instead on the analysis of accuracy and stability of various methods of doing various things.

This is the jacket blurb:

This short book sets out the principles of the methods commonly employed in obtaining numerical solutions to mathematical equations and shows how they are applied in solving particular types of equations. Now that computing facilities are available to most universities, scientific and engineering laboratories and design shops, an introduction to numerical method is an essential part of the training of scientists and engineers. A course on the lines of Professor Wilkes’s book is given to graduate or undergraduate students of mathematics, the physical sciences and engineering at many universities and the number will increase. By concentrating on the essentials of his subject and giving it a modern slant, Professor Wilkes has written a book that is both concise and that covers the needs of a great many users of digital computers; it will serve also as a sound introduction for those who need to consult more detailed works.

Any book that describes itself as having “a modern slant” is almost bound to date very quickly, and so this one did, but its virtue is that it complements current “modern” books, which have much less to say about the issues Wilkes covers because one is nowadays far less constrained by memory and speed than was the case decades ago (circumstances I recall very well).

The Course Module I’m teaching covers numerical differentiation, numerical integration, root-finding and the solution of ordinary differential equations. All these topics are covered by Wilkes, but I was intrigued to discover when I looked that he does numerical integration before numerical differentiation, whereas I do it the other way round. I put differentiation first because I think it’s easier and because I wanted the students to do some actual coding as quickly as possible. I seem to remember doing e.g. Simpson’s Rule at school, but I don’t recall ever being taught about derivatives as finite differences.

Looking up the start of numerical differentiation in Wilkes I found:

This is a far less satisfactory method than numerical integration, as the following considerations show.

The considerations that follow do indeed concern the effect of rounding errors on finite differences (e.g. the forward difference Δf = [f(x+δ)-f(x)]/δ or the backward difference Δf = [f(x)-f(x-δ)]/δ) computed with a relatively large step size δ. Even on a modest modern machine one can use step sizes small enough to make these errors negligible for many purposes. Nevertheless I think it is important to see how the errors behave in those cases where it might be difficult to choose a very small δ. Indeed it seemed to surprise the students that the symmetric difference Δf = [f(x+δ)-f(x-δ)]/(2δ) is significantly better than a forward or backward difference. Do a Taylor series expansion and you’ll understand why!
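For the record, a Taylor expansion makes this explicit: f(x±δ) = f(x) ± δf′(x) + (δ²/2)f″(x) ± (δ³/6)f‴(x) + …, so the forward difference gives f′(x) + (δ/2)f″(x) + …, an error of order δ, whereas in the symmetric difference the even-order terms cancel and one is left with f′(x) + (δ²/6)f‴(x) + …, an error of order δ². For a step size like δ = 0.1 that typically makes the symmetric difference better by an order of magnitude or more.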

This example with δ=0.1 shows how the symmetric difference recovers the correct derivative of sin(x) far more accurately than either the forward or the backward difference:
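(A minimal Python sketch of the comparison; the evaluation point x = 1.0 and the helper names are illustrative choices, not part of the original example.)

```python
import numpy as np

def forward_diff(f, x, d):
    # Forward difference: [f(x+d) - f(x)] / d
    return (f(x + d) - f(x)) / d

def backward_diff(f, x, d):
    # Backward difference: [f(x) - f(x-d)] / d
    return (f(x) - f(x - d)) / d

def symmetric_diff(f, x, d):
    # Symmetric (central) difference: [f(x+d) - f(x-d)] / (2d)
    return (f(x + d) - f(x - d)) / (2 * d)

x, d = 1.0, 0.1
exact = np.cos(x)  # exact derivative of sin(x)

for name, approx in [("forward", forward_diff(np.sin, x, d)),
                     ("backward", backward_diff(np.sin, x, d)),
                     ("symmetric", symmetric_diff(np.sin, x, d))]:
    print(f"{name:9s}: {approx:.6f}   error: {abs(approx - exact):.2e}")
```

Run as written, the forward and backward differences are off by about 0.04 while the symmetric difference is off by about 0.0009, i.e. first-order versus second-order accuracy in δ.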

7 Responses to “Scientific Computing Then and Now”

  1. Enjoyed this post, Peter, not least because my first module as a fresh-faced new lecturer many, many moons ago was a numerical methods course (in C).

    Interesting that Wilkes says “This is a far less satisfactory method than numerical integration, as the following considerations show.”

    I would argue that numerical integration is an even better example of the importance of numerical errors. Applying the lowest-level approach to numerically integrating a differential equation, i.e. the Euler method, to a really simple system like a simple pendulum (without damping or driving) is a salutary lesson in how errors build up. Energy isn’t conserved, so the pendulum eventually ends up spinning rapidly around its pivot…

    • Indeed. Euler’s method is another demonstration of how the obvious thing isn’t actually very good. It’s astonishing that the simple modification that produces the much better Euler-Cromer method was pointed out as recently as Cromer (1981). (There’s a minimal sketch comparing the two schemes after the comments.)

      • That Euler-Cromer method is such an elegant (and virtually effortless!) way of compensating for the deficiencies in the Euler method alone.

        More generally, we need more computing in undergraduate physics. A lot more. Too often it’s seen as an optional add-on by students. I would argue that computing/numerical methods are just as important as analytical maths in modern physics and should therefore have equal footing in UG courses.

      • At Maynooth we really want to move Computational methods earlier in the Physics course so that we can set coursework that involves numerics.

  2. Hmm, that’s two “I would argue…”s in the space of two comments. It seems that I like to argue…

  3. I had a few computer analysis courses as an engineering undergraduate in 1973 and 1974. (Yeah, I’m old.) We would punch our program on Hollerith cards, take them to the card reader, and then a few seconds later tear our output off the line printer. Very hands-on. Computers were slow, so not a very small delta.

    My fondest memory was when I decided to beautify my Fortran program listing by adding unusual characters I found on the key punch machine. I put them in my comments after the “C” in column 1. One card had a string of 60 characters that turned out to be “form feed”, I think in EBCDIC. The line printer spewed paper across the room when it echoed the input, ruining my output. Thinking it was just a computer glitch, I reran my program; same thing happened, and I was promptly kicked out of the computer room for the rest of the day.
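Following up on the Euler versus Euler-Cromer exchange in the comments above, here is a minimal sketch of the comparison for an undamped, undriven pendulum. It isn’t anyone’s actual course code; the step size, number of steps and starting angle are arbitrary illustrative choices.

```python
import numpy as np

# Undamped, undriven pendulum: dtheta/dt = omega, domega/dt = -(g/L) sin(theta)
g_over_L = 1.0   # g/L in arbitrary units
dt = 0.05        # time step
steps = 5000

def energy(theta, omega):
    # Total energy per unit m*L^2: kinetic plus potential
    return 0.5 * omega**2 + g_over_L * (1.0 - np.cos(theta))

theta0, omega0 = 0.2, 0.0
E0 = energy(theta0, omega0)

# Plain Euler: both variables are updated from the *old* values
theta_e, omega_e = theta0, omega0
# Euler-Cromer: omega is updated first, then theta uses the *new* omega
theta_ec, omega_ec = theta0, omega0

for _ in range(steps):
    theta_e, omega_e = (theta_e + dt * omega_e,
                        omega_e - dt * g_over_L * np.sin(theta_e))
    omega_ec = omega_ec - dt * g_over_L * np.sin(theta_ec)
    theta_ec = theta_ec + dt * omega_ec

print(f"Started with energy {E0:.4f}")
print(f"Euler:        final energy {energy(theta_e, omega_e):.4f}")
print(f"Euler-Cromer: final energy {energy(theta_ec, omega_ec):.4f}")
```

With the plain Euler update the energy grows steadily until the pendulum is whirling around its pivot, exactly as described above, whereas with Euler-Cromer it merely wobbles slightly about its initial value.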
