John D. Cook

Approximating a solution that doesn't exist

Posted on 11 August 2009 by John

The following example made an impression on me when I first saw it years ago. I still think it's an important example, though I'd draw a different conclusion from it today.

Problem: Let y(t) be the solution to the differential equation y' = t² + y² with y(0) = 1. Calculate y(1).

If we use Euler's numerical method with a step size h = 0.1, we get y(1) = 7.19. If we reduce the step size to 0.05 we get y(1) = 12.32. If we reduce the step size further to 0.01, we get y(1) = 90.69. That's strange. Let's switch over to a more accurate method, Runge-Kutta. With a step size of 0.1 the Runge-Kutta method gives 735.00, and if we use a step size of 0.01 we get a result larger than 10^15. What's going on?
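The experiment is easy to reproduce. Here is a minimal Python sketch (my reconstruction, not the original computation, assuming the standard forward Euler and classical fourth-order Runge-Kutta methods) that produces figures close to those quoted above:

```python
# y' = t^2 + y^2, y(0) = 1; estimate y(1) with fixed-step methods.

def f(t, y):
    return t * t + y * y

def euler(h, t_end=1.0):
    """Forward Euler with fixed step h from t = 0 to t_end."""
    n = round(t_end / h)
    t, y = 0.0, 1.0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def rk4(h, t_end=1.0):
    """Classical fourth-order Runge-Kutta with fixed step h."""
    n = round(t_end / h)
    t, y = 0.0, 1.0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

for h in (0.1, 0.05, 0.01):
    print(f"Euler h={h}: y(1) = {euler(h):.2f}")
print(f"RK4   h=0.1: y(1) = {rk4(0.1):.2f}")
```

Shrinking the step or sharpening the method only makes the "answer" larger, which is the first hint that the quantity being approximated does not exist.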

The problem presupposes that a solution exists at t = 1 when in fact no solution exists. General theory (Picard's theorem) tells us that a unique solution exists on some interval containing 0, but it does not tell us how far that interval extends. With a little work we can show that a solution exists for t at least as large as π/4. However, the solution becomes unbounded somewhere between π/4 and 1.
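For the record, those two bounds come from a standard comparison argument; this sketch is mine and is not spelled out in the post:

```latex
For $0 \le t \le 1$ we have $y^2 \le t^2 + y^2 \le 1 + y^2$, so $y$ is
trapped between the solutions of two autonomous problems with the same
initial value:
\[
  u' = u^2,\quad u(0) = 1 \;\Longrightarrow\; u(t) = \frac{1}{1 - t},
  \qquad
  v' = 1 + v^2,\quad v(0) = 1 \;\Longrightarrow\; v(t) = \tan\Bigl(t + \frac{\pi}{4}\Bigr).
\]
By the comparison theorem, $u(t) \le y(t) \le v(t)$ while all three exist.
The upper bound stays finite for $t < \pi/4$, so the solution exists at
least on $[0, \pi/4)$; the lower bound is unbounded as $t \to 1^{-}$, so
the solution cannot continue past $t = 1$.
```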

When I first saw this example, my conclusion was that it showed how important theory is. If you just go about numerically computing solutions without knowing that a solution exists, you can think you have succeeded when you're actually computing something that doesn't exist. Prove existence and uniqueness before computing. Theory comes first.

Now I think the example shows the importance of the interplay between theory and numerical computation. It would be nice to know how big the solution interval is before computing anything, but that's not always possible. Also, it's not obvious from looking at the equation that there should be a problem at t = 1. The difficulties we had with numerical computation suggested there might be a theoretical problem.

I first saw this problem in an earlier edition of Boyce and DiPrima. The book goes on to approximate the interval over which the solution does exist using a combination of analytical and numerical methods. It looks like the solution becomes unbounded somewhere near t = 0.97.
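One numerical way to corroborate that estimate is to integrate until the computed solution exceeds a huge threshold and record where that happens; since y grows roughly like 1/(t* − t) near a vertical asymptote at t*, the crossing point lands close to the asymptote. A rough sketch (mine, not from the book), again assuming classical Runge-Kutta:

```python
# Locate the vertical asymptote of y' = t^2 + y^2, y(0) = 1, by
# integrating with fixed-step RK4 until y exceeds a large cap.

def blowup_estimate(h, cap=1e6):
    t, y = 0.0, 1.0
    while y < cap and t < 1.5:  # t < 1.5 is just a safety guard
        k1 = t * t + y * y
        k2 = (t + h / 2) ** 2 + (y + h / 2 * k1) ** 2
        k3 = (t + h / 2) ** 2 + (y + h / 2 * k2) ** 2
        k4 = (t + h) ** 2 + (y + h * k3) ** 2
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return t

for h in (1e-3, 1e-4):
    print(f"h = {h:g}: y first exceeds 1e6 near t = {blowup_estimate(h):.4f}")
```

The estimates cluster just below 0.97, consistent with the book's figure, even though no fixed-step method can be trusted in the last few steps before the asymptote.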

I wouldn't say that theory or computation necessarily comes first. I'd say you iterate between them, starting with the approach that is more tractable. Theoretical results are more satisfying when they're available, but theory often doesn't tell us as much as we'd like to know. Also, people make mistakes in theoretical computation just as they do in numerical computation. It's best when theory and numerical work validate each other.

The problem does show the importance of being concerned with existence and uniqueness, but theoretical methods are not the only methods for exploring existence. Good numerical practice, i.e. trying more than one step size or more than one numerical method, is also valuable. In any case, the problem shows that without some diligence — either theoretical or numerical — you could naively compute an approximate “solution” where no solution exists.


10 thoughts on “Approximating a solution that doesn't exist”

  1. Keith

11 August 2009 at 11:10

I thought the article below had an interesting comment about theory vs. practice from an engineering standpoint.

http://www.americanheritage.com/articles/magazine/it/1997/3/1997_3_20.shtml

Q: You were developing transonic theory after the sound barrier had already been broken. Hasn't much of your historical study also involved engineering problems that were "solved" in a practical sense before they were understood theoretically?

A: Yes, and I think that's a typical situation in technology. You have to look hard to find cases in which the theory is well worked out before the practice. Look at the steam engine and thermodynamics; that whole vast science got started because people were trying to explain and calculate the performance of the reciprocating steam engines that had been built.

  1. Daniel Lemire

11 August 2009 at 20:25

While the term thermodynamics came about after the first engines, we already had a lot of theory worked out before. Pressure was a well known concept. So, I wouldn't go so far as to say that “practical application” predates theory.

I suspect it is more of an entanglement… theory without practice is barren… practice without theory is crude…

  1. ekzept

11 August 2009 at 23:41

Very nice illustration! Just to finish off the scholarship, the Boyce and DiPrima (first) edition I have (INTRODUCTION TO DIFFERENTIAL EQUATIONS, 1970, SBN 471-09338-6) puts the illustration of this problem in section 7.7, out at pages 277-278. It is interesting it is dumped in a section introducing numerical methods rather than being placed far ahead, in the Introduction. That Introduction uses Newton's Second Law as a start, and follows with the general form of ODEs. I think this shows, in part, how heavily computation has influenced our collective view of this material.

To balance, however, I'd say there are some algorithms and processes which give answers with phenomena and data theory is quite hopeless at dicing. (Think Navier-Stokes.) Sure, they have and need parts to be robustly built, and existence solutions are critical. And, sure, people — including me — could do with more careful attention to theoretical underpinnings in nearly everything I do. I try. We try. But, for example, there's a lot of good engineering that can be done with Laplace transforms that doesn't really need an understanding of the proof of Lerch's Theorem. I'd say that's good!

  1. Pingback: Carnival of Mathematics #56 « Reasonable Deviations

  2. Jan Van lent

15 December 2011 at 14:05

Note that the location of the singularity is given by t(1), where t(w) is the solution of the initial value problem
t'(w) = 1/((1-w)^2*t(w)^2+1), t(0) = 0.
The value of t(1) = 0.96981072 can easily be found numerically.
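This is indeed easy to check; a quick sketch (my code, not the commenter's) integrating that initial value problem with classical fixed-step Runge-Kutta. The right-hand side is smooth and bounded on [0, 1], so a fixed-step method works here even though the original problem blows up:

```python
# Integrate t'(w) = 1/((1 - w)^2 * t(w)^2 + 1), t(0) = 0, from w = 0 to 1.
# t(1) is the location of the singularity of the original problem.

def g(w, t):
    return 1.0 / ((1.0 - w) ** 2 * t * t + 1.0)

def singularity_location(n=10000):
    """Classical RK4 over [0, 1] with n equal steps; returns t(1)."""
    h = 1.0 / n
    w, t = 0.0, 0.0
    for _ in range(n):
        k1 = g(w, t)
        k2 = g(w + h / 2, t + h / 2 * k1)
        k3 = g(w + h / 2, t + h / 2 * k2)
        k4 = g(w + h, t + h * k3)
        t += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        w += h
    return t

print(f"t(1) = {singularity_location():.8f}")
```

Changing variables from y to w = 1 − 1/y maps the infinite value of y onto the finite value w = 1, which is why the singular problem becomes a routine one.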

The solution to the original problem can be written using Bessel functions.

  1. Pingback: Sensitive dependence on initial conditions | John D. Cook

  2. Pingback: Three views of differential equations | John D. Cook

  3. Pingback: Boundary conditions are the hard part

  4. Pingback: Life lessons from differential equations

  5. Paul Abbott

30 January 2017 at 00:45

There is no problem at t=1. You can continue the exact solution involving Bessel functions (easily obtained using, say, Mathematica), past the (first) singularity (which is at 0.930564508526…), and find that y(1)=-14.379106277…
