2017-12-11 | Hacker News: "Approximating a solution that doesn't exist (2009)" | submitted by kawera | 52 points, 13 comments
https://www.johndcook.com/blog/2009/08/11/approximating-a-solution-that-doesnt-exist/
Approximating a solution that doesn’t exist
Posted on 11 August 2009 by John
The following example made an impression on me when I first saw it years ago. I still think it’s an important example, though I’d draw a different conclusion from it today.
Problem: Let y(t) be the solution to the differential equation y′ = t² + y² with y(0) = 1. Calculate y(1).
If we use Euler’s numerical method with a step size h = 0.1, we get y(1) = 7.19. If we reduce the step size to 0.05 we get y(1) = 12.32. If we reduce the step size further to 0.01, we get y(1) = 90.69. That’s strange. Let’s switch over to a more accurate method, Runge-Kutta. With a step size of 0.1 the Runge-Kutta method gives 735.00, and if we use a step size of 0.01 we get a result larger than 10¹⁵. What’s going on?
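These numbers are easy to reproduce. Here is a minimal sketch of Euler’s method for this equation (my code, not from the post); the loop counts steps rather than accumulating t, to avoid floating-point drift:

```python
def f(t, y):
    # Right-hand side of the ODE y' = t^2 + y^2
    return t * t + y * y

def euler(h, t_end=1.0):
    """Euler's method for y' = t^2 + y^2, y(0) = 1, on [0, t_end]."""
    n = round(t_end / h)
    t, y = 0.0, 1.0
    for i in range(n):
        y += h * f(t, y)
        t = (i + 1) * h
    return y

print(round(euler(0.1), 2))   # ≈ 7.19
print(round(euler(0.05), 2))  # ≈ 12.32
print(round(euler(0.01), 2))  # ≈ 90.69
```

Notice that the computed value grows without any sign of converging as the step size shrinks: exactly the symptom described above, and the numerical hint that something is wrong with the problem itself.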
The problem presupposes that a solution exists at t = 1 when in fact no solution exists. General theory (Picard’s theorem) tells us that a unique solution exists on some interval containing 0, but it does not tell us how far that interval extends. With a little work we can show that a solution exists for t at least as large as π/4. However, the solution becomes unbounded somewhere between π/4 and 1.
When I first saw this example, my conclusion was that it showed how important theory is. If you just go about numerically computing solutions without knowing that a solution exists, you can think you have succeeded when you’re actually computing something that doesn’t exist. Prove existence and uniqueness before computing. Theory comes first.
Now I think the example shows the importance of the interplay between theory and numerical computation. It would be nice to know how big the solution interval is before computing anything, but that’s not always possible. Also, it’s not obvious from looking at the equation that there should be a problem at t = 1. The difficulties we had with numerical computation suggested there might be a theoretical problem.
I first saw this problem in an earlier edition of Boyce and DiPrima. The book goes on to approximate the interval over which the solution does exist using a combination of analytical and numerical methods. It looks like the solution becomes unbounded somewhere near t = 0.97.
I wouldn’t say that theory or computation necessarily come first. I’d say you iterate between them, starting with the approach that is more tractable. Theoretical results are more satisfying when they’re available, but theory often doesn’t tell us as much as we’d like to know. Also, people make mistakes in theoretical computation just as they do in numerical computation. It’s best when theory and numerical work validate each other.
The problem does show the importance of being concerned with existence and uniqueness, but theoretical methods are not the only methods for exploring existence. Good numerical practice, i.e. trying more than one step size or more than one numerical method, is also valuable. In any case, the problem shows that without some diligence — either theoretical or numerical — you could naively compute an approximate “solution” where no solution exists.
Related: Consulting in differential equations
Categories : Math
Tags : Differential equations
10 thoughts on “Approximating a solution that doesn’t exist”
- Keith
I thought the article below had an interesting comment about theory vs. practice from an engineering standpoint.
http://www.americanheritage.com/articles/magazine/it/1997/3/1997_3_20.shtml
Q: You were developing transonic theory after the sound barrier had already been broken. Hasn't much of your historical study also involved engineering problems that were "solved" in a practical sense before they were understood theoretically?
A: Yes, and I think that's a typical situation in technology. You have to look hard to find cases in which the theory is well worked out before the practice. Look at the steam engine and thermodynamics; that whole vast science got started because people were trying to explain and calculate the performance of the reciprocating steam engines that had been built.
While the term thermodynamics came about after the first engines, a lot of theory had already been worked out before that. Pressure was a well-known concept. So I wouldn’t go so far as to say that “practical application” predates theory.
I suspect it is more of an entanglement… theory without practice is barren… practice without theory is crude…
- ekzept
Very nice illustration! Just to finish off the scholarship, the Boyce and DiPrima (first) edition I have (INTRODUCTION TO DIFFERENTIAL EQUATIONS, 1970, SBN 471-09338-6) puts the illustration of this problem in section 7.7, out at pages 277-278. It is interesting it is dumped in a section introducing numerical methods rather than being placed far ahead, in the Introduction. That Introduction uses Newton’s Second Law as a start, and follows with the general form of ODEs. I think this shows, in part, how heavily computation has influenced our collective view of this material.
To balance, however, I’d say there are some algorithms and processes which give answers with phenomena and data theory is quite hopeless at dicing. (Think Navier-Stokes.) Sure, they have and need parts to be robustly built, and existence solutions are critical. And, sure, people — including me — could do with more careful attention to theoretical underpinnings in nearly everything I do. I try. We try. But, for example, there’s a lot of good engineering that can be done with Laplace transforms that doesn’t really need an understanding of the proof of Lerch’s Theorem. I’d say that’s good!
- Pingback: Carnival of Mathematics #56 « Reasonable Deviations
- Jan Van lent
Note that the location of the singularity is given by t(1), where t(w) is the solution of the initial value problem

t′(w) = 1/((1 − w)² t(w)² + 1), t(0) = 0.

The value t(1) = 0.96981072 can easily be found numerically.
The solution to the original problem can be written using Bessel functions.
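The value of t(1) quoted in this comment can be checked with a classical fourth-order Runge-Kutta sweep over [0, 1]: the transformed right-hand side is smooth on that whole interval, so the blow-up of the original problem causes no numerical trouble. An illustrative sketch (my code, not the commenter’s):

```python
def g(w, t):
    # RHS of the inverted problem: t'(w) = 1 / ((1-w)^2 t(w)^2 + 1)
    return 1.0 / ((1.0 - w) ** 2 * t * t + 1.0)

def rk4(h=1e-3):
    """Classical RK4 for t(w) on [0, 1] with t(0) = 0."""
    n = round(1.0 / h)
    w, t = 0.0, 0.0
    for i in range(n):
        k1 = g(w, t)
        k2 = g(w + h / 2, t + h * k1 / 2)
        k3 = g(w + h / 2, t + h * k2 / 2)
        k4 = g(w + h, t + h * k3)
        t += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        w = (i + 1) * h
    return t

print(rk4())  # ≈ 0.9698, the location of the singularity
```

This agrees with the post’s estimate that the solution becomes unbounded near t = 0.97.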
- Pingback: Sensitive dependence on initial conditions | John D. Cook
- Pingback: Three views of differential equations | John D. Cook
- Pingback: Boundary conditions are the hard part
- Pingback: Life lessons from differential equations
- Paul Abbott
There is no problem at t=1. You can continue the exact solution involving Bessel functions (easily obtained using, say, Mathematica), past the (first) singularity (which is at 0.930564508526…), and find that y(1)=-14.379106277…