---
created_at: '2011-09-13T15:27:47.000Z'
title: Surprises from numerical linear algebra (2010)
url: http://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/
author: th0ma5
points: 153
story_text: ''
comment_text:
num_comments: 49
story_id:
story_title:
story_url:
parent_id:
created_at_i: 1315927667
_tags:
- story
- author_th0ma5
- story_2991867
objectID: '2991867'
year: 2010
---
[Source](https://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/ "Permalink to Ten surprises from numerical linear algebra")
# Ten surprises from numerical linear algebra
Posted on [20 January 2010][19] by [John][20]
Here are ten things about numerical linear algebra that you may find surprising if you're not familiar with the field.
1. Numerical linear algebra applies very advanced mathematics to solve problems that can be stated with high school mathematics.
2. Practical applications often require solving enormous systems of equations, millions or even billions of variables.
3. The heart of Google is an enormous linear algebra problem. PageRank is essentially an eigenvalue problem.
4. The efficiency of solving very large systems of equations has benefited at least as much from advances in algorithms as from Moore's law.
5. Many practical problems — optimization, differential equations, signal processing, etc. — boil down to solving linear systems, even when the original problems are non-linear. Finite element software, for example, spends nearly all its time solving linear equations.
6. A system of a million equations can sometimes be solved on an ordinary PC in under a millisecond, depending on the structure of the equations.
7. Iterative methods, methods that in theory require an infinite number of steps to solve a problem, are often faster and more accurate than direct methods, methods that in theory produce an exact answer in a finite number of steps.
8. There are many theorems bounding the error in solutions _produced on real computers_. That is, the theorems don't just bound the error from hypothetical calculations carried out in exact arithmetic but bound the error from arithmetic as carried out in [floating point arithmetic][21] on computer hardware.
9. It is hardly ever necessary to compute the [inverse][22] of a matrix.
10. There is remarkably mature [software][23] for numerical linear algebra. Brilliant people have worked on this software for many years.
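Point 9 is worth a tiny illustration. Here is a sketch using NumPy (my choice of library; the matrix is random and hypothetical): to solve *Ax = b*, factor and solve rather than forming the inverse.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Preferred: factor-and-solve (LU factorization under the hood).
x = np.linalg.solve(A, b)

# What you almost never need: the explicit inverse. Forming it costs
# roughly three times as much work and is typically less accurate
# on ill-conditioned matrices.
x_via_inv = np.linalg.inv(A) @ b
```

Both paths agree here, but `solve` is the idiomatic choice, and the gap in accuracy and cost widens with the size and conditioning of the system.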
**Related posts**:
* [Don't invert that matrix][22]
* [Searching for John Francis][25]
* [Applying PageRank to biology][26]
* [Numerical computing consulting][27]
Categories : [Computing][28] [Math][29]
Tags : [Math][30] [Programming][31]
## 27 thoughts on “Ten surprises from numerical linear algebra”
1. Sean Devlin
[ 20 January 2010 at 10:53 ][34]
No love for ATLAS? I thought this was used in favor of LAPACK nowadays.
<http://math-atlas.sourceforge.net/>
2. Giles Warrack
[ 20 January 2010 at 11:31 ][35]
Re No 3, Herbert Wilf (Penn) has a nice handout on the Kendall-Wei algorithm and Google
<http://www.math.upenn.edu/~wilf/>
3. joeyo
[ 20 January 2010 at 12:27 ][36]
@Sean: LAPACK is built on top of BLAS/ATLAS.
4. Evan
[ 20 January 2010 at 22:48 ][37]
So.. you have convinced me to learn Linear Algebra. Do you have any suggestions for books to get started? Thanks for the inspiration!
5. John
[ 21 January 2010 at 00:11 ][38]
Evan, there are a lot of linear algebra books out there. I got a review copy of Cheney and Kincaid's [linear algebra book][39] a few weeks ago and I thought it was a nice text. It's easy to read and has a good mix of theory and application.
If you know basic linear algebra and want to jump into numerical linear algebra, I recommend Demmel's [Applied Numerical Linear Algebra][40]. Also, there's the classic [Matrix Computations][41] by Golub and Van Loan.
6. Evan
[ 21 January 2010 at 14:54 ][42]
John, thanks for your quick reply. I'll definitely check those out!
7. Matt
[ 24 January 2010 at 06:46 ][43]
By the way, has anyone here played with GotoBLAS?
Is its performance really as good as the benchmarks on its website suggest or is it just advertisement?
The source code is freely available, though only for research applications (commercial is not free). Here are relevant links:
<http://www.tacc.utexas.edu/resources/software/gotoblasfaq/>
<http://en.wikipedia.org/wiki/Kazushige_Goto>
<http://www.tacc.utexas.edu/tacc-projects/>
<http://web.tacc.utexas.edu/~kgoto/>
<http://www.pathscale.com/building_code/gotoblas.html>
8. Michael Holmes
[ 1 April 2010 at 08:33 ][44]
John suggested I post this here
We recently released a MATLAB package called QLA that dramatically accelerates core linear algebra routines on dense matrices. It includes SVD, linear systems, least squares, PCA, etc., and speedups range from 10x to 1,000x+ on benchmarks. The gains come by trading a minuscule bit of accuracy for a mammoth increase in speed.
You can find more details and the free beta version here:
<http://massiveanalytics.com>.
9. Ron Modesitt
[ 26 July 2010 at 18:46 ][45]
What a thoroughly interesting blog! Glad I found you, with thanks to Stumble.
Ron
10. [Luis][46]
[ 19 July 2011 at 17:45 ][47]
Another interesting surprise is that, often, the matrices involved are very sparse.
Books: I love “Matrix algebra useful for statistics” by Searle.
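Luis's sparsity observation ties directly to point 7 on iterative methods. Here is a sketch using SciPy (my choice of library, not one named in the post), with a 1-D Laplacian as a made-up example of a very sparse system:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# A 1-D Laplacian is tridiagonal: fewer than 3n of its n^2 entries
# are nonzero, so storing it densely would be wasteful.
n = 1_000
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
print(f"density = {A.nnz / n**2:.1%}")  # about 0.3%

# Conjugate gradient, an iterative method, touches A only through
# matrix-vector products, so sparsity translates directly into speed.
b = np.ones(n)
x, info = cg(A, b)  # info == 0 signals convergence
```

For this class of symmetric positive definite systems, the iterative solver never needs the matrix in dense form at all.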
11. fish
[ 13 September 2011 at 11:32 ][48]
Ha, good to know re: matrix inversion — I often use inverses in calculating color transforms (see <http://www.brucelindbloom.com/index.html?Eqn_RGB_XYZ_Matrix.html>) but hey. Thanks!
12. [Utkarsh Upadhyay][49]
[ 13 September 2011 at 11:02 ][50]
Some surprises indeed!
However, a small note for point 4:
> The efficiency of solving very large systems of equations has benefited at least as much from advances in algorithms as from Moore's law.
Solutions for sparse systems of equations, at least, have benefited [two orders of magnitude more from algorithmic developments than from hardware advancements][51] (slide 7), for a total of _10 orders of magnitude improvement_ from 1970 to 2001.
For other kinds of linear equations, the improvements [indeed have been approximately the same][52].
13. Juan Solano
[ 13 September 2011 at 11:33 ][53]
Let me recommend the videos at Khan Academy; it's a very nice place to start and refresh the key concepts, and then digest the beauty of linear algebra…
<http://www.khanacademy.org/#browse>
Juan
14. J
[ 13 September 2011 at 12:18 ][54]
Charles Van Loan has a sweet pair of custom Nike kicks. Not even joking.
15. [martin][55]
[ 13 September 2011 at 15:53 ][56]
Could you elaborate on number 6? I would have thought it would have taken close to a millisecond to just read a million values.
16. mg
[ 13 September 2011 at 17:07 ][57]
@Evan my favorite text is Jänich's Linear Algebra.
17. Evan
[ 13 September 2011 at 17:16 ][58]
Thanks mg. I'll check that out as well.
18. AV
[ 14 September 2011 at 04:20 ][59]
this article is a triviality.
any Indian schoolkid knows this.
of course linear algebra is blazing fast for FEA and related computations.
but the writer should state that the speed of fast linear algebra comes not from inherent properties of linear algebra, but from the fact that we MASSIVELY approximate the actual behavior of the beams/trusses/etc. we approximate reality. and that's because we don't have good computational algorithms to handle the true mathematical representations.
if that was over your head, then here's a simplification: try to find a fast solution to the Navier-Stokes equations using linear algebra in their pure form.
19. Anselmo
[ 19 November 2011 at 20:48 ][60]
For me, the greatest achievement of linear algebra is the simplex method. Linear optimisation problems with millions of variables may be solved on home computers in a few minutes. It is remarkably efficient, though it has exponential worst-case performance. In addition, most exact non-linear optimisation algorithms use the simplex method for solving the linear approximation model.
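Linear programming of the kind Anselmo describes is a one-liner in modern libraries. A sketch using SciPy's `linprog` (my choice; note its default backend is HiGHS rather than the textbook simplex method, and the toy problem below is mine):

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP: maximize x + 2y subject to x + y <= 4, x <= 3, x >= 0, y >= 0.
# linprog minimizes, so negate the objective.
c = [-1.0, -2.0]
A_ub = [[1.0, 1.0], [1.0, 0.0]]
b_ub = [4.0, 3.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)  # optimum at the vertex (0, 4), objective value 8
```

The solver walks the vertices of the feasible polytope, exactly the behavior that makes simplex-style methods fast in practice despite their exponential worst case.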
20. babita
[ 19 May 2012 at 09:02 ][61]
I want to solve the equation Ax = B, where A is sparse and given in CSC format. A and B are known.
Can you give me some references on how to solve it?
Reply as soon as possible… it's urgent…
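For questions like this one, here is a minimal sketch using SciPy (my choice of library; its sparse direct solver accepts a CSC matrix directly). The tiny matrix below is hypothetical:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve

# Build a small sparse A directly from raw CSC arrays.
# data/indices/indptr here encode the matrix
# [[4, 0, 0],
#  [0, 3, 1],
#  [0, 1, 2]].
data = np.array([4.0, 3.0, 1.0, 1.0, 2.0])
indices = np.array([0, 1, 2, 1, 2])
indptr = np.array([0, 1, 3, 5])
A = csc_matrix((data, indices, indptr), shape=(3, 3))

b = np.array([8.0, 5.0, 4.0])
x = spsolve(A, b)  # sparse direct solve of A x = b
```

`spsolve` factorizes A without ever forming a dense matrix or an explicit inverse, consistent with point 9 of the post.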
21. Pingback: [Ten surprises from numerical linear algebra | mathcsteacher][62]
22. Federico Poloni
[ 13 February 2013 at 12:49 ][63]
“The heart of Google is an enormous linear algebra problem.”
This is often heard, and the algorithm itself is a nice one to tell people about, but I think its role in the architecture of a search engine is overrated.
1- While PageRank constitutes a part of the algorithm that is used to rank sites (allegedly — it's not like it's open source), there must be much more than that. PageRank returns a universal ranking that does not depend on the query. Clearly the ordering has to be tweaked for the specific search. PageRank is not computed on the fly for every web search; it would be too slow to do that.
2- Ranking isn't the only part of the problem. There is crawling, indexing, storing data, and recalling results efficiently. PageRank might be a nice idea that constitutes 1% of that, but the bulk of the computation is many boring details of storing and retrieving compressed data.
23. John
[ 13 February 2013 at 13:37 ][64]
Federico: The basic idea that made Google a better search engine than its competitors was an eigenvalue problem. That's not to say that the algorithm hasn't been tweaked, or that it's easy to implement, or that many other factors don't matter.
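The eigenvalue idea can be sketched with power iteration on a tiny made-up link graph (NumPy; damping factor 0.85 as in the original PageRank paper — this is an illustration, not Google's actual implementation):

```python
import numpy as np

# links[i] lists the pages that page i points to (a hypothetical 4-page web).
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n, d = 4, 0.85  # d is the usual damping factor

# Column-stochastic link matrix: M[j, i] = 1/outdegree(i) if i links to j.
M = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        M[j, i] = 1.0 / len(outs)

# The PageRank vector is the dominant eigenvector of the "Google matrix" G.
G = d * M + (1 - d) / n
r = np.full(n, 1.0 / n)
for _ in range(100):
    r = G @ r  # power iteration; G is column-stochastic, so r stays a distribution

print(r)  # the heavily linked-to pages accumulate the most rank
```

Power iteration converges geometrically at a rate set by the damping factor, which is what makes the computation feasible at web scale.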
24. Pingback: [Iterative linear solvers as metaphor | John D. Cook][65]
25. Pingback: [Don't invert that matrix][66]
26. Pingback: [Three surprises with the trapezoid integration rule][67]
27. Pingback: [Linear solvers are behind lattice QCD PRACE Summer Of HPC][68]
[1]: https://www.johndcook.com/blog/wp-content/themes/ThemeAlley.Business.Pro/images/Logo.svg
[2]: https://www.johndcook.com/blog/
[3]: https://www.johndcook.com#content "Skip to content"
[4]: https://www.johndcook.com/blog/top/
[5]: https://www.johndcook.com/blog/services-2/
[6]: https://www.johndcook.com/blog/applied-math/
[7]: https://www.johndcook.com/blog/applied-statistics/
[8]: https://www.johndcook.com/blog/applied-computation/
[9]: https://www.johndcook.com/blog/writing/
[10]: https://www.johndcook.com/blog/notes/
[11]: https://www.johndcook.com/blog/articles/
[12]: https://www.johndcook.com/blog/twitter_page/
[13]: https://www.johndcook.com/blog/presentations/
[14]: https://www.johndcook.com/blog/newsletter/
[15]: https://www.johndcook.com/blog/clients-new/
[16]: https://www.johndcook.com/blog/endorsements/
[17]: tel:8324228646
[18]: https://www.johndcook.com/blog/contact/
[19]: https://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/ "08:54"
[20]: https://www.johndcook.com/blog/author/john/ "View all posts by John"
[21]: https://www.johndcook.com/blog/2009/04/06/numbers-are-a-leaky-abstraction/
[22]: https://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/
[23]: http://www.netlib.org/lapack/
[24]: https://www.johndcook.com/math_computing3.png
[25]: https://www.johndcook.com/blog/2009/03/31/searching-for-john-francis/
[26]: https://www.johndcook.com/blog/2008/08/14/applying-page-rank-idea-to-biology/
[27]: https://www.johndcook.com/blog/statistical_consulting/
[28]: https://www.johndcook.com/blog/category/computing/
[29]: https://www.johndcook.com/blog/category/math/
[30]: https://www.johndcook.com/blog/tag/math/
[31]: https://www.johndcook.com/blog/tag/programming/
[32]: https://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/ "Permalink to Ten surprises from numerical linear algebra"
[33]: https://www.johndcook.com/blog/2010/01/20/how-to-compute-the-soft-maximum/
[34]: https://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/#comment-10135
[35]: https://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/#comment-10136
[36]: https://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/#comment-10137
[37]: https://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/#comment-10138
[38]: https://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/#comment-10139
[39]: http://www.amazon.com/gp/product/0763750204?ie=UTF8&tag=theende-20&linkCode=xm2&camp=1789&creativeASIN=0763750204
[40]: http://www.amazon.com/gp/product/0898713897?ie=UTF8&tag=theende-20&linkCode=xm2&camp=1789&creativeASIN=0898713897
[41]: http://www.amazon.com/gp/product/0801854148?ie=UTF8&tag=theende-20&linkCode=xm2&camp=1789&creativeASIN=0801854148
[42]: https://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/#comment-10140
[43]: https://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/#comment-10141
[44]: https://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/#comment-10142
[45]: https://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/#comment-10143
[46]: http://apiolaza.net
[47]: https://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/#comment-10144
[48]: https://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/#comment-10146
[49]: http://musicallyut.blogspot.com
[50]: https://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/#comment-10145
[51]: http://www.cs.washington.edu/education/courses/cse312/11wi/slides/14p.pdf
[52]: http://science.energy.gov/~/media/ascr/pdf/research/am/docs/Multiscale_math_workshop_2.pdf
[53]: https://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/#comment-10147
[54]: https://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/#comment-10148
[55]: http://mudeltasigma.blogspot.com
[56]: https://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/#comment-10149
[57]: https://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/#comment-10150
[58]: https://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/#comment-10151
[59]: https://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/#comment-10152
[60]: https://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/#comment-10153
[61]: https://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/#comment-10154
[62]: http://mathcsteacher.wordpress.com/2012/06/16/ten-surprises-from-numerical-linear-algebra/
[63]: https://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/#comment-10156
[64]: https://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/#comment-10157
[65]: http://www.johndcook.com/blog/2014/06/25/iterative-linear-solvers/
[66]: http://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/
[67]: http://www.johndcook.com/blog/2010/12/02/three-surprises-with-the-trapezoid-rule/
[68]: https://summerofhpc.prace-ri.eu/linear-solvers-are-behind-lattice-qcd/
[69]: https://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/#respond
[70]: https://www.johndcook.com/jdc_20170630.jpg
[71]: https://www.johndcook.com/blog/2018/02/22/fibonacci-binomial-coefficient-identity/ "Permanent link to Fibonacci / Binomial coefficient identity"
[72]: https://www.johndcook.com/blog/2018/02/20/painless-project-management/ "Permanent link to Painless project management"
[73]: https://www.johndcook.com/blog/2018/02/19/new-animation-feature-for-exponential-sums/ "Permanent link to New animation feature for exponential sums"
[74]: https://www.johndcook.com/blog/2018/02/19/quantifying-normal-approximation-accuracy/ "Permanent link to Quantifying normal approximation accuracy"
[75]: https://www.johndcook.com/blog/2018/02/17/ordinary-potential-polynomials/ "Permanent link to Ordinary Potential Polynomials"
[76]: https://www.johndcook.com/contact_email.svg
[77]: https://feedburner.google.com/fb/a/mailverify?uri=TheEndeavour&loc=en_US
[78]: https://www.johndcook.com/contact_rss.svg
[79]: https://www.johndcook.com/blog/feed
[80]: https://www.johndcook.com/newsletter.svg
[81]: https://www.johndcook.com/blog/newsletter