---
created_at: '2016-05-12T07:20:22.000Z'
title: Don’t invert that matrix (2010)
url: http://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/
author: egjerlow
points: 133
story_text:
comment_text:
num_comments: 42
story_id:
story_title:
story_url:
parent_id:
created_at_i: 1463037622
_tags:
- story
- author_egjerlow
- story_11681893
objectID: '11681893'
---

[Source](https://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/ "Permalink to Don't invert that matrix")
# Don't invert that matrix
Posted on [19 January 2010][19] by [John][20]
There is hardly ever a good reason to invert a matrix.
What do you do if you need to solve _Ax_ = _b_ where _A_ is an _n_ × _n_ matrix? Isn’t the solution _x_ = _A_⁻¹_b_? Yes, theoretically. But that doesn’t mean you need to actually find _A_⁻¹. Solving the equation _Ax_ = _b_ is faster than finding _A_⁻¹. Books might write the problem as _x_ = _A_⁻¹_b_, but that doesn’t mean they expect you to calculate it that way.
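To make this concrete, here is a small NumPy sketch (not from the post; the matrix and right-hand side are invented test data) comparing `np.linalg.solve`, which factors _A_ and back-substitutes, against multiplying by an explicit inverse:

```python
import numpy as np

# Invented test problem: a small, well-conditioned matrix and right-hand side.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)

x_solve = np.linalg.solve(A, b)   # factor A, then back-substitute
x_inv = np.linalg.inv(A) @ b      # explicit inverse: more work, less accurate

# Both answers agree closely on this tiny, well-conditioned example,
# but solve() is the right habit in general.
print(np.linalg.norm(A @ x_solve - b))
```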
What if you have to solve _Ax_ = _b_ for a lot of different _b_‘s? Surely then it’s worthwhile to find _A_⁻¹. No. The first time you solve _Ax_ = _b_, you factor _A_ and save that factorization. Then when you solve for the next _b_, the answer comes much faster. (Factorization takes O(_n_³) operations. But once the matrix is factored, solving _Ax_ = _b_ takes only O(_n_²) operations. Suppose _n_ = 1,000. This says that once you’ve solved _Ax_ = _b_ for one _b_, the equation can be solved again for a new _b_ 1,000 times faster than the first one. Buy one, get one free.)
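In SciPy terms, the factor-once, solve-many pattern can be sketched as follows (the matrix is invented test data; `lu_factor` pays the O(_n_³) cost once and each `lu_solve` call is only O(_n_²)):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)
n = 300
A = rng.standard_normal((n, n)) + n * np.eye(n)  # invented test matrix

lu, piv = lu_factor(A)          # O(n^3): pay the factorization cost once
b1 = rng.standard_normal(n)
b2 = rng.standard_normal(n)
x1 = lu_solve((lu, piv), b1)    # O(n^2) per new right-hand side
x2 = lu_solve((lu, piv), b2)
```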
What if, against advice, you’ve computed _A_⁻¹? Now you might as well use it, right? No, you’re still better off solving _Ax_ = _b_ than multiplying by _A_⁻¹, even if the computation of _A_⁻¹ came for free. Solving the system is more numerically accurate than performing the matrix multiplication.
It is common in applications to solve _Ax_ = _b_ even though there’s not enough memory to store _A_⁻¹. For example, suppose _n_ = 1,000,000 for the matrix _A_ but _A_ has a special sparse structure — say it’s banded — so that all but a few million entries of _A_ are zero. Then _A_ can easily be stored in memory and _Ax_ = _b_ can be solved very quickly. But in general _A_⁻¹ would be dense. That is, nearly all of the 1,000,000,000,000 entries of the matrix would be non-zero. Storing _A_ requires megabytes of memory, but storing _A_⁻¹ would require terabytes of memory.
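A minimal sketch of the sparse case, using a tridiagonal matrix as a stand-in for the banded structure the post describes (the specific matrix and right-hand side are invented): `scipy.sparse` stores only the ~3_n_ nonzeros, and `spsolve` factors that sparse structure directly rather than forming a dense inverse.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# Tridiagonal (banded) matrix: ~3n nonzeros stored, not n^2 entries.
n = 100_000
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

x = spsolve(A, b)   # sparse factorization; an explicit A^-1 would be dense
```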
**Related post**: [Applied linear algebra][22]
Categories: [Math][23]

Tags: [Math][24]
## 108 thoughts on “Don’t invert that matrix”
1. N. Sequitur
[ 5 January 2017 at 16:18 ][29]
I have systems of equations that fit nicely in an Ax=b matrix format and usually solve them by inverting A. They’re relatively small (also sparse but not banded and not necessarily positive definite), so this works well. However, I have to make incremental changes in the values of A (say, change two of the three non-zero values in one row) and find the corresponding x and have to do this many, many times. Can factoring help me preserve the work from previous iterations and reduce my ridiculous run times?
2. John
[ 5 January 2017 at 16:23 ][30]
If you change A then you have a new problem and so you can’t reuse the old factorization.
On the other hand, if your change to A is small, say A’ = A + E where E has small norm, then a solution x to Ax = b may be approximately a solution to A’x = b. If you’re solving the latter by an iterative method, then you could give it a head start by using the old x as your starting point.
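John’s warm-start suggestion can be sketched with SciPy’s conjugate gradient solver (the matrices here are invented and made symmetric positive definite so that CG applies; the `x0` argument supplies the old solution as the head start):

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(2)
n = 200
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)            # invented SPD test matrix
b = rng.standard_normal(n)

x_old, _ = cg(A, b)                    # solve the original system

E = 1e-3 * rng.standard_normal((n, n))
A_new = A + (E + E.T) / 2              # small symmetric perturbation of A
x_new, info = cg(A_new, b, x0=x_old)   # warm start from the old solution
```

Because A′ is close to A, the old x is already nearly a solution, so the iteration has little work left to do.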
3. [Alan Wolfe][31]
[ 5 January 2017 at 16:24 ][32]
Someone recently gave me some code on Reddit that solves Ax=b in fewer steps than general matrix inversion, using Gaussian elimination. You can check it out here:
<https://www.reddit.com/r/programming/comments/5jv6ya/incremental_least_squares_curve_fitting/dbjt9zx/>
4. Guillaume
[ 15 May 2017 at 12:48 ][33]
Great Post!
Here is a follow-up question: what if I have a matrix A and vector b and I need the quantity b’A^{-1}b. Of course I can get A^{-1} explicitly and compute the product. This is very unstable, even in small examples. What would be the “solving the system” formulation of this problem?
5. John
[ 15 May 2017 at 13:02 ][34]
Guillaume, you could solve Ax = b, then form the inner product of x and b.
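John’s answer as a sketch (the matrix and vector are invented test data; the point is that one call to `solve` replaces the explicit inverse in the quadratic form b′A⁻¹b):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)    # invented symmetric positive definite matrix
b = rng.standard_normal(n)

x = np.linalg.solve(A, b)      # solve Ax = b ...
quad = b @ x                   # ... then b'A^{-1}b is just the inner product
```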
6. Guillaume
[ 15 May 2017 at 13:09 ][35]
very nice! Thank you!
7. Sam
[ 30 July 2017 at 00:50 ][36]
Hi John, thanks for the great post.
I have a question: Assuming that M and N have the same size, which would be faster?
(I). Solving (Mx = b) 100 times with the same b but different Ms,
(II). Solving (Nx = c) and (Nx = d) 100 times with the same N but different c,d.
As you pointed out, N^-1 could be calculated and used repeatedly in case (II). This way, my guess is that case (II) may be faster.
Thanks in advance.
8. Brando
[ 11 January 2018 at 14:30 ][37]
How do you solve Ax = b if A is under-constrained? (i.e. if I need the minimum-norm solution, or the equivalent of what the pseudo-inverse gives. I assume you don’t compute the pseudo-inverse, according to your post.)
[1]: https://www.johndcook.com/blog/wp-content/themes/ThemeAlley.Business.Pro/images/Logo.svg
[2]: https://www.johndcook.com/blog/
[3]: https://www.johndcook.com#content "Skip to content"
[4]: https://www.johndcook.com/blog/top/
[5]: https://www.johndcook.com/blog/services-2/
[6]: https://www.johndcook.com/blog/applied-math/
[7]: https://www.johndcook.com/blog/applied-statistics/
[8]: https://www.johndcook.com/blog/applied-computation/
[9]: https://www.johndcook.com/blog/writing/
[10]: https://www.johndcook.com/blog/notes/
[11]: https://www.johndcook.com/blog/articles/
[12]: https://www.johndcook.com/blog/twitter_page/
[13]: https://www.johndcook.com/blog/presentations/
[14]: https://www.johndcook.com/blog/newsletter/
[15]: https://www.johndcook.com/blog/clients-new/
[16]: https://www.johndcook.com/blog/endorsements/
[17]: tel:8324228646
[18]: https://www.johndcook.com/blog/contact/
[19]: https://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/ "05:12"
[20]: https://www.johndcook.com/blog/author/john/ "View all posts by John"
[21]: https://www.johndcook.com/math_computing3.png
[22]: https://www.johndcook.com/blog/applied-linear-algebra/
[23]: https://www.johndcook.com/blog/category/math/
[24]: https://www.johndcook.com/blog/tag/math/
[25]: https://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/ "Permalink to Don’t invert that matrix"
[26]: https://www.johndcook.com/blog/2010/01/16/disappointing-state-of-unicode-fonts/
[27]: https://www.johndcook.com/blog/2010/01/20/ten-surprises-from-numerical-linear-algebra/
[28]: https://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/comment-page-2/#comments
[29]: https://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/comment-page-3/#comment-921700
[30]: https://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/comment-page-3/#comment-921702
[31]: http://blog.demofox.org
[32]: https://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/comment-page-3/#comment-921703
[33]: https://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/comment-page-3/#comment-930930
[34]: https://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/comment-page-3/#comment-930931
[35]: https://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/comment-page-3/#comment-930933
[36]: https://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/comment-page-3/#comment-934409
[37]: https://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/comment-page-3/#comment-936061
[38]: https://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/#respond
[39]: https://www.johndcook.com/jdc_20170630.jpg
[40]: https://www.johndcook.com/blog/2018/02/22/fibonacci-binomial-coefficient-identity/ "Permanent link to Fibonacci / Binomial coefficient identity"
[41]: https://www.johndcook.com/blog/2018/02/20/painless-project-management/ "Permanent link to Painless project management"
[42]: https://www.johndcook.com/blog/2018/02/19/new-animation-feature-for-exponential-sums/ "Permanent link to New animation feature for exponential sums"
[43]: https://www.johndcook.com/blog/2018/02/19/quantifying-normal-approximation-accuracy/ "Permanent link to Quantifying normal approximation accuracy"
[44]: https://www.johndcook.com/blog/2018/02/17/ordinary-potential-polynomials/ "Permanent link to Ordinary Potential Polynomials"
[45]: https://www.johndcook.com/contact_email.svg
[46]: https://feedburner.google.com/fb/a/mailverify?uri=TheEndeavour&loc=en_US
[47]: https://www.johndcook.com/contact_rss.svg
[48]: https://www.johndcook.com/blog/feed
[49]: https://www.johndcook.com/newsletter.svg
[50]: https://www.johndcook.com/blog/newsletter