six easy pieces
statespacedev committed Sep 15, 2024
1 parent 3a32e88 commit faf9987
Showing 2 changed files with 32 additions and 0 deletions.
16 changes: 16 additions & 0 deletions _drafts/six-easy-pieces.md
@@ -0,0 +1,16 @@
---
layout: post
title: "six easy pieces"
---

forsythe's review of [linear algebra's state of the art in the fifties and sixties](docs/inverse/1966%20forsythe.pdf) is a nice bit of nostalgia. six core topics are discussed. the first five, up through least squares, match perfectly with a linear algebra course in summer 2001; the sixth, simplex, with an astronomical data analysis course a few years later. those were some of the most positive experiences with math as a subject in its own right, and the fact that it's all on the extreme applied end of the spectrum is no accident.

the first two are closely related: solving simultaneous linear equations, and the inverse matrix. there are a variety of routes to a solution, for example the mechanistic 'gaussian elimination' taught and performed by hand in courses. the author comments that even when the inverse appears in a formula, it's usually more practical to deal only with its parts rather than forming the full explicit matrix.
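that distinction is easy to see in code. a minimal sketch (numpy is my assumption here, not something the post or forsythe uses): solve the system directly, then compare against the detour through the explicit inverse.

```python
import numpy as np

# toy system of three simultaneous linear equations, Ax = b
A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
b = np.array([4.0, 10.0, 24.0])

# the practical route: solve directly. under the hood this is an lu
# factorization, which is gaussian elimination by another name - the
# inverse is never formed
x = np.linalg.solve(A, b)

# the textbook route through the full explicit inverse gives the same
# answer, but costs more work and loses more accuracy when A is
# ill-conditioned
x_inv = np.linalg.inv(A) @ b

assert np.allclose(x, x_inv)
```

the two agree here because the matrix is small and well behaved; the gap in cost and accuracy only opens up as matrices get large or nearly singular.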

the second two are eigenvalues and vectors for symmetric and non-symmetric matrices, in other words root finding for simultaneous equations. it's unfortunate that the jargon mystifies things, even awkwardly wedging in some germanic fog. this is algebra and finding roots. here vibration, resonance, and stability are mentioned, with for example flutter in aircraft and criticality of reactors. these are at the core of 'difference equations' / 'finite difference methods', which are the discrete / digital computing way to deal with the analog world's 'continuous differential equations' / 'infinitesimal difference methods'.
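the vibrating string makes the connection concrete. a sketch under my own assumptions (numpy, and the standard second-difference matrix - not an example from the paper): the discrete laplacian's eigenvalues are the squared frequencies of the string's vibration modes, and they have a known closed form to check against.

```python
import numpy as np

# discrete 1d laplacian: the finite difference matrix behind a vibrating
# string, tridiagonal with 2 on the diagonal and -1 just off it
n = 50
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# symmetric matrix -> real eigenvalues; eigvalsh exploits the symmetry
vals = np.linalg.eigvalsh(T)

# known closed form for this matrix: 4 sin^2(k*pi / (2*(n+1))), k = 1..n
k = np.arange(1, n + 1)
exact = 4 * np.sin(k * np.pi / (2 * (n + 1))) ** 2

assert np.allclose(np.sort(vals), np.sort(exact))
```

the same eigenvalue machinery, applied to a non-symmetric system matrix, is what decides stability questions like flutter or criticality: it comes down to where the roots land.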

the fifth and sixth are least squares and simplex. here concepts of uncertainty and optimization are increasingly important, and the reader is referred onward to golub or dantzig. least squares and simplex are reaching out to the complexities of the real world. they're making a leap forward, and maybe even a bit too applied for the author. in any case, he focuses on the first four and doesn't say much at all about simplex beyond mentioning linear programming and dantzig's name. possibly one distinction is that simplex is concerned with sparse matrices, whereas the author is concerned with dense matrices.
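both are easy to demo today. a hedged sketch assuming numpy and scipy (neither appears in the paper, and the data and the little linear program below are invented for illustration): a least squares fit via `lstsq`, and a tiny linear program via `linprog`. modern scipy hands the latter to the highs solvers rather than dantzig's original tableau simplex, but the problem shape is the same.

```python
import numpy as np
from scipy.optimize import linprog

# least squares: fit y = a + b*t. the data lie exactly on a line here,
# so the recovered coefficients are a=2, b=3
t = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 + 3.0 * t
A = np.column_stack([np.ones_like(t), t])   # design matrix
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
assert np.allclose(coef, [2.0, 3.0])

# linear programming: maximize x + 2y subject to x + y <= 4, x <= 2,
# x, y >= 0. linprog minimizes, so the objective is negated
res = linprog(c=[-1.0, -2.0],
              A_ub=[[1.0, 1.0], [1.0, 0.0]],
              b_ub=[4.0, 2.0],
              bounds=[(0, None), (0, None)],
              method="highs")
assert np.isclose(res.fun, -8.0)            # optimum at the vertex (0, 4)
```

the sparse-versus-dense point shows up here too: practical linear programs have constraint matrices that are overwhelmingly zeros, which is what simplex implementations are built to exploit.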

on the other hand, he mentions von neumann quite a bit, especially in the context of matrix inversion, and notes that von neumann's work on the topic was based on fixed point arithmetic. no floating point. here's a curious comment concerning the early days, circa 1953.

_in any case, it could only handle scaled fixed point arithmetic. because of the size of their error bounds, von neumann and goldstine were unnecessarily pessimistic about the possibility of inverting general matrices of orders over fifteen on machines with the twenty-seven bit precision of the ibm 7090._
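a toy illustration of what 'scaled fixed point' means (my own sketch, not von neumann's scheme): every value lives in [-1, 1) as an integer times a fixed power of two, products have to be rescaled back down, and the representable step sets the attainable precision. the scale of 2^26 below is chosen to echo the twenty-seven bits in the quote, one bit going to the sign.

```python
# toy scaled fixed point arithmetic with a fixed binary scale of 2**26
SCALE_BITS = 26
SCALE = 1 << SCALE_BITS

def to_fix(x: float) -> int:
    """real number in [-1, 1) -> fixed point integer"""
    return int(round(x * SCALE))

def from_fix(i: int) -> float:
    """fixed point integer -> real number"""
    return i / SCALE

def fix_mul(a: int, b: int) -> int:
    """product of two fixed point values, rescaled back to one SCALE"""
    return (a * b) >> SCALE_BITS

a, b = to_fix(0.5), to_fix(0.25)
prod = from_fix(fix_mul(a, b))
assert abs(prod - 0.125) < 2 ** -26

# the representable step is 2**-26, about 1.5e-8: every intermediate
# quantity of an elimination has to be pre-scaled to stay inside [-1, 1),
# which is where the pessimistic error bounds came from
```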
16 changes: 16 additions & 0 deletions _posts/2024-09-15-six-easy-pieces.md