Odlyzko's Lemma
Published: 07/26/2020
That's right this blog is now a newsletter.
I fell off the blogging wagon the past two months, partly because I felt like everything was terrible and I didn't have much to say about that, and partly because I was preparing for candidacy. Nothing has really changed on the first front except my increasing acclimation. I've now completed candidacy, and while it's definitely more of a formality than a test one could reasonably fail, there was something pretty stressful about staring my ignorance in the face and collecting the things I've learned in the last couple years, and the things I hope to accomplish in the next couple, into one document.
To celebrate I thought I'd write up a small thing from my field. I tried to make it understandable, but I don't think it'll make sense to anyone who's not already comfortable with linear algebra [1] and probability [2]. Odlyzko's Lemma is a very useful estimate in random matrix theory. In this blog post I'll give its statement and proof, along with a simple corollary and a little bit of context.
Odlyzko himself is a very impressive figure. He worked at Bell Labs (where everyone cool seemed to be) for 26 years and is now at the University of Minnesota. You can see his list of publications here. Let me just highlight a few things:
- His first publication was actually in Analytical Biochemistry.
- His 2001 article 'Content is not King' is interesting to read in 2020. It turned out to be a very prescient view of the internet.
- His recent writings are mostly on economics and are good reads. His thoughts on cyber security and the South Sea Bubble are particularly fascinating to me.
- The famed lemma is from his 1988 paper 'On Subspaces Spanned by Random Selections of Vectors', itself a worthwhile read if you want to learn more about random matrix theory. I was looking through it to refresh my memory for this post and was reminded of Babai's conjecture (I was very lucky to take a class with him at UChicago) that the characteristic polynomial of a random matrix is irreducible with high probability. So there's something in it for everyone.
The lemma is the following.
Lemma: Let $v$ be a random vector of length $n$ with $\pm 1$ entries, independent and with even probability. Let $W$ be any subspace of $R^n$ of codimension $d$. Then,

$$\mathbb{P}(v \in W) \leq 2^{-d}.$$

Note that $R$ can be any commutative ring in which $1 \neq -1$. For our purposes you might as well assume it is the real numbers, though for me it is very often a finite field.
Proof. Let $w_1, \dots, w_{n-d}$ be linearly independent vectors which span $W$. Write them as the columns of a matrix,

$$M = \begin{pmatrix} w_1 & w_2 & \cdots & w_{n-d} \end{pmatrix}.$$

By the assumption that $W$ has codimension $d$, this matrix has a full rank $(n-d) \times (n-d)$ minor, so without loss of generality we can assume the first $n-d$ rows are linearly independent. Restricting to the first $n-d$ coordinates of our vector space, we therefore have unique coefficients $c_1, \dots, c_{n-d}$ such that $v_i = \sum_j c_j (w_j)_i$ for $i = 1, \dots, n-d$. In order for $v \in W$ we need $d$ more equalities, given by the bottom $d$ rows in the following matrix equation:

$$\begin{pmatrix} w_1 & w_2 & \cdots & w_{n-d} \end{pmatrix} \begin{pmatrix} c_1 \\ \vdots \\ c_{n-d} \end{pmatrix} = \begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix}.$$

Conditioned on the first $n-d$ entries of $v$, the coefficients $c_j$ are determined, and because each remaining entry $v_i$ takes no given value with probability greater than $\frac{1}{2}$, each of the $d$ remaining equalities holds with probability at most $\frac{1}{2}$, independently. We get the desired bound.
∎
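For concreteness, here's a minimal brute-force sketch (my own, not from Odlyzko's paper) that checks the bound for small $n$ over the reals: it enumerates all $2^n$ sign vectors, tests membership in $W$ by checking whether appending the vector to a spanning matrix increases the rank, and compares the fraction to $2^{-d}$. The particular subspaces below are illustrative choices of mine; the first one makes the bound tight.

```python
# Brute-force check of Odlyzko's Lemma for small n: enumerate all 2^n sign
# vectors, count how many land in a subspace W of codimension d, and compare
# the empirical fraction to the bound 2^(-d).
import itertools
import numpy as np

def fraction_in_span(basis, n):
    """Fraction of +-1 vectors of length n lying in the span of `basis`."""
    M = np.array(basis, dtype=float).T              # n x (n - d), columns span W
    r = np.linalg.matrix_rank(M)
    hits = 0
    for v in itertools.product([-1, 1], repeat=n):
        augmented = np.column_stack([M, np.array(v, dtype=float)])
        # v lies in W exactly when appending it doesn't increase the rank
        if np.linalg.matrix_rank(augmented) == r:
            hits += 1
    return hits / 2 ** n

n = 8

# W = {x : x_1 = x_2 and x_3 = x_4} has codimension d = 2 and makes the bound tight.
tight_basis = list(np.eye(n, dtype=int)[4:]) + [
    [1, 1, 0, 0, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0, 0, 0],
]
print(fraction_in_span(tight_basis, n), "<=", 2 ** -2)

# A subspace spanned by n - 3 random +-1 vectors (codimension 3) typically
# contains far fewer sign vectors than the bound 2^(-3) allows.
rng = np.random.default_rng(0)
random_basis = list(rng.choice([-1, 1], size=(n - 3, n)))
print(fraction_in_span(random_basis, n), "<=", 2 ** -3)
```

The "tight" subspace hits exactly $2^{-2}$ (a sign vector lands in it precisely when $v_1 = v_2$ and $v_3 = v_4$), while a subspace spanned by random $\pm 1$ vectors usually sits far below its bound.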
The same proof technique applies to more general $v$. If instead of having $\pm 1$ entries, $v$ simply has entries which aren't equal to any specific value with probability greater than $p$, we get the bound $p^d$. We also don't need our entries to be independent. We just need the entries to remain unconcentrated after conditioning on any values of the other entries.
Why is this lemma useful? Through its iterated application one can show that for fixed $\epsilon > 0$ an $n \times (1-\epsilon)n$ (tall and skinny, but not that tall and skinny) $\pm 1$-matrix is full rank with high probability. Why is that important? It's a first step in showing that square $\pm 1$ matrices are nonsingular with high probability. Why is that important? Many physical systems are best understood as random matrices, for instance the Hamiltonian in quantum mechanics and hidden layers in neural nets. Amazingly, the Wigner semicircular law was first observed experimentally, though I myself don't understand well what physical system encoded a matrix or why the eigenvalues of that matrix were physically significant.
I'll close with a corollary of the lemma (is there any more bad math writing cliché than putting a corollary after a lemma? Where was the Theorem?). This corollary is a first step in showing a square $\pm 1$ matrix is nonsingular with high probability: one observes the first $(1-\epsilon)n$ columns are linearly independent with high probability, and one then needs more involved techniques to handle the remaining $\epsilon n$ columns. How involved depends on how tight a bound you're shooting for. Perhaps later I'll follow up with a blog post on this.
Corollary: Fix $\epsilon > 0$. With probability at least $1 - n2^{-\epsilon n}$ an $n \times (1-\epsilon)n$ matrix with independent $\pm 1$ entries is full rank.
Proof. If the matrix is not full rank then there exists a column which lies in the span of the previous columns. The codimension of the span of those columns is at least $\epsilon n$, so by Odlyzko's Lemma the probability a given column lies in the span of the preceding columns is less than $2^{-\epsilon n}$. We can take a union bound over the $(1-\epsilon)n$ columns where the rank may drop and throw out a factor of $(1-\epsilon)$ to get the desired bound of $n2^{-\epsilon n}$.
∎
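As a sanity check on the corollary, here's a small Monte Carlo sketch (again my own, with arbitrarily chosen parameters): it samples $n \times (1-\epsilon)n$ matrices with independent $\pm 1$ entries, counts how often they fail to be full rank, and prints the union bound $n2^{-\epsilon n}$ for comparison.

```python
# Monte Carlo check of the corollary: sample n x (1 - eps)n matrices with
# independent +-1 entries and see how often they fail to be full rank.
import numpy as np

def full_rank_failure_rate(n, eps, trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    m = int((1 - eps) * n)                     # number of columns
    failures = 0
    for _ in range(trials):
        A = rng.choice([-1, 1], size=(n, m))
        if np.linalg.matrix_rank(A) < m:       # rank deficient
            failures += 1
    return failures / trials

n, eps = 30, 0.2
print("observed failure rate:", full_rank_failure_rate(n, eps))
print("union bound n * 2^(-eps*n):", n * 2 ** (-eps * n))
```

For random instances the observed failure rate sits far below the bound, which is what you'd expect after throwing away so much in the union bound.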
[1] If you want to learn more about linear algebra I imagine 3blue1brown's series is best. Full disclosure: I haven't watched this particular series, but I've watched some of Grant Sanderson's other stuff and I'm sure he wouldn't let me down here.
[2] The best way to understand probability better is just to play a lot of mtg and poker.