Avi Wigderson wins Turing Award!
Back in 2006, in the midst of an unusually stupid debate in the comment section of Lance Fortnow and Bill Gasarch’s blog, someone chimed in:
Since the point of theoretical computer science is solely to recognize who is the most badass theoretical computer scientist, I can only say:
GO HOME PUNKS!
WIGDERSON OWNS YOU!
Avi Wigderson: central unifying figure of theoretical computer science for decades; consummate generalist who’s contributed to pretty much every corner of the field; advocate and cheerleader for the field; postdoc adviser to a large fraction of all theoretical computer scientists, including both me and my wife Dana; derandomizer of BPP (provided E requires exponential-size circuits). Now, Avi not only “owns you,” he also owns a well-deserved Turing Award (on top of his well-deserved Nevanlinna, Abel, Gödel, and Knuth prizes). As Avi’s health has been a matter of concern to those close to him ever since his cancer treatment, which he blogged about a few years ago, I’m sure today’s news will do much to lift his spirits.
I first met Avi a quarter-century ago, when I was 19, at a PCMI summer school on computational complexity at the Institute for Advanced Study in Princeton. Then I was lucky enough to visit Avi in Israel when he was still a professor at the Hebrew University (and I was a grad student at Berkeley)—first briefly, but then Avi invited me back to spend a whole semester in Jerusalem, which ended up being one of my most productive semesters ever. Then Avi, having by then moved to the IAS in Princeton, hosted me for a one-year postdoc there, and later he and I collaborated closely on the algebrization paper. He’s had a greater influence on my career than all but a tiny number of people, and I’m far from the only one who can say that.
Summarizing Avi’s scientific contributions could easily fill a book, but Quanta and New Scientist and Lance’s blog can all get you started if you’re interested. Eight years ago, I took a stab at explaining one tiny little slice of Avi’s impact—namely, his decades-long obsession with “why the permanent is so much harder than the determinant”—in my IAS lecture Avi Wigderson’s “Permanent” Impact On Me, to which I refer you now (I can’t produce a new such lecture on one day’s notice!).
Huge congratulations to Avi.
Comment #1 April 10th, 2024 at 3:43 pm
Congratulations! But why did he win the Abel before the Turing? CS injustice? The derandomization of BPP, assuming E requires exponential-size circuits, is a remarkable theorem.
Comment #2 April 10th, 2024 at 4:00 pm
QMA(2) #1: The timing of awards like Nobel, Abel, and Turing is always wildly unpredictable and can involve decades-long delays. Indeed, these awards are best thought of as biathlons, where the first task is science and the second task is longevity! 😀
Comment #3 April 10th, 2024 at 5:45 pm
Here’s the link you were looking for
Comment #4 April 10th, 2024 at 6:53 pm
very good news!
mazeltov Avi!
Comment #5 April 11th, 2024 at 10:46 am
Lance Fortnow #3: Err, thanks! With some reticence, I fixed the quote and linked to the source. 🙂
Comment #6 April 11th, 2024 at 12:58 pm
Another giant in our midst indeed. It always blows my mind that modern programming is so removed from theoretical development: using a random-number library is just a few lines of code, but the deep thought that went into developing and redeveloping what now just seems given is truly astonishing. One could be a great programmer and run many high-impact experiments without knowing how any of the underlying code is executed. Just import…
Scott I was reading the talk you posted a link to above and I was curious what you think about this paper talking about using quantum computers to “fastly” calculate the permanent of a matrix: https://arxiv.org/pdf/2205.01328.pdf
Is there anything in that direction worth looking into outside of complexity science questions, the authors mention: “We provide a new tool to investigate computational complexity questions about the relationships among the BPP, BQP, and #P. We believe that our new findings in classical and quantum algorithms for computing matrix permanents would play an essential role in identifying the unclear relationship between the BQP and PH”
They also note it can be useful for boson sampling too, so I suppose there is that.
Comment #7 April 11th, 2024 at 1:16 pm
Sean Feeney #6: I somehow hadn’t seen that paper. While the abstract is unclear, if you read further, you see that they’re only claiming to approximate the permanent in quantum polynomial time for a special subclass of matrices A, those for which a quantity they call H(A) can be bounded. So now the question will be: assuming the result is correct, is that subclass at all interesting? Does it even go beyond the matrices for which we could approximate the permanent in classical polynomial time, for example using Gurvits’s algorithm? I wildly, wildly doubt that this class is going to be #P-complete (thereby implying BQP = P^#P and the end of complexity theory as we know it), although of course I can’t prove it. 🙂
Are there any readers who’ve looked at this paper, or are willing to look at it, and who’d like to chime in?
Comment #8 April 11th, 2024 at 1:55 pm
I looked at that paper by Joonsuk Huh when it came out. The main statement is Theorem 1. When talking about multiplicatively approximating permanents of N×N matrices, the real figure of merit is how well you can approximate the Nth root of the permanent, |Per(A)|^(1/N), and this usually gives simpler expressions. Theorem 1 then essentially states that there is a quantum algorithm to approximate |Per(A)|^(1/N) whenever that permanent root is at least as big as ‖H(A)‖ / N. The “Hamiltonian norm” ‖H(A)‖ is not really defined, but it appears(?) to just be the entry-wise 1-norm, the sum of the |A_{i,j}|.
This is sort of similar to Gurvits’s algorithm, which gives a good approximation whenever |Per(A)|^(1/N) is at least as big as the operator norm ‖A‖. Of course there are matrices where one norm is bigger than the other, and that’s what leaves room for a quantum advantage.
It’s a nice result, I think. But personally I don’t think it’s likely to have complexity-theoretic consequences; I’d be quite surprised if someone found an interesting class of matrices that was “hard” and had small 1-norm, but that didn’t also have (or couldn’t easily be turned into an equivalent matrix with) small operator norm.
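For readers who want something concrete to play with, here is a minimal Python sketch of the Glynn estimator underlying Gurvits’s algorithm (my own illustration, not code from the paper under discussion): for x drawn uniformly from {−1,+1}^n, the quantity (∏_j x_j)·∏_i (Ax)_i has expectation exactly Per(A), and its magnitude is controlled by ‖A‖^n, which is what makes the additive approximation work.

```python
import itertools
import random

def permanent(A):
    """Exact permanent via brute-force sum over permutations (fine for tiny n)."""
    n = len(A)
    total = 0
    for sigma in itertools.permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][sigma[i]]
        total += prod
    return total

def glynn_estimate(A, samples, seed=0):
    """Gurvits's randomized estimator: for x uniform in {-1,+1}^n,
    E[ (prod_j x_j) * prod_i (Ax)_i ] = Per(A)."""
    n = len(A)
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(samples):
        x = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        term = 1.0
        for xj in x:
            term *= xj                  # the sign prod_j x_j
        for i in range(n):
            term *= sum(A[i][j] * x[j] for j in range(n))  # factor (Ax)_i
        acc += term
    return acc / samples

A = [[1.0, 0.5],
     [0.5, 1.0]]
print(permanent(A))               # 1.25
print(glynn_estimate(A, 200000))  # close to 1.25
```

Only a permutation survives the averaging over signs, which is why the expectation collapses to the permanent; the number of samples needed scales with (‖A‖^n / Per(A))^2, matching the norm comparison above.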
Comment #9 April 11th, 2024 at 1:56 pm
On a separate note, I was interested to see two papers with “A critique…” on arxiv recently:
https://scirate.com/arxiv/2404.00006
https://scirate.com/arxiv/2404.04395
People bothering to debunk proofs that P=NP! Is it a new era of anti-crackpottery?
Comment #10 April 11th, 2024 at 9:56 pm
Sean #6,
“it always blows my mind that modern programming is so removed from theoretical development. Like using a random library is just a few lines of code, but the deep thought that went into developing and redeveloping what is now just seemingly given, is truly astonishing.”
This is how scientific development, and even more so engineering, has made tremendous progress in the past. The person designing aircraft seats has no idea how the sophisticated jet engine a few meters away works; they need only account for its impact in terms of vibration and so on. The same is true of software, except that due to the unique nature of its ecosystem (open source, libraries, etc.) the value seems greatly diminished for the original designers. The layers and layers of abstraction are what allow one to stand on the shoulders of giants, and to become one.
Comment #11 April 12th, 2024 at 8:29 am
Sean #6, Prasanna #10
“it always blows my mind that modern programming is so removed from theoretical development. Like using a random library is just a few lines of code, but the deep thought that went into developing and redeveloping what is now just seemingly given, is truly astonishing.”
An important question is whether the libraries in question are open source or not.
And then you can also extend this to the hardware itself, the processors, GPUs, etc. Even for these there’s always been an effort to “open source” them, even if it often means a huge loss in performance (https://en.wikipedia.org/wiki/OpenRISC)
Comment #12 April 12th, 2024 at 8:37 am
Alex Meiburg #9
“Is it a new era of anti-crackpottery?”
Looking for mistakes in papers, that’s indeed a groundbreaking idea! 😛
More seriously, I guess you could also decide to just ban/ignore any paper claiming to prove P = NP, but:
1) that still leaves you with about as many papers claiming to prove that P != NP, which are probably even harder to debunk (at least many P = NP claims can be implemented, by definition, and one can hope to find examples where they don’t work).
2) some experts do believe that P = NP (e.g. Don Knuth), so there’s that.
Comment #13 April 12th, 2024 at 10:49 am
Alex Meiburg #9: The entire subfield of extended formulations was built from an attempt to debunk a P=NP claim. And P=NP has not yet been disproved.
Comment #14 April 13th, 2024 at 7:20 am
Oh, the irony! The proof or disproof of P = NP should be in P and “easily” checked, right? If only we had a mechanical and algorithmic way of checking said proofs in a quick and formulaic manner if the proof/disproof were written in a suitable language…
Comment #15 April 15th, 2024 at 7:21 am
In your 2016 IAS lecture linked here you mention a conjecture that the distribution of |Per(A)|^2 tends to a lognormal distribution as n goes to infinity, for A an n x n iid matrix of standard complex gaussians. Has there been any progress on this problem since then?
Comment #16 April 15th, 2024 at 10:25 am
Anonymous #15: Yes, there has been! (But alas, still not a resolution.) See in particular this 2021 paper by Sepehr Nezami.
Comment #17 April 17th, 2024 at 4:26 pm
“Wigderson owns you”? Oh! Oh!! Ah! Ah!! I always thought Christos Papadimitriou is the universally acknowledged genius of the profession!