AlphaFold @ CASP13: “What just happened?”

Update: An updated version of this blogpost was published as a (peer-reviewed) Letter to the Editor at Bioinformatics, sans the “sociology” commentary.

I just came back from CASP13, the biennial assessment of protein structure prediction methods (I previously blogged about CASP10). I participated both as a panelist on deep learning methods in protein structure prediction and as a predictor (more on that later). If you keep tabs on science news, you may have heard that DeepMind’s debut went rather well. So well, in fact, that they not only took first place, but also put a comfortable distance between themselves and the second-place group (the Zhang group) in the free modeling (FM) category, which focuses on modeling novel protein folds. Is the news real or overhyped? What is AlphaFold’s key methodological advance, and does it represent a fundamentally new approach? Has DeepMind been forthcoming in sharing the details? And what was the community’s reaction? I will summarize my thoughts on these questions and more below. At the end, I will also briefly discuss how RGNs, my end-to-end differentiable model for structure prediction, did at CASP13.

Continue reading

What Hinton’s Google Move Says About the Future of Machine Learning

Earlier this week TechCrunch broke the news that Google had acquired Geoff Hinton’s recently founded deep learning startup. Soon thereafter, Geoff posted an announcement on his Google+ page confirming the news and his (part-time) departure from the University of Toronto to Google. From the details that have emerged so far, it appears that he will split his time between UoT and the Google offices in Toronto and Mountain View. What does Geoff’s move, along with other recent high-profile departures, say about the future of machine learning research in academia? A lot, I think.

Continue reading