I just came back from CASP13, the biennial assessment of protein structure prediction methods (I previously blogged about CASP10). I participated in a panel on deep learning methods in protein structure prediction, and also took part as a predictor (more on that later). If you keep tabs on science news, you may have heard that DeepMind’s debut went rather well. So well, in fact, that they not only took first place but put a comfortable distance between themselves and the second-place predictor (the Zhang group) in the free modeling (FM) category, which focuses on modeling novel protein folds. Is the news real or overhyped? What is AlphaFold’s key methodological advance, and does it represent a fundamentally new approach? Is DeepMind forthcoming in sharing the details? And what was the community’s reaction? I will summarize my thoughts on these questions and more below. At the end I will also briefly discuss how RGNs, my end-to-end differentiable model for structure prediction, did at CASP13.
I recently had the pleasure of attending the 14th International Conference on Systems Biology in Copenhagen. It was a five-day, multi-track bonanza, a strong sign of the field’s continued vibrancy. The keynotes were generally excellent, and while I cannot help but feel a little dismayed by the incrementalism that is inherent to scientific research and that is on display in conferences, the forest view was encouraging and hopeful. This is one of the most exciting fields of science today.
I recently had the fortune of attending the 10th Critical Assessment of protein Structure Prediction, or CASP, as it is affectionately known. CASP is a competition of sorts that happens once every two years to ascertain the progress made in computationally predicting protein structure. It is a blind experiment, in which the structures to be predicted are unknown beforehand, and thus serves as an unbiased test of the predictive power of current computational methods. It is in many ways a model that the rest of computational biology ought (and is starting) to follow.
Two weeks ago I attended NIPS, one of the leading conferences on machine learning and AI. This was my first time at NIPS, but I got the impression that they always like to have a sprinkling of neuroscience talks swimming in the sea of machine learning presentations. This year, there were no fewer than four talks with some variation of “consciousness” or “brain” in the title, given by Giulio Tononi, Scott Aaronson, Stanislas Dehaene, and Terrence Sejnowski. Despite the ambitious-sounding titles, most of these talks unfortunately did not really tackle the fundamental basis of consciousness; Giulio Tononi’s was the exception. On the other hand, while all the other talks were reasonably accessible, I found Giulio’s talk to be largely impenetrable. It was intriguing enough, however, that I will try to summarize my understanding of his theory and offer some thoughts of my own on the subject. I should warn that I am not going to say anything particularly coherent, and I may in fact be grossly misrepresenting what Giulio intended to say because of my limited understanding of his talk.