The State of Probabilistic Programming

For two weeks last July, I cocooned myself in a hotel in Portland, OR, living and breathing probabilistic programming as a “student” at the summer school run by DARPA. The school is part of the broader DARPA program on Probabilistic Programming for Advanced Machine Learning (PPAML), which has resulted in a great infusion of energy (and funding) into the probabilistic programming space. Last year was the school’s inaugural run, meant to introduce and disseminate the languages and tools being developed to the broader scientific and technology communities. It was graciously hosted by Galois Inc., which did a terrific job of organizing the event. Thankfully, they’re hosting the summer school again this year (there’s still time to apply!), which made me think that now is a good time to reflect on last year’s program and provide a snapshot of the state of the field. I will also take some liberty in prognosticating on the future of this space. Note that I am by no means a probabilistic programming expert, merely a curious outsider with a problem or two to solve.
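For readers who haven’t encountered the idea, here is a minimal sketch of the core bargain of probabilistic programming: you write an ordinary-looking generative program, condition on observed data, and leave posterior inference to a reusable engine. This is plain Python rather than any of the PPAML languages, all names are my own for illustration, and the “engine” is deliberately the crudest one possible (rejection sampling):

```python
import random

def flip(p):
    """Bernoulli draw: True with probability p."""
    return random.random() < p

def coin_model():
    """Generative program: draw a coin bias uniformly at random,
    then flip the coin ten times."""
    bias = random.random()
    heads = sum(flip(bias) for _ in range(10))
    return bias, heads

def posterior_samples(observed_heads, n_samples=5000):
    """A crude rejection-sampling 'inference engine': rerun the
    program and keep only the runs that reproduce the observation."""
    accepted = []
    while len(accepted) < n_samples:
        bias, heads = coin_model()
        if heads == observed_heads:
            accepted.append(bias)
    return accepted

# Condition on seeing 8 heads in 10 flips. The empirical posterior
# mean should land near the exact Beta(9, 3) mean of 0.75.
samples = posterior_samples(observed_heads=8)
print(sum(samples) / len(samples))
```

Real systems replace the rejection loop with MCMC or variational inference, but the division of labor is the same: the model is a program, and inference is somebody else’s problem.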

Continue reading

What Does a Neural Network Actually Do?

There has been a lot of renewed interest lately in neural networks (NNs) due to their popularity as a model for deep learning architectures (there are non-NN-based deep learning approaches based on sum-product networks and support vector machines with deep kernels, among others). Perhaps due to their loose analogy with biological brains, the behavior of neural networks has acquired an almost mystical status. This is compounded by the fact that theoretical analysis of multilayer perceptrons (one of the most common architectures) remains very limited, although the situation is gradually improving. To gain an intuitive understanding of what a learning algorithm does, I usually like to think about its representational power, as this provides insight into what can, if not necessarily what does, happen inside the algorithm to solve a given problem. I will do this here for the case of multilayer perceptrons. By the end of this informal discussion I hope to provide an intuitive picture of the surprisingly simple representations that NNs encode.
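As a concrete reference point, here is a minimal sketch of the kind of multilayer perceptron I have in mind: each layer is nothing but an affine map followed by an elementwise nonlinearity, so the whole network is a composition of very simple pieces. The layer sizes are arbitrary, and I use a hard threshold rather than the sigmoid more common in practice, purely to make the geometry explicit:

```python
import numpy as np

def layer(x, W, b):
    """One layer: an affine map followed by an elementwise nonlinearity.
    With a hard threshold, each unit is a half-space indicator."""
    return (W @ x + b > 0).astype(float)

def mlp(x, layers):
    """A multilayer perceptron is just a composition of such layers."""
    for W, b in layers:
        x = layer(x, W, b)
    return x

# A toy 2-3-1 network: each hidden unit carves out a half-plane of the
# input space; the output unit fires only where all three agree, i.e.,
# on the intersection of the three half-planes.
rng = np.random.default_rng(0)
layers = [
    (rng.standard_normal((3, 2)), rng.standard_normal(3)),  # input -> hidden
    (np.ones((1, 3)), np.array([-2.5])),                    # hidden -> output (AND)
]
print(mlp(np.array([0.3, -0.7]), layers))
```

Everything a network like this represents is built from such compositions, which is precisely what makes its representational power tractable to reason about.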

Continue reading

Aside

It is tempting to assume that with the appropriate choice of weights for the edges connecting the second and third layers of the NN discussed in this post, it would be possible to create classifiers that output 1 over any composite region defined by unions and intersections of the 7 regions shown below.
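Since the figure isn’t reproduced here, the assumption can be probed abstractly by treating membership in each region as a binary indicator feeding the third layer. A minimal sketch: a single third-layer threshold unit handles a pure union (OR) or pure intersection (AND) of indicators easily, but a composite region such as “in one region or the other, but not both” is already out of reach:

```python
import itertools

def threshold_unit(inputs, weights, bias):
    """A third-layer unit: a linear threshold over region indicators."""
    return int(sum(w * x for w, x in zip(weights, inputs)) + bias > 0)

# A pure union (OR) or pure intersection (AND) of two regions is easy:
for a, b in itertools.product([0, 1], repeat=2):
    union = threshold_unit((a, b), weights=(1, 1), bias=-0.5)  # a OR b
    inter = threshold_unit((a, b), weights=(1, 1), bias=-1.5)  # a AND b
    print(a, b, union, inter)

# But no weights/bias realize "in a or b, but not both" (XOR); a brute
# force over a grid of candidate values finds nothing:
def realizes_xor(w1, w2, bias):
    return all(threshold_unit((a, b), (w1, w2), bias) == (a ^ b)
               for a, b in itertools.product([0, 1], repeat=2))

grid = [v / 2 for v in range(-8, 9)]
print(any(realizes_xor(w1, w2, c) for w1 in grid for w2 in grid for c in grid))
# prints False
```

The grid search is only suggestive, of course; the exact statement is the classic one that XOR is not linearly separable, so no real-valued weights and bias would do either.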

Continue reading

ICSB 2013

I recently had the pleasure of attending the 14th International Conference on Systems Biology in Copenhagen. It was a five-day, multi-track bonanza, a strong sign of the field’s continued vibrancy. The keynotes were generally excellent, and while I cannot help but feel a little dismayed by the incrementalism that is inherent to scientific research, and that conferences put on full display, the forest view was encouraging and hopeful. This is one of the most exciting fields of science today.

Continue reading

10 Months at Harvard, Quantified

I will soon reach the one-year mark of my fellowship at HMS, which seems like a fitting time to examine how effectively I have spent my time here so far. I have been a practitioner of self-quantification since long before the movement acquired its name, having tracked some aspect of my life since I was 16. Given the movement’s growing popularity, I thought it appropriate to share some of my life-hacking experiments. My approach has cyclically waxed and waned in sophistication, something that I will expound upon later in the post, but I believe that the overall trajectory of my effort has been one of increasing usefulness. Any lifestyle change, particularly one that involves compulsive tracking of one’s behavior, ought to result in actionable information that is demonstrably useful, and not merely be a quantitative exercise in vanity. In this post I hope to show that this can in fact be the case for self-quantification.

Continue reading

What Hinton’s Google Move Says About the Future of Machine Learning

Earlier this week TechCrunch broke the news that Google had acquired Geoff Hinton’s recently founded deep learning startup. Soon thereafter Geoff posted on his Google+ page an announcement confirming the news and his (part-time) departure from the University of Toronto to Google. From the details that have emerged so far, it appears that he will split his time between UoT and the Google offices in Toronto and Mountain View. What does Geoff’s move, and other recent high-profile departures, say about the future of machine learning research in academia? A lot, I think.

Continue reading