February 27, 2016

I’d like to start this post with a demonstration of human learning and generalisation. Below is a training set whose images have been grouped into YES and NO. Look at these, then decide which image from the test set should also be labelled YES.

That shouldn’t have been too difficult. In fact, I suspect even those kids in the training set would notice that the YES instances all contain two of the same thing. One might assume that learning this categorisation would come naturally to a... Read More

January 12, 2016

It is hard not to be enamoured of deep learning nowadays, watching neural networks show off their endless accumulation of new tricks. There are, as I see it, at least two good reasons to be impressed:

(1) Neural networks can learn to model many natural functions well, from weak priors.
The idea of marrying hierarchical, distributed representations with fast, GPU-optimised gradient calculations has turned out to be very powerful indeed. The early days of neural networks saw problems with local optima, but the ability to train deeper networks has ... Read More