The Unreasonable Reputation of Neural Networks

January 12, 2016

It is hard not to be enamoured by deep learning nowadays, watching neural networks show off their endless accumulation of new tricks. There are, as I see it, at least two good reasons to be impressed:

(1) Neural networks can learn to model many natural functions well, from weak priors.
The idea of marrying hierarchical, distributed representations with fast, GPU-optimised gradient calculations has turned out to be very powerful indeed. The early days of neural networks saw problems with local optima, but the ability to train deeper networks has solved this and allowed backpropagation to shine through. After baking in a small amount of domain knowledge through simple architectural decisions, deep learning practitioners now find themselves with a powerful class of parameterised functions and a practical way to optimise them.

The first such architectural decisions were the use of either convolutions or recurrent structure to imbue models with spatial and temporal invariances. From this alone, neural networks excelled in image classification, speech recognition, machine translation, Atari games, and many more domains. More recently, mechanisms for top-down attention over inputs have shown their worth in image and natural language tasks, while differentiable memory modules such as tapes and stacks have even enabled networks to learn the rules of simple algorithms from only input-output pairs.
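As a rough illustration of how such a prior gets baked in, a single convolutional filter can be sketched in a few lines of numpy: the same small kernel is applied at every spatial position, which is what encodes the translation invariance. (This is only a toy sketch; real networks learn many such kernels and compute them with optimised GPU code.)

    import numpy as np

    def conv2d(image, kernel):
        # Slide one small kernel over every position of the image; reusing the
        # same weights everywhere is what encodes the spatial invariance.
        H, W = image.shape
        kh, kw = kernel.shape
        out = np.zeros((H - kh + 1, W - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    # A toy 1x2 "edge detector"; in a trained network the kernels are learned.
    image = np.random.rand(8, 8)
    edge_detector = np.array([[1.0, -1.0]])
    print(conv2d(image, edge_detector).shape)   # (8, 7): one response per position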

(2) Neural networks can learn surprisingly useful representations
While the community still waits eagerly for unsupervised learning to bear fruit, deep supervised learning has shown an impressive aptitude for building generalisable and interpretable features. That is to say, the features learned when a neural network is trained to predict P(y|x) often turn out to be both semantically interpretable and useful for modelling some other related function P(z|x).

As just a few examples of this:

  • Units of a convolutional network trained to classify scenes often learn to detect specific objects in those scenes (such as a lighthouse), even though they were not explicitly trained to do so (Zhou et al., 2015)
  • Correlations in the bottom layers of an image classification network provide a surprisingly good signature for the artistic style of an image, allowing new images to be synthesised using the content of one and the style of another (Gatys et al., 2015)
  • A recurrent network [correction below] taught to predict missing words from sentences learns meaningful word embeddings, where simple vector arithmetic can be used to find semantic analogies (a minimal sketch of this arithmetic follows the list). For example:
    • v_king - v_man + v_woman ≈ v_queen
    • v_Paris - v_France + v_Italy ≈ v_Rome
    • v_Windows - v_Microsoft + v_Google ≈ v_Android
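As a rough sketch of what that arithmetic looks like, the toy numpy snippet below searches a hand-made (and entirely hypothetical) vocabulary for the word whose vector lies closest to v_king - v_man + v_woman; real experiments do the same thing with embeddings learned by a model such as word2vec.

    import numpy as np

    # Hypothetical 4-d embeddings; real word vectors are learned, not hand-written.
    vectors = {
        "king":  np.array([0.9, 0.8, 0.1, 0.0]),
        "queen": np.array([0.9, 0.1, 0.8, 0.0]),
        "man":   np.array([0.1, 0.9, 0.1, 0.2]),
        "woman": np.array([0.1, 0.1, 0.9, 0.2]),
        "apple": np.array([0.0, 0.2, 0.1, 0.9]),
    }

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    def nearest(target, exclude):
        # Return the vocabulary word whose vector is most similar to the target.
        candidates = {w: v for w, v in vectors.items() if w not in exclude}
        return max(candidates, key=lambda w: cosine(candidates[w], target))

    query = vectors["king"] - vectors["man"] + vectors["woman"]
    print(nearest(query, exclude={"king", "man", "woman"}))   # -> queen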

I have no doubt that the next few years will see neural networks turn their attention to yet more tasks, integrate themselves more deeply into industry, and continue to impress researchers with new superpowers. This is all well justified, and I have no intention to belittle the current and future impact of deep learning; however, the optimism about just what these models can achieve in terms of intelligence has been worryingly reminiscent of the 1960s.

 

Extrapolating from the last few years’ progress, it is enticing to believe that Deep Artificial General Intelligence is just around the corner, and that a few more architectural tricks, bigger data sets and faster computing power are all that is required to take us there. I feel that there are a couple of solid reasons to be much more skeptical.

To begin with, it is a bad idea to intuit how broadly intelligent a machine must be, or have the capacity to be, based solely on a single task. The checkers-playing machines of the 1950s amazed researchers and many considered them a huge leap towards human-level reasoning, yet we now appreciate that achieving human or superhuman performance in this game is far easier than achieving human-level general intelligence. In fact, even the best humans can easily be defeated by a search algorithm with simple heuristics. The development of such an algorithm probably does not advance the long-term goals of machine intelligence, despite the exciting intelligent-seeming behaviour it gives rise to, and the same could be said of much other work in artificial intelligence, such as the expert systems of the 1980s. Human or superhuman performance in one task is not necessarily a stepping-stone towards near-human performance across most tasks.
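To see how little machinery "search plus a simple heuristic" really needs, here is a toy sketch of depth-limited negamax. The game below (remove one to three stones; whoever takes the last stone wins) merely stands in for checkers, and the heuristic is deliberately crude; a checkers engine would plug in its own move generator and something like a material count at the depth cutoff.

    def negamax(pile, depth):
        # Value of the position for the player to move, plus their best move.
        if pile == 0:           # terminal: the previous player took the last stone
            return -1, None
        if depth == 0:          # depth cutoff: fall back to a crude heuristic
            return 0, None      # (a checkers engine would count pieces here)
        best_value, best_move = float("-inf"), None
        for take in (1, 2, 3):
            if take <= pile:
                value, _ = negamax(pile - take, depth - 1)
                if -value > best_value:
                    best_value, best_move = -value, take
        return best_value, best_move

    value, move = negamax(10, depth=6)
    print(value, move)   # 1 2 -> from a pile of ten, taking two stones wins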

By the same token, the ability of neural networks to learn interpretable word embeddings, say, does not remotely suggest that they are the right kind of tool for a human-level understanding of the world. It is impressive and surprising that these general-purpose statistical models can learn meaningful relations from text alone, without any richer perception of the world, but this may speak much more about the unexpected ease of the task itself than it does about the capacity of the models. Just as checkers can be won through tree-search, so too can many semantic relations be learned from text statistics. Both produce impressive intelligent-seeming behaviour, but neither necessarily paves the way towards true machine intelligence.

I’d like to reflect specifically on what neural networks are good at, and how this relates to human intelligence. Deep learning has produced amazing discriminative models, generative models and feature extractors, but common to all of these is the use of a very large training dataset. Its place in the world is as a powerful tool for general-purpose pattern recognition, in situations where both n (the number of training examples) and d (their dimensionality) are high. Very possibly it is the best tool for working in this paradigm.

This is a very good fit for one particular class of problems that the brain solves: finding good representations to describe the constant and enormous flood of sensory data it receives. Before any sense can be made of the environment, the visual and auditory systems need to fold, stretch and twist this data from raw pixels and waves into a form that better captures the complex statistical regularities in the signal*. Whether this is learned from scratch or handed down as a gift from evolution, the brain solves this problem adeptly - and there is even recent evidence that the representations it finds are not too dissimilar from those discovered by a neural network. I contend that deep learning may well provide a fantastic starting point for many problems in perception.

That said, this high n, high d paradigm is a very particular one, and is not the right environment to describe a great deal of intelligent behaviour. The many facets of human thought include planning towards novel goals, inferring others' goals from their actions, learning structured theories to describe the rules of the world, inventing experiments to test those theories, and learning to recognise new object kinds from just one example. Very often they involve principled inference under uncertainty from few observations. For all the accomplishments of neural networks, it must be said that they have only ever proven their worth at tasks fundamentally different from those above. If they have succeeded in anything superficially similar, it has been because they saw many hundreds of times more examples than any human ever needed to.

Deep learning has brought us one branch higher up the tree towards machine intelligence and a wealth of different fruit is now hanging within our grasp. While the ability to learn good features in high dimensional spaces from weak priors with lots of data is both new and exciting, we should not fall into the trap of thinking that most of the problems an intelligent agent faces can be solved in this way. Gradient descent in neural networks may well play a big part in helping to build the components of thinking machines, but it is not, itself, the stuff of thought.

This has been the first post of what I hope will become a long-running blog, where I'll structure and share my thoughts about machine intelligence. Later posts are likely to address how to define intelligence, what might be the most promising routes to building it, and where we can draw inspiration. I hope it will serve as a platform for discussion too, so please go ahead and rebuke me in the comments below. And subscribe!
 
 
Correction: The model used to produce word analogies was actually a log-linear skip-gram model, trained to discriminate nearby word pairs from negative samples (Mikolov et al., 2013). Many thanks to fnl for pointing this out.
 
*One suggested principle for this is the Efficient coding hypothesis, which ties into information theory and data compression.

 

23 Comments

Game theory

Great article.

I'm just a lay person but I think I understood most of it.

I can see that it is possible to have a machine play checkers and chess and beat any human these days due to its deep search capabilities.

I'd be interested to see what happens when you apply one of these neural networks to a game like rock, paper, scissors. Would it consistently beat a human opponent? Essentially a perfect strategy for this game is a random choice from the three; presumably the machine would get to this if pitted against a copy of itself. But how would it adapt that strategy to beat us? And would that strategy be an emergent property of the system?

Thanks.

Perception isn't all

Neural nets are great at perception. Sparse encodings are great at remembering (Numenta). In fact, neural nets will eventually be a "compiled form of procedural knowledge" (e.g. knowing how to ride a bike without a declarative representation). But this firestorm of success seems to have overshadowed prior AI work.

For example, we did work in Robot-Human cooperation to change a tire. Recognizing a lugnut would be a perfect use of NNs (except for the fact that there is only a small set of lugnuts). But representing the fact that lugnuts can have reverse threads, or are made of different materials (hard to perceive), or require different torque values, all fall under "old school AI" symbolic-reasoning techniques. Planning and symbolic knowledge are required to know that this old truck has reverse-thread lugnuts on the driver's side. A billion dollars is going into OpenAI but it appears that won't include symbolic reasoning work.

I fear another "Lighthill report" that posits that NNs cannot do non-perceptual tasks well and then draws the hasty generalization that AI doesn't work... leading to another AI winter.

spam

speaking of neural networks, I think someone has captcha-capable spambots pointed at this page...

The Master Algorithm

I like your analysis.  I believe that one-shot learning, unsupervised learning, semi-supervised learning (e.g., good examples, showing how) and reinforcement learning (rewards) all need to play a part in AGI.

I assume you have scanned Pedro Domingos' "The Master Algorithm". I would be very interested in your comments on his analysis and especially on his Markov Logic Networks as a step toward AGI.
