It is hard not to be enamoured by deep learning nowadays, watching neural networks show off their endless accumulation of new tricks. There are, as I see it, at least two good reasons to be impressed:
(1) Neural networks can learn to model many natural functions well, from weak priors.
The idea of marrying hierarchical, distributed representations with fast, GPU-optimised gradient calculations has turned out to be very powerful indeed. The early days of neural networks saw problems with local optima, but the ability to train deeper networks has solved this and allowed backpropagation to shine through. After baking in a small amount of domain knowledge through simple architectural decisions, deep learning practitioners now find themselves with a powerful class of parameterised functions and a practical way to optimise them.
The first such architectural decisions were the use of either convolutions or recurrent structure, to imbue models with spatial and temporal invariances. From this alone, neural networks excelled in image classification, speech recognition, machine translation, Atari games, and many more domains. More recently, mechanisms for top-down attention over inputs have shown their worth in image and natural language tasks, while differentiable memory modules such as tapes and stacks have even enabled networks to learn the rules of simple algorithms from only input-output pairs.
(2) Neural networks can learn surprisingly useful representations
While the community still waits eagerly for unsupervised learning to bear fruit, deep supervised learning has shown an impressive aptitude for building generalisable and interpretable features. That is to say, the features learned when a neural network is trained to predict P(y|x) often turn out to be both semantically interpretable and useful for modelling some other related function P(z|x).
As just a few examples of this:
- Units of a convolutional network trained to classify scenes often learn to detect specific objects in those scenes (such as a lighthouse), even though they were not explicitly trained to do so (Zhou et al., 2015)
- Correlations in the bottom layers of an image classification network provide a surprisingly good signature for the artistic style of an image, allowing new images to be synthesised using the content of one and the style of another (Gatys et al., 2015); a small sketch of this correlation-based signature appears after this list
- A recurrent network [correction below] taught to predict missing words from sentences learns meaningful word embeddings, where simple vector arithmetic can be used to find semantic analogies (a toy numerical sketch follows this list). For example:
  - v_king − v_man + v_woman ≈ v_queen
  - v_Paris − v_France + v_Italy ≈ v_Rome
  - v_Windows − v_Microsoft + v_Google ≈ v_Android
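To make the analogy arithmetic above concrete, here is a minimal, purely illustrative sketch. The three-dimensional vectors and the tiny vocabulary are made up for the example (real embeddings are learned, e.g. by word2vec-style models, over large corpora and have hundreds of dimensions); the point is only that the nearest neighbour of v_king − v_man + v_woman, by cosine similarity, is v_queen.

```python
import numpy as np

# Hand-made toy embeddings, purely for illustration; real word vectors are
# learned from text by models such as word2vec or GloVe.
emb = {
    "king":  np.array([0.8, 0.9, 0.1]),
    "man":   np.array([0.7, 0.1, 0.1]),
    "woman": np.array([0.7, 0.1, 0.9]),
    "queen": np.array([0.8, 0.9, 0.9]),
    "apple": np.array([0.1, 0.2, 0.3]),
}

def nearest(vec, exclude):
    """Return the word whose embedding is most cosine-similar to vec."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in emb if w not in exclude), key=lambda w: cos(emb[w], vec))

# v_king - v_man + v_woman lands nearest to v_queen.
target = emb["king"] - emb["man"] + emb["woman"]
print(nearest(target, exclude={"king", "man", "woman"}))  # -> queen
```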
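Similarly, the "style signature" mentioned in the second example above is, roughly, the set of correlations between channels of a convolutional feature map: a Gram matrix, in the spirit of Gatys et al. (2015). A minimal sketch, assuming a feature map has already been extracted from some trained network:

```python
import numpy as np

def gram_matrix(features):
    """Channel-by-channel correlation (Gram) matrix of a conv feature map.

    features: array of shape (channels, height, width), e.g. activations of
    an early layer of an image-classification network.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)   # one row per channel
    return flat @ flat.T / (h * w)      # (channels, channels) style signature

# Toy usage with random "activations"; in style transfer, new images are
# optimised so that their Gram matrices match those of the style image.
rng = np.random.default_rng(0)
feats = rng.standard_normal((64, 32, 32))
print(gram_matrix(feats).shape)  # (64, 64)
```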
I have no doubt that the next few years will see neural networks turn their attention to yet more tasks, integrate themselves more deeply into industry, and continue to impress researchers with new superpowers. This is all well justified, and I have no intention to belittle the current and future impact of deep learning; however, the optimism about just what these models can achieve in terms of intelligence has been worryingly reminiscent of the 1960s.
Extrapolating from the last few years’ progress, it is enticing to believe that Deep Artificial General Intelligence is just around the corner: that a few more architectural tricks, bigger datasets and faster hardware are all that is required to take us there. I feel there are a couple of solid reasons to be much more sceptical.
To begin with, it is a bad idea to intuit how broadly intelligent a machine must be, or have the capacity to be, based solely on a single task. The checkers-playing machines of the 1950s amazed researchers and many considered these a huge leap towards human-level reasoning, yet we now appreciate that achieving human or superhuman performance in this game is far easier than achieving human-level general intelligence. In fact, even the best humans can easily be defeated by a search algorithm with simple heuristics. The development of such an algorithm probably does not advance the long term goals of machine intelligence, despite the exciting intelligent-seeming behaviour it gives rise to, and the same could be said of much other work in artificial intelligence such as the expert systems of the 1980s. Human or superhuman performance in one task is not necessarily a stepping-stone towards near-human performance across most tasks.
By the same token, the ability of neural networks to learn interpretable word embeddings, say, does not remotely suggest that they are the right kind of tool for a human-level understanding of the world. It is impressive and surprising that these general-purpose, statistical models can learn meaningful relations from text alone, without any richer perception of the world, but this may speak much more about the unexpected ease of the task itself than it does about the capacity of the models. Just as checkers can be won through tree-search, so too can many semantic relations be learned from text statistics. Both produce impressive intelligent-seeming behaviour, but neither necessarily paves the way towards true machine intelligence.
I’d like to reflect on specifically what neural networks are good at, and how this relates to human intelligence. Deep learning has produced amazing discriminative models, generative models and feature extractors, but common to all of these is the use of a very large training dataset. Its place in the world is as a powerful tool for general-purpose pattern recognition, in situations where both the number of training examples (n) and their dimensionality (d) are large. Very possibly it is the best tool for working in this paradigm.
This is a very good fit for one particular class of problems that the brain solves: finding good representations to describe the constant and enormous flood of sensory data it receives. Before any sense can be made of the environment, the visual and auditory systems need to fold, stretch and twist this data from raw pixels and waves into a form that better captures the complex statistical regularities in the signal*. Whether this is learned from scratch or handed down as a gift from evolution, the brain solves this problem adeptly - and there is even recent evidence that the representations it finds are not too dissimilar from those discovered by a neural network. Deep learning, I contend, may well provide a fantastic starting point for many problems in perception.
That said, this high n, high d paradigm is a very particular one, and is not the right environment to describe a great deal of intelligent behaviour. The many facets of human thought include planning towards novel goals, inferring others' goals from their actions, learning structured theories to describe the rules of the world, inventing experiments to test those theories, and learning to recognise new object kinds from just one example. Very often they involve principled inference under uncertainty from few observations. For all the accomplishments of neural networks, it must be said that they have only ever proven their worth at tasks fundamentally different from those above. If they have succeeded in anything superficially similar, it has been because they saw many hundreds of times more examples than any human ever needed to.
Deep learning has brought us one branch higher up the tree towards machine intelligence and a wealth of different fruit is now hanging within our grasp. While the ability to learn good features in high dimensional spaces from weak priors with lots of data is both new and exciting, we should not fall into the trap of thinking that most of the problems an intelligent agent faces can be solved in this way. Gradient descent in neural networks may well play a big part in helping to build the components of thinking machines, but it is not, itself, the stuff of thought.
23 Comments
Great article, well done. I think you are still underestimating
Submitted by Shai Ben-David (not verified) on
We are still very far from understanding the current success of deep NN methods. In terms of both computational and informational complexity, analysis of those methods falls way short of explaining their practical success. You point to a paper by Choromanska et al. for the computational issue, but they, as well as several other recent works in this direction, make strongly unrealistic and unjustified assumptions about the network to be learnt. In the above-mentioned paper they call it "uniformity", and it amounts to assuming that successful problem-solving networks have weights that can be viewed as "uniformly distributed edge weights".
Similarly, we are nowhere close to understanding why such NNs can in fact generalize to yet unseen data points (their immense expressive power requires even more data than they are trained on to guarantee statistical generalization).
I think that the answers are in the direction of implicit prior knowledge that distinguishes the type of tasks and inputs on which those tools do well from arbitrary inputs.
At this point in time, no one can provide a meaningful distinction between the type of tasks on which those tools are likely to do well and those on which they are bound to fail. Still a lot of territory to explore.
Sample Complexity?
Submitted by Luke Hewitt on
Hi,
Thanks for your comment. I'm curious, what did you mean by: "Their immense expressive power requires even more data than they are trained on to guarantee statistical generalization". Are you referencing a specific theoretical result?
Many thanks,
Luke
Nice One
Submitted by Nicolaj Kirchhof (not verified) on
I really like your article. Independent of the question of "when" AI is going to outsmart us, I love Joschua's view of AI as a "Meta-Psychology" and an abstracted model of "how" our thinking works.
BTW: I think you should add additional measures to prevent all the bot posts...