Decoupled Neural Interfaces Using Synthetic Gradients

Neural networks are the workhorse of many of the algorithms developed at DeepMind. For example, AlphaGo uses convolutional neural networks to evaluate board positions in the game of Go, and DQN and deep reinforcement learning algorithms use neural networks to choose actions that play video games at super-human level.

This post introduces some of our latest research on advancing the capabilities and training procedures of neural networks, called Decoupled Neural Interfaces using Synthetic Gradients. This work gives us a way to allow neural networks to communicate, to learn to send messages between themselves, in a decoupled, scalable manner, paving the way for multiple neural networks to communicate with each other, or for improving the long-term temporal dependencies of recurrent networks. This is achieved by using a model to approximate error gradients, rather than computing error gradients explicitly with backpropagation.

The rest of this post assumes some familiarity with neural networks and how to train them. If you're new to this area, we highly recommend Nando de Freitas' lecture series on YouTube covering deep learning and neural networks.

Neural networks and the problem of locking

If you consider any layer or module in a neural network, it can only be updated once all the subsequent modules of the network have been executed and gradients have been backpropagated to it. In other words, every layer is locked: it must wait for the rest of the network before it can learn. A synthetic gradient model removes this lock by predicting what the backpropagated gradient will be, so the layer can update immediately, as sketched below.
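The following is a minimal sketch of that idea in PyTorch, not the paper's actual implementation: a small model (here a single linear layer, with hypothetical shapes) takes a layer's activation and predicts the gradient of the loss with respect to that activation, letting the layer update without waiting for backpropagation from downstream.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

layer = nn.Linear(128, 64)       # the layer we want to "unlock" (shapes are hypothetical)
grad_model = nn.Linear(64, 64)   # synthetic gradient model M: predicts dL/dh from activation h
opt_layer = torch.optim.SGD(layer.parameters(), lr=0.01)

x = torch.randn(32, 128)         # a batch of inputs
h = layer(x)                     # forward pass through just this layer

# Decoupled update: apply the *predicted* gradient immediately, instead of
# waiting for the true gradient to be backpropagated from the layers above.
synthetic_grad = grad_model(h).detach()
opt_layer.zero_grad()
h.backward(synthetic_grad)       # local backward pass driven by the synthetic gradient
opt_layer.step()
```

Because the update uses only the layer's own activation, it no longer depends on any computation happening elsewhere in the network.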
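The synthetic gradient model itself still needs a learning signal: it is trained by regressing its prediction onto the true gradient once that gradient eventually arrives from downstream. A sketch of that update, reusing the names above (the random tensor is a stand-in for the real downstream gradient, which here we don't compute):

```python
# Once the true gradient dL/dh arrives, fit M's prediction to it (L2 regression).
opt_grad_model = torch.optim.SGD(grad_model.parameters(), lr=0.001)

true_grad = torch.randn(32, 64)      # placeholder for the real dL/dh from downstream
pred_grad = grad_model(h.detach())   # re-predict from the stored activation
sg_loss = ((pred_grad - true_grad) ** 2).mean()

opt_grad_model.zero_grad()
sg_loss.backward()
opt_grad_model.step()
```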