Bringing the best of mobile technology to Imperial College Healthcare NHS Trust

We're really excited to announce that we've agreed a five-year partnership with Imperial College Healthcare NHS Trust, helping them make the most of the opportunity for mobile clinical applications to improve care. This is now our second NHS partnership for clinical apps, following a similar partnership we announced last month with the Royal Free London NHS Foundation Trust.

Over the last two years, the Trust has moved from paper to electronic patient records, and mobile technology is the natural next stage of this work. By giving clinicians access to cutting-edge healthcare apps that link to electronic patient records, they'll be able to access information on the move, react quickly in response to changing patient needs, and ultimately provide even better care.

We'll be working with the Trust to deploy our clinical app, Streams, which supports clinicians in caring for patients at risk of deterioration, particularly with conditions where early intervention can make all the difference. Like breaking news alerts on a mobile phone, the technology will notify nurses and doctors immediately when test results show a patient is at risk of becoming seriously ill. It will also enable clinicians at the Trust to securely assign and communicate about clinical tasks, and give them the information they need to make diagnoses and decisions.

DeepMind Papers @ NIPS (Part 3)

Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes

Authors: J Rae, JJ Hunt, T Harley, I Danihelka, A Senior, G Wayne, A Graves, T Lillicrap

We can recall vast numbers of memories, making connections between superficially unrelated events. As you read a novel, you'll likely remember quite precisely the last few things you've read, but also plot summaries, connections and character traits from far back in the novel.

Many machine learning models of memory, such as the Long Short-Term Memory, struggle at these sorts of tasks. The computational cost of these models scales quadratically with the number of memories they can store, so they are quite limited in how many memories they can have. More recently, memory-augmented neural networks such as the Differentiable Neural Computer or Memory Networks have shown promising results by adding memory that is separate from the computation, solving tasks such as reading short stories and answering questions (e.g. bAbI).

However, while these new architectures show promising results on small tasks, they use "soft" attention for accessing their memories, meaning that at every timestep they touch every word in memory. So while they can scale to short stories, they're a long way from reading novels.

In this work, we develop a set of techniques that use sparse approximations of such models to dramatically improve their scalability.
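To make the contrast with dense soft attention concrete, here is a minimal, hypothetical NumPy sketch of a sparse memory read: instead of spreading attention weight over every memory slot, the model attends only to the k most similar slots (the paper uses an approximate nearest-neighbour index to find them; a brute-force top-k search stands in for it here). The names and shapes are illustrative, not taken from the paper's code.

```python
import numpy as np

def dense_read(memory, query):
    """Soft attention: touches every slot in memory, so cost grows with memory size."""
    scores = memory @ query                       # similarity to every slot
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ memory                       # weighted sum over all slots

def sparse_read(memory, query, k=4):
    """Sparse attention: only the k most similar slots get non-zero weight."""
    scores = memory @ query
    top_k = np.argpartition(scores, -k)[-k:]      # the paper replaces this with an ANN index
    sub = scores[top_k]
    weights = np.exp(sub - sub.max())
    weights /= weights.sum()
    return weights @ memory[top_k]                # weighted sum over k slots only

memory = np.random.randn(10_000, 64)              # 10k memory slots, 64-dim each
query = np.random.randn(64)
print(dense_read(memory, query).shape, sparse_read(memory, query).shape)
```

Because the sparse read mixes only k slots, the backward pass also touches only those slots, which is what allows the memory to grow to very large sizes without the per-step cost growing with it.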

DeepMind Papers @ NIPS (Part 2)

The second blog post in this series, sharing brief descriptions of the papers we are presenting at the NIPS 2016 Conference in Barcelona.

Sequential Neural Models with Stochastic Layers

Authors: Marco Fraccaro, Søren Kaae Sønderby, Ulrich Paquet, Ole Winther

Much of our reasoning about the world is sequential, from listening to sounds, voices and music, to imagining our steps to reach a destination, to tracking a tennis ball through time. All these sequences have some amount of latent random structure in them. Two powerful and complementary models, recurrent neural networks (RNNs) and stochastic state space models (SSMs), are widely used to model sequential data like these. RNNs are excellent at capturing longer-term dependencies in data, while SSMs model uncertainty in the sequence's underlying latent random structure, and are great for tracking and control.

Is it possible to get the best of both worlds? In this paper we show how you can, by carefully layering deterministic (RNN) and stochastic (SSM) layers. We show how you can efficiently reason about a sequence's present latent structure, given its past (filtering) and also its past and future (smoothing).

For further details and related work, please see the paper: https://arxiv.org/abs/1605.07571

Check it out at NIPS:
Tue Dec 6th 05:20 – 05:40 PM @ Area 1+2 (Oral) in Deep Learning
Tue Dec 6th 06:00 – 09:30 PM @ Area 5+6+7+8 #179
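As a rough illustration of the layering idea (not the paper's actual variational architecture), here is a hypothetical NumPy sketch of a single timestep: a deterministic RNN cell summarises the input history, and a stochastic layer then samples a latent state whose Gaussian transition is conditioned on that deterministic state and the previous latent. All parameter names and sizes are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
x_dim, d_dim, z_dim = 8, 16, 4           # input, deterministic, and stochastic state sizes

# Illustrative random parameters; a real model would learn all of these.
W_xd  = rng.normal(size=(d_dim, x_dim)) * 0.1
W_dd  = rng.normal(size=(d_dim, d_dim)) * 0.1
W_mu  = rng.normal(size=(z_dim, d_dim + z_dim)) * 0.1
W_sig = rng.normal(size=(z_dim, d_dim + z_dim)) * 0.1

def step(x_t, d_prev, z_prev):
    """One timestep: deterministic RNN update, then a stochastic (SSM-like) layer on top."""
    d_t = np.tanh(W_xd @ x_t + W_dd @ d_prev)           # deterministic layer (RNN)
    cond = np.concatenate([d_t, z_prev])                # condition the transition on d_t, z_{t-1}
    mu, sigma = W_mu @ cond, np.exp(W_sig @ cond)       # Gaussian transition parameters
    z_t = mu + sigma * rng.normal(size=z_dim)           # stochastic layer: sampled latent state
    return d_t, z_t

d, z = np.zeros(d_dim), np.zeros(z_dim)
for t in range(20):                                     # unroll over a toy sequence
    d, z = step(rng.normal(size=x_dim), d, z)
print(d.shape, z.shape)
```

Filtering and smoothing then amount to inferring the distribution over each latent z_t from past observations alone, or from past and future observations, with the deterministic RNN states acting as the backbone.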

Open-sourcing DeepMind Lab

DeepMind's scientific mission is to push the boundaries of AI, developing systems that can learn to solve any complex problem without needing to be taught how. To achieve this, we work from the premise that AI needs to be general. Agents should operate across a wide range of tasks and be able to adapt automatically to changing circumstances. That is, they should not be pre-programmed, but rather able to learn automatically from their raw inputs and reward signals from the environment. There are two parts to this research program: (1) designing ever-more intelligent agents capable of ever more sophisticated cognitive skills, and (2) building increasingly complex environments where agents can be trained and evaluated.
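The agent side of that loop is deliberately simple: the environment hands the agent raw observations (pixels) and a scalar reward, and the agent hands back an action. The sketch below follows the Python bindings documented in the open-source DeepMind Lab repository, but the level name, observation key and action handling are assumptions for illustration, so check the repo's API docs for the exact spec.

```python
import numpy as np
import deepmind_lab  # Python bindings shipped with the open-source DeepMind Lab repo

# Level name, observation key and frame size are illustrative assumptions.
env = deepmind_lab.Lab('seekavoid_arena_01', ['RGB_INTERLACED'],
                       config={'width': '84', 'height': '84'})

def random_action(action_spec):
    """A stand-in agent: sample each discrete action dimension uniformly at random."""
    return np.array([np.random.randint(a['min'], a['max'] + 1) for a in action_spec],
                    dtype=np.intc)

env.reset()
spec = env.action_spec()
total_reward = 0.0
while env.is_running():
    obs = env.observations()['RGB_INTERLACED']            # raw pixels: the agent's only input
    total_reward += env.step(random_action(spec), num_steps=4)  # repeat the action for 4 frames
print('episode return:', total_reward)
```

Swapping the random agent for a learned policy that maps those raw pixels to actions is exactly the kind of training and evaluation the environment is meant to support.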