Trust, confidence and Verifiable Data Audit

Data can be a powerful force for social progress, helping our most important institutions to improve how they serve their communities. As cities, hospitals, and transport systems find new ways to understand what people need from them, they're unearthing opportunities to change how they work today and identifying exciting ideas for the future.

Data can only benefit society if it has society's trust and confidence, and here we all face a challenge. Now that data can be used for so many more purposes, people aren't just asking who is holding information and whether it is being kept securely; they also want greater assurances about precisely what is being done with it.

In that context, auditability becomes an increasingly important virtue. Any well-built digital tool will already log how it uses data, and be able to show and justify those logs if challenged. But the more powerful and secure we can make that audit process, the easier it becomes to establish real confidence about how data is being used in practice.

Imagine a service that could give mathematical assurance about what is happening with each individual piece of personal data, without possibility of falsification or omission. Imagine the ability for the inner workings of that system to be checked in real time, to ensure that data is only being used as it should be.
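The post does not describe an implementation, but the property it asks for, a record of data use that cannot be silently altered or omitted, is the kind of guarantee an append-only, hash-chained log provides. The sketch below is illustrative only; the `AuditLog` class, its fields, and the example record are assumptions made for this example, not DeepMind's Verifiable Data Audit design.

```python
# Illustrative sketch only: a minimal append-only, tamper-evident log.
# Not DeepMind's Verifiable Data Audit design; names and fields are assumptions.
import hashlib
import json
import time


class AuditLog:
    """Append-only log in which each entry commits to the previous one,
    so any alteration or omission changes every later hash."""

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "timestamp": time.time(),
            "record": record,
            "prev_hash": prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**body, "hash": digest})
        return digest

    def verify(self) -> bool:
        """Recompute the chain; returns False if any entry was tampered with."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: entry[k] for k in ("timestamp", "record", "prev_hash")}
            if entry["prev_hash"] != prev_hash:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != entry["hash"]:
                return False
            prev_hash = digest
        return True


log = AuditLog()
log.append({"action": "view_record", "record_id": "example-123", "user": "clinician-42"})
assert log.verify()
```

Because each entry commits to the hash of the previous one, editing or removing any record changes every subsequent hash, which is what makes such a log checkable after the fact.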

A milestone for DeepMind Health and Streams

In November we announced a groundbreaking five-year partnership with the Royal Free London to deploy and expand on Streams, our secure clinical app that aims to improve care by getting the right information to the right clinician at the right time.

The first version of Streams has now been deployed at the Royal Free, and we're delighted that the early feedback from nurses, doctors and patients has so far been really positive. Some of the nurses using Streams at the hospital estimate that the app is saving them up to two hours per day, giving them more time to spend with patients in need. And we're starting to hear the first stories of patients whose conditions were picked up and acted on faster thanks to Streams alerts, patients like Afia Ahmed, who was seen more quickly thanks to the instant alerts. You can read more about the deployment and some of the early positive signs on the Royal Free's website.

Understanding Agent Cooperation

We employ deep multi-agent reinforcement learning to model the emergence of cooperation. The new notion of sequential social dilemmas allows us to model how rational agents interact, and arrive at more or less cooperative behaviours depending on the nature of the environment and the agents' cognitive capacity. The research may enable us to better understand and control the behaviour of complex multi-agent systems such as the economy, traffic, and environmental challenges.
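For readers unfamiliar with the term, sequential social dilemmas generalise classic matrix-game dilemmas such as the Prisoner's Dilemma to temporally extended environments. The sketch below shows only the classic matrix-game case, with standard textbook payoffs rather than values from the paper, to make the underlying tension concrete.

```python
# Illustrative sketch: the classic matrix-game social dilemma that
# sequential social dilemmas generalise to temporally extended settings.
# Payoff values are standard textbook ones, not taken from the paper.
import itertools

# Payoffs (row player, column player) for actions C (cooperate) / D (defect).
PAYOFFS = {
    ("C", "C"): (3, 3),   # mutual cooperation
    ("C", "D"): (0, 4),   # sucker's payoff vs temptation
    ("D", "C"): (4, 0),
    ("D", "D"): (1, 1),   # mutual defection
}

for a1, a2 in itertools.product("CD", repeat=2):
    r1, r2 = PAYOFFS[(a1, a2)]
    print(f"agent1={a1} agent2={a2} -> rewards ({r1}, {r2})")

# The dilemma: defecting dominates for each agent individually, yet mutual
# defection (1, 1) leaves both worse off than mutual cooperation (3, 3).
```

In the sequential setting studied here, cooperation and defection are not single moves but policies learned over many timesteps, which is why the environment's structure and the agents' cognitive capacity shape how cooperative the learned behaviour ends up being.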

Our collaborations with academia to advance the field of AI

When I was studying as an undergraduate in the mid-90s, there was very little active engagement between the academic communities pushing the boundaries of maths and science and the industries that many students ended up going into, such as finance. This struck me as a missed opportunity. While private institutions benefited from the technological advances being driven by university researchers, the breakthroughs they subsequently made were rarely shared back for mutual benefit between the two.

DeepMind’s work in 2016: a round-up

In a world of fiercely complex, emergent, and hard-to-master systems – from our climate to the diseases we strive to conquer – we believe that intelligent programs will help unearth new scientific knowledge that we can use for social benefit. To achieve this, we believe we'll need general-purpose learning systems that are capable of developing their own understanding of a problem from scratch, and of using this to identify patterns and breakthroughs that we might otherwise miss. This is the focus of our long-term research mission at DeepMind.

Bringing the best of mobile technology to Imperial College Healthcare NHS Trust

We're really excited to announce that we've agreed a five-year partnership with Imperial College Healthcare NHS Trust, helping them make the most of the opportunity for mobile clinical applications to improve care. This is now our second NHS partnership for clinical apps, following a similar partnership we announced last month with the Royal Free London NHS Foundation Trust.

Over the last two years, the Trust has moved from paper to electronic patient records, and mobile technology is the natural next stage of this work. By giving clinicians access to cutting-edge healthcare apps that link to electronic patient records, they'll be able to access information on the move, react quickly in response to changing patient needs, and ultimately provide even better care.

We'll be working with the Trust to deploy our clinical app, Streams, which supports clinicians in caring for patients at risk of deterioration, particularly with conditions where early intervention can make all the difference. Like breaking news alerts on a mobile phone, the technology will notify nurses and doctors immediately when test results show a patient is at risk of becoming seriously ill. It will also enable clinicians at the Trust to securely assign and communicate about clinical tasks, and give them the information they need to make diagnoses and decisions.
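As a rough illustration of the alerting pattern described above, here is a toy sketch in which a new test result triggers an immediate notification. The `TestResult` class, the ratio threshold, and the `notify` function are all invented for the example; they are not Streams' clinical logic.

```python
# Toy sketch only: a rule-based alert in the spirit of the post, where an
# abnormal test result immediately notifies clinicians. Thresholds and
# names are invented placeholders, not clinical guidance.
from dataclasses import dataclass


@dataclass
class TestResult:
    patient_id: str
    test_name: str
    value: float
    baseline: float


def at_risk(result: TestResult, ratio_threshold: float = 1.5) -> bool:
    """Flag a result that has risen well above the patient's baseline.
    The 1.5x ratio is an arbitrary placeholder."""
    return result.baseline > 0 and result.value / result.baseline >= ratio_threshold


def notify(result: TestResult) -> None:
    # In a real system this would page the responsible clinician's device.
    print(f"ALERT: review {result.test_name} for patient {result.patient_id}")


incoming = TestResult("example-001", "blood_test", value=160.0, baseline=90.0)
if at_risk(incoming):
    notify(incoming)
```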

DeepMind Papers @ NIPS (Part 3)

Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes

Authors: J. Rae, J. J. Hunt, T. Harley, I. Danihelka, A. Senior, G. Wayne, A. Graves, T. Lillicrap

We can recall vast numbers of memories, making connections between superficially unrelated events. As you read a novel, you'll likely remember quite precisely the last few things you've read, but also plot summaries, connections and character traits from far back in the novel.

Many machine learning models of memory, such as Long Short-Term Memory, struggle at these sorts of tasks. The computational cost of these models scales quadratically with the number of memories they can store, so they are quite limited in how many memories they can have. More recently, memory-augmented neural networks, such as the Differentiable Neural Computer or Memory Networks, have shown promising results by adding memory that is separate from the computation, solving tasks such as reading short stories and answering questions (e.g. the bAbI tasks).

However, while these new architectures show promising results on small tasks, they use "soft" attention for accessing their memories, meaning that at every timestep they touch every word in memory. So while they can scale to short stories, they're a long way from reading novels.

In this work, we develop a set of techniques that use sparse approximations of such models to dramatically improve their scalability.
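To make the scalability point concrete, the sketch below contrasts a dense soft-attention read, which touches every memory slot, with a read restricted to the top-k most similar slots. It is an illustrative NumPy toy, not the paper's implementation; the function names, slot count and choice of k are assumptions.

```python
# Illustrative sketch (not the paper's code): dense soft attention touches
# every slot, while a sparse top-k read only touches k slots per access.
import numpy as np


def dense_read(memory: np.ndarray, query: np.ndarray) -> np.ndarray:
    """Soft attention over all N slots: O(N) work per read."""
    scores = memory @ query                      # similarity to every slot
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ memory                      # weighted sum over all slots


def sparse_read(memory: np.ndarray, query: np.ndarray, k: int = 4) -> np.ndarray:
    """Attend only to the k most similar slots: roughly O(k) work once the
    nearest slots are found (e.g. via an approximate nearest-neighbour index)."""
    scores = memory @ query
    top_k = np.argpartition(scores, -k)[-k:]     # indices of the k best slots
    sub_scores = scores[top_k]
    weights = np.exp(sub_scores - sub_scores.max())
    weights /= weights.sum()
    return weights @ memory[top_k]


memory = np.random.randn(10_000, 64)             # 10k slots, 64-dim each
query = np.random.randn(64)
dense = dense_read(memory, query)
sparse = sparse_read(memory, query, k=16)
print(dense.shape, sparse.shape)                 # both (64,); sparse touched 16 slots
```

The paper's contribution lies in making this kind of sparse access efficient and trainable end to end, but the toy above captures why reading k slots instead of all N matters as memories grow from short stories towards novels.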

DeepMind Papers @ NIPS (Part 2)

The second blog post in this series, sharing brief descriptions of the papers we are presenting at the NIPS 2016 conference in Barcelona.

Sequential Neural Models with Stochastic Layers

Authors: Marco Fraccaro, Søren Kaae Sønderby, Ulrich Paquet, Ole Winther

Much of our reasoning about the world is sequential, from listening to sounds and voices and music, to imagining our steps to reach a destination, to tracking a tennis ball through time. All these sequences have some amount of latent random structure in them. Two powerful and complementary models, recurrent neural networks (RNNs) and stochastic state space models (SSMs), are widely used to model sequential data like these. RNNs are excellent at capturing longer-term dependencies in data, while SSMs model uncertainty in the sequence's underlying latent random structure, and are great for tracking and control.

Is it possible to get the best of both worlds? In this paper we show how you can, by carefully layering deterministic (RNN) and stochastic (SSM) layers. We show how you can efficiently reason about a sequence's present latent structure, given its past (filtering) and also its past and future (smoothing).

For further details and related work, please see the paper: https://arxiv.org/abs/1605.07571

Check it out at NIPS:
Tue Dec 6th, 05:20-05:40 PM @ Area 1+2 (Oral), Deep Learning session
Tue Dec 6th, 06:00-09:30 PM @ Area 5+6+7+8, poster #179
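As a rough intuition for what layering deterministic (RNN) and stochastic (SSM) layers can look like, here is a tiny NumPy sketch in which a deterministic recurrent state parameterises a Gaussian latent transition. The layer sizes, the tanh update and the Gaussian form are assumptions for illustration; this is not the model from the paper.

```python
# Illustrative sketch only (not the paper's model): a deterministic RNN
# state drives the parameters of a stochastic (state-space-style) latent.
import numpy as np

rng = np.random.default_rng(0)

T, x_dim, d_dim, z_dim = 20, 8, 16, 4          # sequence length and layer sizes
W_xd = rng.normal(scale=0.1, size=(x_dim, d_dim))
W_dd = rng.normal(scale=0.1, size=(d_dim, d_dim))
W_mu = rng.normal(scale=0.1, size=(d_dim + z_dim, z_dim))
W_sig = rng.normal(scale=0.1, size=(d_dim + z_dim, z_dim))

x = rng.normal(size=(T, x_dim))                # observed sequence
d = np.zeros(d_dim)                            # deterministic RNN state
z = np.zeros(z_dim)                            # stochastic latent state

for t in range(T):
    # Deterministic layer: ordinary RNN update, good at long-range structure.
    d = np.tanh(x[t] @ W_xd + d @ W_dd)
    # Stochastic layer: Gaussian transition whose mean and scale depend on
    # the RNN state and the previous latent, as in a state space model.
    h = np.concatenate([d, z])
    mu, log_sigma = h @ W_mu, h @ W_sig
    z = mu + np.exp(log_sigma) * rng.normal(size=z_dim)

print("final deterministic state:", d.shape, "final latent sample:", z.shape)
```

The filtering and smoothing questions in the paper then amount to inferring the trajectory of the stochastic layer given the observations, with the deterministic layer carrying the long-range context.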

Open-sourcing DeepMind Lab

DeepMind's scientific mission is to push the boundaries of AI, developing systems that can learn to solve any complex problem without needing to be taught how. To achieve this, we work from the premise that AI needs to be general. Agents should operate across a wide range of tasks and be able to automatically adapt to changing circumstances. That is, they should not be pre-programmed, but rather able to learn automatically from their raw inputs and reward signals from the environment. There are two parts to this research program: (1) designing ever-more intelligent agents capable of more and more sophisticated cognitive skills, and (2) building increasingly complex environments where agents can be trained and evaluated.
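The agent/environment split described here is the standard reinforcement learning interaction loop. Below is a generic, self-contained sketch of that loop; it deliberately does not use the DeepMind Lab API, and the `Environment` and `RandomAgent` classes are placeholders invented for the example.

```python
# Generic sketch of an agent learning from raw observations and rewards.
# Not the DeepMind Lab API; the classes below are illustrative placeholders.
import random


class Environment:
    """Toy stand-in environment: reach state 10 to receive a reward."""

    def reset(self) -> int:
        self.state = 0
        return self.state

    def step(self, action: int):
        self.state += action                    # action is -1 or +1
        reward = 1.0 if self.state == 10 else 0.0
        done = self.state == 10
        return self.state, reward, done


class RandomAgent:
    """Placeholder agent: a learning agent would update itself from the
    stream of (observation, reward) pairs instead of acting randomly."""

    def act(self, observation: int) -> int:
        return random.choice([-1, 1])

    def observe(self, observation: int, reward: float) -> None:
        pass                                    # a learning update would go here


env, agent = Environment(), RandomAgent()
obs = env.reset()
for _ in range(1000):
    action = agent.act(obs)
    obs, reward, done = env.step(action)
    agent.observe(obs, reward)
    if done:
        break
```

The point of an environment suite like DeepMind Lab is to make the `Environment` side of this loop rich and varied, so that agents must genuinely learn from raw observations rather than rely on pre-programmed behaviour.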