Episode 7: Towards the future

AI researchers around the world are trying to create a general-purpose learning system that can learn to solve a broad range of problems without being taught how. Hannah explores the journey to get there.

Accelerating parallel training of neural nets

Earlier this year, we reported a speech recognition system trained on a million hours of data, a feat made possible by semi-supervised learning, in which training data is annotated by machines rather than by people. Massive machine learning projects like this are becoming more common, and they require distributing the training process across multiple processors; otherwise, training becomes too time-consuming.
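As a rough illustration of the kind of data-parallel training this describes, the sketch below uses PyTorch's DistributedDataParallel to run several model replicas in parallel and average their gradients each step. The toy linear model, random data, and hyperparameters are placeholders for illustration only and are not the system described in the article.

```python
# Minimal sketch of synchronous data-parallel training across processes.
# The model and data here are hypothetical stand-ins, not a real ASR system.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def train(rank, world_size):
    # Each process holds one model replica and trains on its own data shard.
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    model = DDP(torch.nn.Linear(20, 2))  # toy stand-in for a large network
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for step in range(10):
        # Each replica sees different (here random) data; DDP averages
        # gradients across all replicas during backward().
        inputs = torch.randn(32, 20)
        labels = torch.randint(0, 2, (32,))
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), labels)
        loss.backward()   # gradient synchronization happens here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = 4  # number of parallel workers
    mp.spawn(train, args=(world_size,), nprocs=world_size)
```

Because every replica applies the same averaged gradient, the workers stay in sync while the data is processed in parallel, which is what makes training on very large corpora tractable.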

New Alexa Research on Task-Oriented Dialogue Systems

Earlier this year, at Amazon’s re:MARS conference, Alexa head scientist Rohit Prasad unveiled Alexa Conversations, a new service that allows Alexa skill developers to more easily integrate conversational elements into their skills. The announcement signals the next stage in Alexa’s evolution: more natural, dialogue-based interactions that enable Alexa to aggregate information across turns and refine requests to better meet customer needs.
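To make the idea of aggregating information across turns concrete, here is a small Python sketch of multi-turn slot filling, the core bookkeeping that task-oriented dialogue systems perform. The slot names, prompts, and restaurant-booking scenario are invented for illustration and do not reflect the Alexa Conversations API.

```python
# Hypothetical sketch of multi-turn slot filling in a task-oriented dialogue.
# Slot names and prompts are illustrative only.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class DialogueState:
    """Tracks slot values gathered across turns until the request is complete."""
    required_slots: tuple = ("cuisine", "party_size", "time")
    filled: Dict[str, str] = field(default_factory=dict)

    def update(self, parsed_slots: Dict[str, str]) -> None:
        # Aggregate any new information extracted from the latest user turn.
        self.filled.update({k: v for k, v in parsed_slots.items() if v})

    def next_prompt(self) -> Optional[str]:
        # Ask only for what is still missing, refining the request turn by turn.
        for slot in self.required_slots:
            if slot not in self.filled:
                return f"What {slot.replace('_', ' ')} would you like?"
        return None  # all slots filled; the request can be fulfilled

state = DialogueState()
state.update({"cuisine": "italian"})              # turn 1: "Book an Italian restaurant"
print(state.next_prompt())                        # asks for party size
state.update({"party_size": "4", "time": "7pm"})  # turn 2 supplies the rest
print(state.next_prompt())                        # None: request complete
```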