Developing a new natural-language-understanding system usually requires training it on thousands of sample utterances, which can be costly and time-consuming to collect and annotate. That’s particularly burdensome for small developers, like many who have contributed to the library of more than 70,000 third-party skills now available for Alexa.
New method for compressing neural networks better preserves accuracy
Neural networks have been responsible for most of the top-performing AI systems of the past decade, but they tend to be big, which means they tend to be slow. That’s a problem for systems like Alexa, which depend on neural networks to process spoken requests in real time.
Manifold: A Model-Agnostic Visual Debugging Tool for Machine Learning at Uber
Machine learning (ML) is widely used across the Uber platform to support intelligent decision making and forecasting for features such as ETA prediction and fraud detection. For optimal results, we invest a lot of resources in developing accurate predictive …
Creating a Zoo of Atari-Playing Agents to Catalyze the Understanding of Deep Reinforcement Learning
This research was conducted with valuable help from collaborators at Google Brain and OpenAI.
Figure: A selection of trained agents populating the Atari zoo.
Some of the most exciting advances in AI recently have come from the field of deep reinforcement …
POET: Endlessly Generating Increasingly Complex and Diverse Learning Environments and their Solutions through the Paired Open-Ended Trailblazer
Jeff Clune and Kenneth O. Stanley were co-senior authors.
We are interested in open-endedness at Uber AI Labs because it offers the potential for generating a diverse and ever-expanding curriculum for machine learning entirely on its own. Having vast amounts …
How Alexa may learn to retrieve stored “memories”
In May 2018, Amazon launched Alexa’s Remember This feature, which enables customers to store “memories” (“Alexa, remember that I took Ben’s watch to the repair store”) and recall them later by asking open-ended questions (“Alexa, where is Ben’s watch?”).
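The blurb doesn’t describe Amazon’s retrieval model, but the task it names, matching an open-ended question against free-form stored “memories,” can be sketched minimally. The toy memory store and the bag-of-words overlap score below are illustrative assumptions, not Alexa’s implementation.

```python
# Minimal sketch of memory retrieval: store free-form "memories" and
# return the best match for an open-ended question by word overlap.
# The overlap scoring is an illustrative assumption, not Alexa's model.

def tokenize(text):
    return set(text.lower().replace("?", "").replace(".", "").split())

memories = [
    "I took Ben's watch to the repair store",
    "the spare house key is under the blue flowerpot",
]

def recall(question, memories):
    q = tokenize(question)
    # Score each stored memory by how many question words it shares.
    scored = [(len(q & tokenize(m)), m) for m in memories]
    best_score, best_memory = max(scored)
    return best_memory if best_score > 0 else None

print(recall("where is Ben's watch?", memories))
# -> "I took Ben's watch to the repair store"
```

A production system would use a learned semantic matcher rather than word overlap, but the store-then-rank shape is the same.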
How Alexa Knows “Peanut Butter” Is One Shopping-List Item, Not Two
At a recent press event on Alexa’s latest features, Alexa’s head scientist, Rohit Prasad, mentioned multistep requests in one shot, a capability that allows you to ask Alexa to do multiple things at once. For example, you might say, “Alexa, add bananas, peanut butter, and paper towels to my shopping list.” Alexa should intelligently figure out that “peanut butter” and “paper towels” name two items, not four, and that bananas are a separate item.
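The blurb doesn’t say how Alexa performs the segmentation, but the task it describes is a classic chunking problem: tag each token as beginning, continuing, or falling outside an item, then group tokens accordingly. The sketch below hard-codes the tags that a trained sequence tagger would normally predict; the BIO framing is an assumption for illustration, not Amazon’s published architecture.

```python
# Sketch of shopping-list segmentation as BIO-style chunking. A trained
# sequence tagger would predict the tags; here they are hard-coded to
# show how tags turn tokens into items.

tokens = ["bananas", "peanut", "butter", "and", "paper", "towels"]
tags   = ["B",       "B",      "I",      "O",   "B",     "I"]  # B=begin, I=inside, O=outside

def group_items(tokens, tags):
    items, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":                 # a new item starts
            if current:
                items.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:   # continue the current item
            current.append(token)
        else:                          # "O" tokens (e.g., "and") close any open item
            if current:
                items.append(" ".join(current))
            current = []
    if current:
        items.append(" ".join(current))
    return items

print(group_items(tokens, tags))
# -> ['bananas', 'peanut butter', 'paper towels']
```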
With New Data Representation Scheme, Alexa Can Better Match Skills to Customer Requests
In recent years, data representation has emerged as an important research topic within machine learning.
New Approach to Language Modeling Reduces Speech Recognition Errors by Up to 15%
Language models are a key component of automatic speech recognition systems, which convert speech into text. A language model captures the statistical likelihood of any particular string of words, so it can help decide between different interpretations of the same sequence of sounds.
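To make that concrete, here is a toy bigram model choosing between two acoustically similar hypotheses; the probabilities are invented for illustration and are not drawn from Amazon’s models.

```python
# Toy bigram language model scoring two acoustically similar hypotheses.
# P(w1..wn) is approximated as P(w1|<s>) times the product of P(wi|w(i-1)).
# All probabilities below are made up for illustration.
import math

bigram_logprob = {
    ("<s>", "recognize"): math.log(0.02), ("recognize", "speech"): math.log(0.30),
    ("<s>", "wreck"): math.log(0.001),    ("wreck", "a"): math.log(0.05),
    ("a", "nice"): math.log(0.02),        ("nice", "beach"): math.log(0.01),
}

def score(words, unseen=math.log(1e-6)):
    """Log-probability of a word sequence under the toy bigram model."""
    prev, total = "<s>", 0.0
    for w in words:
        total += bigram_logprob.get((prev, w), unseen)
        prev = w
    return total

hypotheses = [["recognize", "speech"], ["wreck", "a", "nice", "beach"]]
print(max(hypotheses, key=score))
# -> ['recognize', 'speech'], the more plausible word string wins
```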
Distributed “Re-Ranker” Ensures That Alexa Improvements Reach Customers ASAP
Suppose that you say to Alexa, “Alexa, play Mary Poppins.” Alexa must decide whether you mean the book, the video, or the soundtrack. How should she do it?
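The teaser doesn’t reveal the re-ranker’s features or how it is trained, so the sketch below only illustrates the general shape of re-ranking: score each candidate interpretation with a weighted combination of signals and return the highest. The feature names and weights are invented placeholders, not Amazon’s.

```python
# Hedged sketch of re-ranking candidate interpretations of
# "play Mary Poppins". Features and weights are invented placeholders;
# a real re-ranker would learn them from customer interaction data.

candidates = [
    {"type": "soundtrack", "popularity": 0.8, "owned_by_user": 1.0, "recently_played": 0.0},
    {"type": "video",      "popularity": 0.6, "owned_by_user": 0.0, "recently_played": 0.0},
    {"type": "book",       "popularity": 0.3, "owned_by_user": 0.0, "recently_played": 1.0},
]

weights = {"popularity": 1.0, "owned_by_user": 2.0, "recently_played": 1.5}

def rerank(candidates, weights):
    # Linear scoring: sum of weight * feature value for each candidate.
    def score(c):
        return sum(weights[f] * c[f] for f in weights)
    return sorted(candidates, key=score, reverse=True)

for c in rerank(candidates, weights):
    print(c["type"])
# -> soundtrack, book, video (under these made-up weights)
```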