Machine learning systems often act on “features” extracted from input data. In a natural-language-understanding system, for instance, the features might include words’ parts of speech, as assessed by an automatic syntactic parser, or whether a sentence is in the active or passive voice.
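To make that concrete, here is a minimal sketch of what such feature extraction could look like. It uses spaCy’s small English model (an assumption; the post doesn’t name a particular parser) to count part-of-speech tags and flag passive voice:

```python
# Illustrative sketch only: simple sentence features (POS counts, passive-voice flag).
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy
from collections import Counter

nlp = spacy.load("en_core_web_sm")

def extract_features(sentence):
    doc = nlp(sentence)
    # Count part-of-speech tags assigned by the parser.
    features = Counter(f"pos={tok.pos_}" for tok in doc)
    # Crude passive-voice cue: any passive-subject dependency in the parse.
    features["passive"] = int(any(tok.dep_ == "nsubjpass" for tok in doc))
    return dict(features)

print(extract_features("The watch was taken to the repair store."))
```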
Leveraging unannotated data to bootstrap Alexa functions more quickly
Developing a new natural-language-understanding system usually requires training it on thousands of sample utterances, which can be costly and time-consuming to collect and annotate. That’s particularly burdensome for small developers, like many who have contributed to the library of more than 70,000 third-party skills now available for Alexa.
New method for compressing neural networks better preserves accuracy
Neural networks have been responsible for most of the top-performing AI systems of the past decade, but they tend to be big, which means they tend to be slow. That’s a problem for systems like Alexa, which depend on neural networks to process spoken requests in real time.
How Alexa may learn to retrieve stored “memories”
In May 2018, Amazon launched Alexa’s Remember This feature, which enables customers to store “memories” (“Alexa, remember that I took Ben’s watch to the repair store”) and recall them later by asking open-ended questions (“Alexa, where is Ben’s watch?”).
How Alexa Knows “Peanut Butter” Is One Shopping-List Item, Not Two
At a recent press event on Alexa’s latest features, Alexa’s head scientist, Rohit Prasad, mentioned multistep requests in one shot, a capability that allows you to ask Alexa to do multiple things at once. For example, you might say, “Alexa, add bananas, peanut butter, and paper towels to my shopping list.” Alexa should intelligently figure out that “peanut butter” and “paper towels” name two items, not four, and that bananas are a separate item.
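The teaser doesn’t spell out the mechanics, but one common way to make that kind of decision is sequence labeling with BIO-style tags, where a model marks each token as beginning an item, continuing one, or neither. In the sketch below, the tag sequence is hand-written purely for illustration; in practice it would come from a trained tagger:

```python
# Illustrative sketch: grouping tokens into shopping-list items with BIO tags.

def group_items(tokens, tags):
    """Collect tokens tagged B-Item/I-Item into (possibly multiword) items."""
    items, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B-Item":          # a new item begins
            if current:
                items.append(" ".join(current))
            current = [token]
        elif tag == "I-Item":        # continuation of the current item
            current.append(token)
        else:                        # "O" tokens (e.g., "and", "to") close any open item
            if current:
                items.append(" ".join(current))
                current = []
    if current:
        items.append(" ".join(current))
    return items

tokens = ["add", "bananas", ",", "peanut", "butter", ",", "and",
          "paper", "towels", "to", "my", "shopping", "list"]
tags   = ["O", "B-Item", "O", "B-Item", "I-Item", "O", "O",
          "B-Item", "I-Item", "O", "O", "O", "O"]

print(group_items(tokens, tags))  # ['bananas', 'peanut butter', 'paper towels']
```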
With New Data Representation Scheme, Alexa Can Better Match Skills to Customer Requests
In recent years, data representation has emerged as an important research topic within machine learning.
New Approach to Language Modeling Reduces Speech Recognition Errors by Up to 15%
Language models are a key component of automatic speech recognition systems, which convert speech into text. A language model captures the statistical likelihood of any particular string of words, so it can help decide between different interpretations of the same sequence of sounds.
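As a toy illustration of that idea (not the new approach the post describes), here is a bigram language model estimated from a tiny made-up corpus and used to score two acoustically similar hypotheses; the one whose word sequence is more plausible gets the higher probability:

```python
# Minimal sketch of what a language model does: assign probabilities to word
# strings so a recognizer can prefer the more plausible transcription.
# The corpus and the add-alpha smoothing are illustrative only.
from collections import Counter

corpus = "play the mary poppins soundtrack . play the beatles . read the book ."
tokens = corpus.split()

unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))

def bigram_prob(sentence, alpha=0.1):
    """Probability of a sentence under a bigram model with add-alpha smoothing."""
    words = sentence.split()
    vocab = len(unigrams)
    prob = 1.0
    for prev, word in zip(words, words[1:]):
        prob *= (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab)
    return prob

# Two acoustically similar hypotheses; the model prefers the first.
print(bigram_prob("play the mary poppins soundtrack"))
print(bigram_prob("play a mary poppins sound track"))
```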
Distributed “Re-Ranker” Ensures That Alexa Improvements Reach Customers ASAP
Suppose that you say to Alexa, “Alexa, play Mary Poppins.” Alexa must decide whether you mean the book, the video, or the soundtrack. How should she do it?
The role of context in redefining human-computer interaction
In the past few years, advances in artificial intelligence have captured our imaginations and led to the widespread use of voice services on our phones and in our homes.
Context-Aware Deep-Learning Method Boosts Alexa Dialogue System’s Ability to Recognize Conversation Topics by 35%
Conversational-AI systems have traditionally fallen into two categories: goal-oriented systems, which help users fulfill requests, and chatbots, which carry on informative or entertaining conversations.