Unsupervised learning: The curious pupil

One in a series of posts explaining the theories underpinning our research.

Over the last decade, machine learning has made unprecedented progress in areas as diverse as image recognition, self-driving cars and playing complex games like Go. These successes have been largely realised by training deep neural networks with one of two learning paradigms: supervised learning and reinforcement learning. Both paradigms require training signals to be designed by a human and passed to the computer. In the case of supervised learning, these are the targets (such as the correct label for an image); in the case of reinforcement learning, they are the rewards for successful behaviour (such as getting a high score in an Atari game). The limits of learning are therefore defined by the human trainers. While some scientists contend that a sufficiently inclusive training regime (for example, the ability to complete a very wide variety of tasks) should be enough to give rise to general intelligence, others believe that true intelligence will require more independent learning strategies.

Consider how a toddler learns, for instance. Her grandmother might sit with her and patiently point out examples of ducks (acting as the instructive signal in supervised learning), or reward her with applause for solving a woodblock puzzle (as in reinforcement learning).

Read More
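The difference between the two training signals can be sketched in a few lines of toy code. Everything here is a hypothetical illustration (the data, the loss, and the reward function are made up for this sketch), not any system described in the post:

```python
# Supervised learning: each input comes paired with a human-provided target.
labeled_examples = [
    ("photo_of_duck.jpg", "duck"),    # target supplied by a human annotator
    ("photo_of_goose.jpg", "goose"),
]

def supervised_loss(prediction, target):
    # 0/1 loss: the signal says exactly what the right answer was.
    return 0.0 if prediction == target else 1.0

# Reinforcement learning: the only signal is a scalar reward for behaviour.
def reward(score_before, score_after):
    # The agent is never told which action was "correct", only how well it did
    # (e.g. the change in an Atari game score).
    return score_after - score_before

assert supervised_loss("duck", "duck") == 0.0
assert reward(100, 140) == 40
```

In both cases a human designed the signal (the labels, or the scoring rule), which is exactly the limitation the post is pointing at.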

No Coding Required: Training Models with Ludwig, Uber’s Open Source Deep Learning Toolbox

Machine learning models perform a diversity of tasks at Uber, from improving our maps to streamlining chat communications and even preventing fraud.

In addition to serving a variety of use cases, it is important that we make machine learning

The post No Coding Required: Training Models with Ludwig, Uber’s Open Source Deep Learning Toolbox appeared first on Uber Engineering Blog.

Read More

Active learning: Algorithmically selecting training data to improve Alexa’s natural-language understanding

Alexa’s ability to respond to customer requests is largely the result of machine learning models trained on annotated data. The models are fed sample texts such as “Play the Prince song 1999” or “Play River by Joni Mitchell”. In each text, labels are attached to particular words — SongName for “1999” and “River”, for instance, and ArtistName for Prince and Joni Mitchell. By analyzing annotated data, the system learns to classify unannotated data on its own.

Read More
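The “algorithmically selecting training data” in the headline can be sketched with uncertainty sampling, one standard active-learning strategy: from a pool of unannotated utterances, send human annotators the ones the current model is least sure about. The utterances and confidence scores below are hypothetical, and this is a generic sketch rather than Amazon’s actual method:

```python
import math

def entropy(probs):
    """Shannon entropy of a model's label distribution for one utterance."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Hypothetical model confidences over candidate labels for unannotated texts.
pool = {
    "Play the Prince song 1999":   [0.97, 0.02, 0.01],  # model already sure
    "Put on River":                [0.40, 0.35, 0.25],  # ambiguous
    "Play River by Joni Mitchell": [0.90, 0.07, 0.03],
}

# Rank utterances by uncertainty; the most uncertain are annotated first.
to_annotate = sorted(pool, key=lambda u: entropy(pool[u]), reverse=True)
print(to_annotate[0])  # -> "Put on River"
```

The intuition: annotating “Put on River” teaches the model more than re-labelling an utterance it already classifies confidently.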

Adapting Alexa to Regional Language Variations

As Alexa expands into new countries, she usually has to be trained on new languages. But sometimes, she has to be re-trained on languages she’s already learned. British English, American English, and Indian English, for instance, are different enough that for each of them, we trained a new machine learning model from scratch.

Read More

Teaching Alexa to Follow Conversations

In order to engage customers in longer, more productive conversations, Alexa needs to solve the problem of reference resolution. If Alexa says, “‘Believer’ is by Imagine Dragons”, for instance, and the customer replies, “Play their latest album”, Alexa should be able to deduce that “their” refers to Imagine Dragons.

Read More
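A toy version of that resolution step can be written as a recency heuristic over the entities mentioned so far in the dialogue. The dialogue state and resolver below are hypothetical simplifications for illustration, not Alexa’s actual implementation:

```python
# Minimal dialogue state: entities mentioned so far, most recent last.
mentioned_entities = [
    {"text": "Believer",        "type": "SongName"},
    {"text": "Imagine Dragons", "type": "ArtistName"},
]

def resolve(pronoun, entities):
    """Map a pronoun to the most recently mentioned compatible entity."""
    wanted = {"their": "ArtistName", "it": "SongName"}[pronoun]
    for entity in reversed(entities):  # prefer the most recent mention
        if entity["type"] == wanted:
            return entity["text"]
    return None

# "Play their latest album" -> "their" refers to Imagine Dragons.
print(resolve("their", mentioned_entities))  # -> "Imagine Dragons"
```

Real systems must handle far harder cases (competing antecedents, ellipsis, topic shifts), which is why this is framed as an open problem rather than a lookup.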