Unsupervised learning: The curious pupil

One in a series of posts explaining the theories underpinning our research.

Over the last decade, machine learning has made unprecedented progress in areas as diverse as image recognition, self-driving cars and playing complex games like Go. These successes have been largely realised by training deep neural networks with one of two learning paradigms: supervised learning and reinforcement learning. Both paradigms require training signals to be designed by a human and passed to the computer. In the case of supervised learning, these are the targets (such as the correct label for an image); in the case of reinforcement learning, they are the rewards for successful behaviour (such as getting a high score in an Atari game). The limits of learning are therefore defined by the human trainers.

While some scientists contend that a sufficiently inclusive training regime (for example, the ability to complete a very wide variety of tasks) should be enough to give rise to general intelligence, others believe that true intelligence will require more independent learning strategies. Consider how a toddler learns, for instance. Her grandmother might sit with her and patiently point out examples of ducks (acting as the instructive signal in supervised learning), or reward her with applause for solving a woodblock puzzle (as in reinforcement learning).
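The distinction between the two training signals described above can be sketched in a few lines of code. This is purely illustrative: the function names and numbers are hypothetical, not from the post.

```python
# Supervised learning: the human supplies the correct target,
# and the model is penalised for deviating from it.
def supervised_loss(prediction: float, target: float) -> float:
    # squared error between the model output and the human-provided label
    return (prediction - target) ** 2

# Reinforcement learning: the human designs a reward for successful
# behaviour, e.g. the change in an Atari game score after an action.
def reward(score: int, previous_score: int) -> int:
    return score - previous_score

loss = supervised_loss(0.8, 1.0)   # deviation from the label is penalised
r = reward(120, 100)               # an increase in score is rewarded
```

In both cases the signal is designed by a person, which is exactly the limitation the post goes on to contrast with unsupervised learning.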

No Coding Required: Training Models with Ludwig, Uber’s Open Source Deep Learning Toolbox

Machine learning models perform a diversity of tasks at Uber, from improving our maps to streamlining chat communications and even preventing fraud.

In addition to serving a variety of use cases, it is important that we make machine learning
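Ludwig's "no coding required" approach rests on declarative model configuration: a training run is described in a YAML file rather than in code. The sketch below shows the general shape of such a configuration; the column names (`review`, `sentiment`) and dataset are hypothetical examples, not taken from the post.

```yaml
# Minimal sketch of a Ludwig model configuration (assumed example):
# each input and output column of a CSV dataset is declared with a type,
# and Ludwig assembles and trains an appropriate model.
input_features:
  - name: review        # hypothetical text column in the dataset
    type: text

output_features:
  - name: sentiment     # hypothetical category column to predict
    type: category
```

Given a configuration like this, training is launched from the command line with the Ludwig CLI, pointing it at the config file and a dataset, with no model code written by the user.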

The post No Coding Required: Training Models with Ludwig, Uber’s Open Source Deep Learning Toolbox appeared first on Uber Engineering Blog.
