Podsplainer: What’s a Recommender System? NVIDIA’s Even Oldridge Breaks It Down

The very thing that makes the internet so useful to so many people — the vast quantity of information that’s out there — can also make going online frustrating.

There’s so much available that the sheer volume of choices can be overwhelming. That’s where recommender systems come in, explains NVIDIA AI Podcast host Noah Kravitz.

To dig into how recommender systems work — and why these systems are being harnessed by companies in industries around the globe — Kravitz spoke to Even Oldridge, senior manager for the Merlin team at NVIDIA.

Some highlights are below. For the full conversation, we, um, recommend you tune in to the podcast.

Question: So what are recommender systems, and why are they important?

Oldridge: Recommender systems are ubiquitous, and they’re a huge part of the internet and of most mobile apps, and really, of most places where a person interacts with a computer. A recommender system, at its heart, is a system for taking the vast number of options available in the world and boiling them down to something that’s relevant to the user at that time or in that context.

That’s a really significant challenge, both on the engineering side and in the systems and models that need to be built. Recommender systems, in my mind, are one of the most complex and significant machine learning challenges of our day. You’re trying to represent what a user, a real live human person, is interested in at any given moment. And that’s not an easy thing to do, especially when that person may or may not know what they want.

Question: So broadly speaking, how would you define a recommender system?

Oldridge: A recommender system is a kind of machine learning algorithm that filters content. You can query the recommender system to narrow down the possible options within a particular context. The classic example most people know is online shopping, where you’re browsing for a particular item and you’re shown other items that are potentially useful in the same context, or similar content. And with content distributors like Netflix and Spotify, you’re shown content based on the content you’ve viewed in the past. The recommender system’s role is to build a summary of your interests and come up with the next relevant thing to show you.
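To make that idea concrete, here’s a minimal sketch of item-to-item collaborative filtering, one classic way such filtering can work. It isn’t drawn from the conversation or from any particular product; the interaction matrix and function names are hypothetical, purely for illustration:

```python
import numpy as np

# Hypothetical user-item interaction matrix: rows are users, columns are items;
# a 1 means the user clicked, watched or bought that item.
interactions = np.array([
    [1, 1, 0, 0, 1],
    [0, 1, 1, 0, 0],
    [1, 0, 0, 1, 1],
    [0, 0, 1, 1, 0],
], dtype=float)

def item_similarity(m):
    """Cosine similarity between every pair of item columns."""
    norms = np.linalg.norm(m, axis=0, keepdims=True)
    norms[norms == 0] = 1.0  # guard against items nobody has interacted with
    unit = m / norms
    return unit.T @ unit

def recommend(user, k=2):
    """Score each item by its similarity to the user's history, hiding seen items."""
    scores = item_similarity(interactions) @ interactions[user]
    scores[interactions[user] > 0] = -np.inf  # filter out already-seen items
    return np.argsort(scores)[::-1][:k]

print(recommend(user=0))  # the next k items to surface for user 0
```

Production systems replace this toy matrix with learned representations and multi-stage retrieval and ranking, but the core job is the same: narrowing a vast catalog down to a few relevant items.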

Question: In these different examples that you talked about, do they generally operate the same way across different shopping sites or content sites that users might go to? Or are there different ways of approaching the problem?

Oldridge: There are patterns to the problem, but it’s one of the more fragmented fields, I think. If you look at computer vision or natural language processing, there are open-source datasets that have enabled a lot of significant advances and allowed for standardization and benchmarking, and those fields have become pretty standardized because of that. In the recommender system space, the interaction data your users generate is part of the core value of your company, so most companies are reluctant to release that data. As a result, there aren’t many great public recommender system datasets out there.

Question: What are you doing at NVIDIA? How did NVIDIA get into the business of recommender systems? What role does your team play?

Oldridge: NVIDIA is interested in the business of recommender systems because, to quote Jensen [Huang, NVIDIA’s CEO], recommender systems are the most important algorithm on the internet, and they drive a lot of the financial and compute decisions that are being made. For NVIDIA, it’s a very interesting machine learning workload that has previously been run mostly on the CPU.

We’ve gotten to a place where recommender systems on GPUs make a ton of sense. There aren’t many people trying to run large natural language processing or computer vision models on a CPU, but in recsys a lot of people are still focused on CPU-based solutions. There’s a strong motivation for us to get this right because we have a vested interest in selling GPUs, but beyond that, a degree of acceleration is possible similar to the one that drove the revolutions in computer vision and NLP. When things happen 10 times faster, you’re able to do much more exploration and dive much deeper into the problem space. The field begins to take off in a way that it hasn’t before, and that’s what our team is really focused on: enabling teams to develop recommender systems much more quickly and efficiently, both from a compute-time perspective and by making sure you can develop features, train models and deploy to production quickly and easily.

Question: How do you judge the effectiveness of a recommender system?

Oldridge: There’s a wide variety of factors used to determine and compare effectiveness, both offline, when you’re developing the model and trying to evaluate its performance, and online, when the model is running in production and serving customers. There are a lot of metrics in that space, and I don’t think there’s any one clear answer about what you need to measure. At the heart of it, most companies are trying to measure longer-term user engagement, but that’s a very lagging signal. So you need to tie it to metrics that are much more immediate, like interactions and clicks on items.
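As one concrete example of an offline metric, here’s a sketch of recall@k, a common way to check whether a user’s held-out interactions show up in a model’s top-k recommendations. The data and function name here are hypothetical, not something from the podcast:

```python
def recall_at_k(recommended, held_out, k=10):
    """Fraction of each user's held-out items recovered in their top-k
    recommendations, averaged over users. A common offline proxy for quality."""
    hits = [
        len(set(recs[:k]) & truth) / max(len(truth), 1)
        for recs, truth in zip(recommended, held_out)
    ]
    return sum(hits) / len(hits)

# Hypothetical example: two users, each with held-out future interactions.
recs = [[3, 7, 1, 9], [2, 5, 8, 0]]
truth = [{7, 4}, {5}]
print(recall_at_k(recs, truth, k=3))  # 0.75: user 0 recovers 1 of 2, user 1 recovers 1 of 1
```

Offline metrics like this are the immediate signals; the lagging, longer-term engagement measures Oldridge mentions only arrive once the model is live.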

Question: Could one way to judge it potentially be something like the number of Amazon purchases a user ends up returning?

Oldridge: I think many companies track returns and other forms of user engagement, which can be both positive and negative. Returns cost the company a lot, so they’re probably being weighed. But it’s very difficult to trace them all the way back to an individual recommendation at the start. That becomes one of the most interesting and complex challenges: at its heart, a recommender system is trying to model your preference, and the preference of a human being is based on a myriad of contexts. That context can change in ways the model has no idea about. For example, on this podcast you could tell me about an interesting book you’ve read, and that will lead me to look it up on Amazon and potentially order it. That recommendation from you as a human draws on the context you have about me, about our conversation and about a bunch of other factors. The system doesn’t know that conversation happened, so it doesn’t know the book is of particular relevance to me.

Question: Tell us about your background.

Oldridge: I did a Ph.D. in computer vision at the University of British Columbia here in Vancouver, where I live, and that was pre-deep learning. Everything I did in my Ph.D. could probably be summarized in five lines of TensorFlow at this point.

I went from that to a job at Plenty of Fish, an online dating site. There were about 30 people when I joined, and the founder had written all of the algorithms doing the recommendations. I was the first data science hire, and I built that practice up. It was recommending humans to humans for online dating, which is a very interesting space, and a funny one in the sense that the users are the items and the recommendations are reciprocal. It was a very interesting place to be: you’d leave work on a Friday night and head home, and there were probably 50,000 or 100,000 people out on a date because of an algorithm. It’s very strange to think about the number of new humans in the world, the marriages and whatever else, that happened because of these algorithms.

It was interesting, and the data was driving it all. That was my first foray into recommender systems. After Plenty of Fish was sold to Match Group, I took six months off and spent the time really getting into deep learning, which I hadn’t touched before, through the fast.ai course.

Question: You mentioned that NVIDIA has some tools available to make it easier for smaller organizations. For anybody who wants to build a recommender system, do you want to speak to any of the specific things that are out there? Or maybe tools that you’re working on with the Merlin team that folks can use?

Oldridge: The Merlin team consists largely of people like myself, who’ve built recommender systems in production in the past and understand the pain of it. It’s really hard to build a recommender system.

We’re working on three main premises:

  • Make it work: We want a framework that provides end-to-end recommendations, so you can complete all the different stages of the pipeline and everything you need.
  • Make it easy: It should be straightforward to do the things that are commonly done within the space. We’re really thinking about issues such as, “Where was a pain point in our past where it was a real challenge to use the existing tooling? And how can we smooth that pain point over?”
  • Make it fast: At NVIDIA, we want to make sure this is performant at scale; how these things scale is an incredibly important part of the problem space.
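As a rough illustration of what those goals look like in practice, here’s a sketch of GPU-accelerated feature engineering with NVTabular, Merlin’s preprocessing library. The column names and file paths are placeholders, and the exact API may differ between Merlin versions:

```python
import nvtabular as nvt

# Declare a preprocessing graph with NVTabular's operator syntax:
# categorical columns are integer-encoded, continuous columns normalized.
# (Column names here are hypothetical.)
cat_features = ["user_id", "item_id"] >> nvt.ops.Categorify()
cont_features = ["price"] >> nvt.ops.Normalize()

workflow = nvt.Workflow(cat_features + cont_features)

# Fit the statistics (category mappings, means, standard deviations) on the
# training data, then apply the transformation on the GPU and write it out.
train = nvt.Dataset("train.parquet")
workflow.fit(train)
workflow.transform(train).to_parquet("train_processed/")
```

The fitted workflow can be saved and reapplied later, so the same feature logic is reused rather than rewritten, which is the kind of pain-point smoothing the “make it easy” goal describes.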

Question: Where do you see the space headed over the next couple of years?

Oldridge: What we’re hoping to do with Merlin is provide a standard set of tools and a standard framework that everyone can use to build accelerated recommender systems, especially as we accelerate things by a factor of 10x or more, because that changes the pattern of how people work.

One of my favorite diagrams I’ve seen since joining NVIDIA shows a developer who, at the start of the day, gets a coffee, kicks off [a data science job], then goes to get another coffee because the job takes so long, and just keeps going back and forth like that, drinking six or 10 cups and getting basically three things done in a day.

It hits home because that was me a couple of years ago. It was so frustrating because I wanted to get stuff done, but with that lagging signal you’re not nearly as effective when you get back to the problem and try to dig in. When you get to a 1-to-3-minute cycle, where you’re running something, getting the results and running again, you get into that flow pattern and you’re really able to explore things quickly. Then you’re iterating so quickly that you can start leveraging the parallelization that happens on the GPU and begin to scale things up.

Question: For the folks who want to find out more about the work you’re doing and what’s going on with Merlin, where can they go online to learn more or dig a little deeper into some white papers and some of the more technical aspects of the work?

Oldridge: A great starting point is our GitHub, where we link out to a bunch of our papers. I’ve also given a number of talks about Merlin across various venues. If you search for my name or for Merlin recommender systems on YouTube, there’s a lot of information out there.

Subscribe to the AI Podcast: Now Available on Amazon Music

You can now listen to the AI Podcast through Amazon Music.

You can also get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out our listener survey.

