How Hinge used machine learning to build its first online recommendation engine

At DATAx San Francisco, Hinge machine learning lead Shanshan Ding presented a case study on how the dating app built the first version of its online recommendation engine, the company's first machine-learning project

The journey that Shanshan Ding, machine learning (ML) lead at Hinge, and her team took to deliver the dating app's first online recommendation engine followed a curious route. Moving from theory to practice, the team sorted through the many options for recommendation systems to find an approach suited to the unique challenges of the dating world.

Speaking to a packed room at the Machine Learning Innovation Summit at DATAx San Francisco, Ding explained that Hinge is a dating app designed to be deleted: with three out of four first dates arranged on Hinge leading to a second date, its KPIs go much deeper than metrics such as 'swipes'.

The problem the team found itself attempting to solve was not the straightforward personalized ranking problem that machine-learning algorithms typically handle for the likes of Spotify or Netflix. In dating, the items being recommended are people with preferences and limited attention of their own, so the same profile cannot simply be served to everyone – unlike on a streaming service, where it does not matter if every single user "wants to watch Game of Thrones at the same time".

"It is really easy to show initial conversion if you show everyone the most attractive people," Ding noted. "But for online dating, we can have an ecosystem where all users are receiving all of the attention. There is an optimal distribution problem, so the decision was made to use ML for personalized ranking, then apply operations research to the distribution problem."

The metrics the team tracked went beyond initial conversion on the dating profile to include response rate, conversations and even exchanged phone numbers – every step in the funnel that could be tracked was tracked.

Ding explained that with Hinge already a mature, successful product before the introduction of ML, the team wanted to minimize changes to the code base. With the backend written in Go, a language with little native ML support, the team chose to modularize as much as possible so that the ML system stayed separate from the main app.

"With real-time computations more costly than offline, we pre-computed recommendations, so that the backend queried a static database," said Ding.

Hinge then ensured the design was carried out to a spec that valued maintainability and scalability above everything else. As Ding noted, "the more features a model uses, the more difficult it is to maintain".

Following implementation, Hinge downloads grew by more than four times in 2018. Because recommendation candidates are pairs of users, the work scales quadratically – a growth curve of O(n^2) – so quadrupling the user base meant roughly sixteen times as many pairs to consider, and the team needed something that could scale.

That meant a system that lent itself well to pre-computation, with a minimal number of features, implemented in a scalable way. In the end, Ding's team chose matrix factorization as its first approach, implementing it in Spark for the product.
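Spark ships a matrix-factorization implementation, alternating least squares (ALS), in pyspark.ml, which makes it a natural fit for this route. The sketch below is illustrative: the column names, hyperparameters and implicit-feedback setting are assumptions, as the talk did not specify the exact configuration.

```python
# A minimal sketch of matrix factorization in Spark via ALS. Column names,
# hyperparameters and the implicit-feedback setting are assumptions.
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("mf-sketch").getOrCreate()

# Toy interaction data: (viewer, candidate, implicit signal such as a like).
interactions = spark.createDataFrame(
    [(0, 1, 1.0), (0, 2, 1.0), (1, 2, 1.0), (2, 0, 1.0)],
    ["viewer_id", "candidate_id", "liked"],
)

# ALS factorizes the viewer-candidate matrix into low-rank latent factors.
als = ALS(
    userCol="viewer_id",
    itemCol="candidate_id",
    ratingCol="liked",
    rank=10,
    implicitPrefs=True,   # likes are implicit feedback, not explicit ratings
    coldStartStrategy="drop",
)
model = als.fit(interactions)

# Precompute the top-N candidates per viewer for the offline pipeline.
top_n = model.recommendForAllUsers(5)
top_n.show(truncate=False)
spark.stop()
```

Matrix factorization also fits the maintainability constraint Ding mentions: it learns latent factors from interaction data alone, so the model carries few hand-engineered features, and recommendForAllUsers emits exactly the batch output a pre-computation pipeline needs.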

The whole exercise proved to be an interesting lesson in how past data is a misleading indicator of how a model will actually perform live.

"I had estimated a 3% lift based on past data, but with live data it became a 20% lift in actual data," Ding explained. "In past data, you're not testing what would actually happen, as past data is about re-ranking past results.

"When you change the future, you don't change the past, but you do change what the past would have looked like – basically it's the theme for every time-travel movie ever made," she added. 
