Yoel Roth, Head of Trust and Safety at Match Group, acknowledged that dating scams are a growing concern as he outlined how the company is addressing the issue, particularly in India, including through ongoing dialogue with the government and law enforcement agencies.
“We do a lot of work with law enforcement across India to ensure that when we remove bad actors from our apps, or when someone reports being the victim of a scam, we can support prosecution and help bring those bad actors to justice through our dedicated law enforcement portal,” Roth told indianexpress.com in an interview in Delhi.
“We respond to requests from police in under 24 hours, and we fully support their investigations, no matter what type of harm they are looking into,” Roth, who was in India this week, added.
Roth joined Match Group, which owns dating apps including Tinder, Hinge, and OkCupid, in March last year, after previously working at X.
Rising dating app scams
As more people turn to online platforms for dating, there has been a rise in romance scams, where scammers use dating apps and social media to manipulate vulnerable individuals by feigning romantic relationships under a false identity, a practice commonly known as “catfishing.”
These scams often leave victims, particularly women and the elderly, not only financially drained, losing their life savings or falling into debt, but also emotionally devastated.
India has seen a steady rise in romance scams, as have other parts of the world. In fact, lawmakers in the US are pushing for a bill that would require dating apps to warn users about flagged accounts, as experts warn that artificial intelligence is making such scams harder to detect.
For Roth, the biggest challenge remains tackling repeat offenders: bad actors who keep returning to the platform after being removed for violating its guidelines.
“There are a lot of AI tools that we can use to try to catch duplicate accounts. We can look for device-level signals. We can look at phone numbers, behaviours, and photos. But part of what motivated us to create Face Check is that a sufficiently dedicated bad actor can spoof all of those things. One of the things you can’t spoof is your face. And because the face is much harder for a bad actor to fake, it’s a much more powerful way for us to keep repeat bad actors off our apps.”
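To picture the kind of signal matching Roth describes, here is a minimal, hypothetical sketch in Python. The field names, signals, and thresholds are invented for illustration and are not Match Group’s actual systems.

```python
# Hypothetical illustration only: a simplified duplicate-account check built on the
# kinds of signals Roth mentions (device, phone number, photos). Names and thresholds
# are placeholders, not Match Group's real implementation.
from dataclasses import dataclass, field


@dataclass
class AccountSignals:
    device_fingerprint: str                          # hashed device-level identifier
    phone_number_hash: str                           # hashed phone number
    photo_hashes: set = field(default_factory=set)   # perceptual hashes of profile photos


def likely_same_actor(new: AccountSignals, banned: AccountSignals) -> bool:
    """Flag a new sign-up that shares strong signals with a previously banned account."""
    if new.device_fingerprint == banned.device_fingerprint:
        return True
    if new.phone_number_hash == banned.phone_number_hash:
        return True
    # Reused profile photos are a weaker signal, so require more than one match.
    return len(new.photo_hashes & banned.photo_hashes) >= 2
```

As the quote above suggests, each of these signals can be spoofed by a determined bad actor, which is the gap Face Check is meant to close.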
Use of facial recognition technology
As part of parent company Match Group’s broader effort to improve trust and safety amid ongoing user frustration, Tinder will require new users in India to verify their profiles using facial recognition technology starting this week.
The Face Check feature involves taking a short video selfie, which is matched against biometric indicators to verify that the user is a real person and not a bot. India is the fifth country, after Colombia, Canada, the United States, and Australia, where the company has rolled out Face Check, which Roth says will help reduce impersonation.
“We have seen that more sophisticated bad actors can get new email addresses and new phone numbers. They can even buy new phones if we block their devices. But it’s much harder to get a new face, and so Face Check is built to be something that’s very easy for consumers, but ultimately very difficult for bad actors to circumvent,” Roth said.
Yoel Roth, Head of Trust and Safety at Match Group. (Image: The Indian Express/Anuj Bhatia)
Roth said Face Check differs from Tinder’s ID Check, which uses a government-issued ID to verify age and identity. “Instead of relying on users to simply upload a photo to their profile, which could be a deepfake, we collect liveness data using the front-facing camera on their device while they are using the app. That way, we can ensure that the image isn’t a deepfake or inauthentic, but that it’s a real person,” Roth explained when asked how the feature works.
“It’s also a video selfie, not just a single image. Users are required to move their phone toward and away from their face, allowing us to capture their facial features from multiple angles and perspectives. First, this gives us detailed information about the shape of the face, and second, it makes it much harder for anyone to fake or spoof the video selfie.”
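As a rough, hypothetical sketch of the decision logic implied by such a video-selfie check: the embeddings below would come from a face-recognition model applied to frames sampled as the phone moves toward and away from the face, and the thresholds and frame count are assumptions for illustration, not Tinder’s actual implementation.

```python
# Hypothetical sketch of a video-selfie check: require multiple captured angles
# (a rough stand-in for liveness) and a face that matches the profile photos.
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def passes_face_check(frame_embeddings: list[list[float]],
                      profile_embedding: list[float],
                      min_frames: int = 5,
                      match_threshold: float = 0.7) -> bool:
    """Accept only selfies with enough distinct angles that all match the profile face."""
    if len(frame_embeddings) < min_frames:
        return False  # too few angles captured to treat the selfie as a live video
    # Every captured angle should still resemble the face shown on the profile.
    return all(cosine_similarity(f, profile_embedding) >= match_threshold
               for f in frame_embeddings)
```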
“There’s a significant decrease in the number of reports we receive, and people encounter less harmful content and submit fewer complaints. When we survey users about their perceptions of how real and authentic the accounts are, there’s a noticeable improvement in their sense of safety and trust on Tinder,” Roth said when asked about how the Face Check feature has been received in other markets.
“It’s a feature we continue to test and evolve as we roll it out to more countries, but it’s already proving to be a promising and effective way to improve safety on Tinder and, ultimately, across all apps in the Match Group portfolio,” he added.
Roth said there are other safety features in place, including the ‘Are You Sure?’ prompt, which detects when someone is about to send a message that might be inappropriate, overly sexual, or abusive.
“We then encourage the sender to reconsider by asking, ‘Are you sure you want to send that?’ We have found that one in five people who see that prompt end up changing their message or choosing not to send it at all,” Roth said.
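The flow Roth describes can be pictured with a small, hypothetical sketch; the keyword heuristic below stands in for Tinder’s real classifier, whose details are not public, and the function names are invented for illustration.

```python
# Hypothetical sketch of a pre-send nudge. The real system uses an ML classifier;
# the placeholder terms and the confirm callback are illustrative only.
from typing import Callable


def looks_harmful(message: str) -> bool:
    placeholder_terms = {"example_insult", "example_threat"}  # stand-in for a trained model
    return any(term in message.lower() for term in placeholder_terms)


def should_deliver(message: str, confirm: Callable[[str], bool]) -> bool:
    """Return True if the message should be delivered, False if the sender backs out."""
    if looks_harmful(message):
        # Ask the sender to reconsider before the message goes out.
        return confirm("Are you sure you want to send that?")
    return True
```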
While many users expect apps like Tinder to offer a more robust reporting mechanism, Roth says his job as head of trust and safety at Match is to ensure that users don’t run into trouble on the platform.
‘AI helping to detect fake accounts’
Face Check is the latest in a series of improvements to Tinder’s safety features, but Roth pointed out that the company also relies on artificial intelligence to detect fake accounts and take appropriate action.
“We use AI to scan for fake accounts, scams, nudity, abusive behaviour – any type of content that might violate our rules. Of the fake accounts we remove in India, more than 90 per cent are ones we have identified proactively through the use of AI and other technologies. That’s a number we are focused on continuously improving,” he said.
“My goal is for our AI and technology to detect as many violations of our rules as possible, without users needing to go through the trouble of reporting them,” Roth said. “That said, we place a lot of emphasis on having effective reporting mechanisms. Typically, the reports we receive are reviewed in well under 24 hours – often under six hours, and usually within one hour. Reports, especially those involving severe issues, don’t just go to an AI system; they are reviewed by real people, and they are reviewed very quickly. So we make sure to emphasise effective reporting – not just in India, but around the world.”
‘In real life’ scams in India
For Roth, another goal is to understand the nature of scams specific to India and design a mechanism to address them on the platform. “We do a lot of threat intelligence work to understand the specific nature of scams in India. One of the things we have observed is that some of the most common types of scams in the United States, Japan, and Europe, such as investment scams, are actually less common in India,” Roth said.
“When we look at the data from India, it’s much more about what we call ‘in real life’ scams. These involve two real people meeting in person, and then situations like a ‘dine and dash,’ where someone is left with the restaurant bill. Another example is a pub scam, where the venue owner may be working with the scammer to run up the bill with very expensive items. These types of scams are more commonly reported in India and are relatively less frequent in other parts of the world.”
“We certainly see reports of financial and investment scams in India as well, but based on the data we have seen, it’s a mix,” he said.
Roth added that the company works closely with Meta, Coinbase, Ripple, and other social and financial firms in a cross-industry collaboration to address scams.
‘LGBTQ+ users’ safety is critical for us’
Dating apps are also under scrutiny as many members of the LGBTQ+ community raise concerns about harassment, as well as sexual and gender-based violence. In India, many users have experienced hate crimes, physical violence, and other harms while using dating apps.
“We encourage people to stay on the app for as long as possible. Once conversations move outside our platforms, we can no longer moderate them,” Roth said.
“So, we urge users to stay in the app and report any suspicious activity. And critically, for those who may not feel comfortable going to the police for whatever reason, we try to make them feel confident reporting those issues directly to us, so we can ensure appropriate action is taken.”