Yelp puts trust and safety in the spotlight

Yelp released its very first trust and safety report this week, with the goal of explaining the work that it does to crack down on fraudulent and otherwise inaccurate or unhelpful content.

With its focus on local business reviews and information, you might think Yelp would be relatively free of the types of misinformation that other social media platforms struggle with. But of course, Yelp reviews are high stakes in their own way, since they can have a big impact on a business's bottom line.

Like other online platforms, Yelp relies on a mix of software and human curation. On the software side, one of the main tasks is sorting reviews into recommended and not recommended. Group Product Manager for Trust and Safety Sudheer Someshwara told me that a review might not be recommended because it appears to be written by someone with a conflict of interest, or it might be solicited by the business, or it might come from a user who hasn’t posted many reviews before and “we just don’t know enough information about the user to recommend those reviews to our community.”
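
Yelp doesn't publish the details of its recommendation software, but the signals Someshwara describes (conflicts of interest, solicited reviews, thin user history) suggest the rough shape of a rule-based filter. The sketch below is purely illustrative: the field names, thresholds and logic are assumptions made for explanation, not Yelp's actual system.

```python
from dataclasses import dataclass

@dataclass
class Review:
    # Hypothetical signals inspired by the article; not Yelp's real schema.
    author_review_count: int   # how many reviews the author has posted before
    author_has_conflict: bool  # e.g., the author appears tied to the business
    was_solicited: bool        # the business asked for the review
    text: str

def recommend(review: Review, min_history: int = 5) -> bool:
    """Return True if the review would surface in the recommended section.

    A toy rule-based filter: a production system would weigh many more
    signals and be tuned on data rather than hard-coded like this.
    """
    if review.author_has_conflict:
        return False
    if review.was_solicited:
        return False
    if review.author_review_count < min_history:
        # "We just don't know enough information about the user."
        return False
    return True

# A first-time reviewer's post lands in "not recommended" -- but, as the
# article notes, it isn't deleted, just filed in a separate section.
print(recommend(Review(author_review_count=1, author_has_conflict=False,
                       was_solicited=False, text="Great tacos!")))  # False
```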

“We take fairness and integrity very seriously,” Someshwara said. “No employee at Yelp has the ability to override decisions the software has made. That even includes the engineers.”

He added, “We treat every business the same, whether they’re advertising with us or not.”

Yelp trust and safety report (Image Credits: Yelp)

So the company says that last year, users posted more than 18.1 million reviews, of which 4.6 million (about 25%) were not recommended by the software. Someshwara noted that even when a review is not recommended, it’s not removed entirely — users just have to seek it out in a separate section.

Removals do happen, but that’s one of the places where the user operations team comes in. As Vice President of Legal, Trust & Safety Aaron Schur explained, “We do make it easy for businesses as well as consumers to flag reviews. Every piece of content that’s flagged in that way does get reviewed by a live human to decide whether it should be removed for violating our guidelines.”

Yelp says that last year, about 710,000 reviews (4%) were removed entirely for violating the company’s policies. Of those, more than 5,200 were removed for violating the platform’s COVID-19 guidelines (among other things, they prohibit reviewers from claiming they contracted COVID from a business, or from complaining about mask requirements or that a business had to close due to safety regulations). Another 13,300 were removed between May 25 and the end of the year for threats, lewdness, hate speech or other harmful content.

“Any current event that takes place will find its way onto Yelp,” acknowledged Vice President of User Operations Noorie Malik. “People turn to Yelp and other social media platforms to have a voice.”

But expressing political beliefs can conflict with what Malik said is Yelp’s “guiding principle,” namely “genuine, first-hand experience.” So Yelp has built software to detect unusual activity on a page and will also add a Consumer Alert when it believes there are “egregious attempts to manipulate ratings and reviews.” For example, it says there was a 206% increase in media-fueled incidents year-over-year.
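
Yelp hasn't said how that unusual-activity detection works, but a common way to spot a media-fueled pile-on is to compare a page's recent review volume against its own historical baseline. The snippet below is a hypothetical sketch of that idea; the window size and z-score threshold are assumptions, not anything Yelp has disclosed.

```python
from statistics import mean, stdev

def looks_like_spike(daily_counts: list[int], today: int,
                     z_threshold: float = 3.0) -> bool:
    """Flag a business page whose review volume today sits far above its
    historical baseline -- a simple z-score check.

    daily_counts: past daily review counts for the page (baseline window)
    today: number of reviews posted so far today
    """
    if len(daily_counts) < 7:
        return False  # not enough history to judge
    baseline = mean(daily_counts)
    spread = stdev(daily_counts) or 1.0  # guard against a zero-variance baseline
    return (today - baseline) / spread >= z_threshold

# A page that normally gets 1-3 reviews a day suddenly receives 40 of them:
history = [2, 1, 3, 2, 2, 1, 3, 2, 2, 1]
print(looks_like_spike(history, today=40))  # True -> candidate for a Consumer Alert
```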

It’s not that you can’t express political opinions in your reviews, but the review has to come from first-hand experience, rather than being prompted by reading a negative article or an angry tweet about the business. Sometimes, she added, that means the team is “removing content with a point of view that we agree with.”

One example that illustrates this distinction: Yelp will take down reviews that seem driven by media coverage suggesting that a business owner or employee behaved in a racist manner, but at the same time, it also labeled two businesses in December 2020 with a “Business Accused of Racism” alert reflecting “resounding evidence of egregious, racist actions from a business owner or employee.”

Beyond looking at individual reviews and spikes in activity, Someshwara said Yelp will also perform “sting operations” to find groups that are posting fraudulent reviews.

In fact, his team apparently shut down 1,200 user accounts associated with review rings and reported nearly 200 groups to other platforms. And it just rolled out an updated algorithm designed to better detect and un-recommend reviews coming from those groups.

Data & News supplied by www.cloudquote.io
Stock quotes supplied by Barchart
Quotes delayed at least 20 minutes.
By accessing this page, you agree to the following
Privacy Policy and Terms and Conditions.