Google is finally ready to address the flood of false and offensive content appearing in its search results. Cleaning up its giant search engine isn't going to be a quick process, though. It will take Google a long time to filter out inaccurate and hateful content, and longer still to develop a strategy that keeps results like these from surfacing in the future.
This news from Google comes months after Mark Zuckerberg published a lengthy post describing how Facebook would address misinformation on its social network.
“Historically, we have relied on our community to help us understand what is fake and what is not. Anyone on Facebook can report any link as false, and we use signals from those reports along with a number of others — like people sharing links to myth-busting sites such as Snopes — to understand which stories we can confidently classify as misinformation. Similar to clickbait, spam and scams, we penalize this content in News Feed so it’s much less likely to spread.”
Zuckerberg went on to say that Facebook would address these issues through stronger detection, easier reporting, third-party verification, warning labels, higher-quality related articles, disrupting the economics of fake news, and simply listening.
Around the election, tensions ran high on Facebook, and many users complained about the amount of misinformation being posted. The pressure on the social network soon spread to the world's largest search engine, and it was only a matter of time before Google had to address the growing issue.
What is Google Offended By?
The definition of offensive content differs from person to person: what may be offensive to you isn't necessarily offensive to me, and vice versa. To avoid confusion, Google defined offensive content in its guidelines:
Upsetting-Offensive content typically includes the following:
- Content that promotes hate or violence against a group of people based on criteria including (but not limited to) race or ethnicity, religion, gender, nationality or citizenship, disability, age, sexual orientation, or veteran status.
- Content with racial slurs or extremely offensive terminology.
- Graphic violence, including animal cruelty or child abuse.
- Explicit how-to information about harmful activities (e.g., how-tos on human trafficking or violent assault).
- Other types of content which users in your locale would find extremely upsetting or offensive.
How is Google Going to Stop Offensive Content?
Google contracts a group of roughly 10,000 individuals called "Quality Raters." These raters review search results and report their findings to Google against a number of factors, one of which is now false and offensive content. Google provides the raters with specific search terms to look up and analyze.
They're then tasked with labeling what kinds of results appear and how relevant the returned information is. For example, if Google gives a Quality Rater the search term "Marketing Agencies in San Diego" and a Miami agency appears, the rater will note the irrelevant result and report it back to Google. The search engine then uses those findings to help improve its algorithms.
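To make the workflow concrete, here is a purely illustrative sketch (not Google's internal tooling) of how a rater's judgment might be captured and aggregated into a relevance signal. The record fields, example URLs, and the `relevance_rate` helper are all assumptions for illustration:

```python
# Illustrative sketch only: a hypothetical record of a Quality Rater's
# judgment and a simple aggregate over many judgments.
from dataclasses import dataclass

@dataclass
class RaterJudgment:
    query: str        # the search term Google assigned to the rater
    result_url: str   # the result the rater was shown
    relevant: bool    # did the result actually match the query's intent?

def relevance_rate(judgments: list[RaterJudgment]) -> float:
    """Fraction of judged results that raters found relevant."""
    if not judgments:
        return 0.0
    return sum(j.relevant for j in judgments) / len(judgments)

# Example: the Miami agency surfacing for a San Diego query is marked irrelevant.
judgments = [
    RaterJudgment("Marketing Agencies in San Diego", "https://example-sd-agency.com", True),
    RaterJudgment("Marketing Agencies in San Diego", "https://example-miami-agency.com", False),
]
print(f"relevance rate: {relevance_rate(judgments):.0%}")  # prints: relevance rate: 50%
```

In practice, signals like this would feed evaluation of ranking changes rather than directly moving any individual page up or down, which matches the point below.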
No need to point your pitchforks at the Quality Raters, SEOs. They don't directly affect rankings, but they do help Google return search results that are more relevant to a query.
With all of that said, the Quality Raters are now also tasked with reporting content that upsets or offends, as well as false information. The 200-page manual was updated to instruct raters to flag racial slurs, graphic violence, and other offensive types of content. For example, section 13.6, "Fails to Meet," gives the Quality Raters the instructions below for labeling offensive or upsetting content; a simplified sketch of how these rules might be encoded follows the list:
The following should also be rated Fails to Meet because they lead to very poor and upsetting user experiences:
- Porn results for non-porn seeking queries.
- Upsetting or offensive results for queries which are not obviously seeking upsetting or offensive content.
- Pages which directly contradict well established scientific or medical consensus for queries seeking scientific or medical information, unless the query indicates the user is seeking an alternative viewpoint.
- Pages which directly contradict well-established historical facts (e.g., unsubstantiated conspiracy theories), unless the query clearly indicates the user is seeking an alternative viewpoint.
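The quoted conditions boil down to a handful of checks that each compare what a result contains against what the query was actually seeking. A minimal sketch of that logic, assuming hypothetical boolean fields that stand in for a rater's human judgment (this is not the actual rater tooling):

```python
# Illustrative encoding of the section 13.6 "Fails to Meet" conditions
# quoted above. Every field below is an assumption standing in for a
# human rater's assessment of the query and the result.
from dataclasses import dataclass

@dataclass
class RatedResult:
    query_seeks_porn: bool
    query_seeks_offensive: bool
    query_seeks_alt_viewpoint: bool
    result_is_porn: bool
    result_is_offensive: bool
    contradicts_consensus: bool  # well-established scientific, medical, or historical facts

def fails_to_meet(r: RatedResult) -> bool:
    """Return True when a result matches one of the quoted conditions."""
    if r.result_is_porn and not r.query_seeks_porn:
        return True
    if r.result_is_offensive and not r.query_seeks_offensive:
        return True
    if r.contradicts_consensus and not r.query_seeks_alt_viewpoint:
        return True
    return False
```

Note how every rule is conditional on the query's intent: the same page can be a fine result for one search and a "Fails to Meet" result for another.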
What Does This Mean For My Business?
As long as you're not publishing hateful or demonstrably false content, continue operating as normal. But if you're a content-driven company that succeeds by publishing falsehoods and encouraging hate, your content may stop appearing in Google.
If the Quality Raters deem your content offensive, it isn't going to immediately disappear. In fact, it may never disappear. The data the Quality Raters report is used to update Google's algorithms and improve results. That means the content may still appear, but it's more likely to be flagged by the algorithm and kept from showing.
If you come across offensive content that the Quality Raters missed, you can submit the URL to Google. The submission page lets you report adult or objectionable content that appears in SafeSearch-filtered results. SafeSearch is Google's way of blocking the "bad" websites for you: it keeps explicit images, videos, and websites out of search results, and it's a useful tool for parents whose children search the internet on their phones, or for teenagers who have their own computers.
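For developers, SafeSearch filtering can also be requested programmatically through Google's Custom Search JSON API via its `safe` parameter. A minimal sketch, assuming you have your own API key and programmable search engine ID (the `YOUR_API_KEY` and `YOUR_CX` values below are placeholders):

```python
# Minimal sketch: fetching SafeSearch-filtered results from Google's
# Custom Search JSON API. Replace the placeholders with credentials
# from your own Google Cloud project.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: your Custom Search API key
CX = "YOUR_CX"            # placeholder: your programmable search engine ID

def safe_search(query: str) -> list[str]:
    """Return result URLs for `query` with SafeSearch filtering enabled."""
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={
            "key": API_KEY,
            "cx": CX,
            "q": query,
            "safe": "active",  # "active" enables SafeSearch; "off" disables it
        },
        timeout=10,
    )
    resp.raise_for_status()
    return [item["link"] for item in resp.json().get("items", [])]

if __name__ == "__main__":
    for url in safe_search("family friendly movies"):
        print(url)
```

This only filters what the API returns to your application; it doesn't report or remove content from Google's index, which is what the submission page above is for.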