Maximize access to information
At Google, we believe in open access to information. We believe that society works best when people have access to information from many sources. That’s why we do not remove web results from Search, except in very limited circumstances. However, when listings and other information are presented as Search features, users may interpret the information as having greater quality or credibility. In those cases, we apply more restrictive policies.
Policies for Web results versus Search features

Web results are web pages, images, videos, news content or other material that Google finds from across the web. In keeping with our commitment to maximize access to information, we do not remove web results except for specific reasons covered by our overall content policies for Google Search, which include child sexual abuse imagery, highly personal information, spam, site owner requests and valid legal requests.

Search features include panels, carousels, enhancements to web listings (such as through structured data), predictive and refinement features, and results and features spoken aloud. Even though these features and the listings within them are generated automatically, just as web results are, we understand that people might perceive them as more credible because of how they're presented. We also don't want predictive or refinement features to unexpectedly shock or offend people. This is why we have Search features policies that cover a variety of issues, including barring harassing, hateful and violent content. In addition, some Search features have their own more specific policies.
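To make the structured-data point concrete, here is a minimal sketch of the kind of schema.org payload a site might use to make its page eligible for an enhanced listing. The product name and rating values are entirely hypothetical; sites typically embed JSON like this inside a script tag of type application/ld+json on the page itself.

```python
import json

# Hypothetical schema.org payload for an imaginary product page. A site embeds
# JSON like this in a <script type="application/ld+json"> tag so that search
# engines can render an enhanced listing (for example, star ratings).
structured_data = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",  # hypothetical product name
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.4",  # hypothetical rating
        "reviewCount": "89",   # hypothetical review count
    },
}

print(json.dumps(structured_data, indent=2))
```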

Why problematic content may appear
Since Search encompasses trillions of pages and other content across the web, results may occasionally contain material that some find objectionable or offensive. This is especially likely when the language of a search query closely matches the language that appears within problematic content. It can also happen when little useful or reliable content has been published on a particular topic. Such problematic content does not reflect Google's own opinions. However, our belief in open access to information means that we do not remove such content except in accordance with our specific policies or legal obligations.
How we address policy-violating content

Google processes billions of searches per day. In fact, every day, 15% of the searches we process are ones we've never seen before. Automation is how Google handles the immense scale of so many searches. Google uses automation to discover content from across the web and other sources. Automated systems, like our search algorithms, are used to surface what seems to be the most useful or reliable content in response to particular queries. Automation also powers our SafeSearch feature, which lets people who turn it on filter explicit content out of their search results.

Automation is also generally Google’s first line of defense in dealing with policy-violating content. Our systems are designed to prioritize what appears to be the most useful and helpful content on a given topic. Our systems are also designed not to surface content that violates our content policies.
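As a rough illustration only, and emphatically not Google's actual system, the sketch below shows the general shape of that two-part design: a hypothetical usefulness score used for ranking, and a hypothetical policy-classifier score used to keep violating content from surfacing at all. Every score, threshold and URL here is invented.

```python
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    usefulness: float       # hypothetical "useful and helpful" score in [0, 1]
    violation_score: float  # hypothetical policy-classifier score in [0, 1]

# Invented threshold; a real system would tune this per policy and per feature.
VIOLATION_THRESHOLD = 0.9

def rank(candidates: list[Result]) -> list[Result]:
    """Filter out results flagged as likely policy-violating, then order
    the remainder by estimated usefulness, highest first."""
    kept = [r for r in candidates if r.violation_score < VIOLATION_THRESHOLD]
    return sorted(kept, key=lambda r: r.usefulness, reverse=True)

# Example: the flagged result never surfaces, however useful it looks.
results = rank([
    Result("https://example.com/a", usefulness=0.80, violation_score=0.02),
    Result("https://example.com/b", usefulness=0.95, violation_score=0.97),
])
print([r.url for r in results])  # ['https://example.com/a']
```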

No system is 100% perfect. If our processes surface policy-violating content, we always look to resolve the problem by improving our automated systems. This allows us both to address the particular issue that was detected and to improve results for related queries and other searches overall.

In some cases, we may also take manual action. This does not mean that Google uses human curation to rearrange the results on a page. Rather, human reviewers assess cases where policy-violating content has surfaced and, in the limited and well-defined situations that warrant it, take manual action to block that content.

You can learn more in our overall content policies for Google Search.