Once upon a time, there was a client who sold online courses. Because of poor website programming, Google's robots indexed the internal PDFs containing the lectures for those courses. What was the problem? The client's sales were declining: the PDFs were being served to users directly as a response to their search queries, while the intention (obviously) was to get users to pay for them. Anyone could find a PDF on Google and download it for free.
From this example, we can draw three conclusions:
- PDFs rank well.
- Poor website programming can lead to a decrease in sales.
- You have to be careful with tools that track positions: if we don't check which URL is actually ranking, we might think we're on the right track when in reality we're throwing money away.
It's not all about Big Data when it comes to blocking PDF indexing
Search Console gives us a lot of information. Within “Search traffic”, under “Search analytics”, we can filter pages whose URLs contain “.pdf”: this gives us an idea of how much traffic reaches the PDF pages from the search engine. At this point we have to decide whether we want to block every PDF or keep some of them indexed: in the case of the course seller, for example, a free sample such as a summary or an introduction could be left available.
[Figure: Search Console traffic chart for the site's PDF pages]
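If we want the same data programmatically, the “Search analytics” filter can be reproduced through the Search Console API. What follows is a minimal sketch, assuming the google-api-python-client package and OAuth credentials that have already been obtained; the site URL and date range are placeholders:

from googleapiclient.discovery import build

def pdf_search_traffic(credentials, site_url="https://www.example.com/"):
    """Print clicks and impressions for pages whose URL contains '.pdf'."""
    service = build("searchconsole", "v1", credentials=credentials)
    request = {
        "startDate": "2025-01-01",  # placeholder date range
        "endDate": "2025-01-31",
        "dimensions": ["page"],
        # Keep only pages whose URL contains ".pdf"
        "dimensionFilterGroups": [{
            "filters": [{
                "dimension": "page",
                "operator": "contains",
                "expression": ".pdf",
            }]
        }],
        "rowLimit": 100,
    }
    response = service.searchanalytics().query(siteUrl=site_url, body=request).execute()
    for row in response.get("rows", []):
        print(row["keys"][0], row["clicks"], row["impressions"])

This report tells us which PDFs actually receive search traffic, which is exactly the list we need before deciding what to block and what to leave indexed.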
Software ping pong: how to prevent PDF files from being indexed
To prevent PDF files (or any other page on a website) from being indexed, Google provides several methods:
A robots noindex meta tag placed in the <head> section of the site's HTML code:
<meta name="robots" content="noindex">
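One caveat worth noting: a meta tag can only live inside an HTML page, and a PDF file has no <head>, so for PDF files the same noindex directive is normally delivered as an X-Robots-Tag HTTP response header instead. A quick sketch with Python's requests package to check whether a given file already sends that header (the URL is hypothetical):

import requests

# A HEAD request is enough: we only care about the response headers.
response = requests.head(
    "https://www.example.com/courses/lecture-01.pdf",
    allow_redirects=True,
    timeout=10,
)
print(response.headers.get("X-Robots-Tag", "no X-Robots-Tag header set"))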