Scraping data from Google is an art in itself. Many people buy proxies precisely to scrape Google, and Google does a lot to prevent it. So what's the best plan of action?

Since you're in the market for proxies, we'll skip the official API (useless for SERPs) and focus on good old scraping using PhantomJS, Selenium, or any other headless browser.

Which leads to Advice #1: Use a headless, JavaScript-enabled browser. This mimics a real browser request. Solutions like curl simply don't, which makes it easier for Google to pick you out among 'normal' queries.
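As a rough illustration, here is a minimal sketch of that approach using Selenium with headless Chrome routed through a proxy. The proxy address and keyword are placeholders, and a real setup would add proxy rotation, cookies, and error handling.

```python
# Minimal sketch: a headless, JS-enabled browser behind a proxy (Selenium + Chrome).
# The proxy address and keyword below are placeholders -- swap in your own.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

PROXY = "123.45.67.89:8080"      # hypothetical proxy, replace with one of yours
KEYWORD = "best running shoes"   # hypothetical keyword

options = Options()
options.add_argument("--headless")                    # run without a visible window
options.add_argument(f"--proxy-server=http://{PROXY}")

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://www.google.com/search?q=" + KEYWORD.replace(" ", "+"))
    html = driver.page_source                         # full, JS-rendered SERP
finally:
    driver.quit()
```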

Advice #2: Watch your delays! If you scrape at a rate higher than 8 queries per hour (it used to be 15), you risk detection; request more than 10 keywords per hour and you will get blocked. The only way to scale up is to use more proxies. And don't time these delays to the exact second either: randomize them. Act human.
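To make that concrete, here is a small sketch of randomized pacing. The 8-per-hour ceiling comes from the advice above; the jitter range and the run_query placeholder are assumptions you would tune to your own setup.

```python
# Sketch: stay at or under ~8 queries/hour and randomize the gaps between them.
# The +/- 40% jitter range is an assumption -- adjust it to your own risk tolerance.
import random
import time

MAX_QUERIES_PER_HOUR = 8
BASE_DELAY = 3600 / MAX_QUERIES_PER_HOUR   # ~450 seconds between queries on average

def human_like_pause():
    # Never sleep a fixed, machine-like interval; wander around the base delay instead.
    time.sleep(BASE_DELAY * random.uniform(0.6, 1.4))

for keyword in ["placeholder keyword 1", "placeholder keyword 2"]:
    # run_query(keyword)  # your scraping call, e.g. the Selenium sketch above
    human_like_pause()
```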

Advice #3: Do not scrape all day! Use delays and act like a real user. In fact, we advise dropping down to as little as 4 queries per hour, for a maximum of 4 hours per day, spread over a normal 8-hour work period, to mimic a regular end-user. Just think about it: when was the last time you queried Google for unrelated keywords 8 hours straight, 8 times per hour?
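Put differently, that works out to roughly 16 queries per proxy per day. Below is a rough sketch of how such a budget could be laid out; the 09:00-17:00 window and the random choice of active hours are assumptions, not part of the original advice.

```python
# Sketch: spread at most 4 "active" hours (4 queries each) across an 8-hour work window.
# Which hours are active is picked at random here -- purely illustrative.
import random

WORK_WINDOW_HOURS = list(range(9, 17))    # assumed 09:00-17:00 "office hours"
ACTIVE_HOURS = sorted(random.sample(WORK_WINDOW_HOURS, 4))
QUERIES_PER_ACTIVE_HOUR = 4               # => ~16 queries per proxy per day

schedule = {hour: QUERIES_PER_ACTIVE_HOUR for hour in ACTIVE_HOURS}
print(schedule)   # e.g. {10: 4, 12: 4, 13: 4, 16: 4}
```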

Advice #4: Accept that you're the mouse in a cat-and-mouse game. You want something from Google, and they can block you for it. It's not the proxy provider's fault if and when this happens, nor does it have to be yours. Accept it as part of the game and the business you're in, and you'll reduce your daily dose of stress :-) There are always options to refresh or upgrade your proxies.

Failure to follow this advice will get you banned, so scrape safely! And remember: scaling up your proxies is cheaper and easier than using anti-captcha services or sitting through downtime.