badboyilprimo / GoogleSearchCrawler

A tool for crawling Google search results

Google Search Crawler

This is a simple Google search results crawler. Before using this tool, please read the tips below.


  1. Python

    Python should be installed on your computer; downloads are available from the official Python website.

  2. BeautifulSoup

    An HTML parser used to extract search results from Google's result pages. BeautifulSoup version 4 is recommended.

    For more information about BeautifulSoup, see its official documentation.
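As a rough illustration of what the parsing step looks like, the sketch below extracts titles and links with BeautifulSoup. The toy HTML only mimics the general shape of a results page; Google's real markup and class names differ and change over time, so this is an assumption, not the tool's actual selectors.

```python
from bs4 import BeautifulSoup

# Toy snippet shaped roughly like a search results page (assumed markup).
html = """
<div class="g"><a href="https://example.com/one"><h3>First result</h3></a></div>
<div class="g"><a href="https://example.com/two"><h3>Second result</h3></a></div>
"""

soup = BeautifulSoup(html, "html.parser")

# Collect (title, url) pairs from each result block.
results = [(a.h3.get_text(), a["href"]) for a in soup.select("div.g a")]
for title, url in results:
    print(title, "->", url)
```

The same pattern (select result containers, then pull the anchor text and href) works regardless of the exact class names, which is why a parser like BeautifulSoup is the core dependency here.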

How to Use?

  1. single keyword

    python 'your query keywords'

    It returns about 10 extracted results by default. If you need more results, just change the expect_num value.
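How expect_num maps to fetched pages is not shown in this README, but the usual approach can be sketched as follows. The build_search_url helper and its parameters are illustrative assumptions, not the crawler's real code:

```python
from urllib.parse import urlencode

def build_search_url(query, start=0):
    # Google's result pages take the query in the `q` parameter and
    # page through results via `start` (0, 10, 20, ...).
    return "https://www.google.com/search?" + urlencode({"q": query, "start": start})

# With expect_num = 25 and roughly 10 results per page, the crawler
# would need to fetch three pages:
expect_num = 25
page_urls = [build_search_url("open source crawler", start=s)
             for s in range(0, expect_num, 10)]
```

Raising expect_num therefore mainly means issuing more paged requests, each parsed the same way as the first.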

  2. list of keywords

    First, create a file named keywords and put your keyword list into it, one keyword per line.
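A minimal sketch of how such a keyword-list run might be wired up; load_keywords and crawl are hypothetical names standing in for the tool's actual routines:

```python
def load_keywords(path="keywords"):
    # Read one keyword per line from the keywords file, skipping blank lines.
    with open(path, "r", encoding="utf-8") as fp:
        return [line.strip() for line in fp if line.strip()]

def crawl(keyword):
    # Placeholder for the tool's actual search-and-extract routine.
    print("searching for:", keyword)

# Usage sketch: for kw in load_keywords(): crawl(kw)
```

Each keyword is then handled exactly like a single-keyword run, just in a loop.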

  3. If you run into any problems or bugs with this tool, please contact me.

