Coordinatorhub / google_news_scrape_2

google news scrape


Forked from ScraperWiki

Contributors Coordinatorhub

Last run failed with status code 1.

Console output of last run

Traceback (most recent call last):
  File "/repo/scraper.py", line 20, in <module>
    items = get_google_new_results( '3d printer', 50 )
  File "/repo/scraper.py", line 6, in get_google_new_results
    obj = parseString( urllib2.urlopen('http://news.google.com/news?q=%s&output=rss' % term).read() )
  File "/usr/lib/python2.7/urllib2.py", line 126, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib/python2.7/urllib2.py", line 406, in open
    response = meth(req, response)
  File "/usr/lib/python2.7/urllib2.py", line 519, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python2.7/urllib2.py", line 444, in error
    return self._call_chain(*args)
  File "/usr/lib/python2.7/urllib2.py", line 378, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 527, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 400: Bad Request
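The 400 most likely comes from the unencoded space in the search term: line 6 of scraper.py interpolates '3d printer' straight into the URL, so the HTTP request line contains a raw space. A minimal sketch of the fix (shown in Python 3; the original ran under Python 2.7, where `urllib.quote_plus` plays the same role):

```python
from urllib.parse import quote_plus  # urllib.quote_plus in Python 2

term = '3d printer'
# Encode the term before interpolating it into the URL so the
# request line contains no raw spaces (space becomes '+').
url = 'http://news.google.com/news?q=%s&output=rss' % quote_plus(term)
print(url)  # http://news.google.com/news?q=3d+printer&output=rss
```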

Statistics

Total run time: less than 5 seconds

Total cpu time used: less than 5 seconds

Total disk space used: 18.5 KB

History

  • Manually ran revision 73f7a534 and failed.
    nothing changed in the database
  • Manually ran revision 73f7a534 and failed.
    nothing changed in the database
  • Forked from ScraperWiki

Scraper code

Python

google_news_scrape_2 / scraper.py
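The scraper source itself was not captured on this page. A hypothetical reconstruction from the traceback, updated to Python 3 and with the query term properly encoded (function names follow the traceback; the `https://news.google.com/rss/search` endpoint is an assumption, as the old `news?q=...&output=rss` form is no longer served):

```python
# Hypothetical sketch of scraper.py, not the original source.
from urllib.parse import quote_plus
from urllib.request import urlopen
from xml.dom.minidom import parseString

def parse_google_news_rss(xml_text):
    """Extract (title, link) pairs from a Google News RSS document."""
    dom = parseString(xml_text)
    results = []
    for item in dom.getElementsByTagName('item'):
        title = item.getElementsByTagName('title')[0].firstChild.data
        link = item.getElementsByTagName('link')[0].firstChild.data
        results.append((title, link))
    return results

def get_google_new_results(term, count=50):
    """Fetch the RSS feed for a search term and return up to `count` items."""
    # quote_plus() avoids the HTTP 400 the original hit on '3d printer'.
    url = 'https://news.google.com/rss/search?q=%s' % quote_plus(term)
    return parse_google_news_rss(urlopen(url).read())[:count]
```

Keeping the parsing separate from the fetch makes the feed-handling logic testable without network access.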