danwainwright / scraper_15_tutorial

album data for tutorial


Contributors: danwainwright

Last run failed with status code 1.

Console output of last run

Injecting configuration and compiling...
-----> Python app detected
-----> Installing python-2.7.9
     $ pip install -r requirements.txt
       Obtaining scraperwiki from git+http://github.com/openaustralia/scraperwiki-python.git@morph_defaults#egg=scraperwiki (from -r /tmp/build/requirements.txt (line 6))
         Cloning http://github.com/openaustralia/scraperwiki-python.git (to morph_defaults) to /app/.heroku/src/scraperwiki
       Collecting lxml==3.4.4 (from -r /tmp/build/requirements.txt (line 8))
         Downloading lxml-3.4.4.tar.gz (3.5MB)
       Collecting cssselect==0.9.1 (from -r /tmp/build/requirements.txt (line 9))
         Downloading cssselect-0.9.1.tar.gz
       Collecting dumptruck>=0.1.2 (from scraperwiki->-r /tmp/build/requirements.txt (line 6))
         Downloading dumptruck-0.1.6.tar.gz
       Collecting requests (from scraperwiki->-r /tmp/build/requirements.txt (line 6))
         Downloading requests-2.18.1-py2.py3-none-any.whl (88kB)
       Collecting urllib3<1.22,>=1.21.1 (from requests->scraperwiki->-r /tmp/build/requirements.txt (line 6))
         Downloading urllib3-1.21.1-py2.py3-none-any.whl (131kB)
       Collecting idna<2.6,>=2.5 (from requests->scraperwiki->-r /tmp/build/requirements.txt (line 6))
         Downloading idna-2.5-py2.py3-none-any.whl (55kB)
       Collecting certifi>=2017.4.17 (from requests->scraperwiki->-r /tmp/build/requirements.txt (line 6))
         Downloading certifi-2017.4.17-py2.py3-none-any.whl (375kB)
       Collecting chardet<3.1.0,>=3.0.2 (from requests->scraperwiki->-r /tmp/build/requirements.txt (line 6))
         Downloading chardet-3.0.4-py2.py3-none-any.whl (133kB)
       Installing collected packages: dumptruck, urllib3, idna, certifi, chardet, requests, scraperwiki, lxml, cssselect
         Running setup.py install for dumptruck: started
         Running setup.py install for dumptruck: finished with status 'done'
         Running setup.py develop for scraperwiki
         Running setup.py install for lxml: started
         Running setup.py install for lxml: still running...
         Running setup.py install for lxml: finished with status 'done'
         Running setup.py install for cssselect: started
         Running setup.py install for cssselect: finished with status 'done'
       Successfully installed certifi-2017.4.17 chardet-3.0.4 cssselect-0.9.1 dumptruck-0.1.6 idna-2.5 lxml-3.4.4 requests-2.18.1 scraperwiki urllib3-1.21.1
-----> Discovering process types
       Procfile declares types -> scraper
Injecting scraper and running...
Traceback (most recent call last):
  File "scraper.py", line 48, in <module>
    scrape_and_look_for_next_link(starting_url)
  File "scraper.py", line 31, in scrape_and_look_for_next_link
    html = scraperwiki.scrape(url)
  File "/app/.heroku/src/scraperwiki/scraperwiki/utils.py", line 31, in scrape
    f = urllib2.urlopen(req)
  File "/app/.heroku/python/lib/python2.7/urllib2.py", line 154, in urlopen
    return opener.open(url, data, timeout)
  File "/app/.heroku/python/lib/python2.7/urllib2.py", line 437, in open
    response = meth(req, response)
  File "/app/.heroku/python/lib/python2.7/urllib2.py", line 550, in http_response
    'http', request, response, code, msg, hdrs)
  File "/app/.heroku/python/lib/python2.7/urllib2.py", line 475, in error
    return self._call_chain(*args)
  File "/app/.heroku/python/lib/python2.7/urllib2.py", line 409, in _call_chain
    result = func(*args)
  File "/app/.heroku/python/lib/python2.7/urllib2.py", line 558, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 404: Not Found
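The build itself succeeds; the run dies inside scraperwiki.scrape(url), which fetches pages with urllib2 under the hood and raises HTTPError because the starting URL now answers with 404 Not Found. As a minimal sketch only (not the scraper's actual code, and with a made-up helper name), the fetch could be wrapped so a dead URL produces a clear message instead of a bare traceback:

    # Hypothetical helper, Python 2 to match the runtime above: wrap the fetch
    # that failed so a missing page is reported clearly instead of crashing.
    import urllib2

    import scraperwiki


    def fetch_or_explain(url):
        try:
            # Same call that raised "HTTP Error 404: Not Found" in the traceback.
            return scraperwiki.scrape(url)
        except urllib2.HTTPError as e:
            if e.code == 404:
                raise SystemExit("Page not found: %s - update the starting URL" % url)
            raise  # any other HTTP error: re-raise unchanged

Used in place of the direct scraperwiki.scrape(url) call, the next failed run would at least report which URL has gone away.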

Statistics

Total run time: 2 minutes

Total cpu time used: less than 5 seconds

Total disk space used: 22.6 KB

History

  • Manually ran revision 07623b96 and failed; nothing changed in the database.
  • Created on morph.io

Scraper code

scraper_15_tutorial
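
The code listing itself is not reproduced on this page. Based only on the function names and line numbers in the traceback above, the scraper follows the classic paginated-scrape pattern; the sketch below shows that shape, with the starting URL, CSS selectors and field names as placeholders rather than the original scraper_15_tutorial source:

    # Reconstruction sketch only - URL, selectors and field names are placeholders.
    import urlparse

    import lxml.html
    import scraperwiki

    # Placeholder: the real tutorial URL now answers with 404, per the log above.
    base_url = "http://example.com/albums"


    def scrape_table(root):
        # Save one record per table row into the SQLite database morph.io exposes.
        for tr in root.cssselect("table tr"):
            cells = [td.text_content().strip() for td in tr.cssselect("td")]
            if len(cells) >= 2:
                record = {"artist": cells[0], "album": cells[1]}
                scraperwiki.sqlite.save(unique_keys=["artist", "album"], data=record)


    def scrape_and_look_for_next_link(url):
        # The fetch below is the call that raised HTTP Error 404 in the last run.
        html = scraperwiki.scrape(url)
        root = lxml.html.fromstring(html)
        scrape_table(root)
        next_link = root.cssselect("a.next")
        if next_link:
            next_url = urlparse.urljoin(base_url, next_link[0].attrib["href"])
            scrape_and_look_for_next_link(next_url)


    starting_url = base_url
    scrape_and_look_for_next_link(starting_url)

Because the failure is a 404 rather than a code error, the fix is to point starting_url (and base_url) at a page that still exists, or at an archived copy of the tutorial's album data.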