woodbine / sp_E0303_RBC_gov

Scrapes www.reading.gov.uk

Reading Borough Council


Contributors: blablupcom

Last run failed with status code 1.

Console output of last run

Injecting configuration and compiling...
-----> Python app detected
 !     The latest version of Python 2 is python-2.7.14 (you are using python-2.7.6, which is unsupported).
 !     We recommend upgrading by specifying the latest version (python-2.7.14).
       Learn More: https://devcenter.heroku.com/articles/python-runtimes
-----> Installing python-2.7.6
-----> Installing pip
-----> Installing requirements with pip
       Obtaining scraperwiki from git+http://github.com/openaustralia/scraperwiki-python.git@morph_defaults#egg=scraperwiki (from -r /tmp/build/requirements.txt (line 1))
       Cloning http://github.com/openaustralia/scraperwiki-python.git (to morph_defaults) to /app/.heroku/src/scraperwiki
       Collecting lxml==3.4.4 (from -r /tmp/build/requirements.txt (line 2))
       /app/.heroku/python/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:318: SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/security.html#snimissingwarning.
       SNIMissingWarning
       /app/.heroku/python/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:122: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/security.html#insecureplatformwarning.
       InsecurePlatformWarning
       Downloading lxml-3.4.4.tar.gz (3.5MB)
       Collecting cssselect==0.9.1 (from -r /tmp/build/requirements.txt (line 3))
       Downloading cssselect-0.9.1.tar.gz
       Collecting beautifulsoup4 (from -r /tmp/build/requirements.txt (line 4))
       Downloading beautifulsoup4-4.6.0-py2-none-any.whl (86kB)
       Collecting python-dateutil (from -r /tmp/build/requirements.txt (line 5))
       Downloading python_dateutil-2.6.1-py2.py3-none-any.whl (194kB)
       Collecting dumptruck>=0.1.2 (from scraperwiki->-r /tmp/build/requirements.txt (line 1))
       Downloading dumptruck-0.1.6.tar.gz
       Collecting requests (from scraperwiki->-r /tmp/build/requirements.txt (line 1))
       Downloading requests-2.18.4-py2.py3-none-any.whl (88kB)
       Collecting six>=1.5 (from python-dateutil->-r /tmp/build/requirements.txt (line 5))
       Downloading six-1.11.0-py2.py3-none-any.whl
       Collecting chardet<3.1.0,>=3.0.2 (from requests->scraperwiki->-r /tmp/build/requirements.txt (line 1))
       Downloading chardet-3.0.4-py2.py3-none-any.whl (133kB)
       Collecting certifi>=2017.4.17 (from requests->scraperwiki->-r /tmp/build/requirements.txt (line 1))
       Downloading certifi-2018.1.18-py2.py3-none-any.whl (151kB)
       Collecting urllib3<1.23,>=1.21.1 (from requests->scraperwiki->-r /tmp/build/requirements.txt (line 1))
       Downloading urllib3-1.22-py2.py3-none-any.whl (132kB)
       Collecting idna<2.7,>=2.5 (from requests->scraperwiki->-r /tmp/build/requirements.txt (line 1))
       Downloading idna-2.6-py2.py3-none-any.whl (56kB)
       Installing collected packages: dumptruck, chardet, certifi, urllib3, idna, requests, scraperwiki, lxml, cssselect, beautifulsoup4, six, python-dateutil
       Running setup.py install for dumptruck: started
       Running setup.py install for dumptruck: finished with status 'done'
       Running setup.py develop for scraperwiki
       Running setup.py install for lxml: started
       Running setup.py install for lxml: still running...
       Running setup.py install for lxml: finished with status 'done'
       Running setup.py install for cssselect: started
       Running setup.py install for cssselect: finished with status 'done'
       Successfully installed beautifulsoup4-4.6.0 certifi-2018.1.18 chardet-3.0.4 cssselect-0.9.1 dumptruck-0.1.6 idna-2.6 lxml-3.4.4 python-dateutil-2.6.1 requests-2.18.4 scraperwiki six-1.11.0 urllib3-1.22
 !     Hello! It looks like your application is using an outdated version of Python.
 !     This caused the security warning you saw above during the 'pip install' step.
 !     We recommend 'python-3.6.2', which you can specify in a 'runtime.txt' file.
 !     -- Much Love, Heroku.
-----> Discovering process types
       Procfile declares types -> scraper
Injecting scraper and running...
Traceback (most recent call last):
  File "scraper.py", line 101, in <module>
    links = blocks.find_all('a', href=True)
AttributeError: 'NoneType' object has no attribute 'find_all'
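The failure itself is the traceback at the end of the log: at scraper.py line 101, blocks is None, so the BeautifulSoup lookup that should return the container holding the document links found nothing; most likely the page structure on www.reading.gov.uk has changed. Below is a minimal defensive sketch of that step; the URL and the selector are assumptions for illustration, not the ones used in scraper.py.

    # Hypothetical sketch: guard the lookup that currently dies with
    # AttributeError: 'NoneType' object has no attribute 'find_all'
    import sys

    import requests
    from bs4 import BeautifulSoup

    # Placeholder URL; the real scraper targets a specific Reading BC listing page.
    url = 'http://www.reading.gov.uk/'

    soup = BeautifulSoup(requests.get(url).text, 'lxml')

    # The selector here is an assumption; scraper.py locates the block that
    # holds the spending-file links with its own selector.
    blocks = soup.find('div', class_='editor')

    if blocks is None:
        # Exit with a clear message instead of an AttributeError when the
        # expected container is missing (e.g. after a site redesign).
        sys.exit('links container not found on %s; page layout may have changed' % url)

    links = blocks.find_all('a', href=True)

Separately, the buildpack notes above suggest pinning a newer interpreter via a one-line runtime.txt (python-3.6.2), although the scraper would first need porting from Python 2.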

Data

Downloaded 589 times by SimKennedy, MikeRalphson, and woodbine



rows 10 / 30

l    f                          d
     E0303_RBC_gov_2015_Q3      2016-05-17 01:29:28.051059
     E0303_RBC_gov_2015_Q2      2016-05-17 01:29:28.986853
     E0303_RBC_gov_2015_Q1      2016-05-17 01:29:29.761542
     E0303_RBC_gov_2014_Q4      2016-05-17 01:29:30.049956
     E0303_RBC_gov_2014_Q1      2016-05-17 01:29:30.347730
     E0303_RBC_gov_2013_Q4      2016-05-17 01:29:30.639159
     E0303_RBC_gov_2013_Q3      2016-05-17 01:29:30.930410
     E0303_RBC_gov_2013_Q2      2016-05-17 01:29:31.222399
     E0303_RBC_gov_2013_Q1      2016-05-17 01:29:31.510883
     E0303_RBC_gov_2012_Q4      2016-05-17 01:29:31.907669
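Rows like these are normally written through the scraperwiki library pulled in via requirements.txt; judging by the values shown, f holds the statement identifier and d the scrape timestamp, with l presumably holding the source link (the l values were not captured above). A minimal sketch of how such a row might be saved follows; the column meanings, the example link, and the choice of unique key are assumptions.

    # Sketch of saving one row with the scraperwiki library (assumed schema:
    # l = link to the source file, f = file identifier, d = date scraped).
    from datetime import datetime

    import scraperwiki

    row = {
        'l': 'http://www.reading.gov.uk/documents/example-spending.csv',  # hypothetical link
        'f': 'E0303_RBC_gov_2015_Q3',                                     # identifier as in the table
        'd': datetime.now().isoformat(),                                  # scrape timestamp
    }

    # unique_keys=['f'] keeps one row per identifier; whether the real scraper
    # keys on 'f' or 'l' is an assumption.
    scraperwiki.sqlite.save(unique_keys=['f'], data=row)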

Statistics

Average successful run time: 2 minutes

Total run time: about 1 month

Total cpu time used: 7 minutes

Total disk space used: 42 KB

History

  • Auto ran revision dfa1e484 and failed.
    nothing changed in the database
    1 page scraped
  • Auto ran revision dfa1e484 and failed.
    nothing changed in the database
    1861 pages scraped
  • Auto ran revision dfa1e484 and failed.
    nothing changed in the database
    1 page scraped
  • Auto ran revision dfa1e484 and failed.
    nothing changed in the database
    1 page scraped
  • Auto ran revision dfa1e484 and failed.
    nothing changed in the database
    1 page scraped
  • ...
  • Created on morph.io


Scraper code

Python

sp_E0303_RBC_gov / scraper.py