blablupcom / sp_RTV_NWBHNFT_gov


We provide treatment, support and guidance for a range of health issues for people living in Greater Manchester, Halton, Knowsley, Sefton, St Helens, Warrington and Wigan.

Contributors: blablupcom

Last run completed successfully.

Console output of last run

Injecting configuration and compiling...
-----> Python app detected
-----> Installing python-2.7.14
-----> Installing pip
-----> Installing requirements with pip
       Obtaining scraperwiki from git+ (from -r /tmp/build/requirements.txt (line 1))
         Cloning (to morph_defaults) to /app/.heroku/src/scraperwiki
       Collecting lxml==3.4.4 (from -r /tmp/build/requirements.txt (line 2))
         Downloading lxml-3.4.4.tar.gz (3.5MB)
       Collecting cssselect==0.9.1 (from -r /tmp/build/requirements.txt (line 3))
         Downloading cssselect-0.9.1.tar.gz
       Collecting beautifulsoup4 (from -r /tmp/build/requirements.txt (line 4))
         Downloading beautifulsoup4-4.6.0-py2-none-any.whl (86kB)
       Collecting dumptruck>=0.1.2 (from scraperwiki->-r /tmp/build/requirements.txt (line 1))
         Downloading dumptruck-0.1.6.tar.gz
       Collecting requests (from scraperwiki->-r /tmp/build/requirements.txt (line 1))
         Downloading requests-2.18.4-py2.py3-none-any.whl (88kB)
       Collecting idna<2.7,>=2.5 (from requests->scraperwiki->-r /tmp/build/requirements.txt (line 1))
         Downloading idna-2.6-py2.py3-none-any.whl (56kB)
       Collecting urllib3<1.23,>=1.21.1 (from requests->scraperwiki->-r /tmp/build/requirements.txt (line 1))
         Downloading urllib3-1.22-py2.py3-none-any.whl (132kB)
       Collecting certifi>=2017.4.17 (from requests->scraperwiki->-r /tmp/build/requirements.txt (line 1))
         Downloading certifi-2018.1.18-py2.py3-none-any.whl (151kB)
       Collecting chardet<3.1.0,>=3.0.2 (from requests->scraperwiki->-r /tmp/build/requirements.txt (line 1))
         Downloading chardet-3.0.4-py2.py3-none-any.whl (133kB)
       Installing collected packages: dumptruck, idna, urllib3, certifi, chardet, requests, scraperwiki, lxml, cssselect, beautifulsoup4
         Running install for dumptruck: started
         Running install for dumptruck: finished with status 'done'
         Running develop for scraperwiki
         Running install for lxml: started
         Running install for lxml: still running...
         Running install for lxml: finished with status 'done'
         Running install for cssselect: started
         Running install for cssselect: finished with status 'done'
       Successfully installed beautifulsoup4-4.6.0 certifi-2018.1.18 chardet-3.0.4 cssselect-0.9.1 dumptruck-0.1.6 idna-2.6 lxml-3.4.4 requests-2.18.4 scraperwiki urllib3-1.22
-----> Discovering process types
       Procfile declares types -> scraper
Injecting scraper and running...
RTV_NWBHNFT_gov_2017_12
RTV_NWBHNFT_gov_2017_11
RTV_NWBHNFT_gov_2017_10
RTV_NWBHNFT_gov_2017_09
RTV_NWBHNFT_gov_2017_07
RTV_NWBHNFT_gov_2017_05
RTV_NWBHNFT_gov_2017_06
RTV_NWBHNFT_gov_2017_08
RTV_NWBHNFT_gov_2017_04
RTV_NWBHNFT_gov_2016_12
RTV_NWBHNFT_gov_2017_01
RTV_NWBHNFT_gov_2016_07
RTV_NWBHNFT_gov_2016_06
RTV_NWBHNFT_gov_2016_05
RTV_NWBHNFT_gov_2016_11
RTV_NWBHNFT_gov_2016_10
RTV_NWBHNFT_gov_2016_09
RTV_NWBHNFT_gov_2017_03
RTV_NWBHNFT_gov_2017_02
RTV_NWBHNFT_gov_2016_04
RTV_NWBHNFT_gov_2016_08
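The run log labels each monthly dataset in the form RTV_NWBHNFT_gov_YYYY_MM. A minimal sketch of how such labels might be generated, assuming the prefix and the year/month numbering are the only inputs (the real scraper may build them differently, e.g. from link text):

```python
def dataset_label(year, month, prefix="RTV_NWBHNFT_gov"):
    """Build a monthly dataset label like RTV_NWBHNFT_gov_2017_12.

    The naming scheme is inferred from the run log above; the actual
    derivation inside the scraper is an assumption.
    """
    return "%s_%04d_%02d" % (prefix, year, month)

# e.g. the first three labels printed by the run
labels = [dataset_label(2017, m) for m in (12, 11, 10)]
```

Zero-padding the month (`%02d`) keeps labels sortable as plain strings, which matches the two-digit months seen in the log.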


Downloaded 0 times


SQLite database size: 8 KB

Showing 10 of 21 rows

d (date scraped)  l  f (file)
2018-03-30 18:33:53.259105 documents/2017-18 £25k Reporting.xlsx
2018-03-30 18:33:54.148800 documents/Trans reports Nov 17.xlsx
2018-03-30 18:33:55.066322 documents/October 2017 - Transparency reports.xlsx
2018-03-30 18:33:55.967945 documents/September 2017.xlsx
2018-03-30 18:33:56.808724 documents/July Trans.xlsx
2018-03-30 18:33:57.612317 documents/37602_May 2017.xlsx
2018-03-30 18:33:58.382106 documents/37603_June 2017.xlsx
2018-03-30 18:33:59.018525 documents/August.xlsx
2018-03-30 18:33:59.695703 documents/37601_Copy of 2017-18 25k Reporting.xlsx
2018-03-30 18:34:00.412959 documents/Dec 16.csv
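The output table stores one row per document under the columns d, l and f (the rows shown expose a scrape timestamp and a local file path; the exact schema and how the scraper writes it are assumptions). A sketch of an equivalent table using only the standard library's sqlite3, as a stand-in for the scraperwiki save call the build log suggests the real scraper uses:

```python
import sqlite3
from datetime import datetime

# Hypothetical recreation of the scraper's output table; only the
# column names d, l, f are taken from the page above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE IF NOT EXISTS data (d TEXT, l TEXT, f TEXT)")

def save_row(conn, link, filename):
    # d: timestamp of the scrape, l: source link, f: saved file path
    conn.execute(
        "INSERT INTO data (d, l, f) VALUES (?, ?, ?)",
        (datetime.utcnow().isoformat(sep=" "), link, filename),
    )
    conn.commit()

save_row(conn, "http://example.com/report.xlsx", "documents/report.xlsx")
rows = conn.execute("SELECT f FROM data").fetchall()
```

Keeping the timestamp in ISO format, as the visible d column does, makes the rows sortable by scrape time without a date parser.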


Average successful run time: 2 minutes

Total run time: 2 minutes

Total cpu time used: less than 5 seconds

Total disk space used: 62.1 KB


  • Manually ran revision 0dc1ba73 and completed successfully.
    21 records added to the database
    22 pages scraped
  • Created on

Scraper code
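The build log shows the scraper installing beautifulsoup4, lxml and requests on python-2.7.14, which suggests it fetches a publications page and collects links to the .xlsx/.csv transparency reports listed in the table above. A minimal sketch of that link-extraction step, written in Python 3 with only the standard library so it stands alone (the class name and sample HTML are illustrative, not from the real scraper):

```python
from html.parser import HTMLParser

class SpreadsheetLinkParser(HTMLParser):
    """Collect hrefs that point at .xlsx or .csv files.

    A stand-in for the real scraper's parsing step, which the build
    log suggests is done with beautifulsoup4/lxml instead.
    """
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if href.lower().endswith((".xlsx", ".csv")):
                self.links.append(href)

# Illustrative snippet; the real page layout is unknown here.
sample = '<a href="/docs/Dec 16.csv">Dec</a><a href="/about">x</a>'
parser = SpreadsheetLinkParser()
parser.feed(sample)
```

Filtering on the file extension rather than the link text is more robust here, since the filenames in the table above ("Trans reports Nov 17.xlsx", "37602_May 2017.xlsx") follow no consistent naming pattern.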