blablupcom / sp_NFTR1K_LNWUHNT_gov


London North West Healthcare NHS Trust is one of the largest trusts in the country. The Trust includes Northwick Park, Central Middlesex and Ealing hospitals.

Contributors: blablupcom

Last run completed successfully.

Console output of last run

Injecting configuration and compiling...
-----> Python app detected
-----> Installing python-2.7.14
-----> Installing pip
-----> Installing requirements with pip
       Obtaining scraperwiki from git+ (from -r /tmp/build/requirements.txt (line 1))
       Cloning (to morph_defaults) to /app/.heroku/src/scraperwiki
       Collecting lxml==3.4.4 (from -r /tmp/build/requirements.txt (line 2))
       Downloading lxml-3.4.4.tar.gz (3.5MB)
       Collecting cssselect==0.9.1 (from -r /tmp/build/requirements.txt (line 3))
       Downloading cssselect-0.9.1.tar.gz
       Collecting beautifulsoup4 (from -r /tmp/build/requirements.txt (line 4))
       Downloading beautifulsoup4-4.6.0-py2-none-any.whl (86kB)
       Collecting dumptruck>=0.1.2 (from scraperwiki->-r /tmp/build/requirements.txt (line 1))
       Downloading dumptruck-0.1.6.tar.gz
       Collecting requests (from scraperwiki->-r /tmp/build/requirements.txt (line 1))
       Downloading requests-2.18.4-py2.py3-none-any.whl (88kB)
       Collecting idna<2.7,>=2.5 (from requests->scraperwiki->-r /tmp/build/requirements.txt (line 1))
       Downloading idna-2.6-py2.py3-none-any.whl (56kB)
       Collecting urllib3<1.23,>=1.21.1 (from requests->scraperwiki->-r /tmp/build/requirements.txt (line 1))
       Downloading urllib3-1.22-py2.py3-none-any.whl (132kB)
       Collecting certifi>=2017.4.17 (from requests->scraperwiki->-r /tmp/build/requirements.txt (line 1))
       Downloading certifi-2018.1.18-py2.py3-none-any.whl (151kB)
       Collecting chardet<3.1.0,>=3.0.2 (from requests->scraperwiki->-r /tmp/build/requirements.txt (line 1))
       Downloading chardet-3.0.4-py2.py3-none-any.whl (133kB)
       Installing collected packages: dumptruck, idna, urllib3, certifi, chardet, requests, scraperwiki, lxml, cssselect, beautifulsoup4
       Running install for dumptruck: started
       Running install for dumptruck: finished with status 'done'
       Running develop for scraperwiki
       Running install for lxml: started
       Running install for lxml: still running...
       Running install for lxml: finished with status 'done'
       Running install for cssselect: started
       Running install for cssselect: finished with status 'done'
       Successfully installed beautifulsoup4-4.6.0 certifi-2018.1.18 chardet-3.0.4 cssselect-0.9.1 dumptruck-0.1.6 idna-2.6 lxml-3.4.4 requests-2.18.4 scraperwiki urllib3-1.22
-----> Discovering process types
       Procfile declares types -> scraper
Injecting scraper and running...
NFTR1K_LNWUHNT_gov_2017_09
NFTR1K_LNWUHNT_gov_2017_08
NFTR1K_LNWUHNT_gov_2017_07
NFTR1K_LNWUHNT_gov_2017_06
NFTR1K_LNWUHNT_gov_2016_04
NFTR1K_LNWUHNT_gov_2016_03
NFTR1K_LNWUHNT_gov_2016_02
NFTR1K_LNWUHNT_gov_2016_01
NFTR1K_LNWUHNT_gov_2015_12
NFTR1K_LNWUHNT_gov_2015_11
NFTR1K_LNWUHNT_gov_2015_10
NFTR1K_LNWUHNT_gov_2015_09
NFTR1K_LNWUHNT_gov_2015_08
NFTR1K_LNWUHNT_gov_2015_07
NFTR1K_LNWUHNT_gov_2015_06
NFTR1K_LNWUHNT_gov_2015_05
NFTR1K_LNWUHNT_gov_2015_04
NFTR1K_LNWUHNT_gov_2015_03
NFTR1K_LNWUHNT_gov_2015_02
NFTR1K_LNWUHNT_gov_2015_01
NFTR1K_LNWUHNT_gov_2014_12
NFTR1K_LNWUHNT_gov_2014_11
NFTR1K_LNWUHNT_gov_2014_10
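The identifiers printed by the run follow a PREFIX_YYYY_MM naming scheme and correspond to rows in the scraper's data table (columns l, d, f). The scraper's source is not shown on this page, so the following is only a minimal sketch of how such dated identifiers might be built and stored as rows; the helper names, the example URL, and the row shape are assumptions.

```python
# Hypothetical sketch: build dated identifiers such as
# NFTR1K_LNWUHNT_gov_2017_09 and assemble one row per spending file.
# Column names l (link), d (date scraped) and f (file identifier) mirror
# the table on this page; everything else here is illustrative.
import datetime


def make_identifier(prefix, year, month):
    # e.g. make_identifier("NFTR1K_LNWUHNT_gov", 2017, 9)
    #      -> "NFTR1K_LNWUHNT_gov_2017_09"
    return "%s_%04d_%02d" % (prefix, year, month)


def build_rows(prefix, links):
    # links maps (year, month) -> URL of the published spending file
    rows = []
    for (year, month), url in links.items():
        rows.append({
            "l": url,                                        # source link
            "d": datetime.datetime.now().isoformat(" "),     # scrape time
            "f": make_identifier(prefix, year, month),       # identifier
        })
    return rows


rows = build_rows("NFTR1K_LNWUHNT_gov",
                  {(2017, 9): "https://example.org/sept-2017.csv"})
print(rows[0]["f"])  # NFTR1K_LNWUHNT_gov_2017_09
```

In the real scraper these rows would be persisted with the scraperwiki library (which the build log shows being installed), e.g. via its SQLite save call.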


Downloaded 0 times


Downloads: table (as CSV), SQLite database (8 KB), or via the API.
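morph.io also exposes each scraper's data over an HTTP API. Below is a minimal sketch of building a query URL for this scraper, assuming morph.io's standard api.morph.io/<owner>/<scraper>/data.json endpoint with `key` and `query` parameters; the API key is a placeholder, and the actual fetch (e.g. with the requests library) is left out.

```python
# Build a morph.io API query URL for this scraper (sketch; key is a
# placeholder you would replace with your own morph.io API key).
from urllib.parse import urlencode


def morph_api_url(owner, scraper, api_key, query):
    # data.json returns the query result as JSON rows
    base = "https://api.morph.io/%s/%s/data.json" % (owner, scraper)
    return base + "?" + urlencode({"key": api_key, "query": query})


url = morph_api_url("blablupcom", "sp_NFTR1K_LNWUHNT_gov",
                    "YOUR_API_KEY", "select * from data limit 10")
print(url)
```

The `query` parameter is plain SQL against the scraper's SQLite database, so the same `select` you would run locally works remotely.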

rows 10 / 23

Columns: l, d, f (only the d timestamps survive in this extract):
2018-03-29 19:04:05.888965
2018-03-29 19:04:09.167239
2018-03-29 19:04:12.710996
2018-03-29 19:04:16.294229
2018-03-29 19:04:19.859647
2018-03-29 19:04:23.321819
2018-03-29 19:04:27.277176
2018-03-29 19:04:32.496517
2018-03-29 19:04:35.901087
2018-03-29 19:04:39.410708


Average successful run time: 4 minutes

Total run time: 4 minutes

Total cpu time used: less than 10 seconds

Total disk space used: 57.4 KB


  • Manually ran revision a12403d3 and completed successfully.
    23 records added to the database
    48 pages scraped
  • Created on