blablupcom / t391_CR_tra

Scrapes published Crossrail transparency data.

Crossrail is the new high frequency, high capacity railway for London and the South East. When the service opens Crossrail trains will travel from Maidenhead and Heathrow in the west to Shenfield and Abbey Wood in the east via new twin tunnels under central London. It will link Heathrow Airport, the West End, the City of London and Canary Wharf.
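The scraper writes one row per published file into a table with columns d, f and l (visible in the data table below: a retrieval timestamp, a file label, and a link). A minimal sketch of building such a record, assuming that column layout; `make_row` and the sample label/URL are illustrative, not taken from the real scraper code:

```python
import datetime

def make_row(label, url, now=None):
    """Build one d/f/l record: d = retrieval timestamp, f = file label, l = link."""
    now = now or datetime.datetime.now()
    return {'d': str(now), 'f': label, 'l': url}

# In a scraperwiki-based scraper (as listed in requirements.txt) a row like
# this would then be persisted with something like:
#   scraperwiki.sqlite.save(unique_keys=['f'], data=row)
row = make_row('t391_CR_tra_2015_05', 'http://example.com/spend.csv')
```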

Contributors: blablupcom

Last run completed successfully.

Console output of last run

Injecting configuration and compiling...
-----> Python app detected
-----> Stack changed, re-installing runtime
-----> Installing runtime (python-2.7.6)
-----> Installing dependencies with pip
       Obtaining scraperwiki from git+ (from -r requirements.txt (line 1))
       Cloning (to morph_defaults) to ./.heroku/src/scraperwiki
       Collecting lxml==3.4.4 (from -r requirements.txt (line 2))
       Downloading lxml-3.4.4.tar.gz (3.5MB)
       Building lxml version 3.4.4.
       Building without Cython.
       Using build configuration of libxslt 1.1.28
       /app/.heroku/python/lib/python2.7/distutils/ UserWarning: Unknown distribution option: 'bugtrack_url'
       warnings.warn(msg)
       Collecting cssselect==0.9.1 (from -r requirements.txt (line 3))
       Downloading cssselect-0.9.1.tar.gz
       Collecting beautifulsoup4 (from -r requirements.txt (line 4))
       Downloading beautifulsoup4-4.4.0-py2-none-any.whl (81kB)
       Collecting dumptruck>=0.1.2 (from scraperwiki->-r requirements.txt (line 1))
       Downloading dumptruck-0.1.6.tar.gz
       Collecting requests (from scraperwiki->-r requirements.txt (line 1))
       Downloading requests-2.7.0-py2.py3-none-any.whl (470kB)
       Installing collected packages: requests, dumptruck, beautifulsoup4, cssselect, lxml, scraperwiki
       Running install for dumptruck
       Running install for cssselect
       Running install for lxml
       Building lxml version 3.4.4.
       Building without Cython.
       Using build configuration of libxslt 1.1.28
       /app/.heroku/python/lib/python2.7/distutils/ UserWarning: Unknown distribution option: 'bugtrack_url'
       warnings.warn(msg)
       building 'lxml.etree' extension
       gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/usr/include/libxml2 -I/tmp/pip-build-WSvwNy/lxml/src/lxml/includes -I/app/.heroku/python/include/python2.7 -c src/lxml/lxml.etree.c -o build/temp.linux-x86_64-2.7/src/lxml/lxml.etree.o -w
       gcc -pthread -shared build/temp.linux-x86_64-2.7/src/lxml/lxml.etree.o -L/app/.heroku/python/lib -lxslt -lexslt -lxml2 -lz -lm -lpython2.7 -o build/lib.linux-x86_64-2.7/lxml/
       building 'lxml.objectify' extension
       gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/usr/include/libxml2 -I/tmp/pip-build-WSvwNy/lxml/src/lxml/includes -I/app/.heroku/python/include/python2.7 -c src/lxml/lxml.objectify.c -o build/temp.linux-x86_64-2.7/src/lxml/lxml.objectify.o -w
       gcc -pthread -shared build/temp.linux-x86_64-2.7/src/lxml/lxml.objectify.o -L/app/.heroku/python/lib -lxslt -lexslt -lxml2 -lz -lm -lpython2.7 -o build/lib.linux-x86_64-2.7/lxml/
       Running develop for scraperwiki
       Creating /app/.heroku/python/lib/python2.7/site-packages/scraperwiki.egg-link (link to .)
       Adding scraperwiki 0.3.7 to easy-install.pth file
       Installed /app/.heroku/src/scraperwiki
       Successfully installed beautifulsoup4-4.4.0 cssselect-0.9.1 dumptruck-0.1.6 lxml-3.4.4 requests-2.7.0 scraperwiki
-----> Discovering process types
       Procfile declares types -> scraper
Injecting scraper and running...
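From the pip output, the scraper's requirements.txt evidently contained four entries, along these lines (the git repository URL for scraperwiki is elided in the log above, so it is not reproduced here):

```
# line 1: scraperwiki, installed from git (branch morph_defaults);
# the repository URL is elided in the log above
lxml==3.4.4
cssselect==0.9.1
beautifulsoup4
```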
t391_CR_tra_2010_11 t391_CR_tra_2010_12 t391_CR_tra_2010_01 t391_CR_tra_2010_02 t391_CR_tra_2010_03
t391_CR_tra_2011_04 t391_CR_tra_2011_05 t391_CR_tra_2011_06 t391_CR_tra_2011_07 t391_CR_tra_2011_08 t391_CR_tra_2011_09 t391_CR_tra_2011_10 t391_CR_tra_2011_10 t391_CR_tra_2011_11 t391_CR_tra_2011_12 t391_CR_tra_2011_01 t391_CR_tra_2011_02 t391_CR_tra_2011_03
t391_CR_tra_2012_04 t391_CR_tra_2012_05 t391_CR_tra_2012_06 t391_CR_tra_2012_07 t391_CR_tra_2012_08 t391_CR_tra_2012_09 t391_CR_tra_2012_10 t391_CR_tra_2012_10 t391_CR_tra_2012_11 t391_CR_tra_2012_12 t391_CR_tra_2012_01 t391_CR_tra_2012_02 t391_CR_tra_2012_03
t391_CR_tra_2013_04 t391_CR_tra_2013_05 t391_CR_tra_2013_06 t391_CR_tra_2013_07 t391_CR_tra_2013_08 t391_CR_tra_2013_09 t391_CR_tra_2013_10 t391_CR_tra_2013_10 t391_CR_tra_2013_11 t391_CR_tra_2013_12 t391_CR_tra_2013_01 t391_CR_tra_2013_02 t391_CR_tra_2013_03
t391_CR_tra_2014_04 t391_CR_tra_2014_05 t391_CR_tra_2014_06 t391_CR_tra_2014_07 t391_CR_tra_2014_08 t391_CR_tra_2014_09 t391_CR_tra_2014_10 t391_CR_tra_2014_10 t391_CR_tra_2014_11 t391_CR_tra_2014_12 t391_CR_tra_2014_01 t391_CR_tra_2014_02 t391_CR_tra_2014_03
t391_CR_tra_2015_04 t391_CR_tra_2015_05
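The labels above follow a UK financial-year pattern: months 04–12 of a year, then 01–03 carried under the same year prefix. A sketch of how such labels could be generated; this is an inference from the run output, not the scraper's actual code, and `fy_labels` is a hypothetical helper name:

```python
def fy_labels(prefix, year):
    """Period labels for one financial year: Apr-Dec, then Jan-Mar,
    all sharing the same year prefix (matching the run output above)."""
    months = list(range(4, 13)) + list(range(1, 4))
    return ['%s_%d_%02d' % (prefix, year, m) for m in months]

labels = fy_labels('t391_CR_tra', 2011)
```

For 2011 this yields t391_CR_tra_2011_04 through t391_CR_tra_2011_03, the same sequence seen in the output (the occasional duplicated _10 entry aside).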


Downloaded 3 times by MikeRalphson


Download table (as CSV) · Download SQLite database (29 KB) · Use the API
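The data can also be fetched through morph.io's HTTP API, which serves the result of an SQL query against the scraper's SQLite database. A sketch of composing such a request URL, assuming morph.io's api.morph.io endpoint and its default table name `data`; the API key is a placeholder:

```python
try:
    from urllib.parse import urlencode  # Python 3
except ImportError:
    from urllib import urlencode        # Python 2, as this scraper runs on

def api_url(owner, scraper, query, key):
    """Build a morph.io data-API URL for an SQL query against a scraper's DB."""
    base = 'https://api.morph.io/%s/%s/data.json' % (owner, scraper)
    return base + '?' + urlencode({'key': key, 'query': query})

url = api_url('blablupcom', 't391_CR_tra',
              'select * from data limit 10', 'YOUR_API_KEY')
```

The resulting URL could then be fetched with the `requests` library already pulled in by the scraper's dependencies.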

rows 10 / 59

Columns: d | f | l
2015-09-07 21:56:25.965553
2015-09-07 21:56:27.110637
2015-09-07 21:56:27.712584
2015-09-07 21:56:28.199289
2015-09-07 21:56:28.493240
2015-09-07 21:56:35.160532
2015-09-07 21:56:35.625282
2015-09-07 21:56:36.264931
2015-09-07 21:56:36.726408
2015-09-07 21:56:37.350752


Average successful run time: 3 minutes

Total run time: 3 minutes

Total cpu time used: less than 5 seconds

Total disk space used: 53.6 KB


  • Manually ran revision 5a3a4c23 and completed successfully.
    59 records added to the database
    66 pages scraped
  • Created on

Scraper code