blablupcom / t391_CR_tra


Crossrail is the new high-frequency, high-capacity railway for London and the South East. When the service opens, Crossrail trains will travel from Maidenhead and Heathrow in the west to Shenfield and Abbey Wood in the east via new twin tunnels under central London. It will link Heathrow Airport, the West End, the City of London and Canary Wharf.

Last run completed successfully.

Console output of last run

Injecting configuration and compiling...
-----> Python app detected
-----> Stack changed, re-installing runtime
-----> Installing runtime (python-2.7.6)
-----> Installing dependencies with pip
       Obtaining scraperwiki from git+ (from -r requirements.txt (line 1))
       Cloning (to morph_defaults) to ./.heroku/src/scraperwiki
       Collecting lxml==3.4.4 (from -r requirements.txt (line 2))
       Downloading lxml-3.4.4.tar.gz (3.5MB)
       Building lxml version 3.4.4.
       Building without Cython.
       Using build configuration of libxslt 1.1.28
       /app/.heroku/python/lib/python2.7/distutils/ UserWarning: Unknown distribution option: 'bugtrack_url'
       warnings.warn(msg)
       Collecting cssselect==0.9.1 (from -r requirements.txt (line 3))
       Downloading cssselect-0.9.1.tar.gz
       Collecting beautifulsoup4 (from -r requirements.txt (line 4))
       Downloading beautifulsoup4-4.4.0-py2-none-any.whl (81kB)
       Collecting dumptruck>=0.1.2 (from scraperwiki->-r requirements.txt (line 1))
       Downloading dumptruck-0.1.6.tar.gz
       Collecting requests (from scraperwiki->-r requirements.txt (line 1))
       Downloading requests-2.7.0-py2.py3-none-any.whl (470kB)
       Installing collected packages: requests, dumptruck, beautifulsoup4, cssselect, lxml, scraperwiki
       Running install for dumptruck
       Running install for cssselect
       Running install for lxml
       Building lxml version 3.4.4.
       Building without Cython.
       Using build configuration of libxslt 1.1.28
       /app/.heroku/python/lib/python2.7/distutils/ UserWarning: Unknown distribution option: 'bugtrack_url'
       warnings.warn(msg)
       building 'lxml.etree' extension
       gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/usr/include/libxml2 -I/tmp/pip-build-WSvwNy/lxml/src/lxml/includes -I/app/.heroku/python/include/python2.7 -c src/lxml/lxml.etree.c -o build/temp.linux-x86_64-2.7/src/lxml/lxml.etree.o -w
       gcc -pthread -shared build/temp.linux-x86_64-2.7/src/lxml/lxml.etree.o -L/app/.heroku/python/lib -lxslt -lexslt -lxml2 -lz -lm -lpython2.7 -o build/lib.linux-x86_64-2.7/lxml/
       building 'lxml.objectify' extension
       gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/usr/include/libxml2 -I/tmp/pip-build-WSvwNy/lxml/src/lxml/includes -I/app/.heroku/python/include/python2.7 -c src/lxml/lxml.objectify.c -o build/temp.linux-x86_64-2.7/src/lxml/lxml.objectify.o -w
       gcc -pthread -shared build/temp.linux-x86_64-2.7/src/lxml/lxml.objectify.o -L/app/.heroku/python/lib -lxslt -lexslt -lxml2 -lz -lm -lpython2.7 -o build/lib.linux-x86_64-2.7/lxml/
       Running develop for scraperwiki
       Creating /app/.heroku/python/lib/python2.7/site-packages/scraperwiki.egg-link (link to .)
       Adding scraperwiki 0.3.7 to easy-install.pth file
       Installed /app/.heroku/src/scraperwiki
       Successfully installed beautifulsoup4-4.4.0 cssselect-0.9.1 dumptruck-0.1.6 lxml-3.4.4 requests-2.7.0 scraperwiki
-----> Discovering process types
       Procfile declares types -> scraper
Injecting scraper and running...
t391_CR_tra_2010_11 t391_CR_tra_2010_12 t391_CR_tra_2010_01 t391_CR_tra_2010_02 t391_CR_tra_2010_03 t391_CR_tra_2011_04 t391_CR_tra_2011_05 t391_CR_tra_2011_06 t391_CR_tra_2011_07 t391_CR_tra_2011_08 t391_CR_tra_2011_09 t391_CR_tra_2011_10 t391_CR_tra_2011_10 t391_CR_tra_2011_11 t391_CR_tra_2011_12 t391_CR_tra_2011_01 t391_CR_tra_2011_02 t391_CR_tra_2011_03 t391_CR_tra_2012_04 t391_CR_tra_2012_05 t391_CR_tra_2012_06 t391_CR_tra_2012_07 t391_CR_tra_2012_08 t391_CR_tra_2012_09 t391_CR_tra_2012_10 t391_CR_tra_2012_10 t391_CR_tra_2012_11 t391_CR_tra_2012_12 t391_CR_tra_2012_01 t391_CR_tra_2012_02 t391_CR_tra_2012_03 t391_CR_tra_2013_04 t391_CR_tra_2013_05 t391_CR_tra_2013_06 t391_CR_tra_2013_07 t391_CR_tra_2013_08 t391_CR_tra_2013_09 t391_CR_tra_2013_10 t391_CR_tra_2013_10 t391_CR_tra_2013_11 t391_CR_tra_2013_12 t391_CR_tra_2013_01 t391_CR_tra_2013_02 t391_CR_tra_2013_03 t391_CR_tra_2014_04 t391_CR_tra_2014_05 t391_CR_tra_2014_06 t391_CR_tra_2014_07 t391_CR_tra_2014_08 t391_CR_tra_2014_09 t391_CR_tra_2014_10 t391_CR_tra_2014_10 t391_CR_tra_2014_11 t391_CR_tra_2014_12 t391_CR_tra_2014_01 t391_CR_tra_2014_02 t391_CR_tra_2014_03 t391_CR_tra_2015_04 t391_CR_tra_2015_05
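For reference, the requirements.txt implied by the pip output above can be reconstructed as below. The git URL for scraperwiki is missing from the log; morph.io scrapers conventionally pin the openaustralia fork's morph_defaults branch, so that line is an assumption, not taken from this repository:

    # requirements.txt as implied by the pip log; the scraperwiki git URL
    # was stripped from the output, so the one below is the usual morph.io
    # default (assumption), not confirmed from this repository.
    git+https://github.com/openaustralia/scraperwiki-python.git@morph_defaults#egg=scraperwiki
    lxml==3.4.4
    cssselect==0.9.1
    beautifulsoup4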

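The identifiers printed by the run follow the pattern t391_CR_tra_<year>_<month>, ordered by UK financial year (April to March, labelled with the starting year). A minimal sketch of how such period codes could be generated and saved with the d/f/l columns that appear in the data table below; this is not the repository's actual code, and the URL is a placeholder:

    # Hypothetical sketch: generate t391_CR_tra_YYYY_MM period codes in
    # financial-year order and save one d/f/l row per document. The real
    # run covers Nov 2010 - May 2015; this loops whole years for brevity.
    import datetime
    import scraperwiki

    ENTITY = 't391_CR_tra'
    FY_MONTHS = [4, 5, 6, 7, 8, 9, 10, 11, 12, 1, 2, 3]  # April-March

    for fy_start in range(2010, 2015):
        for month in FY_MONTHS:
            fname = '%s_%d_%02d' % (ENTITY, fy_start, month)
            url = 'https://example.org/downloads/%s.csv' % fname  # placeholder
            scraperwiki.sqlite.save(
                unique_keys=['l'],
                data={'d': datetime.datetime.now(),  # timestamp of this run
                      'f': fname,                    # file/period identifier
                      'l': url})                     # link to the document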

Downloaded 3 times by MikeRalphson

To download the data, sign in with GitHub. The table can then be downloaded as CSV or as the full SQLite database (29 KB), or queried through the morph.io API.
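For programmatic access, morph.io exposes each scraper's SQLite data over a simple HTTP API. A minimal sketch using requests (already installed above); YOUR_API_KEY is a placeholder for a personal morph.io key, and "data" is morph.io's default table name:

    # Sketch: query this scraper's rows through the morph.io data API.
    # YOUR_API_KEY is a placeholder, not a real credential.
    import requests

    resp = requests.get(
        'https://api.morph.io/blablupcom/t391_CR_tra/data.json',
        params={'key': 'YOUR_API_KEY',
                'query': 'select d, f, l from data limit 10'})
    resp.raise_for_status()
    for row in resp.json():
        print(row)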

rows 10 / 59

Columns: d, f, l (only the d column is shown below)

d
2015-09-07 21:56:25.965553
2015-09-07 21:56:27.110637
2015-09-07 21:56:27.712584
2015-09-07 21:56:28.199289
2015-09-07 21:56:28.493240
2015-09-07 21:56:35.160532
2015-09-07 21:56:35.625282
2015-09-07 21:56:36.264931
2015-09-07 21:56:36.726408
2015-09-07 21:56:37.350752
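The same rows can be inspected locally after downloading the SQLite database. A short sketch, assuming the file is saved as data.sqlite and uses morph.io's default table name data:

    # Sketch: read the downloaded database with Python's standard sqlite3.
    import sqlite3

    conn = sqlite3.connect('data.sqlite')  # path to the downloaded file
    try:
        for d, f, l in conn.execute('SELECT d, f, l FROM data ORDER BY d'):
            print(d, f, l)                 # one row per scraped document
    finally:
        conn.close()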


Average successful run time: 3 minutes

Total run time: 3 minutes

Total CPU time used: less than 5 seconds

Total disk space used: 51 KB


  • Manually ran revision 5a3a4c23 and completed successfully.
    59 records added to the database
    66 pages scraped
  • Created on

Scraper code

