tubaman / travis-uslakes-info

Get the level of Lake Travis


This is a scraper that runs on morph.io. It reads the current water level of Lake Travis in Austin, TX, USA.
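A scraper like this typically fetches the lake-level page and pulls the reading out of the HTML. The sketch below is illustrative only: the sample markup, the regular expression, and the record layout are assumptions matching the table schema shown on this page (timestamp, unit, level), not this scraper's actual code.

```python
import re
from datetime import datetime, timezone

def parse_level(html):
    """Extract a lake-level reading from a page snippet.

    The markup pattern here is hypothetical; a real lake-levels
    page will need its own selector or regex.
    """
    m = re.search(r'([\d.]+)\s*(feet(?:\s+msl)?)', html, re.IGNORECASE)
    if not m:
        return None
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "unit": m.group(2).upper(),
        "level": float(m.group(1)),
    }

# Hypothetical page fragment, shaped like the data this scraper stores
sample = "<td>Lake Travis: 628.61 feet MSL</td>"
record = parse_level(sample)
```

On morph.io the returned record would then be persisted with the scraperwiki library, which is what produces the SQLite table shown below.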

Contributors: tubaman, mlandauer

Last run completed successfully.

Console output of last run

Injecting configuration and compiling...
-----> Python app detected
 !     The latest version of Python 2 is python-2.7.14 (you are using python-2.7.9, which is unsupported).
 !     We recommend upgrading by specifying the latest version (python-2.7.14).
       Learn More: https://devcenter.heroku.com/articles/python-runtimes
-----> Installing python-2.7.9
-----> Installing pip
-----> Installing requirements with pip
       Obtaining scraperwiki from git+http://github.com/openaustralia/scraperwiki-python.git@morph_defaults#egg=scraperwiki (from -r /tmp/build/requirements.txt (line 6))
       Cloning http://github.com/openaustralia/scraperwiki-python.git (to revision morph_defaults) to /app/.heroku/src/scraperwiki
       Collecting lxml==3.4.4 (from -r /tmp/build/requirements.txt (line 8))
       Downloading https://files.pythonhosted.org/packages/63/c7/4f2a2a4ad6c6fa99b14be6b3c1cece9142e2d915aa7c43c908677afc8fa4/lxml-3.4.4.tar.gz (3.5MB)
       Collecting cssselect==0.9.1 (from -r /tmp/build/requirements.txt (line 9))
       Downloading https://files.pythonhosted.org/packages/aa/e5/9ee1460d485b94a6d55732eb7ad5b6c084caf73dd6f9cb0bb7d2a78fafe8/cssselect-0.9.1.tar.gz
       Collecting beautifulsoup4==4.3.2 (from -r /tmp/build/requirements.txt (line 12))
       Downloading https://files.pythonhosted.org/packages/30/bd/5405ba01391d06646de9ec90cadeb3893fa355a06438966afff44531219a/beautifulsoup4-4.3.2.tar.gz (143kB)
       Collecting python-dateutil==2.2 (from -r /tmp/build/requirements.txt (line 13))
       Downloading https://files.pythonhosted.org/packages/75/c5/85d027471fa665f8c8b8eb0b925f9d84b4eee745a257b16de4957de99e81/python-dateutil-2.2.tar.gz (259kB)
       Collecting dumptruck>=0.1.2 (from scraperwiki->-r /tmp/build/requirements.txt (line 6))
       Downloading https://files.pythonhosted.org/packages/15/27/3330a343de80d6849545b6c7723f8c9a08b4b104de964ac366e7e6b318df/dumptruck-0.1.6.tar.gz
       Collecting requests (from scraperwiki->-r /tmp/build/requirements.txt (line 6))
       Downloading https://files.pythonhosted.org/packages/65/47/7e02164a2a3db50ed6d8a6ab1d6d60b69c4c3fdf57a284257925dfc12bda/requests-2.19.1-py2.py3-none-any.whl (91kB)
       Collecting six (from python-dateutil==2.2->-r /tmp/build/requirements.txt (line 13))
       Downloading https://files.pythonhosted.org/packages/67/4b/141a581104b1f6397bfa78ac9d43d8ad29a7ca43ea90a2d863fe3056e86a/six-1.11.0-py2.py3-none-any.whl
       Collecting idna<2.8,>=2.5 (from requests->scraperwiki->-r /tmp/build/requirements.txt (line 6))
       Downloading https://files.pythonhosted.org/packages/4b/2a/0276479a4b3caeb8a8c1af2f8e4355746a97fab05a372e4a2c6a6b876165/idna-2.7-py2.py3-none-any.whl (58kB)
       Collecting certifi>=2017.4.17 (from requests->scraperwiki->-r /tmp/build/requirements.txt (line 6))
       Downloading https://files.pythonhosted.org/packages/7c/e6/92ad559b7192d846975fc916b65f667c7b8c3a32bea7372340bfe9a15fa5/certifi-2018.4.16-py2.py3-none-any.whl (150kB)
       Collecting urllib3<1.24,>=1.21.1 (from requests->scraperwiki->-r /tmp/build/requirements.txt (line 6))
       Downloading https://files.pythonhosted.org/packages/bd/c9/6fdd990019071a4a32a5e7cb78a1d92c53851ef4f56f62a3486e6a7d8ffb/urllib3-1.23-py2.py3-none-any.whl (133kB)
       Collecting chardet<3.1.0,>=3.0.2 (from requests->scraperwiki->-r /tmp/build/requirements.txt (line 6))
       Downloading https://files.pythonhosted.org/packages/bc/a9/01ffebfb562e4274b6487b4bb1ddec7ca55ec7510b22e4c51f14098443b8/chardet-3.0.4-py2.py3-none-any.whl (133kB)
       Installing collected packages: dumptruck, idna, certifi, urllib3, chardet, requests, scraperwiki, lxml, cssselect, beautifulsoup4, six, python-dateutil
       Running setup.py install for dumptruck: started
       Running setup.py install for dumptruck: finished with status 'done'
       Running setup.py develop for scraperwiki
       Running setup.py install for lxml: started
       Running setup.py install for lxml: still running...
       Running setup.py install for lxml: finished with status 'done'
       Running setup.py install for cssselect: started
       Running setup.py install for cssselect: finished with status 'done'
       Running setup.py install for beautifulsoup4: started
       Running setup.py install for beautifulsoup4: finished with status 'done'
       Running setup.py install for python-dateutil: started
       Running setup.py install for python-dateutil: finished with status 'done'
       Successfully installed beautifulsoup4-4.3.2 certifi-2018.4.16 chardet-3.0.4 cssselect-0.9.1 dumptruck-0.1.6 idna-2.7 lxml-3.4.4 python-dateutil-2.2 requests-2.19.1 scraperwiki six-1.11.0 urllib3-1.23
-----> Discovering process types
       Procfile declares types -> scraper
Injecting scraper and running...
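The build log pins exact package versions and reports which requirements.txt line each entry came from. A requirements.txt consistent with that log would look roughly like the following; only the entries named in the log are shown, and the file's other lines are unknown:

```text
# Line numbers are those reported by pip in the build log above.
# (The editable -e form matches the log's "setup.py develop" step.)
-e git+http://github.com/openaustralia/scraperwiki-python.git@morph_defaults#egg=scraperwiki  # line 6
lxml==3.4.4             # line 8
cssselect==0.9.1        # line 9
beautifulsoup4==4.3.2   # line 12
python-dateutil==2.2    # line 13
```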

Data

Downloaded 69735 times by philipnye, tubaman, kyleledbetter and MikeRalphson



Showing 10 of 1050 rows

timestamp                  unit      level
2014-07-18T14:00:00+00:00  FEET      628.61
2014-07-19T10:00:00+00:00  FEET      628.59
2014-07-20T12:00:00+00:00  FEET      628.57
2014-07-20T18:00:00+00:00  FEET      628.49
2014-07-22T12:00:00+00:00  FEET      628.42
2014-07-23T00:00:00+00:00  FEET      628.35
2014-07-24T02:00:00+00:00  FEET      628.31
2014-07-26T13:00:00+00:00  Feet MSL  628.16
2014-07-26T22:00:00+00:00  Feet MSL  628.07
2014-07-28T17:00:00+00:00  Feet MSL  627.92
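Each row is one reading with an ISO-8601 timestamp, so the most recent level can be pulled straight from the downloaded SQLite file with a single ORDER BY. A minimal sketch, assuming the morph.io default table name `data` (the real table name may differ); an in-memory copy of the schema stands in for the downloaded file:

```python
import sqlite3

# Build an in-memory table with the same layout as the scraper's output.
# "data" is assumed to be the table name; check the downloaded database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (timestamp TEXT, unit TEXT, level REAL)")
conn.executemany(
    "INSERT INTO data VALUES (?, ?, ?)",
    [
        ("2014-07-18T14:00:00+00:00", "FEET", 628.61),
        ("2014-07-28T17:00:00+00:00", "Feet MSL", 627.92),
    ],
)

# ISO-8601 timestamps sort lexicographically, so string ordering
# is also chronological ordering.
latest = conn.execute(
    "SELECT timestamp, level FROM data ORDER BY timestamp DESC LIMIT 1"
).fetchone()
```

Against the real download, replace `":memory:"` with the path to the SQLite file and drop the INSERTs.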

Statistics

Average successful run time: 2 minutes

Total run time: about 2 months

Total cpu time used: 10 minutes

Total disk space used: 105 KB

History

  • Auto ran revision 0be3e388 and completed successfully.
    1 record added in the database
  • Auto ran revision 0be3e388 and completed successfully.
    1 record added in the database
  • Auto ran revision 0be3e388 and completed successfully.
    1 record added in the database
  • Auto ran revision 0be3e388 and completed successfully.
    1 record added in the database
    1 page scraped
  • Auto ran revision 0be3e388 and completed successfully.
    1 record added in the database
  • ...
  • Created on morph.io


Scraper code

Python

travis-uslakes-info / scraper.py