Injecting configuration and compiling...
-----> Python app detected
-----> Installing python-2.7.14
-----> Installing pip
-----> Installing requirements with pip
       Obtaining scraperwiki from git+http://github.com/openaustralia/scraperwiki-python.git@morph_defaults#egg=scraperwiki (from -r /tmp/build/requirements.txt (line 1))
         Cloning http://github.com/openaustralia/scraperwiki-python.git (to morph_defaults) to /app/.heroku/src/scraperwiki
       Collecting lxml==3.4.4 (from -r /tmp/build/requirements.txt (line 2))
         Downloading lxml-3.4.4.tar.gz (3.5MB)
       Collecting cssselect==0.9.1 (from -r /tmp/build/requirements.txt (line 3))
         Downloading cssselect-0.9.1.tar.gz
       Collecting beautifulsoup4 (from -r /tmp/build/requirements.txt (line 4))
         Downloading beautifulsoup4-4.6.0-py2-none-any.whl (86kB)
       Collecting dumptruck>=0.1.2 (from scraperwiki->-r /tmp/build/requirements.txt (line 1))
         Downloading dumptruck-0.1.6.tar.gz
       Collecting requests (from scraperwiki->-r /tmp/build/requirements.txt (line 1))
         Downloading requests-2.18.4-py2.py3-none-any.whl (88kB)
       Collecting idna<2.7,>=2.5 (from requests->scraperwiki->-r /tmp/build/requirements.txt (line 1))
         Downloading idna-2.6-py2.py3-none-any.whl (56kB)
       Collecting urllib3<1.23,>=1.21.1 (from requests->scraperwiki->-r /tmp/build/requirements.txt (line 1))
         Downloading urllib3-1.22-py2.py3-none-any.whl (132kB)
       Collecting certifi>=2017.4.17 (from requests->scraperwiki->-r /tmp/build/requirements.txt (line 1))
         Downloading certifi-2018.1.18-py2.py3-none-any.whl (151kB)
       Collecting chardet<3.1.0,>=3.0.2 (from requests->scraperwiki->-r /tmp/build/requirements.txt (line 1))
         Downloading chardet-3.0.4-py2.py3-none-any.whl (133kB)
       Installing collected packages: dumptruck, idna, urllib3, certifi, chardet, requests, scraperwiki, lxml, cssselect, beautifulsoup4
         Running setup.py install for dumptruck: started
         Running setup.py install for dumptruck: finished with status 'done'
         Running setup.py develop for scraperwiki
         Running setup.py install for lxml: started
         Running setup.py install for lxml: still running...
         Running setup.py install for lxml: finished with status 'done'
         Running setup.py install for cssselect: started
         Running setup.py install for cssselect: finished with status 'done'
       Successfully installed beautifulsoup4-4.6.0 certifi-2018.1.18 chardet-3.0.4 cssselect-0.9.1 dumptruck-0.1.6 idna-2.6 lxml-3.4.4 requests-2.18.4 scraperwiki urllib3-1.22

-----> Discovering process types
       Procfile declares types -> scraper
Injecting scraper and running...
Traceback (most recent call last):
  File "scraper.py", line 95, in <module>
    html = requests.get(url)
  File "/app/.heroku/python/lib/python2.7/site-packages/requests/api.py", line 72, in get
    return request('get', url, params=params, **kwargs)
  File "/app/.heroku/python/lib/python2.7/site-packages/requests/api.py", line 58, in request
    return session.request(method=method, url=url, **kwargs)
  File "/app/.heroku/python/lib/python2.7/site-packages/requests/sessions.py", line 508, in request
    resp = self.send(prep, **send_kwargs)
  File "/app/.heroku/python/lib/python2.7/site-packages/requests/sessions.py", line 618, in send
    r = adapter.send(request, **kwargs)
  File "/app/.heroku/python/lib/python2.7/site-packages/requests/adapters.py", line 506, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='www3.halton.gov.uk', port=443): Max retries exceeded with url: /Pages/councildemocracy/opendata/Payments-over-500.aspx (Caused by SSLError(SSLError(1, u'[SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:661)'),))
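
The `[SSL: UNKNOWN_PROTOCOL]` failure above means the TLS handshake never completed — commonly because the client's OpenSSL is too old for the server's TLS configuration, or because the endpoint on port 443 is not actually speaking TLS. One minimal, defensive sketch is to catch the `requests.exceptions.SSLError` at the call site in `scraper.py` and retry over plain HTTP; the function name `fetch_with_fallback`, the injectable `getter` parameter, and the fallback behaviour itself are illustrative assumptions, not part of the original scraper:

```python
import requests


def fetch_with_fallback(url, getter=requests.get):
    """Try the URL as given; on a TLS handshake failure, retry over plain HTTP.

    `getter` is injectable so the logic can be tested without a network;
    it defaults to requests.get. Falling back to unencrypted HTTP loses
    transport security, so this is a last resort for scraping public data.
    """
    try:
        return getter(url)
    except requests.exceptions.SSLError:
        if url.startswith("https://"):
            # Retry the same path on the unencrypted scheme.
            return getter("http://" + url[len("https://"):])
        raise
```

Whether the fallback is acceptable depends on the data: for a public open-data page it may be fine, but the better fix is usually to run the scraper on a platform with a newer OpenSSL so the HTTPS handshake succeeds in the first place.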