#!/usr/bin/env python

import scraperwiki
import lxml.html
import uuid
import datetime

# Blank Python

ASINS = ["B00C6Q1Z6E", "B00CQHZ2LW", "B00DF0ZP8Y", "B00DSDYE3A", "B00C6Q9688"]
summary = ""

for asin in ASINS:
    url = "http://www.amazon.com/dp/" + asin
    html = scraperwiki.scrape(url)
    root = lxml.html.fromstring(html)
    # Take only the first matching title and price element on the page.
    for title in root.cssselect("span[id='btAsinTitle']"):
        summary += title.text + ": "
        break
    for price in root.cssselect("span[id='actualPriceValue'] b"):
        summary += price.text + "\n"
        break
    summary += url + "\n"

now = datetime.datetime.now()
data = {
    'link': "http://www.amazon.com/" + "&uuid=" + str(uuid.uuid1()),
    'title': "Price Monitoring " + str(now),
    'description': summary,
    'pubDate': str(now),
}
scraperwiki.sqlite.save(unique_keys=['link'], data=data)
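The extraction step above depends on scraperwiki and live Amazon pages, so it cannot be run offline. The sketch below illustrates the same logic against a canned HTML snippet: the `SAMPLE_HTML` string, the `ProductParser` class, and its field names are assumptions for this illustration (only the two span ids are taken from the scraper), and it is written for Python 3 using the standard library's `html.parser` instead of lxml.

```python
from html.parser import HTMLParser

# Canned stand-in for a product page; the real scraper fetches this
# HTML from http://www.amazon.com/dp/<ASIN> via scraperwiki.scrape.
SAMPLE_HTML = """
<html><body>
  <span id="btAsinTitle">Example Product</span>
  <span id="actualPriceValue"><b>$19.99</b></span>
</body></html>
"""

class ProductParser(HTMLParser):
    """Collects the title span's text and the <b> text inside the price span,
    mirroring the cssselect queries span[id='btAsinTitle'] and
    span[id='actualPriceValue'] b."""

    def __init__(self):
        super().__init__()
        self.title = None
        self.price = None
        self._capture = None        # which field the next data chunk fills
        self._in_price_span = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "span" and attrs.get("id") == "btAsinTitle":
            self._capture = "title"
        elif tag == "span" and attrs.get("id") == "actualPriceValue":
            self._in_price_span = True
        elif tag == "b" and self._in_price_span:
            self._capture = "price"

    def handle_endtag(self, tag):
        if tag == "span":
            self._in_price_span = False
        self._capture = None

    def handle_data(self, data):
        # Keep only the first match, like the scraper's `break` after one hit.
        if self._capture == "title" and self.title is None:
            self.title = data.strip()
        elif self._capture == "price" and self.price is None:
            self.price = data.strip()

parser = ProductParser()
parser.feed(SAMPLE_HTML)
summary = "%s: %s\n" % (parser.title, parser.price)
```

As in the scraper, one title/price pair is appended per page before moving on to the next ASIN.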

Contributors

etraderz

Last run failed with status code 255.

Console output of last run

Injecting configuration and compiling...
-----> Python app detected
 !     The latest version of Python 2 is python-2.7.14 (you are using python-2.7.6, which is unsupported).
 !     We recommend upgrading by specifying the latest version (python-2.7.14).
       Learn More: https://devcenter.heroku.com/articles/python-runtimes
-----> Installing python-2.7.6
-----> Installing pip
-----> Installing requirements with pip
       /tmp/buildpacks/04_buildpack-python/bin/steps/pip-install: line 10: /app/.heroku/python/bin/pip: No such file or directory
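The log shows the buildpack installing the unsupported python-2.7.6 and then failing because pip was never provided for that runtime, which is what produces exit status 255. One plausible fix, following the log's own recommendation, is to pin a supported interpreter in a `runtime.txt` file at the repository root (this assumes the scraper is built with the Heroku-style Python buildpack that reads `runtime.txt`, which the log messages suggest):

```
python-2.7.14
```

With a supported runtime pinned, the buildpack should install pip and proceed to the `Installing requirements with pip` step.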

Statistics

Average successful run time: half a minute

Total run time: about 19 hours

Total cpu time used: 16 minutes

Total disk space used: 21 KB

History

  • Auto ran revision 5ad93387 and failed.
    nothing changed in the database
  • Auto ran revision 5ad93387 and failed.
    nothing changed in the database
  • Auto ran revision 5ad93387 and failed.
    nothing changed in the database
  • Auto ran revision 5ad93387 and failed.
    nothing changed in the database
  • Auto ran revision 5ad93387 and failed.
    nothing changed in the database
  • ...
  • Created on morph.io


Scraper code

Python

me / scraper.py