cyp1991 / twitter_scrape

Twitter Scraper Library


This library scrapes Twitter efficiently by using the max_id and since_id parameters instead of the older page-based pagination. For a user timeline it scrapes up to 3,200 tweets, the maximum the Twitter API returns.
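
The cursoring works roughly as in the sketch below. This is only a minimal illustration of the max_id / since_id pattern, not this library's actual code; the endpoint and parameter names follow the old Twitter REST API v1, and the helper name is made up here.

# Illustrative sketch (Python 2 / ScraperWiki-classic era) of max_id cursoring.
# Not code from twitter_scrape; endpoint and field names are assumptions.
import json
import urllib2

def fetch_user_timeline(screen_name, pages=16, per_page=200):
    """Walk backwards through a timeline: 16 pages x 200 tweets = 3,200."""
    tweets = []
    max_id = None
    for _ in range(pages):
        url = ('http://api.twitter.com/1/statuses/user_timeline.json'
               '?screen_name=%s&count=%d' % (screen_name, per_page))
        if max_id is not None:
            # Ask only for tweets older than the oldest one already seen.
            url += '&max_id=%d' % max_id
        page = json.load(urllib2.urlopen(url))
        if not page:
            break
        tweets.extend(page)
        max_id = min(t['id'] for t in page) - 1
    # For incremental runs, since_id works the other way round: pass the
    # newest id already stored to fetch only tweets posted after it.
    return tweets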

You only need two lines of code to scrape a user timeline:

from scraperwiki import swimport
swimport('twitter_scrape').user_timeline(<username>[,<verbose>])
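
For example, with the placeholders filled in (the username is purely illustrative, and the optional second argument is assumed to be a boolean verbose flag):

from scraperwiki import swimport
swimport('twitter_scrape').user_timeline('scraperwiki', True)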

You can also use this library to scrape the statuses of a list, but this appears to only cover around two months of history:

swimport('twitter_scrape').statuses(<username>,<list>[,<verbose>])
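
Filled in the same way (the username and list name here are placeholder values, assuming <list> is the list's name or slug):

from scraperwiki import swimport
swimport('twitter_scrape').statuses('scraperwiki', 'my-list', True)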

Forked from ScraperWiki

Contributors: cyp1991

Last run completed successfully.


Statistics

  • Average successful run time: 1 minute
  • Total run time: 1 minute
  • Total CPU time used: less than 5 seconds
  • Total disk space used: 23.8 KB

History

  • Manually ran revision 01f451b6 and completed successfully; nothing changed in the database.
  • Forked from ScraperWiki

Scraper code

Python

twitter_scrape / scraper.py