Playing around with #rstats twitter data

As a bit of weekend fun, I decided to briefly look into the #rstats twitter data that Stephen Turner collected and made available (thanks!). Essentially, this data set contains some basic information about over 100,000 tweets that contain the hashtag "#rstats", which denotes that a tweeter is tweeting about R.

As a warning, I don't know much about how these data were collected--whether the tweets were gathered at random times during the day or whether collection was biased toward particular times and, therefore, locations. I wouldn't really read too much into this.

Most common co-occurring hashtags
When a tweet uses a hashtag at all, it very often uses more than one. To extract the co-occurring hashtags, I used the following perl script:

#!/usr/bin/perl

while(<>){
    chomp;
    $_ = lc($_);
    $_ =~ s/#rstats//g;
    my @matches;
    push @matches, /(#\w+)/g;   # the /g grabs every remaining hashtag, not just the first
    print join "\n" => @matches if @matches;
}

which uses the regular expression "(#\w+)" to search for hashtags after removing "#rstats" from every tweet.

On the Unix command line, I dumped these other hashtags into a file and counted and sorted them via these commands:

cat data/R-hashtag-data.txt | ./PERL_SCRIPT_ABOVE.pl | tee other-hashtags.txt

sort other-hashtags.txt | uniq -c | sort -n -r > sorted-other-hashtags.txt

After running these commands, I get a list of co-occurring hashtags and their counts, sorted in descending order. The top 10 co-occurring hashtags were as follows (you can see the rest here):

5258 #datascience
1665 #python
1625 #bigdata
1542 #r
1451 #dataviz
1360 #ggplot2
 852 #statistics
 783 #dplyr
 749 #machinelearning
 743 #analytics

Neat-o. The presence of "#python" and "#ggplot2" in the top 10 made me wonder what the top 10 programming-language-related and R-package-related hashtags were. Here they are, respectively:

1665 #python
 423 #d3js (plus 72 for #d3) (plus 2 for #js)
 343 #sas
 312 #julialang (plus 43 for #julia)
 240 #fsharp
 140 #spss  (plus 7 for #ibmspss)
 102 #stata
  75 #matlab
  55 #sql
  38 #java

1360 #ggplot2  (plus 298 for ggplot)  (plus 6 for #gglot2) (plus 4 for #ggpot)
 783 #dplyr
 663 #shiny
 557 #rcpp (plus 22 for rcpp11)
 251 #knitr
 156 #magrittr
 105 #lme4
  93 #ggvis   (plus 11 for #ggivs)
  65 #datatable
  46 #rneo4j

You can view the full list here and here.

I was happy to see my favorite languages (python, perl, clojure, lisp, haskell, c) besides R being represented in the first list. Additionally, most of my favorite packages were fairly well tweeted about--at least as far as hashtags-applied-to-a-package go.

#strangehashtags
Before moving on to the next section, I wanted to share my favorite co-occurring hashtags that I found while sifting through the data: #rcatladies, #rdogfella, #bayesianbootycall, #dontbeaplyrhater, #overlyhonestmethods, #rickshaw (??), #statafail, and #monkeysinfrontoftypewriters.

Most prolific #rstats tweeters
One of the first things I did with these data was a simple aggregation and sort to find the tweeters that used the hashtag most often:

library(dplyr)
THE_DATA %>%
  group_by(User) %>%
  summarise(count = n()) %>%
  arrange(desc(count)) -> prolific.rstats.tweeters

Here are the top 10 (you can see the rest here).

@Rbloggers	1081
@hadleywickham	498
@timelyportfolio	427
@recology_	419
@revodavid	210
@chlalanne	209
@adolfoalvarez	199
@RLangTip	175
@jmgomez	160

Nothing terribly surprising here.

Normalizing by total tweets
In a twitter discussion about these data, a twitter friend, Tim Hopper, posited that though he had fewer #rstats tweets than another mutual friend, Trey Causey, he would come out ahead if you control for total tweet volume. I wondered how this sorting would look.

Answering this question gave me an excuse to use Hadley Wickham's new package, rvest (I literally just got why the package is named as much while typing this out) which makes web scraping easier--in part by leveraging the expressive power of the magrittr package.

To get the total number of tweets for a particular tweeter, I wrote the following function:

library(rvest)
library(magrittr)
get.num.tweets <- function(handle){
  tryCatch({
    unraw <- function(raw_str){
      raw_str <- gsub(",", "", raw_str)    # remove commas if any (gsub removes all of them)
      if(grepl("K", raw_str)){
        return(as.numeric(sub("K", "", raw_str))*1000)   # in thousands
      }
      return(as.numeric(raw_str))
    }
    html(paste0("http://twitter.com/", sub("@", "", handle))) %>%
      html_nodes(".is-active .ProfileNav-value") %>%
      html_text() %>%
      unraw
    },
    error=function(cond){return(NA)})
}

The real logic (and beauty) of the function is contained in just the last few lines:

    html(paste0("http://twitter.com/", sub("@", "", TWITTER_HANDLE))) %>%
      html_nodes(".is-active .ProfileNav-value") %>%
      html_text()

The CSS element that houses the number of total tweets from a useR's twitter page was found easily using SelectorGadget.

After scraping the number of tweets for almost 10,000 #rstats tweeters (waiting a few seconds between each request because I'm considerate), I divided the number of #rstats tweets by the total number of tweets to come up with a normalized value.

The top 10 tweeteRs were as follows:

              User count num.of.tweets     ratio 
1     @medzihorsky     9            28 0.3214286 
2        @statworx     5            16 0.3125000 
3    @LearnRinaDay   114           404 0.2821782 
4  @RforExcelUsers     4            15 0.2666667 
5     @showmeshiny    27           102 0.2647059 
6           @tcrug     6            25 0.2400000 
7   @DailyRpackage   155           666 0.2327327 
8   @R_Programming    49           250 0.1960000 
9        @hexadata     8            41 0.1951220 
10     @Deep_RHelp    11            58 0.1896552 

In case you were wondering, Trey Causey still "won" by a long shot:

> tweeters[which(tweeters$User=="@tdhopper"),]   
Source: local data frame [1 x 4]                 
                                                 
       User count num.of.tweets        ratio     
1 @tdhopper     8         26700 0.0002996255     
> tweeters[which(tweeters$User=="@treycausey"),] 
Source: local data frame [1 x 4]                 
                                                 
         User count num.of.tweets      ratio     
1 @treycausey    50         28700 0.00174216

Before ending this post, I feel compelled to issue an almost certainly unnecessary but customary warning against using number of #rstats tweets as a proxy for who likes R the most or who are the biggest R "thought leaders" (whatever that is). Most tweets about R don't use the #rstats hashtag, anyway.

Again, I wouldn't read too much into this :)


How to make an absurd twitter bot in python

In my last post, I outlined the steps I took to programmatically mimic the wine reviews of a dilettante sommelier. In this post, I'll explain the steps I took to create the twitter bot @HorseWineReview, which combines a random wine with a random computer-generated review. I'll keep it short and sweet–the steps are as follows:

  • get a list of wines (from Freebase)
  • create a twitter account and application
  • write script to create and post the tweet
  • automate it with a cron job

Get a list of wines (from Freebase)
Freebase is a collaborative knowledge base that uses a graph database to store semantic information. Information can be retrieved by running MQL queries against their web interface or through a Google-powered API. We'll test a query first in their online query editor, and then move to accessing the results from the API once we've verified that the query is constructed properly.
The query is very simple; it looks like this:

[{
  "name": null,
  "type": "/wine/wine"
}]

This will return a list of names of all entities of type "/wine/wine" in JSON format. This is an excerpt:

{
  "result": [
    {
      "type": "/wine/wine",
      "name": "1999 Domaine Romanee Conti La Tache"
    },
     .............

Now that we’ve confirmed that this query works and the syntax is correct, let's get the results by using the Google Freebase API. If you don't have one already, you need to get a Google Developers account. From the Google Developers Console, you need to grab a Freebase API key. After that, we can write and run a python script to retrieve and dump out all the wine names. You need the python module "freebase" which can be installed via pip. The code goes thusly:

#!/usr/bin/python
import freebase
import json
import urllib

api_key = "YOUR KEY HERE"
service_url = 'https://www.googleapis.com/freebase/v1/mqlread'

freebase_query = [{'name': None,
                   'limit': 999999999999999,
                   "type": "/wine/wine"}]

params = {"query": json.dumps(freebase_query),
          "key": api_key}

url = service_url + "?" + urllib.urlencode(params)
response = json.loads(urllib.urlopen(url).read())

for item in response['result']:
    print item['name'].encode('utf-8')

Run this script and redirect its output to a file...

This stores a wine name on each line of the file.

Create a twitter account for your bot and register an application
After following the standard procedure of setting up a twitter account, head over to dev.twitter.com and set up a developer account on behalf of your bot. Then head over to apps.twitter.com to create a new application; this will be the conduit through which we'll programmatically update your twitter bot's status. Make sure you read the Terms of Service and don't violate them, and make sure you allow your application read and write access. After this, if you navigate to the "API Keys" tab, you should record the following information: the API key, API secret, access token, and access token secret. We'll need these to authenticate from our auto-posting python script.

Write script to tweet
We'll use the Twython package to authenticate and serve as an interface to twitter.

A bare bones script to perform this task looks like this:

#!/usr/bin/python

import subprocess
import random
import re
import os
from twython import Twython

twitter = Twython("YOUR API KEY",
                  "YOUR API SECRET",
                  "YOUR ACCESS TOKEN",
                  "YOUR ACCESS TOKEN SECRET")

def output_tweet(text):
    twitter.update_status(status=text)
    os._exit(0)

lengthoftweet = 999
# string to build tweet
tweet = ""

while lengthoftweet > 140:
    tweet = ""
    all_names = [wine.rstrip() for wine
                  in open("ABSOLUTE PATH TO WINE LIST FILE").read().split("\n")]
    wine_name = random.choice(all_names)
    tweet += wine_name + ": "
    rev = subprocess.check_output("ABSOLUTE PATH TO REVIEW GENERATOR")
    # we don't want a review to imply that the wine is either
    # red or white
    if "red" in rev:
        continue
    if "white" in rev:
        continue
    if len(tweet + rev.rstrip()) < 141:
        output_tweet(tweet + rev.rstrip())
    # if it is too long, try to use the first
    # sentence only (using regex)
    rev = re.search("(.+?\.).*", rev).group(1)
    tweet += rev.rstrip()
    lengthoftweet = len(tweet)
    
output_tweet(tweet)
os._exit(0)

A lot of the code is to ensure that the tweet doesn't exceed 140 characters. You might want to add a logging feature to the script–it will help with debugging the cron job.
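
If you do add logging, Python's standard logging module is all you need. Here's a minimal sketch (the log path is a placeholder, and it assumes the twitter object and imports from the script above):

import logging   # add to the imports at the top of the script

# log to an absolute path so the file is easy to find when cron runs the script
logging.basicConfig(filename="/ABSOLUTE/PATH/TO/TWEETBOT.log",
                    level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def output_tweet(text):
    logging.info("tweeting: %s", text)
    twitter.update_status(status=text)   # 'twitter' is the Twython object defined above
    os._exit(0)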

Automate it with a cron job
If you're on a Unix system, you can set up this script to run at specified times automatically using a cron job. Windows users can use the Windows Task Scheduler, but I've never used it, so you're on your own.

Cron jobs are notoriously hard to debug. The #1 problem encountered is not using absolute paths. When cron calls your script, the working directory is not where the script resides (unlike when you call the script on the command line from the same directory). Because of this, any file IO in the script that uses relative paths will fail (quietly, if you don't add logging to the script).
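
One way to sidestep this inside the Python script itself is to build file paths off of the script's own location instead of the working directory. This is just a generic sketch (the file name is hypothetical), not something from the original bot:

import os

# directory the script lives in, regardless of where cron invokes it from
script_dir = os.path.dirname(os.path.abspath(__file__))
wine_list_path = os.path.join(script_dir, "wines.txt")   # hypothetical file name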

The #2 problem you may encounter with cron jobs is that, because cron runs your script as a detached process outside the login environment, the shell it executes from may not be the one you normally use. Furthermore, it may not have the directories in its PATH that you need, or any other environment variables that you depend on.

The #3 problem that often crops up in botched cron jobs is bad permissions. Make sure you have all the correct permissions (e.g., chmod +x YOUR_SCRIPT.py).

One final problem that I encountered was not putting an empty line after the cron entry–learn from my mistakes!

To add a cron entry, execute
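
crontab -e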

in the shell.

In the editor, I wrote an entry along these lines (with placeholder paths):
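
00 9,14,21 * * * /usr/bin/python /ABSOLUTE/PATH/TO/TWEET_SCRIPT.py >> /ABSOLUTE/PATH/TO/CRON.log 2>&1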

Your paths will obviously depend on what you are running and from where. Make sure you have an empty line after the entry!

To briefly explain this entry…

The first two numbers (00) specify the minute (0-59) to run the job. I chose to run it on the first (zeroth) minute of the hour. The second section (9,14,21) specifies the hour and tells cron to run the job at 9:00 am, 2:00 pm, and 9:00 pm. The next three sections (the asterisks) specify "day of the month" (1-31), "month of the year" (1-12), and "day of the week" (0-7), respectively. The asterisks indicate that the job is to run every day of the month, every month of the year, and every day of the week.

The next two strings instruct cron to run the tweet script written above, and the rest of the line redirects any output from logging to a text file called CRON.log.

First endnote: Why @HorseWineReviews?
The name pays homage to the infamous twitter bot @Horse_ebooks that (used to) post (unintentionally hilarious) context-free excerpts and Markov chain clumps from books about horses in a (successful) effort to avoid looking like a spam account whilst occasionally tweeting links to promote the sales of e-books.

Final endnote
While the task of tweeting fake wine reviews has already been taken, if you are looking for ideas for a twitter bot of your own, dear reader, you might want to explore the following ideas that I think would be a hit:

  • Train a Markov chain on equal parts famous philosophical works and vacuous and decidedly un-philosophical ramblings (a la KimKierkegaardashian and Kantye West). Philosophical corpora can be grabbed from Project Gutenberg. Vacuous babble can probably be obtained from choice subreddits or most of the trending hashtags on twitter. (A bare-bones sketch of the Markov chain part follows this list.)
  • Web-scrape a huge corpus of episode descriptions from various television shows, train a Markov chain on them and let it loose on Twitter. You can get episode descriptions from tvrage.com (example)
  • Train a Markov chain on the abstracts of academic papers.
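
If you want to play with the Markov chain part referenced in the first idea above, a bare-bones word-level chain takes only a handful of lines of Python. This is a rough sketch--the corpus path is a placeholder, and a real bot would want smarter tokenization:

import random
from collections import defaultdict

def train_markov(text, order=2):
    # map each run of 'order' consecutive words to the words observed to follow it
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def babble(model, n_words=30):
    # random-walk the chain to generate nonsense
    state = random.choice(list(model.keys()))
    out = list(state)
    for _ in range(n_words):
        followers = model.get(state)
        if not followers:
            break
        out.append(random.choice(followers))
        state = tuple(out[-len(state):])
    return " ".join(out)

corpus = open("PATH TO YOUR MIXED CORPUS").read()
print(babble(train_markov(corpus)))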

You’re welcome :)


Visualizing data analysis pipelines using NetworkX

In complicated data analysis pipelines and scientific workflows, it's often difficult to keep track of which tasks have to be performed before others. Even with informal forms of documentation (my personal favorite is 'notes.txt'), as the size of a project grows, and more dependencies are introduced, a formal documentation process has to be put in place, or else the project will become unsustainable.

I'm writing an automated system for statisticians and scientists for carrying out large multistep analytics processes. I'll discuss this more in later posts, but the details pertinent to this post are that each step of an analytics pipeline is detailed in a YAML document called a "Sakefile" (a not-so-clever play on Makefile) with sections explicitly defining dependencies and resulting output files.

Given that dependency resolution draws on concepts from graph theory (topological sorting, specifically), I thought it would be easy and neat to write a tool that visualizes the components and dependencies that go into an analytics workflow as a directed graph.
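
For example, once the steps are encoded as a directed graph, NetworkX can hand you a valid execution order directly via topological sorting. Here's a small illustrative sketch using the target names from the example below (this isn't part of the visualization script itself):

import networkx as nx

# edges point from a step to the step that depends on its output
g = nx.DiGraph()
g.add_edge("format dui stats", "find correlates")
g.add_edge("fetch teen stats", "convert teen stats to csv")
g.add_edge("convert teen stats to csv", "find correlates")

# a topological sort gives an order in which the steps can safely be run,
# e.g. ['format dui stats', 'fetch teen stats', 'convert teen stats to csv', 'find correlates']
print(list(nx.topological_sort(g)))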

I've rustled up a simple example examining correlates of DUI arrests with various adolescent-related data by state. I chose these data sets because they’re very small and freely available on the net.

The "Sakefile" looks like this:

---
format dui stats:
    help: format raw (copy and pasted) dui/state data using perl
    dependencies:
        - rawdata.txt
    formula: >
        perl -pe 's/^(\D+)\s+([\d,]+)\s+([\d,]+)\s*/\1\t\2\t\3\n/'
        rawdata.txt | sed 's/,//g' > duistats.tsv;
    output:
        - duistats.tsv

fetch teen stats:
    help: fetches various teen statistics from the web
    # no dependencies
    formula: >
        curl -o teenstats.xls http://mathforum.org/workshops/sum96/data.collections/datalibrary/US_TeenStats.XL.zip.xls;
    output:
        - teenstats.xls

convert teen stats to csv:
    help: uses gnumeric's ssconvert to convert the ugly xls to csv and cleans it
    dependencies:
        - teenstats.xls
    formula: >
        ssconvert teenstats.xls messyteenstats.csv;
        cat <(echo -n "state") <(< messyteenstats.csv sed '55,$d' |
        sed '1,2d') | sed 's/,,/,/g' > teenstats.csv;
        rm messyteenstats.csv;
    output:
        - teenstats.csv

find correlates:
    help: calls R script that finds correlates of DUI arrests in various teen statistics
    dependencies:
        - duistats.tsv
        - teenstats.csv
    formula: >
        ./dui-correlates.R
    output:
        - corrogram.png
        - table.csv

all:
    - format dui stats
    - fetch teen stats
    - convert teen stats to csv
    - find correlates
...

A short description of each of the steps appears in the "help" field of each entry. Basically, there are two source data files: one exists as raw text copied and pasted from a website, and the other is fetched from the web using curl. The former is cleaned and formatted using perl and sed; the latter has to go through a process that converts the downloaded excel file into a CSV and strips useless lines. Both of these source data files then get read by an R script which, ultimately, outputs a corrogram graphic and a summarization table.

Below is the small python program that parses the "Sakefile" and creates the visualization. It uses the great NetworkX module to create the graph and render it as an image.

#!/usr/bin/env python -tt

import matplotlib.pyplot as plt
import networkx as nx
import yaml

sakefile = yaml.load(open("Sakefile.yaml").read())

G = nx.DiGraph()

def check_for_dep_in_outputs(dep):
    print "checking dep {}".format(dep)
    ret_list = []
    for node in G.nodes(data=True):
        if "output" not in node[1]:
            continue
        if dep in node[1]['output']:
            ret_list.append(node[0])
    return ret_list

# make graph nodes for each target
for target in sakefile:
    if target == "all":
        # we don't want this node
        continue
    G.add_node(target, sakefile[target])


for node in G.nodes(data=True):
    print "checking node {} for dependencies".format(node[0])
    if "dependencies" not in node[1]:
        continue
    print "it has dependencies"
    connects = []
    for dep in node[1]['dependencies']:
        matches = check_for_dep_in_outputs(dep)
        if not matches:
            continue
        for match in matches:
            connects.append(match)
    if connects:
        for connect in connects:
            G.add_edge(connect, node[0])


nx.draw(G, node_color="pink", node_size=10000)
plt.savefig("dependency-visualization.png")

The resulting visualization looks like this:

[image: dependency-visualization.png]

Sure, the arrows look weird and this is a really simple example, but it's easy to see that, even for the most byzantine of pipelines, a visualization like this can really help you get a sense of all the actions involved in a workflow.

I'll go over the actual running and results of this example in a later post, when I get the "sake" system working properly. :)
