Update 31 January: I've folded source_GitHubData into the repmis package. See this post.
Update 7 January 2013: I updated the internal workings of source_GitHubData so that it now relies on httr rather than RCurl. It is also more directly descended from devtools' source_url command.
This has two advantages.
- Shortened URLs can be used instead of the data sets' full GitHub URLs.
- The ssl.verifypeer issue is resolved. (Though please let me know if you have problems.)
The post has been rewritten to reflect these changes.
In previous posts I've discussed how to download data stored in plain-text data files (e.g. CSV, TSV) on GitHub directly into R.
Not sure why it took me so long to get around to this, but I've finally created a little function that simplifies the process of downloading plain-text data from GitHub. It's called source_GitHubData. (The name mimics the devtools syntax for functions like source_gist and source_url. The function's syntax is actually just a modified version of source_url.)
The function is stored in a GitHub Gist HERE (it's also at the end of this post). You can load it directly into R with devtools' source_gist command.
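In essence, the function downloads the file with httr's GET and passes the returned text to read.table. Here is a minimal sketch of that approach (just a sketch; see the Gist for the actual code):

# A sketch of the approach: fetch the raw file over https with httr's
# GET, then parse the returned text with read.table
library(httr)

source_GitHubData <- function(url, sep = ",", header = TRUE) {
  request <- GET(url)
  stop_for_status(request)  # stop early if the download failed
  handle <- textConnection(content(request, as = "text"))
  on.exit(close(handle))
  read.table(handle, sep = sep, header = header)
}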
Here is an example of how to use the function to download the electoral disproportionality data I discussed in an earlier post.
# Load source_GitHubData
library(devtools)
# The function's gist ID is 4466237
source_gist("4466237")
# Create Disproportionality data UrlAddress object
# Make sure the URL is for the "raw" version of the file
# The URL was shortened using bitly
UrlAddress <- "http://bit.ly/Ss6zDO"
# Download data
Data <- source_GitHubData(url = UrlAddress)
# Show Data variable names
names(Data)
## [1] "country" "year" "disproportionality"
There you go.
Note that the function is set by default to load comma-separated data (CSV). This can easily be changed with the sep argument.
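For example, to download tab-separated (TSV) data instead (the shortened URL here is hypothetical):

# Read a tab-separated file by setting sep; the URL is made up
TsvData <- source_GitHubData(url = "http://bit.ly/SomeTsvFile", sep = "\t")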
Comments
source_GitHubData("https://docs.google.com/spreadsheet/pub?key=0Agz-ZYJ5rH_WdG9oR2Y3T3U1Y3I5YlgzUmNBSVFrRUE&single=true&gid=0&output=csv")
A function to get data from Google Spreadsheets could be a useful addition to your package IMHO. People also use Dropbox, so that could be another addition. Curious to know your thoughts on that.
Also, your download method seems better than the one I was using with RCurl, because RCurl's getURL() needs ssl.verifypeer=FALSE to work properly in some (HTTPS) cases. It seems httr's GET does not encounter the issue.
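In code, the difference I mean is roughly this (the URL is hypothetical):

# With RCurl, HTTPS downloads can fail certificate verification
# unless it is explicitly switched off
library(RCurl)
raw_csv <- getURL("https://raw.github.com/user/repo/master/data.csv",
                  ssl.verifypeer = FALSE)
Data <- read.csv(textConnection(raw_csv))

# With httr, GET handles the certificate check without extra options
library(httr)
request <- GET("https://raw.github.com/user/repo/master/data.csv")
Data <- read.csv(textConnection(content(request, as = "text")))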
Yep, it should work for Google spreadsheets published to CSV.
I might add a wrapper to the repmis package (source_GoogleData or something like that), but it would basically be the same as source_GitHubData.
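Roughly, a hypothetical wrapper along these lines:

# Hypothetical wrapper: a published Google Spreadsheet's CSV link can be
# passed straight through to source_GitHubData
source_GoogleData <- function(url, sep = ",", header = TRUE) {
  source_GitHubData(url = url, sep = sep, header = header)
}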
You actually don't need source_GitHubData to download data stored in a plain-text format in a Dropbox Public folder. These use non-secure (http) URLs, so you can just use read.table. (source_GitHubData is for https sites.)
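For example (the Public folder URL below is made up):

# Public folder links are plain http, so base R's read.csv
# (a read.table wrapper) is enough
Data <- read.csv("http://dl.dropbox.com/u/12345/data.csv")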
Data stored in non-Public folders on Dropbox cannot be easily downloaded into R, because their URLs take you to a page that is more than just the text file. It has lots of HTML that needs to be scraped away.
GitHub, of course, is a better option, because it does not require "publishing to the web" to share the file. Perhaps the Google Docs API makes it possible to stopifnot(publish = TRUE).
Dropbox is less appealing IMHO because it's (a) less secure and (b) easier to accidentally move or delete things on it. I did not know their URLs were plain HTTP; that does not sound very wise.
All that just to say that your package is inspiring.