Updates to repmis: caching downloaded data and Excel data downloading

Over the past few months I’ve added a few improvements to the repmis R package (miscellaneous functions for reproducible research). I just want to briefly highlight two of them:

  • Caching downloaded data sets.

  • source_XlsxData for downloading data from Excel-formatted files.

Both of these capabilities are available in repmis version 0.2.9 and later.

Caching

When working with data sourced directly from the internet, it can be time-consuming (and can annoy the data host) to repeatedly download the data. So, repmis’s source functions (source_data, source_DropboxData, and source_XlsxData) can now cache a downloaded data set when you set the argument cache = TRUE. For example:

DisData <- source_data("http://bit.ly/156oQ7a", cache = TRUE)

When the function is run again, the data set at http://bit.ly/156oQ7a will be loaded locally, rather than downloaded.

To delete the cached data set, simply run the function again with the argument clearCache = TRUE.
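
A minimal sketch of clearing the cache for the example above (I’m assuming clearCache = TRUE can simply be added alongside the original arguments; see ?source_data for the exact behaviour):

# Assumes clearCache = TRUE can be combined with the original cache = TRUE call
DisData <- source_data("http://bit.ly/156oQ7a", cache = TRUE, clearCache = TRUE)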

source_XlsxData

I recently added the source_XlsxData function to download Excel data sets directly into R. This function works very similarly to the other source functions. There are two differences:

  • You need to specify the sheet argument. This is either the name of a specific sheet in the downloaded Excel workbook or its number (e.g. sheet = 1 for the first sheet in the workbook).

  • You can pass other arguments to the read.xlsx function from the xlsx package.

Here’s a simple example:

RRurl <- 'http://www.carmenreinhart.com/user_uploads/data/22_data.xls'

RRData <- source_XlsxData(url = RRurl, sheet = 2, startRow = 5)

startRow = 5 is passed through to read.xlsx and skips the first four rows of the sheet, so reading begins at row 5.
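
Since sheet also accepts a sheet name, the same call can be written with a name instead of a number. A quick sketch (the name 'Sheet2' is hypothetical; substitute whatever the workbook actually calls the sheet):

# 'Sheet2' is a hypothetical sheet name; replace it with the workbook's actual name
RRData <- source_XlsxData(url = RRurl, sheet = 'Sheet2', startRow = 5)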
