InstallOldPackages: a repmis command for installing old R package versions

A big problem in reproducible research is that software changes. The code you used to do a piece of research may depend on a specific version of software that has since been changed. This is an annoying problem in R because install.packages only installs the most recent version of a package. It can be tedious to collect the old versions.

On Toby Dylan Hocking's suggestion, I added tools to the repmis package so that you can install, load, and cite specific R package versions. It should work for any package version that is stored on the CRAN archive (http://cran.r-project.org).

To install old package versions on their own, use the new repmis command InstallOldPackages. For example:

# Install old versions of the e1071 and gtools packages.
library(repmis)

# Create vectors of the package names and versions to install
# Note: the names and version numbers must be in the same order
Names <- c("e1071", "gtools")
Vers <- c("1.6", "2.6.1")

# Install old package versions into the default library
InstallOldPackages(pkgs = Names, versions = Vers)

You can also now have LoadandCite install specific package versions:

# Install, load, and cite specific package versions
library(repmis)

# Create vectors of the package names and versions to install
# Note: the names and version numbers must be in the same order
Names <- c("e1071", "gtools")
Vers <- c("1.6", "2.6.1")

# Run LoadandCite
LoadandCite(pkgs = Names, versions = Vers, install = TRUE,
            file = "PackageCites.bib")

See this post for more details on LoadandCite.

Future

I intend to continue improving these capabilities. So please post any suggestions for improvement (or report any bugs) on the GitHub issues page.

Comments

This is kind of interesting to me, and at the same time there's a bit of a philosophical issue I have.

Some background: I collect and distribute tools for computational journalism. See http://znmeb.github.com/CompJournoStick. The core of these tool collections is a Linux desktop, R, and in some cases RStudio.

Recently, two of the packages that have been in past versions of this tool set, BARD and RcmdrPlugin.TextMining, went out of the main CRAN repository and are only available from the archive repository.

As a distributor, I need a way to detect when this happens without running my install script. And I need to know whether my users actually want a package that's no longer being maintained or has been kicked out of the main CRAN repository for some reason.

So I will definitely check this tool out and see if it can help me, but at the same time I'm not sure I'd want to facilitate use of "obsolete" or "unmaintained" software.
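One way to detect when a package has dropped off the main CRAN repository, without running a full install script, is to compare a package list against what CRAN currently serves. A minimal sketch (OnMainCRAN is a hypothetical helper, not part of repmis; it needs network access):

```r
# Check which packages are still in the main CRAN repository.
# Packages returning FALSE are archive-only (or were never on CRAN).
OnMainCRAN <- function(pkgs, repos = "http://cran.r-project.org") {
  # available.packages lists everything in the main repository
  available <- rownames(available.packages(contriburl = contrib.url(repos)))
  setNames(pkgs %in% available, pkgs)
}

OnMainCRAN(c("BARD", "RcmdrPlugin.TextMining", "ggplot2"))
```

A distributor could run this periodically and flag any FALSE entries for review before deciding whether to pull the package from the archive or drop it.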
pirategrunt.com said…
This may be a silly question, but can repmis also sort out re-installation of packages when I upgrade my version of R? I recently upgraded and found that many packages had disappeared. I can see them out there under a folder with the old R version number, but I've had to reinstall in order to make them available.

Again, there's probably some basic bit of idiocy that I committed during the install. Just curious if there's an easy way to undo it.
Re M Edward Borasky:

I agree that you should generally be using the most updated version of a package/maintained packages for your research.

I intend InstallOldPackages and the similar functionality in LoadandCite to be used for replication purposes only.

When a piece of research is under active development researchers should use LoadandCite without specifying the package version. If install = TRUE then only the most recent versions of the packages will be installed from CRAN.

When a researcher releases a final replication version of their Sweave or knitr file, they should specify the package versions in LoadandCite. This helps make the code in their file run as intended during replication.
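The two stages might look like this in practice (the package names and versions are just placeholders; both calls need network access):

```r
library(repmis)

# During active development: no versions given, so install = TRUE
# fetches the most recent CRAN releases
LoadandCite(pkgs = c("e1071", "gtools"), install = TRUE,
            file = "PackageCites.bib")

# In the final replication file: pin the versions actually used
LoadandCite(pkgs = c("e1071", "gtools"),
            versions = c("1.6", "2.6.1"),
            install = TRUE, file = "PackageCites.bib")
```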
Re pirategrunt.com

This has to do with where your library path is. By default, each new major version of R resets the path.

You can change the library path: see this Stack Overflow question for more details: http://stackoverflow.com/questions/2615128/where-does-r-store-packages.
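For example, you can inspect and extend the library path with .libPaths (the old-version path below is illustrative; adjust it to your system and old R version number):

```r
# See where R currently looks for installed packages
.libPaths()

# Prepend the old version's library so existing packages are found again
# (illustrative path; .libPaths silently drops directories that don't exist)
.libPaths(c("~/R/x86_64-pc-linux-gnu-library/2.15", .libPaths()))
```

To make the change permanent, you can instead set R_LIBS_USER in your ~/.Renviron file. Note that packages built for an older major R version may still need reinstalling to work correctly.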
Great work. Your example installation of old packages gtools and e1071 worked for me.

But do you have any ideas about how to resolve the chicken-and-egg problem? i.e. what version of repmis should be required, and how to indicate that?
Ha, yeah there is definitely a chicken and egg problem.

I'll have to think about what can be done. But at the very least LoadandCite consolidates the issue of having to manually update replication code into one command, rather than having to go through a whole analysis and update all of the packages and/or syntax that may have changed.
Anonymous said…
Talking about replicating: the change in the theming system after ggplot2 0.8.9 broke a *lot* of code for me.

When I wanted to replicate some graphs, I was so annoyed that I returned to the old version by hand.

Thanks for drawing my attention to {repmis} via R-bloggers!
