
Bubbles, Bubbles, Bubbles

This graph from The Atlantic's Daniel Indiviglio is pretty astounding.


A good deal has already been written about it, particularly the fact that the growth in student debt exceeded the growth in other debt during what is now known to have been a massive housing bubble. I don't want to double up on too much of what others have written, but I had some thoughts.

The graph (and the article, and current students themselves) makes a compelling case that we are in a student loan bubble. So, given the obviously high cost of the previous bubble, it might be sensible to at least begin drawing down the growth in student debt. This likely means some combination of (a) reducing government support for student loans (removing guarantees, tightening lending standards, etc.), (b) providing grants to students, and (c) instituting some sort of price controls on universities (tuition caps). All three of these measures are politically difficult:

Reducing Loan Support: Much of the increase in the cost of higher education and in student debt can likely be attributed to government programs that allow students (especially grad students and undergrads' parents) to take out virtually unlimited loans. Removing support for this type of lending would decrease students' ability to pay high tuition. Hopefully, universities would have to respond by lowering fees. However, this has distributional consequences, especially in the short term. Middle- and lower-income students would effectively be locked out of many universities. This would clearly be politically unpopular, since university has become one of the few ways to try to ensure a stable and moderately high income.

Providing Grants: Given most state and federal budget realities, converting loans to grants would effectively convert student debt into public debt. There is clearly little appetite for this right now.

However, it should be noted that if the student debt bubble bursts, then, as with the housing bubble, explicit or implicit contingent liabilities have a way of hitting the budget directly anyway.

Tuition Caps: There are numerous political problems here. Imagine any US politician these days advocating price controls. Furthermore, it would be difficult to get universities to go along with such measures, since they are the direct beneficiaries of tuition inflation.

(For my UK readers: the current government used a couple of these strategies when it raised university fees. Whatever you think about higher tuition fees in the UK, it is clear that UK student debt isn't going to grow at the same continuously massive rate as in the US.)

I'm not sure how this is going to turn out, but I do have one further thought as a budding academic. Though universities (including faculty) clearly benefit from a system that rapidly allocates considerable resources to them, they have a long-term interest in taming this bubble. The clear analogy is the housing industry.

Sure, when the bubble is building, things are great. But when it crashes, and crashes in such a way that demand sinks for a long time, the former winners become losers.



Comments

PinskVinsk said…
Interesting. I think this article from the Guardian makes a good point about another entity (besides banks/governments) getting in on the higher-education feeding frenzy:

http://www.guardian.co.uk/commentisfree/2011/aug/29/academic-publishers-murdoch-socialist?mobile-redirect=false
