
Korean Lessons for the US, Part 1: Credibly Committing to Bad Regulation

It's always a good day when you notice your PhD research overlapping with what's going on in the news. PhD research might actually matter!

This happened to me when I was listening to a recent Fresh Air interview with NY Times reporter Louise Story about why the United States has prosecuted so few people involved in the financial crisis. A couple points caught my attention:

  • Regulatory agencies, especially the SEC, are understaffed. (not really news to most people interested in this stuff)
  • Since 2008 the Justice Department has officially allowed financial companies to defer prosecutions if they conduct their own investigation into alleged wrongdoing. (that's more like it)
  • The combined effect of understaffing and essentially outsourcing investigations to financial companies is that regulators are:

    • Losing the capacity to do their own investigations of financial institutions
    • Not even able to critically evaluate the investigations handed to them by the companies they regulate.

She lists a number of other reasons for lax regulatory enforcement, but these points caught my attention. They reminded me of the 1997 Korean situation. (I'm shovelling through the Korea end of a comparison of financial crises in Korea and Ireland with Mícheál O'Keefe at the LSE.) In particular, it reminded me of an argument we're trying to make regarding regulator capacity and information.

Long story short: if the financial sector is unhealthy (lots of non-performing loans, etc.), but a regulator doesn't want regulations tightened, it can obscure the information it gives to policymakers. Put another way, making the economy seem healthy makes policymakers feel there is no need to impose new regulations.

I guess regulators could just lie about the state of the financial sector. But this has its problems, like being called to testify at a congressional investigation about why you lied. There is also the problem of credible commitments.

Question: How can you ensure that information remains bad over time, and in a way that is credibly signalled to financial markets?

Answer: make the regulator unable to collect good information. (I'll come back to this point in the next post.)

In Korea before the 1997 crisis, the Ministry of Finance and Economy (the finance ministry and financial regulator wrapped up in one) was able to do this through a complex web of understaffed regulatory departments (for background see a 2002 paper by Jin Wook Choi). Louise Story's reporting indicates some ways that US regulators can credibly commit to bad information, and therefore to weak regulation.

I'm sure you're all wondering about two unresolved questions: why would a financial regulator prefer weak regulation, and how did Korea solve this problem?

Well, I'll give my answers to these questions in the next post.
