
Korean Lessons for the US, Part 1: Credibly Committing to Bad Regulation

It's always a good day when you notice your PhD research overlapping with what's going on in the news. PhD research might actually matter!

This happened to me when I was listening to a recent Fresh Air interview with NY Times reporter Louise Story about why the United States has prosecuted so few people involved in the financial crisis. A couple of points caught my attention:

  • Regulatory agencies, especially the SEC, are understaffed. (not really news to most people interested in this stuff)
  • Since 2008 the Justice Department has officially allowed financial companies to defer prosecutions if they conduct their own investigation into alleged wrongdoing. (that's more like it)
  • The combined effect of understaffing and essentially outsourcing investigations to financial companies is that regulators are:

    • Losing the capacity to conduct their own investigations of financial institutions
    • Increasingly unable to critically evaluate the investigations handed to them by the companies they regulate.

She lists a number of other reasons for lax regulatory enforcement, but these points stood out to me. They reminded me of Korea's 1997 crisis. (I'm shovelling through the Korean end of a comparison of the financial crises in Korea and Ireland with Mícheál O'Keefe at the LSE.) In particular, they reminded me of an argument we're trying to make about regulator capacity and information.

Long story short: if the financial sector is unhealthy (lots of non-performing loans, etc.) but a regulator doesn't want regulations tightened, it can obscure the information it gives to policymakers. Put another way, making the economy seem healthy makes people feel there is no need to impose new regulations.

I guess regulators could just lie about the state of the financial sector. But this has its problems, like being called before a congressional committee to testify about why you lied. There is also the problem of credible commitments.

Question: How can you ensure that information remains bad over time, and in a way that is credibly signalled to financial markets?

Answer: make the regulator unable to collect good information. (I'll come back to this point in the next post.)
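To see why incapacity commits where lying can't, here's a minimal formal sketch. The notation and payoffs are mine, added purely for illustration; they aren't from the interview or from our paper:

```latex
% A stylized sketch of the commitment logic (my notation; the payoffs
% T and P are illustrative assumptions, not estimates from the post).
\documentclass{article}
\usepackage{amsmath}
\begin{document}

The regulator chooses investigative capacity $c \in \{0,1\}$ before the
state of the financial sector $\theta \in \{G, B\}$ is realised. A
regulator that prefers weak regulation earns $0$ if rules stay loose,
$-T < 0$ if they are tightened, and $-P < 0$ if it is caught lying:
\[
u_R =
\begin{cases}
-P & \text{if } c = 1,\ \theta = B,\ \text{report} = G
     \quad \text{(the lie is discoverable: it had the information)} \\
-T & \text{if } c = 1,\ \theta = B,\ \text{report} = B
     \quad \text{(truthful bad news triggers tighter rules)} \\
0  & \text{if } c = 0
     \quad \text{(no credible information, so no tightening)}
\end{cases}
\]
Since both $-P$ and $-T$ are worse than $0$, the regulator chooses
$c = 0$ up front. Unlike a lie, incapacity is observable and hard to
reverse after the fact, which is what makes it a credible signal.

\end{document}
```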

In Korea before the 1997 crisis, the Ministry of Finance and Economy (the finance ministry and financial regulator wrapped up in one) was able to do this through a complex web of understaffed regulatory departments (for background, see a 2002 paper by Jin Wook Choi). Louise Story's reporting indicates some ways that US regulators can credibly commit to bad information, and therefore to weak regulation.

I'm sure you're all wondering about two unresolved questions: why would a financial regulator prefer weak regulation, and how did Korea solve this problem?

Well, I'll give my answers to these questions in the next post.
