Tuesday, April 22, 2014

Drexel on Monday 4/28

Looks interesting if you're in the area.  I plan to be at the lunch.
From: Drexel University's LeBow College of Business <announce@lebow.drexel.edu>
Date: Thu, Apr 17, 2014 at 11:14 AM
Subject: School of Economics Presents: 2 Presentations by Dr. Preston McAfee, Director, Google Strategic Technologies: 4.28.14

Please join Drexel School of Economics for one or both of the two presentations that will be given by Dr. Preston McAfee, Director, Google Strategic Technologies:
Monday, April 28, 2014
Lunch and Learn with Dr. Preston McAfee: Machine Learning in an Exchange Environment
12:00 p.m.
Gerri C. LeBow Hall, room 406
Lunch provided.

Dr. Preston McAfee Presents: Digital Advertising: Benefits and Costs
5:00 p.m.
Gerri C. LeBow Hall, room 220 Grand Meeting Room
Reception to follow.

For more information, contact Cassandra Brown at clb87@drexel.edu or 215.895.6294.

Monday, April 21, 2014

On Kaggle Forecasting Competitions, Part 1: The Hold-Out Sample(s)

Kaggle competitions are potentially pretty cool. Kaggle supplies in-sample data ("training data"), and you build a model and forecast out-of-sample data that they withhold ("test data"). The winner gets a significant prize, often $100,000 or more. Kaggle typically runs several such competitions simultaneously.

The Kaggle paradigm is clever because it effectively removes the ability for modelers to peek at the test data, which is a key criticism of model-selection procedures that claim to insure against finite-sample over-fitting by use of split samples. (See my earlier post, Comparing Predictive Accuracy, Twenty Years Later, and the associated paper of the same name.)

Well, sort of. Actually, Kaggle reveals part of the test data along the way. Before a competition deadline, participants are typically allowed to submit one forecast per day, which Kaggle scores against part of the test data. Then, when the deadline arrives, forecasts are scored against the remaining test data. Suppose, for example, that there are 100 observations in total. Kaggle gives you 1, ..., 60 (training) and holds out 61, ..., 100 (test). But each day before the deadline, you can submit a forecast for 61, ..., 75, which they score against the held-out realizations 61, ..., 75 and use to update the "leaderboard." Then when the deadline arrives, you submit your forecast for 61, ..., 100, but they score it only against the truly held-out realizations 76, ..., 100. So honesty is enforced for 76, ..., 100 (good), but convoluted games are played with 61, ..., 75 (bad). Is having a leaderboard really that important? Why not cut the games? Simply give people 1, ..., 75 and ask them to forecast 76, ..., 100.
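
To make the split concrete, here is a minimal Python sketch of the scoring mechanics described above (simulated data and RMSE scoring for illustration; actual Kaggle metrics vary by competition):

    import numpy as np

    rng = np.random.default_rng(0)
    y = rng.normal(size=100)  # all 100 observations (simulated for illustration)

    train   = y[:60]    # 1, ..., 60: released to competitors
    public  = y[60:75]  # 61, ..., 75: scores daily submissions (the "leaderboard")
    private = y[75:]    # 76, ..., 100: scores only the final submission

    def rmse(forecast, actual):
        return np.sqrt(np.mean((forecast - actual) ** 2))

    # A naive forecast for 61, ..., 100: the training-sample mean, repeated.
    forecast = np.full(40, train.mean())

    leaderboard_score = rmse(forecast[:15], public)   # revealed daily, pre-deadline
    final_score       = rmse(forecast[15:], private)  # revealed only at the deadline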

To be continued.

Monday, April 14, 2014

Frequentists vs. Bayesians on the Exploding Sun

Time for something light.  Check out xkcd.com, "A webcomic of romance, sarcasm, math, and language," written by a literate former NASA engineer.  Really fine stuff.  Thanks to my student M.D. for introducing me to it.  Here's one on Fisher vs. Bayes:

[xkcd #1132: "Frequentists vs. Bayesians"]

Monday, April 7, 2014

Point Forecast Accuracy Evaluation

Here's a new one for your reading pleasure. Interesting history. Minchul and I went in trying to escape the expected loss minimization paradigm. We came out realizing that we hadn't escaped, but simultaneously, that not all loss functions are created equal. In particular, there's a direct and natural connection between our stochastic error divergence (SED) and absolute-error loss, elevating the status of absolute-error loss in our minds and perhaps now making it our default benchmark of choice. Put differently, "quadratic loss is for squares." (Thanks to Roger Koenker for the cute mantra.)

Diebold, F.X. and Shin, M. (2014), "Assessing Point Forecast Accuracy by Stochastic Divergence from Zero," PIER Working Paper 14-011, Department of Economics, University of Pennsylvania.

Abstract: We propose point forecast accuracy measures based directly on the divergence of the forecast-error c.d.f. F(e) from the unit step function at 0, and we explore several variations on the basic theme. We also provide a precise characterization of the relationship between our approach of stochastic error divergence (SED) minimization and the conventional approach of expected loss minimization. The results reveal a particularly strong connection between SED and absolute-error loss and generalizations such as the "check function" loss that underlies quantile regression.
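
To see the flavor of the SED/absolute-error connection (a back-of-the-envelope sketch, not the paper's formal result): for a forecast error \(e\) with c.d.f. \(F\) and \(E|e| < \infty\), the \(L_1\) divergence of \(F\) from the unit step at zero is exactly expected absolute-error loss,
\begin{equation}
\int_{-\infty}^{\infty} \big| F(x) - \mathbf{1}(x \ge 0) \big| \, dx = \int_{-\infty}^{0} F(x) \, dx + \int_{0}^{\infty} \left[ 1 - F(x) \right] dx = E|e|,
\end{equation}
so minimizing this version of SED and minimizing expected absolute-error loss select the same forecasts.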

Monday, March 31, 2014

Student Advice I: Some Good Reading for Good Writing (and Good Graphics)

Good writing is good thinking, so when you next hear some pretentious moron boast that "I don't like to write, I like to think," rest assured, he's surely a bad writer and a bad thinker. Again, good writing is good thinking. If you like "to do research" but don't like "to write it up," then you're not thinking clearly. Research and writing are inextricably intertwined.

How to get there? Read and absorb McCloskey's Rhetoric of Economics, and Strunk and White's Elements of Style. There's no real need to read or absorb much else (about writing). But do bolt the Chicago Manual of Style to your desk. Then get going. Think about what you want to say, why, and to whom. Think hard and critically about logical structure and flow, at all scales, small and large. Revise and edit, again and again. Make things easy for your readers. Listen to your words; push your prose toward poetry.


Good graphics is also good thinking, and precisely the same advice holds. Read and absorb Tufte's Visual Display of Quantitative Information. Notice, by the way, how well Tufte writes (even if he sometimes goes overboard with the poetry thing). It's no accident. As Tufte says: show the data, and appeal to the viewer. Recognize that your first cut using default software settings will never, ever, be satisfactory. (If that statement doesn't instantly resonate with you, then you're in desperate need of a Tufte infusion.) So revise and edit, again and again. And again. 

Friday, March 28, 2014

Nate Silver and the Krugman Embarrassment

I'm happy that Nate Silver and his FiveThirtyEight are back. Nate generally provides interesting and responsible data-based journalism for the educated layperson. (Of course he sometimes gets in over his head, but don't we all?)

Now Krugman suddenly starts to dislike Silver; see his "Tarnished Silver" post. Funny, he never complained much when Silver worked at the New York Times (the trough where Krugman feeds), but now that Silver has moved elsewhere, Krugman's vitriol erupts. Perhaps Krugman always felt that way but kept mum so as not to offend the NYT. Or perhaps he now wants to punish Silver for defecting. Or perhaps it's a little of both. In any event it strikes me as an embarrassment. Let's call it the Krugman Embarrassment.

I'm not the only one who's noticed the Krugman Embarrassment. See the recent post from Big Data, Plainly Spoken, which I think gets things right in labeling FiveThirtyEight-bashing "premature and immature." Also see the chart at FiveThirtyEight's Data Lab, which speaks for itself.

Monday, March 24, 2014

Sheldon Hackney Memorial Celebration, March 27

If you're in the area:  Sheldon Hackney Celebration, Thursday, March 27. Program 4-5, reception 5-6, Irvine Auditorium, 34th and Spruce, Philadelphia.  See my earlier memorial post.

Sunday, March 23, 2014

GAS and DCS Models: Tasty Stuff, and I'm Hungry for More



Generalized Autoregressive Score (GAS) models, also known as Dynamic Conditional Score (DCS) models, are an important development. They significantly extend the scope of observation-driven models, with their simple closed-form likelihoods, in contrast to parameter-driven models, whose estimation and inference require heavy simulation.

Many talented people are pushing things forward. Most notable are the Amsterdam group (Siem Jan Koopman et al.; see the GAS site) and the Cambridge group (Andrew Harvey et al., see Andrew's interesting new book). The GAS site is very informative, with background description, a catalog of GAS papers, code in Ox and R, conference information, etc. The key paper is Creal, Koopman and Lucas (2008). (It was eventually published in 2012 in Journal of Applied Econometrics, proving once again that the better the paper, the longer it takes to publish.)

The GAS idea is simple. Just use a conditional observation density \(p(y_t | f_t)\) whose time-varying parameter \(f_t\) follows the recursion
\begin{equation}
f_{t+1} = \omega + \beta f_t + \alpha S(f_t) \left[ \frac{\partial \log p(y_t | f_t)}{\partial f_t} \right], \tag{1}
\end{equation}
where \(S(f_t)\) is a scaling function. Note in particular that the scaled score drives \(f_t\). The resulting GAS models retain observation-driven simplicity yet are quite flexible. In the volatility context, for example, GAS can be significantly more flexible than GARCH, as Harvey emphasizes.
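
As a concrete illustration of recursion (1), here is a minimal Python sketch (my own toy example, not code from the GAS site) of the Gaussian volatility case, where \(f_t\) is the conditional variance. With inverse-information scaling \(S(f_t) = 2 f_t^2\), the scaled score is \(y_t^2 - f_t\):

    import numpy as np

    def gas_gaussian_volatility(y, omega=0.05, alpha=0.10, beta=0.85, f0=1.0):
        """Filter conditional variances f_t via recursion (1), Gaussian density,
        inverse-information scaling S(f) = 2 f^2 (illustrative parameter values)."""
        f = np.empty(len(y) + 1)
        f[0] = f0
        for t in range(len(y)):
            # Score of the Gaussian log-density with respect to f_t:
            score = (y[t] ** 2 - f[t]) / (2.0 * f[t] ** 2)
            scaled_score = 2.0 * f[t] ** 2 * score  # = y_t^2 - f_t
            f[t + 1] = omega + beta * f[t] + alpha * scaled_score
        return f

    rng = np.random.default_rng(1)
    f = gas_gaussian_volatility(rng.normal(size=500))

In this special case the update collapses to \(f_{t+1} = \omega + (\beta - \alpha) f_t + \alpha y_t^2\), a reparameterized GARCH(1,1); with a Student's t density, by contrast, the scaled score downweights extreme observations, one source of the extra flexibility that Harvey emphasizes.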

Well, the GAS idea seems simple. At least it's simple to implement if taken at face value. But I'm not sure that I understand it fully. In particular, I'm hungry for a theorem that tells me in what sense (1) is the "right" thing to do. That is, I can imagine other ways of updating \(f_t\), so why should I necessarily adopt (1)? It would be great, for example, if (1) were the provably unique solution to an optimal approximation problem for non-linear non-Gaussian state space models. Is it? (It sure looks like a first-order approximation to something.) And if so, might we want to acknowledge that in doing the econometrics, instead of treating (1) as if it were the DGP? And could we somehow improve the approximation?

To the best of my knowledge, the GAS/DCS literature is silent on such fundamental issues. But based on my experience with the fine scholarship of Creal, Harvey, Koopman, Lucas, and their co-authors and students, I predict that answers will arrive soon.

Sunday, March 16, 2014

Is Interdisciplinarity Vastly Over-Rated?


Interdisciplinarity is clearly the flavor of the month (read: two decades) among the academic cognoscenti. Although it makes for entertaining popular press, what's the real intellectual benefit of a top-down interdisciplinary "industrial policy"? Difficult question! That's not necessarily to suggest that there is no benefit; rather, it's simply to suggest that the issues are subtle and deserving of serious thought from all sides.

Hence it's refreshing to see a leading academic throw his hat in the ring with a serious evidence-based defense of the traditional disciplines, as does Penn sociologist Jerry Jacobs in his new book In Defense of Disciplines.

Perhaps the best thing I can do to describe the book and whet your appetite is to reprint some of the book's back-cover blurbs, which get things just right. So here goes:

“Jerry Jacobs’s new book provides the missing counterpoint to the fanfare for interdisciplinary collaboration that has swept over much of academe during the last three decades. Thanks to Jacobs’s creative and painstaking research, we now know that disciplines are not the ‘silos’ they are so often made out to be; instead, they are surprisingly open to good ideas and new methods developed elsewhere. Nor are universities rigidly bound to the disciplines—instead, they routinely foster interdisciplinary work through dozens of organized research centers. This book is more than a necessary corrective. It is a well-crafted piece of social science, equally at home in the worlds of intellectual history, organizational studies, and quantitative methods. It deserves to be read by all who care about the future of universities—defenders and critics of the disciplines alike.” (Steven G. Brint, University of California, Riverside)

“At a time of undue hoopla about interdisciplinarity, this is a sobering, highly readable, and data-driven defense of retaining disciplinary units as the primary mode of organizing research universities. A must read for those concerned with the future of knowledge innovation.” (Myra H. Strober, Stanford University)

“This is a timely, subtle and much needed evaluation of interdisciplinarity as a far reaching goal sweeping around the globe. Jerry Jacobs sets new standards of discussion by documenting with great new data the long term fate of interdisciplinary fields and the centrality of disciplines to higher education and the modern research university.” (Karin Knorr Cetina, University of Chicago)