I’ve been wanting to write this piece for a long time, but never figured out the right outlet. This blog, however, is a great space for me to try it out (ah, the beauty of blogging!). Plus, I think I did reasonably well with my Freakonomics analysis on Monday, so I figure it’s safe to work through my thoughts on another popular non-fiction book.
Moneyball, by Michael Lewis, is a book about baseball. So, I don’t know how many readers of this blog will have read the book. But, for me, the book is about so much more than baseball and the lessons of the book are incredibly applicable to education. Lewis initially set out to write a book about how the Oakland A’s, a major league baseball team on the low end of the financial resources continuum, managed to compete at such a high level seemingly every year. There are tremendous disparities in spending across Major League Baseball, and the game has been plagued by the perception that the teams that spend the most money win the most. The New York Yankees (my favorite team!) were the face of this perception. The A’s seemed to defy this perception.
What Lewis ultimately discovered was that the A’s, led by their unique general manager Billy Beane, had adopted an organizational commitment to scouting and assessing players using statistical analysis of the loads of data generated by the game of baseball. These forms of analysis, labeled generally as sabermetrics, had been around for many years, but they were largely written off as the province of geeks and statheads who just happened to like baseball. Beane, however, came to believe that a sabermetric approach to scouting and valuing players would allow the A’s to make the most cost-effective decisions possible. They were able to figure out which individual statistics were the greatest predictors of team success. Then they sought players who thrived in those key areas but were deemed flawed by others in other areas of the game, flaws that depressed their salaries. So, for example, sabermetricians were able to demonstrate that on-base percentage (OBP – the likelihood that a player will get on base in any given plate appearance) was the single greatest predictor of runs scored. That seems logical, since a player must get on base to score a run. But, traditionally, players were valued more on their batting average (the likelihood that a player will get a hit in any given at-bat) than on their OBP. Therefore, there were some players who didn’t have great batting averages (and as a result didn’t earn very high salaries), but had relatively high on-base percentages (mostly because they earned lots of bases on balls – aka walks). Those players became Oakland A’s.
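To make the OBP point concrete, here’s a minimal sketch of the two formulas with some invented stat lines (these are hypothetical players, not anyone from the book). It shows how a hitter with a modest batting average can still reach base more often than a high-average hitter, simply by drawing walks:

```python
# Standard MLB definitions:
#   batting average = hits / at-bats
#   OBP = (hits + walks + hit-by-pitch) / (at-bats + walks + hit-by-pitch + sacrifice flies)

def batting_average(hits, at_bats):
    return hits / at_bats

def on_base_percentage(hits, walks, hbp, at_bats, sac_flies):
    return (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)

# Invented season lines:
# Player A hits often but rarely walks; Player B hits less but walks a lot.
a_ba  = batting_average(hits=150, at_bats=500)                  # .300
a_obp = on_base_percentage(150, walks=20, hbp=2, at_bats=500, sac_flies=5)
b_ba  = batting_average(hits=125, at_bats=500)                  # .250
b_obp = on_base_percentage(125, walks=90, hbp=5, at_bats=500, sac_flies=5)

print(f"A: BA {a_ba:.3f}, OBP {a_obp:.3f}")  # A: BA 0.300, OBP 0.326
print(f"B: BA {b_ba:.3f}, OBP {b_obp:.3f}")  # B: BA 0.250, OBP 0.367
```

Player B “hits” worse but gets on base more often – exactly the kind of undervalued profile the A’s went shopping for.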
Most baseball purists and old school baseball people fervently opposed this sabermetric orientation. They argued that you couldn’t judge a player by crunching numbers. You had to watch the players play, get to know them as people, etc.; in other words, make value determinations by scouting the old-fashioned way. The numbers were cold and unreliable, they’d say.
Is this starting to sound familiar? Purists and traditionalists arguing that we should not rely on numerical data to make decisions? Numerical data are cold and unreliable, and they can’t tell you what you need to know about people? These are the same arguments you hear from those opposed to what has been labeled “data-driven decision-making” (DDDM) in education.
I could stop here and argue that the Oakland A’s commitment to sabermetrics and cost-effective decision-making has been highly successful (just look at how well they’re doing in this year’s playoffs!), so everyone should buy into DDDM in education. But, that’s not the real point I want to make.
For me, there is another perception problem here. The popular sports media has, for the most part, portrayed this so-called Moneyball philosophy inaccurately. The popular sports media would have us believe that sabermetric analysis is an opposing paradigm to traditional baseball scouting methods. But, the fact is that sabermetric analysis has been used by the A’s (and now many other teams as well) as a complement to more traditional methods of scouting and player valuation. It is not as if the A’s fired all of their scouts and replaced them with statisticians; their scouting department includes a few number crunchers alongside all of the scouts who do what they’ve always done.
Similarly, in education, “data-driven decision making” is the label given to the movement toward making decisions based on the scores of numerical data that are now available to educators as technological means (computers, databases, etc.) have intersected with a climate of standards and assessment. But, to suggest that DDDM is a new movement or idea in education implies that before now, decisions were made in a vacuum; decisions were made in the absence of data. That’s not the case, though. Decisions, particularly those about individual students, were made based on professional judgments (teacher perceptions, observations, etc.). Like sabermetrics in baseball, (statistically oriented) DDDM is a complementary approach to professional judgment in education. They are epistemologically different approaches, but they are not mutually exclusive.
Finally, the Oakland A’s needed to add sabermetric analysis to their organization because they were playing on an uneven playing field with respect to financial resources. As a result, they have been able to compete successfully against the big spenders. Education is a notoriously uneven playing field with respect to financial resources. I hope schools and districts struggling with relatively low per-pupil expenditures see DDDM as a way to make more cost-effective decisions.