Empirical Accuracy And Logical Rigor Elude Foreign Policy Experts
I used to think, when I heard the words “foreign policy expert,” that the title really meant something, the way the phrase “nuclear power expert” conjures someone with an impressive command of vast amounts of empirical data about the care and feeding of radioactive isotopes in a nuclear reactor. These days, though, when I see the moniker “foreign policy expert,” I picture someone closer to a sports betting handicapper, who routinely divines the outcomes of basketball or football games as if the incomplete and often contradictory information he has about a team’s past performance were always indicative of its future results.
That is why, when foreign conflicts erupt in places like the Middle East, where multiple independent actors each harbor a multiplicity of internal and external motivations, I am convinced it is practically impossible for the people who stare sagely into cable news cameras every night to predict what will happen next.
It is the somewhat gratifying lesson of Philip Tetlock’s new book, “Expert Political Judgment: How Good Is It? How Can We Know?”, that people who make prediction their business—people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables—are no better than the rest of us. When they’re wrong, they’re rarely held accountable, and they rarely admit it, either. They insist that they were just off on timing, or blindsided by an improbable event, or almost right, or wrong for the right reasons.
They have the same repertoire of self-justifications that everyone has, and are no more inclined than anyone else to revise their beliefs about the way the world works, or ought to work, just because they made a mistake. No one is paying you for your gratuitous opinions about other people, but the experts are being paid, and Tetlock claims that the better known and more frequently quoted they are, the less reliable their guesses about the future are likely to be.
From “Everybody’s an Expert: Putting predictions to the test,” The New Yorker
Philip E. Tetlock is a professor of psychology at the University of Pennsylvania who turned a simple experiment, evaluating more than 80,000 prognostications from 300 professional pundits over a twenty-year period, into a gold mine of information that thoroughly debunks the notion that the political experts who venture forth nightly with predictions are as good as they think they are.
So why do we as a nation continue to watch these so-called experts? Why do we continue to put so much stock in their pronouncements, even when we know they have so often been wrong before?
As Tetlock puts it: “We need to believe we live in a predictable, controllable world, so we turn to authoritative-sounding people who promise to satisfy that need. That’s why part of the responsibility for experts’ poor record falls on us. We seek out experts who promise impossible levels of accuracy, then we do a poor job keeping score.”