Math-Frolic Interview #44
"Ronald Coase cynically observed that, 'If you torture the data long enough, it will confess.' Standard Deviations is an exploration of dozens of examples of tortuous assertions that, with even a moment's reflection, don't pass the smell test. Sometimes, the unscrupulous deliberately try to mislead us. Other times, the well-intentioned are blissfully unaware of the mischief they are committing. My intention in writing this book is to help protect us from errors -- both external and self-inflicted. You will learn simple guidelines for recognizing bull when you see it -- or say it. Not only do others use data to fool us, we often fool ourselves."
-- from the Introduction to Standard Deviations by Gary Smith
Gary N. Smith is an award-winning Professor of Economics at Pomona College in California (...which happens to be my wonderful alma mater — and though he’s been there 37 years, I graduated before he arrived, so we never crossed paths).
He’s also the author of several popular books. “Standard Deviations” is one of my favorite takes on statistics that everyone should know about. He followed that up with “What The Luck,” a similarly entertaining, engaging volume on probabilities in major parts of our lives. And I also enjoyed his very readable and instructive “Money Machine,” a great introduction to the sort of “value investing” I believe in and wish I had started earlier in life! Later this year he’ll be out with “The AI Delusion”; more on that below.
If you’ve missed any of his books you should check them out…
And now a little more:
***************************
1) Tell us a little about your background and the path that led to your interests in economics and statistics?
Way back when (in junior high school?), I became fascinated with mathematical puzzles and with several of Martin Gardner’s books, such as The Scientific American Book of Mathematical Puzzles and Diversions and Fads and Fallacies in the Name of Science. I went to Harvey Mudd College, here in California, and majored in math, but I was also on the debate team, and the national topic my freshman year was, “Resolved that the federal government should establish a national program of public works for the unemployed.” I was drawn to economics for the same reasons that attracted many economists, including James Tobin, who would become one of my mentors when I went to graduate school at Yale:
"I [Tobin] studied economics and made it my career for two reasons. The subject was and is intellectually fascinating and challenging, particularly to someone with taste and talent for theoretical reasoning and quantitative analysis. At the same time it offered the hope, as it still does, that improved understanding could better the lot of mankind."
After earning my PhD in economics at Yale, I accepted a job there as an assistant professor, initially teaching statistics and macroeconomics. Then the Yale economics department asked students what courses they would like added to the curriculum, and the runaway winners were Marx and the stock market. I wasn’t interested in Marx, but James Tobin was the chair of my thesis committee and would later be awarded the Nobel Prize in Economics, in part for his analysis of financial markets. So I volunteered to create a stock market course. I loved it because of the wonderful combination of mathematical theories, empirical data, and real-world relevance. My main interests today are statistics and finance.
2) When I thought about interviewing you and clicked on your webpage I discovered you have a brand new book on the way later this year, “The AI Delusion,” so go ahead and tell us about that. I assume from the title you feel a lot of what we hear/read about AI is overhyped?
p.s., just curious too, have you ever ridden in a driverless car?
I have not yet ridden in a driverless car.
A few years ago, I wrote Standard Deviations: Flawed Assumptions, Tortured Data, and Other Ways to Lie With Statistics, which is a compilation of examples of statistical errors and mischief that I collected over the years. Recently, a lot of these errors and much of this mischief have come from data mining—ransacking data for statistical patterns without being guided or constrained by any coherent theory. Big data and powerful computers have made the problem much worse because they make the data mining so easy.
For example, back in 2008, Chris Anderson, editor-in-chief of Wired, wrote an article with the provocative title, “The End of Theory: The Data Deluge Makes the Scientific Method Obsolete.” Anderson argued:
"With enough data, the numbers speak for themselves…. The new availability of huge amounts of data, along with the statistical tools to crunch these numbers, offers a whole new way of understanding the world. Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all."
That is a dangerous argument. Too many people think that computers are smarter than humans and should therefore be trusted to make important decisions for us based on statistical patterns unearthed by ransacking data. For example, courts all over the country are using computer models to make bail, prison-sentence, and parole decisions based on statistical patterns that may be merely coincidental, but cannot be evaluated because they are hidden inside black boxes.
The truth is that, while computer algorithms are great and getting better, they are still designed to have the very narrow capabilities needed to perform well-defined chores, not the general intelligence needed to deal with unfamiliar situations by assessing what is happening, why it is happening, and what the consequences of taking action would be.
Artificial intelligence is not at all like the real intelligence that comes from human brains. Computers do not know what words mean because computers do not experience the world the way we do. They do not even know what the real world is. Computers do not have the common sense or wisdom that humans accumulate by living life. Computers have no way of judging whether the patterns they discover are meaningful or meaningless, and when the algorithms are inside black boxes, no one knows.
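[Just to illustrate the data-mining point Gary makes above, here is a little Python sketch of my own (the 100 noise variables and the usual 0.05 significance cutoff are arbitrary choices on my part, not anything from Gary's book or this interview). If you ransack enough pure noise, you will reliably turn up "statistically significant" correlations:]

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 1,000 observations of 100 candidate "predictors" that are pure, independent
# noise, plus an outcome series that is also pure noise.
n_obs, n_vars = 1000, 100
predictors = rng.standard_normal((n_obs, n_vars))
outcome = rng.standard_normal(n_obs)

# Ransack the data: correlate every predictor with the outcome and keep
# whatever clears the conventional p < 0.05 bar, with no theory involved.
discoveries = []
for i in range(n_vars):
    r, p = stats.pearsonr(predictors[:, i], outcome)
    if p < 0.05:
        discoveries.append((i, round(r, 3), round(p, 4)))

# With 100 meaningless variables, roughly 5 "significant" correlations
# are expected by chance alone.
print(f"'Significant' correlations found in pure noise: {len(discoveries)}")
print(discoveries)
```

[Run it and you'll typically get a handful of "discoveries," none of which mean anything; that's exactly the kind of coincidental pattern that becomes impossible to evaluate once it's hidden inside a black box.]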
3) Since you're an economist and a statistician I’m curious if there are any specific economists and statisticians you would especially recommend for mathematically-inclined readers to follow on the internet (on blogs, websites, Twitter, Facebook, wherever)?
Nate Silver and Andrew Gelman are reliably interesting.
Also, besides your books, do you have Web-accessible pieces you would recommend to readers wanting to get a taste of your own work?
4) In some places I see a push in secondary education for some sort of statistics and data science course to be a required part of the math curriculum (probably replacing one of the currently-mandated math offerings). Do you have any thoughts on that?
I was a math major, I love math, and I use math and write computer programs for almost all my academic research, but I think that a basic understanding of statistics and data science is more useful and relevant for most citizens. Understanding the difference between good data and bad data, the nature and limitations of statistical inference, and the dangers of data mining is essential.
5) When you’re not trying to raise math and financial literacy in this country ;) what are some of your other main interests/hobbies/activities?
I used to play soccer, squash, and all sorts of sports, but I had to have knee replacement surgery a year ago. Now I’m mainly trying to be a good husband and father.
***************************
Thanks Gary, I've never before interviewed either an economist or a Fighting Sagehen (Pomona mascot) here, so thanks for two firsts! ;)
[In retrospect, I'm only sorry that you weren't the 47th interview here (...inside joke)]
And again, Gary's books are fun... AND timely reads if you're not already familiar with them.