
Todd Nief's Show


Nov 9, 2018

<p>Even though we live in an era of "big data" and huge amounts of our internet usage and content consumption are governed by algorithms (Facebook's newsfeed, YouTube's related videos, Google's predictive search, the advertising we're served online, etc.), many people don't trust algorithms when they're presented with the opportunity to use them in their own decision-making.</p>
<p>Berkeley Dietvorst thinks this results in people making a lot of foolish decisions and wasting a lot of time, money, and effort.
So, he's been researching the concept of "algorithm aversion" for several years and has published several highly illuminating papers on the topic.</p>
<p>Berkeley has developed a theory of why humans are reluctant to use algorithms (they're likely chasing perfection in their predictions, and they punish algorithms disproportionately for making visible errors), and he continues to work on understanding how we can increase the trust that human decision-makers place in algorithms.</p>
<p><b>Check out more from Berkeley here:</b></p><ul><li>Website: <a href="http://faculty.chicagobooth.edu/berkeley.dietvorst/index.html">Berkeley's Chicago Booth Profile</a></li><li>Research: <a href="http://faculty.chicagobooth.edu/berkeley.dietvorst/research/index.html">Berkeley's Research</a></li></ul>
<p><b>If you're enjoying the show, <a href="https://itunes.apple.com/us/podcast/todd-niefs-show/id1278759120?mt=2">why not leave a review</a>?</b> It makes a difference in terms of other people finding the show.</p>
<p><b>You can also subscribe to receive my e-mail newsletter at <a href="http://www.toddnief.com">www.toddnief.com</a>.</b> Most of my writing never makes it to the blog, so get on that list.</p>
<p><b>Show Notes</b><ul> <li>[1:28] Berkeley is a marketing professor - yet studies algorithm aversion</li>
<li>[4:22] Humans are algorithmically averse - what’s our problem?</li>
<li>[12:10] Humans are risk-seeking, so they choose not to use algorithms in pursuit of outsized rewards</li>
<li>[19:02] Humans err by regularly changing the weight they give to different factors based on their emotions</li>
<li>[26:22] Humans are more likely to use algorithms when they’re allowed to modify an algorithm</li>
<li>[35:20] Increasing human adherence to superior algorithms when making predictions</li>
<li>[40:58] Are there ever good reasons for humans to distrust algorithms?</li>
<li>[1:04:17] How do we optimize the decision-making for individual decision-makers? And what would Berkeley like to know about how large tech companies get humans to use algorithms?</li>
<li>[1:11:15] How can people learn more about Berkeley’s research? And what research projects is he currently working on?</li>
</ul></p>
<p><b>Links and Resources Mentioned</b><ul> <li><a href="https://www.chicagobooth.edu/">The University of Chicago Booth School of Business</a><br/></li>
<li><a href="https://www.wharton.upenn.edu/">The Wharton School</a><br/></li>
<li><a href="https://www.predictiveanalyticsworld.com/patimes/target-really-predict-teens-pregnancy-inside-story/3566/">Did Target Really Predict a Teen&rsquo;s Pregnancy?</a><br/></li>
<li><a href="https://www.nytimes.com/2011/08/21/magazine/do-you-suffer-from-decision-fatigue.html">Do You Suffer From Decision Fatigue?</a><br/></li>
<li><a href="https://en.wikipedia.org/wiki/Robyn_Dawes">Robyn Dawes</a><br/></li>
<li><a href="https://en.wikipedia.org/wiki/CompStat">CompStat</a><br/></li>
<li><a href="https://medium.com/">Medium.com</a><br/></li>
<li><a href="https://fivethirtyeight.com/contributors/nate-silver/">Nate Silver &ndash; FiveThirtyEight</a><br/></li>
<li><a href="https://magic.wizards.com/en">Magic: The Gathering</a><br/></li>
<li><a href="http://dnd.wizards.com/">Dungeons &amp; Dragons</a><br/></li>
</ul></p>