-
Using AI Persuasion to Reduce Political
Polarization
Job Market Paper
Political polarization is a growing problem in democratic societies, negatively affecting everything from personal behavior to the functioning of institutions. This paper investigates a new way to reduce polarization: AI-powered persuasion. In a pre-registered randomized controlled trial with a representative sample of the US population, I show that conversational AI agents can persuade some people to adopt more moderate views on the issue of U.S. support for Ukraine. As a result, overall ideological polarization in the sample is reduced by about 20 percentage points. This depolarization effect was still present in an obfuscated follow-up study one month later. These findings suggest that AI-powered persuasion could be a useful tool in efforts to reduce polarization.
-
Advised by an Algorithm: Learning with Different
Informational Resources and Reactions to Heterogeneous
Advice Quality
Joint work with Jan Biermann and John Horton.
In a wide range of settings, decision-makers increasingly rely on algorithmic tools for support. Often, the algorithm serves as an advisor, leaving the final decision to human judgment. We focus on two aspects of this setting: first, identifying the informational resources that help individuals evaluate algorithmic guidance, and second, exploring how humans react to algorithmic advice of varying quality. To address these questions, we conducted an online experiment with 1,565 participants. In the baseline treatment, subjects repeatedly perform the same estimation task and receive algorithmic guidance, without knowing the type of algorithm and without feedback after each round. We then introduce two interventions aimed at improving the quality of human decisions made with algorithmic advice. In the first intervention, we explain how the algorithm works. We find that while this intervention reduces adherence to the algorithmic advice, it does not improve decision-making performance. In the second intervention, we disclose the correct answer to the task after each round. This intervention reduces adherence to the algorithmic advice and improves human decision-making performance. Furthermore, we investigate the extent to which individuals can adjust their assessment of the algorithm when advice quality fluctuates due to external circumstances. We find some evidence that individuals assess algorithmic advice thoughtfully, adjusting their adherence to the quality of the algorithmic recommendations.
-
The Effect of AI on the Demand for Human Expertise
Joint work with Sebastian Valet.
This paper investigates the impact of consumer AI adoption, specifically of ChatGPT, on the demand for human expertise. Using a two-part methodology, we first analyze extensive observational data from over 100,000 users to assess the downstream effects of ChatGPT adoption. We find a significant reduction in visits to websites offering human expertise, such as WebMD and Quora, following AI adoption. In the second part of the study, we conduct an online lab experiment to explore the underlying mechanisms. Participants are tasked with finding information about a disease they currently have and are randomized into three groups: one interacting with an AI, one using online search, and a control group with no assistance. The outcome measure is the self-reported likelihood of visiting a doctor. Participants in the AI treatment report a significantly lower probability of seeking medical advice than those in the online-search and control groups. These findings suggest that AI expertise may crowd out human knowledge, with profound implications for regulation and the labor market.
-
Turing Markets
Joint work with Dominik Rehse and Sebastian Valet.
Abstract: ??
-
Incentives and Economics of Data Sharing
Joint work with Dominik Rehse and Sebastian Valet.
Abstract: ??