Kristoffer Ahlstrom-Vij | Ph.D.

Research Projects


the informationeffects package (2022-23)

In politics as elsewhere, what we know matters for what we want. This insight is largely due to the literature on so-called ‘information effects’, which looks to quantify differences between people’s actual political preferences and estimates of what they would prefer were they fully informed. At the same time, there is no established workflow for information effects research. That was the motivation for my informationeffects package, which provides a complete pipeline for estimating information effects in R, an open-source programming language for statistical computing. The package provides a variety of functions, enabling researchers to do everything from constructing a knowledge scale to implementing a counterfactual model with propensity scores in just a few lines of code. A full vignette demonstrating the package can be found here.
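The general counterfactual logic can be illustrated with a minimal sketch (in Python rather than R, on simulated data; all variable names are illustrative, and this is not the package’s actual API). The idea: model each respondent’s propensity to be well informed given demographics, then reweight the informed respondents by the inverse of that propensity, so that the informed subsample mirrors the full sample’s demographic composition:

```python
import random
random.seed(1)

# Simulate survey data: one binary demographic covariate (say, education),
# a political-knowledge indicator, and a binary preference.
n = 20000
data = []
for _ in range(n):
    edu = random.random() < 0.5
    # Higher education makes being well informed more likely.
    informed = random.random() < (0.7 if edu else 0.3)
    # The preference depends on both education and information.
    support = random.random() < (0.3 + 0.2 * edu + 0.3 * informed)
    data.append((int(edu), int(informed), int(support)))

# Step 1: the propensity to be informed given demographics. With a single
# binary covariate this is just the within-group share of informed respondents.
def propensity(edu_val):
    grp = [d for d in data if d[0] == edu_val]
    return sum(d[1] for d in grp) / len(grp)

ps = {0: propensity(0), 1: propensity(1)}

# Step 2: inverse-propensity weighting. Each informed respondent gets
# weight 1/p, so demographic groups underrepresented among the informed
# count for more.
num = sum(d[2] / ps[d[0]] for d in data if d[1] == 1)
den = sum(1 / ps[d[0]] for d in data if d[1] == 1)
counterfactual = num / den            # estimated fully informed support
observed = sum(d[2] for d in data) / n

print(f"observed support:       {observed:.3f}")
print(f"fully informed support: {counterfactual:.3f}")
print(f"information effect:     {counterfactual - observed:.3f}")
```

In real applications the propensity model is a regression on many demographic covariates rather than a single group share, but the reweighting step works the same way.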


Are pollsters asking the wrong question? (2021-22)

What is the best way to find out what people want? Ask them, not about their own preferences, but about the preferences of people in their social circles. But why? In a forthcoming paper in Electoral Studies, I argue that it is because asking about others taps into our social knowledge and thereby generates an implicit super sample that includes non-sampled members of participants’ social circles. In fact, I use a set of simulation studies to show that the superiority of social-circle surveys can be expected to be robust in the face of respondent selection bias, people being highly fallible about other people’s preferences (egocentric bias), and people largely surrounding themselves with those who share their preferences (homophily). In the paper, I also discuss the relationship between social-circle questions and the closely related expectation questions (e.g., “Who do you expect will win the election?”) typically found on prediction markets, which also tend to outperform traditional polls. The very fact that prediction markets ask an expectation question, and as such likely tap into implicit super samples, offers a particularly promising and parsimonious explanation of the accuracy of prediction market estimates.
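The super-sample intuition is easy to demonstrate with a toy simulation (illustrative only, and far simpler than the paper’s studies: contacts are drawn at random, so homophily is ignored, and egocentric bias is modelled as a flat misreporting rate). Each respondent reports on several contacts, so the same number of interviews implicitly covers many more individuals:

```python
import random
random.seed(2)

# A toy population in which 54% support candidate A.
N = 100000
population = [random.random() < 0.54 for _ in range(N)]
true_share = sum(population) / N

def direct_poll(k):
    """Ask k random people about their own preference."""
    sample = random.sample(range(N), k)
    return sum(population[i] for i in sample) / k

def social_circle_poll(k, circle=10, error=0.2):
    """Ask k random people about `circle` contacts each; any single
    report is wrong with probability `error` (a crude stand-in for
    egocentric bias)."""
    sample = random.sample(range(N), k)
    reports = []
    for _ in sample:
        contacts = random.sample(range(N), circle)
        perceived = [
            population[j] if random.random() > error else not population[j]
            for j in contacts
        ]
        reports.append(sum(perceived) / circle)
    return sum(reports) / k

k, trials = 200, 200
err_direct = sum(abs(direct_poll(k) - true_share) for _ in range(trials)) / trials
err_social = sum(abs(social_circle_poll(k) - true_share) for _ in range(trials)) / trials
print(f"mean abs error, direct poll:        {err_direct:.4f}")
print(f"mean abs error, social-circle poll: {err_social:.4f}")
```

Even with one report in five being wrong, the social-circle estimate typically beats the direct poll at the same interview count, because ten noisy observations per respondent reduce sampling variance faster than the misreporting noise inflates it. (Symmetric misreporting does pull estimates toward 50/50, a bias that a real analysis would need to correct for.)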


what would you believe if you were fully informed? (2019-20)

There has been a lot of talk recently about “fake news” and other forms of voter misinformation. Looking ahead to the 2020 Presidential election, I recently built a website, “If Informed,” where US voters can find out what political positions they likely would have taken, had they been fully informed. The underlying models use large-scale survey data from the American National Election Study to tease out what combinations of demographic factors and levels of political knowledge tend to go with what political attitudes. This also makes it possible to estimate what attitudes people would have taken, had they been just the way they are across those demographic factors, but highly politically informed. The website is part of a broader research project on the role of knowledge in political choice, and the underlying models are based on a long-standing tradition of research on so-called ‘information effects’ and a more recently established methodology of counterfactual modelling in the social sciences. The website is a collaboration with Alfred Malmros, and builds on a previous website that we built on the basis of British Election Study data in the lead-up to the 2019 General Election in the UK.



The ‘Will of the People’? (2018-20)

When we act on false beliefs, we’re not doing what we want. After all, had we been informed, we would (typically) have acted differently, and arguably more in line with our true desires. The same goes for an electorate acting on false beliefs: it’s not acting in the way it wants - and would (typically) have acted differently were it informed. Given widespread and well-documented public ignorance on political matters, it follows that we cannot read off the ‘will of the people’ from electoral outcomes. As part of this project, I illustrate this point by using British Election Study (BES) data to estimate what the 2016 EU referendum would have looked like, had the electorate been fully informed, as operationalised through a set of knowledge items administered as part of the BES survey. This follows a fairly established tradition in political science looking at ‘enlightened preferences.’ However, given skepticism about deriving causal (or counterfactual) claims from observational data, I also attempt to validate the model through an experimental study—conducted together with researchers at the National Institute of Economic and Social Research (NIESR) and Oxford’s Centre on Migration, Policy, and Society (COMPAS)—that finds an information effect on immigration policy preferences. In light of this, any government seeking to implement policies endorsed by an uninformed electorate isn’t obviously channelling ‘the will of the people.’


self-resolving information markets (2018-19)

Information markets are online platforms for people to place bets on future or otherwise unknown events. Over the past couple of decades, the price signals arising on such markets have proven highly accurate (see, e.g., my piece in The Blackwell Companion to Applied Philosophy). But there's a limit to traditional information markets: resolving them requires waiting until the event bet on takes place (or fails to take place). This makes it impossible to bet on events far into the future or on counterfactual events. That is, unless we set up a self-resolving market, where pay-offs are settled with reference to factors internal to the market. In a pair of papers in the Journal of Prediction Markets—one published (jointly with Nick Williams of Dysrupt Labs) and another one forthcoming—I’ve shown that trading behaviour on self-resolving markets under fairly standard conditions comes out virtually identical to that on traditional markets. More specifically, the market profiles of otherwise identical traditional and self-resolving markets show significantly higher degrees of correlation than do randomly paired markets, and the average accuracies of the two types of markets are practically equivalent. This suggests that self-resolving markets have the potential to match traditional markets in accuracy while shedding their limitations in relation to long-term predictions and counterfactuals.
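The difference between the two settlement rules can be made concrete with a toy market-maker simulation (entirely illustrative; the trading rule and parameters are my own assumptions, not the papers’ experimental setup). Traders arrive with noisy private estimates of the event’s probability and nudge the price toward their estimate; the two market types can share the very same price path and differ only in how a share is finally settled:

```python
import random
random.seed(3)

true_prob = 0.7      # probability of the underlying event
n_traders = 500
price = 0.5          # shared starting price
step = 0.05          # how far each trade moves the price

for _ in range(n_traders):
    # Each trader's belief is a noisy estimate of the true probability.
    belief = min(1.0, max(0.0, random.gauss(true_prob, 0.15)))
    # The trader nudges the price a fixed fraction toward their belief.
    price += step * (belief - price)

final_price = price
event = random.random() < true_prob

# Settlement per share, under the two rules:
traditional_payoff = 1.0 if event else 0.0   # settled by the outcome itself
self_resolving_payoff = final_price          # settled by the market itself

print(f"final price:           {final_price:.3f}")
print(f"traditional payoff:    {traditional_payoff:.1f}")
print(f"self-resolving payoff: {self_resolving_payoff:.3f}")
```

The point of the sketch: nothing in the price-formation loop refers to the settlement rule, so a self-resolving market can track a traditional one trade for trade; the open question the papers address empirically is whether real traders behave the same way once they know payoffs are internal to the market.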


Reconciling Public Perceptions with the Economic Evidence on EU immigration (2017-18)

The UK has voted to leave the EU, and there is a consensus that concerns about immigration played an important role in the minds of many voters. This is in line with existing research, which for years has shown consistently high levels of concern among people in the UK over the scale of immigration and its impact on jobs, wages and services. At the same time, available economic evidence suggests that the impacts of immigration are very small, and likely positive. So, on the face of it, there is a disconnect between popular perceptions and available evidence here. Against this background, I worked as part of a Leverhulme-funded project with the National Institute of Economic and Social Research (NIESR) to better understand how people process economic evidence about the impact of immigration. We found, among other things, that people are less concerned about the quantity of immigrants than they are about their “quality”, and that there’s a clear evidence hierarchy behind how people form judgments about “quality”, with anecdotes on top and aggregate statistics at the bottom, possibly explaining why standard forms of “myth busting” about immigration tend to be ineffective. More details can be found in the final report for the project.


epistemic consequentialism: problems and prospects (2014-16)

Epistemic consequentialists believe that what's epistemically right (e.g., justified) is to be defined in terms of what's epistemically good (e.g., true belief). While highly controversial in ethics, consequentialism has arguably been widely accepted in epistemology -- up until very recently. As part of a British Academy-funded project, Jeff Dunn (DePauw) and I take on recent criticisms of epistemic consequentialism in a couple of papers (see 'A Defence of Epistemic Consequentialism', in Philosophical Quarterly, and 'Is Reliabilism a Form of Consequentialism?', in American Philosophical Quarterly), and also edit an anthology, entitled (fittingly enough) Epistemic Consequentialism, featuring some of the most recent and interesting work for and against epistemic consequentialism.



on cognitive outsourcing (2013-16)

I worked on the epistemology of cognitive outsourcing, i.e., of handing over your information collection and processing to others, together with researchers on Lund University's £1.7M project Knowledge in a Digital World, funded by the Swedish Research Council. My main contribution was published in Philosophical Issues as 'Is there a Problem with Cognitive Outsourcing?' (my main conclusion: it's not clear that there is), and will also be published in a shortened form in a special issue of the Italian philosophy journal Iride (under the title 'L’outsourcing cognitivo pone un problema epistemico?'), edited by Annalisa Coliva.


the epistemic virtue of deference (2012-13)

As part of Wake Forest University's Character Project, funded by the John Templeton Foundation, I worked on the epistemic virtue of deference -- a virtue manifested in so far as one listens to, and subsequently believes, those who know what they're talking about. My work here resulted in a number of publications, including 'Against the Bifurcation of Virtue' (forthcoming in Noûs), 'The Social Virtue of Blind Deference' (Philosophy and Phenomenological Research), 'Procedural Justice and the Problem of Intellectual Deference' (Episteme), and 'The Epistemic Virtue of Deference' (forthcoming in The Routledge Handbook of Virtue Epistemology, edited by Heather Battaly). In the near future, I hope to develop these papers into a book-length defence of a consequentialist virtue epistemology.


defending Epistemic Paternalism (2010-12)

We know that we are fallible creatures, prone to a variety of systematic reasoning mistakes. We also know that we all have a tendency to be overconfident about the accuracy of our judgments, as well as about our ability to overcome or avoid reasoning mistakes. In my book Epistemic Paternalism: A Defence (Palgrave Macmillan 2013), I argue that this dual tendency for bias and overconfidence gives us reason to accept that we are sometimes justified in interfering with the inquiry of others without their consent but for their own epistemic good. My forthcoming piece in the Routledge Handbook of the Philosophy of Paternalism, edited by Kalle Grill and Jason Hanna, gives a summary of the case offered in the book.