The ‘Will of the People’? (2018-19)
When we act on false beliefs, we’re not doing what we want. After all, had we been informed, we would (typically) have acted differently, and arguably more in line with our true desires. The same goes for an electorate acting on false beliefs: it’s not acting in the way it wants, and would (typically) have acted differently were it informed. Given widespread and well-documented public ignorance on political matters, it follows that we cannot read off the ‘will of the people’ from electoral outcomes. As part of this project, I illustrate this point by using British Election Study (BES) data to estimate what the 2016 EU referendum would have looked like, had the electorate been fully informed, as operationalised through a set of knowledge items administered as part of the BES survey. This follows a fairly established tradition in political science of studying ‘enlightened preferences.’ However, given scepticism about deriving causal (or counterfactual) claims from observational data, I also attempt to validate the model through an experimental study—conducted together with researchers at the National Institute of Economic and Social Research (NIESR) and Oxford’s Centre on Migration, Policy, and Society (COMPAS)—that finds an information effect on immigration policy preferences. In light of this, any government seeking to implement policies endorsed by an uninformed electorate isn’t obviously channelling ‘the will of the people.’
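By way of illustration, the counterfactual exercise can be sketched roughly as follows. The data here are synthetic stand-ins, not the actual BES variables, and every name and coefficient is invented for the example: fit a vote model that includes a knowledge score, then re-predict each respondent's vote with the score set to its maximum ('fully informed') and compare aggregate vote shares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for BES-style data: each row is a voter with
# demographic covariates and a knowledge score in [0, 1] (the share of
# knowledge items answered correctly). All of this is invented.
n = 5000
demog = rng.normal(size=(n, 2))          # e.g. standardised age, education
knowledge = rng.uniform(0, 1, size=n)    # knowledge-scale score

# Assume, purely for illustration, that knowledge shifts the vote:
logit = 0.5 * demog[:, 0] - 0.3 * demog[:, 1] - 1.2 * knowledge + 0.4
vote = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(float)

# Fit a logistic regression by gradient descent (no external libraries).
X = np.column_stack([np.ones(n), demog, knowledge])
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - vote) / n

# Counterfactual: set every voter's knowledge score to the maximum while
# holding demographics fixed, then compare predicted aggregate shares.
X_informed = X.copy()
X_informed[:, -1] = 1.0
actual_share = (1 / (1 + np.exp(-X @ w))).mean()
informed_share = (1 / (1 + np.exp(-X_informed @ w))).mean()
print(f"predicted vote share, as observed:     {actual_share:.2f}")
print(f"predicted vote share, fully informed:  {informed_share:.2f}")
```

The gap between the two predicted shares is the 'information effect' at the aggregate level; the real analysis would of course use the actual survey covariates, weights, and a more careful model specification.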
self-resolving information markets (2018-19)
Information markets are online platforms on which people place bets on future or otherwise unknown events. Over the past couple of decades, the price signals arising on such markets have proven highly accurate (see, e.g., my piece in The Blackwell Companion to Applied Philosophy). But traditional information markets have a limitation: resolving them requires waiting until the event bet on takes place (or fails to take place). This makes it impossible to bet on events far into the future or on counterfactual events. That is, unless we set up a self-resolving market, where pay-offs are settled with reference to factors internal to the market. In a pair of papers in the Journal of Prediction Markets—one published (jointly with Nick Williams of Dysrupt Labs) and another forthcoming—I’ve shown that trading behaviour on self-resolving markets under fairly standard conditions comes out virtually identical to that on traditional markets. More specifically, the market profiles of otherwise identical traditional and self-resolving markets show significantly higher degrees of correlation than do randomly paired markets, and the average accuracies of the two types of market are practically equivalent. This suggests that self-resolving markets have the potential to match traditional markets in accuracy while shedding their limitations with respect to long-term predictions and counterfactuals.
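The matched-versus-random comparison can be sketched as follows, with simulated price paths standing in for real market data. The data-generating process is purely illustrative (two noisy observations of a shared latent signal, standing in for matched traditional and self-resolving markets on the same question):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate 50 pairs of market price paths. Each pair tracks the same
# latent signal (an invented stand-in for the underlying question),
# observed with independent trading noise in each market.
n_pairs, n_ticks = 50, 200
latent = np.cumsum(rng.normal(0, 0.02, size=(n_pairs, n_ticks)), axis=1)
traditional = 0.5 + latent + rng.normal(0, 0.01, size=(n_pairs, n_ticks))
self_resolving = 0.5 + latent + rng.normal(0, 0.01, size=(n_pairs, n_ticks))

def pair_corr(a, b):
    """Pearson correlation of each row of a with the same row of b."""
    return np.array([np.corrcoef(a[i], b[i])[0, 1] for i in range(len(a))])

# Correlation within matched pairs vs. within randomly re-paired markets.
matched = pair_corr(traditional, self_resolving)
shuffled = pair_corr(traditional, self_resolving[rng.permutation(n_pairs)])

print(f"mean correlation, matched pairs: {matched.mean():.2f}")
print(f"mean correlation, random pairs:  {shuffled.mean():.2f}")
```

On this toy setup, matched pairs correlate far more strongly than random pairings, which is the qualitative pattern the papers report for real markets; the published analyses rest on actual trading data rather than simulation.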
Reconciling Public Perceptions with the Economic Evidence on EU immigration (2017-18)
The UK has voted to leave the EU, and there is a consensus that concerns about immigration played an important role in the minds of many voters. This is in line with existing research, which for years has consistently shown high levels of concern among people in the UK over the scale of immigration and its impact on jobs, wages and services. At the same time, available economic evidence suggests that the impacts of immigration are very small, and likely positive. So, on the face of it, there is a disconnect between popular perceptions and available evidence here. Against this background, I worked as part of a Leverhulme-funded project with the National Institute of Economic and Social Research (NIESR) to better understand how people process economic evidence about the impact of immigration. We found, among other things, that people are less concerned about the quantity of immigrants than they are about their “quality”, and that there’s a clear evidence hierarchy behind how people form judgments about “quality”, with anecdotes at the top and aggregate statistics at the bottom, possibly explaining why standard forms of “myth busting” about immigration tend to be ineffective. More details can be found in the final report for the project.
epistemic consequentialism: Problems and prospects (2014-16)
Epistemic consequentialists believe that what's epistemically right (e.g., justified) is to be defined in terms of what's epistemically good (e.g., true belief). While highly controversial in ethics, consequentialism has arguably been widely accepted in epistemology -- up until very recently. As part of a British Academy-funded project, Jeff Dunn (DePauw) and I take on recent criticisms of epistemic consequentialism in a couple of papers (see 'A Defence of Epistemic Consequentialism', in Philosophical Quarterly, and 'Is Reliabilism a Form of Consequentialism?', in American Philosophical Quarterly), and also edit an anthology, entitled (fittingly enough) Epistemic Consequentialism, featuring some of the most recent and interesting work for and against epistemic consequentialism.
on cognitive outsourcing (2013-16)
I worked on the epistemology of cognitive outsourcing, i.e., of handing over your information collection and processing to others, together with researchers on Lund University's £1.7M project Knowledge in a Digital World, funded by the Swedish Research Council. My main contribution was published in Philosophical Issues as 'Is there a Problem with Cognitive Outsourcing?' (my main conclusion: it's not clear that there is), and will also be published in a shortened form in a special issue of the Italian philosophy journal Iride (under the title 'L’outsourcing cognitivo pone un problema epistemico?'), edited by Annalisa Coliva.
the epistemic virtue of deference (2012-13)
As part of Wake Forest University's Character Project, funded by the John Templeton Foundation, I worked on the epistemic virtue of deference -- a virtue manifested in so far as one listens to, and subsequently believes, those who know what they're talking about. My work here resulted in a number of publications, including 'Against the Bifurcation of Virtue' (forthcoming in Noûs), 'The Social Virtue of Blind Deference' (Philosophy and Phenomenological Research), 'Procedural Justice and the Problem of Intellectual Deference' (Episteme), and 'The Epistemic Virtue of Deference' (forthcoming in The Routledge Handbook of Virtue Epistemology, edited by Heather Battaly). In the near future, I hope to develop these papers into a book-length defence of a consequentialist virtue epistemology.
defending Epistemic Paternalism (2010-12)
We know that we are fallible creatures, prone to a variety of systematic reasoning mistakes. We also know that we all have a tendency to be overconfident about the accuracy of our judgments, as well as about our ability to overcome or avoid reasoning mistakes. In my book Epistemic Paternalism: A Defence (Palgrave Macmillan 2013), I argue that this dual tendency for bias and overconfidence gives us reason to accept that we are sometimes justified in interfering with the inquiry of others without their consent but for their own epistemic good. My forthcoming piece in the Routledge Handbook of the Philosophy of Paternalism, edited by Kalle Grill and Jason Hanna, gives a summary of the case offered in the book.