Published and Forthcoming


Cheating Death in Damascus (w/ Nate Soares)

The Journal of Philosophy, forthcoming. (Preprint)

Evidential and Causal Decision Theory are the leading contenders as theories of rational action, but both face fatal counterexamples. We present some new counterexamples, including one in which the optimal action is causally dominated. We also present a novel decision theory, Functional Decision Theory (FDT), which simultaneously solves both sets of counterexamples. Instead of considering which physical action of theirs would give rise to the best outcome, FDT agents consider which output of their decision function would give rise to the best outcome. The theory relies on a notion of subjunctive dependence, on which multiple implementations of the same mathematical function are taken (even counterfactually) to have identical outputs for logical rather than causal reasons. Taking these subjunctive dependencies into account allows FDT agents to outperform CDT and EDT agents in, e.g., the presence of accurate predictors. We note that, while it is not needed for the classic decision problems, a full specification of FDT will require a non-trivial theory of logical counterfactuals and algorithmic similarity.
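As a rough illustration of the predictor cases at issue (a minimal sketch of my own, not code from the paper), consider Newcomb's problem with a predictor of accuracy p. Evaluating fixed dispositions, as FDT recommends, favors one-boxing; holding the box's contents causally fixed, as CDT does, favors two-boxing:

```python
# Minimal sketch (not from the paper): Newcomb's problem with an
# imperfect predictor who guesses the agent's disposition with the
# given accuracy. The opaque box holds $1,000,000 if one-boxing is
# predicted, else $0; the transparent box always holds $1,000.

def expected_payoff(disposition: str, accuracy: float = 0.99) -> float:
    """Expected payoff of a fixed disposition: 'one-box' or 'two-box'."""
    total = 0.0
    for prediction in ("one-box", "two-box"):
        prob = accuracy if prediction == disposition else 1 - accuracy
        opaque = 1_000_000 if prediction == "one-box" else 0
        taken = opaque if disposition == "one-box" else opaque + 1_000
        total += prob * taken
    return total

print(expected_payoff("one-box"))   # 990000.0
print(expected_payoff("two-box"))   # 11000.0
```

Because the predictor's guess subjunctively depends on the agent's decision function, ranking dispositions this way vindicates one-boxing, even though two-boxing causally dominates once the boxes are filled.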


Some propositions are more epistemically important than others. Further, the epistemic importance of a proposition is often a contingent matter: some propositions count more in some worlds than in others. Epistemic Utility Theory (EUT) cannot accommodate this fact, at least not in any standard way. For EUT to be successful, legitimate measures of epistemic utility must be proper, i.e., every probability function must assign itself maximum expected utility. Once we vary the importance of propositions across worlds, however, normal measures of epistemic utility become improper. I argue that there isn't any good way out for EUT.
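For reference, the propriety constraint at issue, in standard notation (mine, not necessarily the paper's): a measure of epistemic utility U is proper just in case every probability function expects itself to do at least as well as any rival,

\[
\sum_{w} p(w)\, U(p, w) \;\ge\; \sum_{w} p(w)\, U(q, w) \quad \text{for all probability functions } p, q.
\]

A schematic example of world-contingent importance: weight each proposition X by a factor that varies with the world, as in the weighted Brier measure

\[
U(c, w) \;=\; -\sum_{X} \lambda_w(X)\,\big(c(X) - w(X)\big)^2,
\]

where w(X) is X's truth value at w. With world-dependent weights \(\lambda_w\), such measures are in general improper.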


Imprecise Epistemic Values and Imprecise Credences

Australasian Journal of Philosophy, forthcoming. (Preprint, Published Version)

A number of recent arguments purport to show that imprecise credences are incompatible with accuracy-first epistemology. If correct, this conclusion suggests a conflict between evidential and alethic epistemic norms. In the first part of the paper, I claim that these arguments fail if we understand imprecise credences as indeterminate credences. In the second part, I explore why agents with entirely alethic epistemic values can end up in an indeterminate credal state. Following William James, I argue that there are many distinct alethic values that a rational agent can have. Furthermore, such an agent is rationally permitted not to have settled on one fully precise value function. This indeterminacy in value will sometimes result in indeterminacy in epistemic behavior—that is, because the agent’s values aren’t settled, what she believes might not be.


According to accuracy-first epistemology, accuracy is the fundamental epistemic good. Epistemic norms (Probabilism, Conditionalization, the Principal Principle, and so on) have their binding force in virtue of helping to secure this good. To make this idea precise, accuracy-firsters invoke Epistemic Decision Theory (EPDT) to determine which epistemic policies are the best means toward the end of accuracy. Hilary Greaves and others have recently challenged the tenability of this programme. Their arguments purport to show that EPDT encourages obviously epistemically irrational behavior. We develop firmer conceptual foundations for EPDT. First, we detail a theory of praxic and epistemic good. Then we show that, in light of their very different good-making features, EPDT will evaluate epistemic states and epistemic acts according to different criteria. So, in general, rational preferences over states and rational preferences over acts won't agree. Finally, we argue, on the basis of direction-of-fit considerations, that it is preferences over states that matter for normative epistemology, and that EPDT, properly spelt out, arrives at the correct verdicts in a range of putative problem cases.


We use a theorem from M. J. Schervish to explore the relationship between accuracy and practical success. If an agent is pragmatically rational, she will quantify the expected loss of her credence with a strictly proper scoring rule. Which scoring rule is right for her will depend on the sorts of decisions she expects to face. We relate this pragmatic conception of inaccuracy to the purely epistemic one popular among epistemic utility theorists.
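The representation at work, in its standard binary form (a standard statement from the literature, not a quotation from the paper): every strictly proper scoring rule s arises from a measure \(\lambda\) on (0,1) that weights simple act-or-refrain decision problems by their stakes, via

\[
s(1, x) \;=\; \int_x^1 (1 - t)\, \lambda(dt), \qquad s(0, x) \;=\; \int_0^x t\, \lambda(dt),
\]

where x is the credence and 1/0 marks the proposition's truth value. Taking \(\lambda(dt) = 2\,dt\) recovers the Brier score: \(s(1,x) = (1-x)^2\) and \(s(0,x) = x^2\). On the pragmatic reading, \(\lambda\) encodes which decision problems the agent expects to face.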


Pettigrew offers new axiomatic constraints on legitimate measures of inaccuracy. His axiom called 'Decomposition' stipulates that legitimate measures of inaccuracy evaluate a credence function in part on the basis of its level of calibration at a world. I argue that if calibration is valuable, as Pettigrew claims, then this fact is an explanandum for accuracy-first epistemologists, not an explanans. I give three reasons. First, the intuitive case for the importance of calibration isn't as strong as Pettigrew believes. Second, calibration is a perniciously global property that both contravenes Pettigrew's own views about the nature of credence functions and undercuts the achievements and ambitions of accuracy-first epistemology. Third, Decomposition introduces a new kind of value, compatible with but separate from accuracy proper, in violation of Pettigrew's alethic monism.
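For orientation (the standard definition, not Pettigrew's exact formulation): a credence function c is perfectly calibrated at a world w just in case, for each value x that c assigns, the proportion of propositions assigned credence x that are true at w is x:

\[
\frac{\big|\{X : c(X) = x \text{ and } X \text{ is true at } w\}\big|}{\big|\{X : c(X) = x\}\big|} \;=\; x \quad \text{for each } x \in \mathrm{ran}(c).
\]

Roughly, Decomposition then requires inaccuracy at w to factor into how far c is from its best-calibrated counterpart at w and how far that counterpart is from the truth.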


Permissive Rationality and Sensitivity

Philosophy and Phenomenological Research, 2017. (Preprint, Published Version)

Permissivism about rationality is the view that there is sometimes more than one rational response to a given body of evidence. In this paper I discuss the relationship between permissivism, deference to rationality, and peer disagreement. I begin by arguing that, contrary to popular opinion, permissivism supports at least a moderate version of conciliationism. I then formulate a worry for permissivism: given a plausible principle of rational deference, permissive rationality seems to become unstable and to collapse into unique rationality. I conclude by formulating a way out of this problem on behalf of the permissivist.


I exploit formal measures of accuracy to prove two theorems. First, an agent should expect to give her peers equal weight; on one natural understanding of 'peer', that means she should expect to split the difference. Second, I show that splitting the difference will nevertheless tend to result in overly uncertain credences, i.e., credences too far from 0 or 1. Furthermore, if the agent takes herself and her advisor to be reliable, she should tend to give more weight to whichever party turns out to have the stronger opinion. These theorems combine to constrain both synchronic expectations and long-run behavior: an agent's responses to peer disagreement should, over the course of many disagreements, average out to equal weight, but in any particular disagreement her response should usually deviate from equal weight and depend on the actual credences she and her advisor report.
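In the notation I'll use here (mine, not necessarily the paper's), splitting the difference on a proposition X is the equal-weight revision

\[
c_{\text{new}}(X) \;=\; \tfrac{1}{2}\big(c_{\text{self}}(X) + c_{\text{peer}}(X)\big).
\]

The second theorem then says that, although this is what an agent should expect to do in advance, the revision that actually maximizes expected accuracy in a given case typically lies closer to the more opinionated party's credence, and so further from 1/2 than the straight average.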


Leitgeb and Pettigrew argue that (1) agents should minimize the expected inaccuracy of their beliefs and (2) inaccuracy should be measured via the Brier score. They show that in certain diachronic cases, these claims require an alternative to Jeffrey Conditionalization. I claim that this alternative is an irrational updating procedure and that the Brier score, and quadratic scoring rules generally, should be rejected as legitimate measures of inaccuracy.
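For reference, the two pieces of machinery at issue, in their standard formulations: the Brier score of a credence function c at a world w is

\[
B(c, w) \;=\; \sum_{X} \big(w(X) - c(X)\big)^2,
\]

with w(X) = 1 if X is true at w and 0 otherwise; and Jeffrey Conditionalization says that when experience shifts your credences over a partition \(\{E_i\}\) to new values \(q_i\), your new credence in any X should be

\[
c_{\text{new}}(X) \;=\; \sum_i q_i \, c_{\text{old}}(X \mid E_i).
\]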


Drafts


Consequentialist theories determine rightness solely on the basis of actual or expected consequences. Although such theories are popular, they often have difficulty with generalizing intuitions, which, in their pre-theoretic form, turn on the question "What if everybody did that?" When generalizing versions of consequentialism have been attempted, as with rule consequentialism, the results are messy. We claim that the conceptual apparatus currently employed in generalizing consequentialism is not adequate to the task. Just as decision theory is crucial to modern consequentialism for handling uncertainty, so too is it crucial for handling generalization. Here we present a relatively new decision theory, functional decision theory, and use it to sketch a theory of generalized act consequentialism. We argue that this theory is superior to rule consequentialism both in modeling the actual reasoning of generalizers and in delivering correct results.


Considerations of accuracy—the epistemic good of having credences close to truth-values—have led to the justification of a host of epistemic norms. These arguments rely on particular properties of the measures of accuracy, in particular, that the measures be strictly proper. However, the main argument for strict propriety supports only weak propriety. I show that strict propriety nevertheless follows from weak propriety given strict truth-directedness (which is non-negotiable) and additivity (which is both very common and plausible), so no further argument is necessary.
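Stated for an additive inaccuracy measure I built from a one-proposition score s (standard definitions, in my notation, not necessarily the paper's), the conditions relate as follows. Additivity: \(I(c, w) = \sum_X s(c(X), w(X))\). Strict truth-directedness: moving any credence strictly closer to its truth value at w strictly decreases inaccuracy at w. Weak propriety:

\[
\mathbb{E}_p[I(p, \cdot)] \;\le\; \mathbb{E}_p[I(q, \cdot)] \quad \text{for all probability functions } p, q,
\]

with strict propriety requiring the inequality to be strict whenever \(q \ne p\). The claim is that the first two conditions upgrade weak propriety to strict propriety.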


Higher-order evidence is evidence about whether you have handled your other evidence, rightly or wrongly, in accordance with rational requirements. According to the accommodationist position, you should generally respond to such evidence by adjusting your credences on first-order questions to account for your own potential irrationality. Although accommodationism is intuitive, it recommends some odd behavior, such as violating Conditionalization and flouting Good's Theorem. I argue that, on the accommodationist picture, some higher-order evidence is best understood as a kind of information loss akin to forgetting, which results in the same type of epistemic behavior.


We argue that securing informed consent requires not only that patients understand the probabilities of the various risks and benefits of proposed treatments or procedures, but also that physicians communicate how probability expressions (or other tools for representing uncertainty) are to be interpreted, the quality and quantity of the evidence for the probabilities reported, and how these probability claims might or might not be relevant to a patient's decisions. We conclude by considering two possibilities. Either patients cannot understand the difficult concepts and issues we discuss in this paper, and so cannot be genuinely informed of their risks, or patients can come to understand the relevant issues when properly advised, in which case those issues pose no principled barrier to obtaining informed consent. If patients cannot understand the relevant issues, then the informed consent requirement must be relaxed so as not to include a requirement of reporting probability claims to patients. If patients can come to understand the relevant issues when properly advised, then the medical community should train physicians to provide that advice and to help patients understand it (potentially with the aid of third-party patient-activist decision theorists who are members of broader patient activist groups).