Is science rational, and if so, in virtue of what? Popper (1959), Kuhn (1962), Lakatos (1968), and Feyerabend (1975) took this to be the central question of philosophy of science. This question is traditionally answered by trying to formulate criteria for what it is rational for an individual scientist to believe in light of the evidence available to her. But by taking the evidence as given, this approach overlooks other decisions that scientists make.

My research considers what happens after evidence is collected, such as the scientist’s choice of when to share her findings, and how peer review may affect what gets reported. Just like the question of what to believe, these decisions and their outcomes may be epistemically appraised: we would like the social organization of science to be such that properly collected evidence and well-justified hypotheses are shared widely, and their opposites are not.

What is appraised here is not necessarily the decisions of individual scientists, but the way science is socially organized to promote (or fail to promote) the epistemic success of science. This approach to questions of rationality—called systems-oriented social epistemology by Goldman (2010)—is distinct from an approach focused on the epistemic rationality of a group (cf. Gilbert 1987, List and Pettit 2002). The latter considers a group as a single agent whose beliefs may be rational, whereas the former considers a heterogeneous collection of agents and asks whether prevailing practices are likely to promote epistemically praiseworthy beliefs in its members.

My claim is that, in order to understand the rationality of science, it is at least as important to understand its rationality as a social system as it is to understand the rationality of individual scientists. Given its complexity, the system of science must be studied piecemeal. I now discuss two more specific systems on which my research focuses.

The first system is the credit economy, i.e., the reward system of science. The primary reward for good scientific work takes the form of recognition or prestige, often referred to as credit. Credit is crucial to having a successful career as a scientist (Hull 1988, Kitcher 1993).

Because the (conscious or unconscious) desire to receive credit for their work has such a strong influence on scientists’ behavior, an important question is whether behavior guided by this desire is likely to be epistemically successful. Previous work (e.g., Kitcher 1993, Strevens 2003) focused on scientists’ choice of methodology and argued that when scientists selfishly aim for credit the progress of science may be greater than when their motivations are more “epistemically pure”. Where Kitcher and Strevens presented the good news, my research identifies both positive and negative effects of the credit economy.

Consider the social norms that scientists take themselves to be held to, such as universalism (scientific claims should be evaluated impartially) and disinterestedness (scientists should be unbiased). Does the credit economy incentivize scientists to conform to these norms? And do these norms contribute to or detract from the rationality of science as a system?

In Heesen (2017) I address these questions for the communist norm, which calls on scientists to share their results widely rather than keep them secret. I argue that the communist norm contributes to the rationality of science, and that credit motivates scientists to conform to it.

Another question about the credit economy concerns the role of the priority rule—the winner-takes-all rule under which only the first scientist to make a discovery gets credit for it. This rule can be implemented in different ways. Depending on what counts as a discovery and how much credit is given for different discoveries, different “priority rules” and hence different incentive structures are created. In future research I intend to carry out a systematic investigation of the upsides and downsides of different priority rules.
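The idea that different “priority rules” create different incentive structures can be made concrete with a toy sketch. The code below is purely illustrative and not drawn from any of the papers cited: the finishing times are hypothetical inputs, and the rule names are my own labels. It contrasts the strict winner-takes-all rule with a softer (hypothetical) rule that divides credit in inverse proportion to how long each scientist took.

```python
# Toy illustration: two ways a "priority rule" might allocate one unit of
# credit among scientists racing toward the same discovery.

def allocate_credit(finish_times, rule="winner_takes_all"):
    """Return each scientist's share of one unit of credit."""
    n = len(finish_times)
    if rule == "winner_takes_all":
        # Only the first discoverer is rewarded; everyone else gets nothing.
        winner = finish_times.index(min(finish_times))
        return [1.0 if i == winner else 0.0 for i in range(n)]
    elif rule == "inverse_time":
        # A softer, hypothetical rule: earlier finishers get more credit,
        # but latecomers still receive a positive share.
        weights = [1.0 / t for t in finish_times]
        total = sum(weights)
        return [w / total for w in weights]
    raise ValueError(f"unknown rule: {rule}")

print(allocate_credit([2.0, 3.0, 6.0]))
# [1.0, 0.0, 0.0]
print(allocate_credit([2.0, 3.0, 6.0], rule="inverse_time"))
# [0.5, 0.3333..., 0.1666...]
```

Even this crude contrast shows why the choice of rule matters: under winner-takes-all, a scientist who expects to finish second has no credit incentive to continue, whereas under a proportional rule she does.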

The second system is peer review. The purpose of peer review is to select epistemically praiseworthy scientific contributions for publication, but it does so imperfectly. In my working paper “Why the Reward Structure of Science Makes Reproducibility Problems Inevitable” I argue that in combination with the credit economy this leads to epistemic problems, in particular difficulties in reproducing scientific results.

In Heesen (forthcoming) I identify circumstances in which peer review can be biased for or against groups of scientists without any individual editor or reviewer being biased. Future work will generalize this finding and identify ways of measuring and addressing “purely statistical” biases in peer review.

My investigations of the social epistemology of science use mathematical models, drawing mainly upon game theory. Game theory is a particularly suitable tool for this purpose, as it analyzes how individual success interacts with aggregate success in a multi-agent system.
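To make the kind of interaction at issue concrete, consider a minimal game-theoretic sketch. The payoffs below are toy numbers of my own choosing, not those of any model in the papers discussed: two scientists each choose whether to share or withhold a result, and the numbers are set up so that individually rational play diverges from the collectively best outcome. (Heesen 2017 argues that under the actual credit economy the incentives can instead favor sharing; this sketch only illustrates how individual and aggregate success can come apart in principle.)

```python
# Toy two-player game with illustrative payoffs (my own, purely hypothetical).
ACTIONS = ("share", "withhold")

# payoff[(a1, a2)] = (payoff to scientist 1, payoff to scientist 2)
payoff = {
    ("share", "share"):       (3, 3),
    ("share", "withhold"):    (1, 4),
    ("withhold", "share"):    (4, 1),
    ("withhold", "withhold"): (2, 2),
}

def is_nash(a1, a2):
    """A profile is a Nash equilibrium if neither player gains by
    unilaterally switching to the other action."""
    p1, p2 = payoff[(a1, a2)]
    best1 = max(payoff[(b, a2)][0] for b in ACTIONS)
    best2 = max(payoff[(a1, b)][1] for b in ACTIONS)
    return p1 == best1 and p2 == best2

equilibria = [(a1, a2) for a1 in ACTIONS for a2 in ACTIONS if is_nash(a1, a2)]
best_total = max(payoff, key=lambda k: sum(payoff[k]))
print(equilibria)   # [('withhold', 'withhold')]
print(best_total)   # ('share', 'share')
```

With these payoffs the unique equilibrium is mutual withholding, while mutual sharing maximizes the aggregate payoff: precisely the gap between individual and collective success that game-theoretic models of science are built to analyze.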

Changing a model parameter or assumption is relatively easy, whereas experimenting with the real social structure of science is virtually impossible. This flexibility allows me to ask counterfactual questions of the form: “Would changing this feature of the social structure of science help or harm the rationality of science?” In this way I tie philosophical questions (“What is the epistemic role of the social structure of science?”) closely to practical questions (“How could certain policies help or harm science?”).

The use of mathematical models comes with its own set of methodological questions. Do models teach us anything about the world, and if so, how? In extant views judgments of similarity between the model and the world play an important role (e.g., Sugden 2000, Mäki 2009, Weisberg 2013).

In future work I will develop a view that does not rely on judgments of similarity. My view reinterprets robustness as a property of sets of models rather than a property of particular models. Some of the basic ideas are set out in Heesen et al. (forthcoming).


  • Paul Feyerabend. Against Method. New Left Books, London, 1975.
  • Margaret Gilbert. Modelling Collective Belief. Synthese, 73(1):185–204, 1987.
  • Alvin I. Goldman. Systems-Oriented Social Epistemology. In Tamar Szabó Gendler and John Hawthorne, editors, Oxford Studies in Epistemology, volume 3, chapter 8, pages 189–214. Oxford University Press, Oxford, 2010.
  • Remco Heesen. Communism and the Incentive to Share in Science. Philosophy of Science, 84(4):698–716, 2017.
  • Remco Heesen. When Journal Editors Play Favorites. Philosophical Studies, forthcoming.
  • Remco Heesen, Liam Kofi Bright, and Andrew Zucker. Vindicating Methodological Triangulation. Synthese, forthcoming.
  • David L. Hull. Science as a Process: An Evolutionary Account of the Social and Conceptual Development of Science. The University of Chicago Press, Chicago, 1988.
  • Philip Kitcher. The Advancement of Science: Science without Legend, Objectivity without Illusions. Oxford University Press, Oxford, 1993.
  • Thomas S. Kuhn. The Structure of Scientific Revolutions. The University of Chicago Press, Chicago, 1962.
  • Imre Lakatos. Criticism and the Methodology of Scientific Research Programmes. Proceedings of the Aristotelian Society, 69:149–186, 1968.
  • Christian List and Philip Pettit. Aggregating Sets of Judgments: An Impossibility Result. Economics and Philosophy, 18:89–110, 2002.
  • Uskali Mäki. Realistic Realism about Unrealistic Models. In Harold Kincaid and Don Ross, editors, The Oxford Handbook of the Philosophy of Economics, chapter 4, pages 68–98. Oxford University Press, Oxford, 2009.
  • Karl Popper. The Logic of Scientific Discovery. Hutchinson, London, 1959.
  • Michael Strevens. The Role of the Priority Rule in Science. The Journal of Philosophy, 100(2):55–79, 2003.
  • Robert Sugden. Credible Worlds: The Status of Theoretical Models in Economics. Journal of Economic Methodology, 7(1):1–31, 2000.
  • Michael Weisberg. Simulation and Similarity: Using Models to Understand the World. Oxford University Press, Oxford, 2013.