Wednesday April 12

Room DZ 003

9:00-9:30 Registration
9:30-9:45 Welcome words
9:45-10:35 Roger Cooke
Committees in Uncertainty: the confidence trap
10:35-11:05 Coffee Break (30 min)
11:05-11:45 John Beatty
Consensus: Sometimes It Doesn’t Add Up
11:45-12:25 Lucie Edwards
Regulating designer genes: Is an intergovernmental science panel, à la the IPCC, the solution? [slides]
12:30-14:00 Lunch
14:00-14:40 Jason Alexander & Julia Morley
Extra-deliberational influences on expert decision making [slides]
14:40-15:20 Cyrille Imbert
No need for a secret ballot? How to reduce reputational cascades in expert committees [slides]
15:20-15:50 Coffee Break (30 min)
15:50-16:30 Jan-Willem Romeijn
Stein’s paradox and group rationality [slides]
16:30-17:20 Behnam Taebi
Rawls’ Wide Reflective Equilibrium as a method for engaged interdisciplinary collaboration: Potentials and limitations for the context of technological risks [slides]
17:30-19:00 Reception
20:00 Dinner, Restaurant Anvers


Thursday April 13

Room DZ 003

9:30-10:20 Rafaela Hillerbrand
How the IPCC is its own worst enemy. The limits of communicating scientific uncertainties and how they impact the composition of scientific expert committees [slides]
10:20-11:00 Haris Shekeris
Scientific expert committees, wicked problems and procedure [slides]
11:00-11:30 Coffee Break (30 min)
11:30-12:20 Rida Laraki
Majority Judgment vs. Majority Rule [slides]
12:30-14:00 Lunch
14:00-14:40 Michael Morreau
Diverse grading standards can improve the performance of expert panels [slides]
14:40-15:20 Thomas Boyer-Kassem
Scientific expertise and risk aggregation with threshold [slides]
15:20-15:50 Coffee Break (30 min)
15:50-16:40 Franz Dietrich
A theory of Bayesian groups [slides]
Farewell Drinks
Roger M. Cooke (Delft University of Technology & Washington)
Committees in Uncertainty: the confidence trap
The title echoes the book Experts in Uncertainty. Based on the speaker’s experience as Lead Author for the chapter Risk and Uncertainty in the IPCC’s recent Fifth Assessment Report, there has been little progress in committee methods for dealing with uncertainty since Herman Kahn’s deplorable book On Thermonuclear War (the model for Dr Strangelove). Both Kahn and the IPCC rush headlong into the confidence trap: thinking that “high confidence in A, high confidence in B, …, high confidence in Z” is the same as “high confidence in A and B and … and Z”. Not only natural language but also several “alternative uncertainties” set and spring the confidence trap. An alternative of science-based uncertainty quantification pivots on the notion of rational consensus: interlocutors agree on a method of uncertainty combination that satisfies necessary conditions of the scientific method, most notably empirical control. The method is then exercised. Interlocutors need not adopt the rational consensus as their personal beliefs, but withdrawing from the rational consensus post hoc incurs a burden of proof: show that better instantiations of the scientific method are at hand (see supplementary material). This approach is slowly gaining ground.
Over three hundred referenced publications describe applications in nuclear safety, civil aviation/aerospace, ecosystems/public health, natural hazards, banking and finance, information security and climate. The most recent developments concern out-of-sample validation.

Franz Dietrich (CNRS Paris)
A theory of Bayesian groups
A group is often construed as a single agent with its own probabilistic beliefs (credences), which are obtained by aggregating those of the individuals, for instance through averaging. In their celebrated contribution “Groupthink”, Russell, Hawthorne and Buchak (2015) apply the Bayesian paradigm to groups by requiring group credences to undergo a Bayesian revision whenever new information is learnt, i.e., whenever the individual credences undergo a Bayesian revision based on this information. Bayesians should often strengthen this requirement by extending it to non-public or even private information (learnt by not all or just one individual), or to non-representable information (not corresponding to an event in the algebra on which credences are held). I propose a taxonomy of six kinds of “group Bayesianism”, which differ in the type of information for which Bayesian revision of group credences is required: public representable information, private representable information, public non-representable information, and so on. Six corresponding theorems establish exactly how individual credences must (not) be aggregated such that the resulting group credences obey group Bayesianism of any given type, respectively. Aggregating individual credences through averaging is never permitted. One theorem – the one concerned with public representable information – is essentially Russell et al.’s central result (with minor corrections).
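The claim that averaging is never permitted can be illustrated with a small numeric check (a sketch with invented credences, not drawn from the paper): linear averaging of individual credences does not commute with Bayesian conditioning, so the averaged group credence fails to revise in a Bayesian way on public information.

```python
def condition(cred, event):
    # Bayesian conditioning: restrict to the event's worlds and renormalize
    total = sum(cred[w] for w in event)
    return {w: cred[w] / total for w in event}

def average(c1, c2):
    # Linear (equal-weight) pooling of two credence functions
    return {w: (c1[w] + c2[w]) / 2 for w in c1}

# Two agents' credences over three possible worlds (invented numbers)
alice = {"w1": 0.6, "w2": 0.2, "w3": 0.2}
bob   = {"w1": 0.1, "w2": 0.5, "w3": 0.4}
event = ["w1", "w2"]          # publicly learnt, representable information

pool_then_update = condition(average(alice, bob), event)
update_then_pool = average(condition(alice, event), condition(bob, event))

print(pool_then_update["w1"])  # 0.5
print(update_then_pool["w1"])  # 11/24 ≈ 0.458 — the two orders disagree
```

Since the two orders of operation give different group credences, a group whose credence is the linear average cannot obey Bayesian revision in general.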
Full text:

Rafaela Hillerbrand (KIT Karlsruhe)
How the IPCC is its own worst enemy. The limits of communicating scientific uncertainties and how they impact the composition of scientific expert committees
Since the 1970s the ballpark figure for the predicted temperature increase due to man-made greenhouse gas emissions has remained roughly the same, at around 2 °C over the twenty-first century. Despite this, it seems overwhelmingly difficult to derive political decisions from it. The vast majority of scientists agree that we need to take immediate action in order to prevent unforeseen and unprecedented damage to many areas essential for human life, from changes in growing seasons to changes in coastlines. The IPCC, the Intergovernmental Panel on Climate Change, can be seen as a forum that articulates the view of the scientific community and explicitly addresses policy makers. Why, then, is making a decision so hard when the scientific evidence is overwhelming? There is a vast literature that addresses this question from various angles, from moral psychology explaining the occurrence of free riders to political theory with its focus on injustices in current climate negotiations.
In this paper I look at the question of why decision making in the face of global warming is so difficult from the perspective of philosophy of science. I argue that the current scientific policy advice offered by the IPCC goes astray due to its too narrow disciplinary approach, which excludes non-scientists from the panels. Communicating the uncertainties associated with model results is essential in climatology and other areas of applied science. It is argued that there are certain limits to communicating uncertainties to people outside one’s own narrow discipline. This, I contend, necessitates a more interdisciplinary setup of expert committees that also includes decision makers and experts from the social sciences, humanities and possibly theology.

Rida Laraki (Université Paris Dauphine)
Majority Judgment vs. Majority Rule
The validity of majority rule in an election with but two candidates—and so also of Condorcet consistency—is challenged. Axioms based on evaluating candidates—paralleling those of K. O. May characterizing majority rule for two candidates based on comparing candidates—lead to another method, majority judgment, that is unique in agreeing with the majority rule on pairs of “polarized” candidates. It is a practical method that accommodates any number of candidates, avoids both the Condorcet and Arrow paradoxes, and best resists strategic manipulation. It may also be viewed as a “solution” to Dahl’s (reformulated) intensity problem in that an intense minority sometimes defeats an apathetic majority.
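The core of the method can be sketched in a few lines (invented ballots; Balinski and Laraki’s full procedure adds a tie-breaking rule beyond the plain median): each voter grades every candidate on a common scale, and the winner is the candidate with the highest median grade.

```python
import statistics

# Invented ballots: each voter grades every candidate on a 0–5 scale
ballots = {
    "A": [5, 4, 1, 1, 0],   # intensely loved by a minority
    "B": [3, 3, 3, 2, 2],   # solid middling grades from everyone
    "C": [5, 5, 0, 0, 0],
}

# Majority grade = median of the grades a candidate receives
majority_grade = {c: statistics.median(g) for c, g in ballots.items()}
winner = max(majority_grade, key=majority_grade.get)
print(majority_grade, winner)  # B wins with majority grade 3
```

Because the median, not the sum, decides, a few extreme grades cannot drag a candidate’s standing up or down, which is what gives the method its resistance to strategic manipulation.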
Full text:

Behnam Taebi (Delft University of Technology)
Rawls’ Wide Reflective Equilibrium as a method for engaged interdisciplinary collaboration: Potentials and limitations for the context of technological risks
Based on a paper jointly written with Neelke Doorn
The introduction of new technologies in society is sometimes met with public resistance. Supported by public policy calls for “upstream engagement” and “responsible innovation”, recent years have seen a notable rise in attempts to attune research and innovation processes to societal needs so that stakeholders’ concerns are taken into account in the design phase of technology. Both within the social sciences and in the ethics of technology, we see many interdisciplinary collaborations being initiated that aim to address tensions between various normative expectations about science and engineering and the actual outcomes. However, despite pleas to integrate social science research into the ethics of technology, effective normative models for assessing technologies are still scarce. Rawls’ Wide Reflective Equilibrium (WRE) is often mentioned as a promising approach to integrate insights from the social sciences in the normative analysis of concrete cases, but an in-depth discussion of how this would work in practice is still lacking. In this paper, we explore to what extent the WRE method can be used in the context of technological risks. Using cases in engineering and technology development, we discuss three issues that are currently neglected in the applied ethics literature on WRE.

Contributing Speakers:

John Beatty, Canada, University of British Columbia
Consensus: Sometimes It Doesn’t Add Up
Perhaps the most familiar notion of “consensus” involves some sort of counting, resulting in unanimity or a majority. But some important forms of consensus are very different from this. I will consider one form, practiced in a wide variety of settings, that is more collective than aggregative. I will focus on the manner in which this sort of consensus portrays the epistemic state of a community of experts, without revealing differences among its members. Such apparent consensus can mask considerable disagreement. I will also discuss this form of consensus with reference to the U.N. Intergovernmental Panel on Climate Change.

Jan-Willem Romeijn, Netherlands, Faculty of Philosophy, University of Groningen
Stein’s paradox and group rationality
This paper contributes to the lively literature on formal social epistemology. It presents a puzzle from the statistics literature, known as Stein’s paradox, and explains this paradox by reference to a discussion on the aggregation of probabilistic expert judgments. The novelty of the paper resides in applying the lessons from Stein’s paradox in the context of social epistemology. This delivers insights into the role of diversity in the aggregation of judgments.
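Stein’s paradox itself is easy to demonstrate numerically (a sketch under the standard textbook setting, with invented true means: p ≥ 3 independent normal observations with unit variance). The James–Stein estimator shrinks the raw observations toward the origin and attains lower expected squared error than taking the observations at face value:

```python
import random

def james_stein(z):
    # Shrink the observation vector toward the origin by the factor
    # (1 - (p - 2) / ||z||^2), where p is the dimension
    p = len(z)
    norm_sq = sum(x * x for x in z)
    return [(1 - (p - 2) / norm_sq) * x for x in z]

random.seed(0)
means = [1.0, -0.5, 2.0, 0.0, 0.5]   # invented true means, p = 5
trials = 20000
mle_err = js_err = 0.0
for _ in range(trials):
    z = [random.gauss(m, 1.0) for m in means]      # one noisy observation each
    js = james_stein(z)
    mle_err += sum((a - m) ** 2 for a, m in zip(z, means))
    js_err  += sum((a - m) ** 2 for a, m in zip(js, means))
print(mle_err / trials, js_err / trials)  # shrinkage achieves the lower risk
```

The paradoxical point is that shrinking every estimate toward a common point helps even though the quantities being estimated are unrelated, which is what makes the result suggestive for judgment aggregation and the role of diversity.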

Thomas Boyer-Kassem, Netherlands, TiLPS, Tilburg University
Scientific expertise and risk aggregation with threshold
When scientists are asked to give expert advice on pressing risk-related questions, deliberation often does not eliminate all disagreements between scientists. I propose to model the remaining discrepancies in the answers to a binary question as differences in risk assessments and/or in risk-acceptability thresholds. The normative question I consider, then, is how the individual expert views should be aggregated, and I discuss what the “best” group decision is. In particular, I assess the merits of the majority rule, which is currently often used in expert panels.
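A toy example (invented numbers, not taken from the paper) shows why the choice of aggregation rule matters once a threshold is involved: taking a majority vote over the experts’ individual yes/no verdicts can disagree with first pooling their risk estimates and then applying the threshold.

```python
# Invented setting: each expert reports a risk estimate (probability of harm)
# and all share the same acceptability threshold.
estimates = [0.45, 0.45, 0.90]
threshold = 0.5

# Rule 1: majority rule over the individual verdicts
verdicts = [p >= threshold for p in estimates]
majority_says_risky = sum(verdicts) > len(verdicts) / 2

# Rule 2: pool (average) the estimates first, then apply the threshold
pooled = sum(estimates) / len(estimates)
pooled_says_risky = pooled >= threshold

print(majority_says_risky, pooled_says_risky)  # False True — the rules disagree
```

Here two experts sit just below the threshold while one sits far above it, so majority rule calls the situation acceptable while the pooled estimate does not; which answer counts as the “best” group decision is exactly the normative question at issue.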

Cyrille Imbert, France, Archives Poincaré, CNRS, Université de Lorraine;
Joint with:
    Vincent Chevrier, France, LORIA;
    Christine Bourjot, France, LORIA;
    Thomas Boyer-Kassem, Netherlands, TiLPS, Tilburg University
No need for a secret ballot? How to reduce reputational cascades in expert committees
People sometimes misrepresent their opinions because others have expressed opposite views and public disagreement comes with costs. Arguably, this also affects experts in committees, who may align with other speakers beyond what their trust in them should allow. To assess this effect, we propose a model of sequential deliberation, which enables us to analyse the influence of various parameters and suggests four ways to reduce the effects of opinion misrepresentation: (i) allow experts to express fine-grained opinions; (ii) have experts speak in specific orders; (iii) hold a sufficient number of table rounds; (iv) encourage a friendly deliberative atmosphere.

Michael Morreau, Norway, UiT-The Arctic University of Norway
Diverse grading standards can improve the performance of expert panels
The method of supergrading is introduced for deriving a ranking of items from scores or grades awarded by several people. There is no need for a common vocabulary of grades, and diversity in grading standards is an advantage, enabling rankings derived by this method to separate more items from one another. Precise notions are developed of individual and collective ability in solving grading problems. It is shown that the collective ability of a supergrading group with diverse standards can be greater than that of a less diverse group whose members have greater ability.

Lucie Edwards, Canada, Balsillie School of International Affairs
Regulating designer genes: Is an intergovernmental science panel, à la the IPCC, the solution?
George Rosenau called scientists a cadre of “cosmopolitan world citizens” operating above and beyond the nation state, ushering in a “new age” when the “science of modeling through” problems will replace the age-old practice of “muddling through” global issues. This paper analyzes the role of intergovernmental science panels, notably the IPCC, in managing problems where “facts are uncertain, values in dispute, stakes are high, and decisions urgent”. It will explore the feasibility of applying the methods developed by the intergovernmental science panels to another challenging global issue: regulating the development of new technology (CRISPR) to modify the human genome.

Jason Alexander, United Kingdom, London School of Economics and Political Science &
Julia Morley, United Kingdom, London School of Economics and Political Science
Extra-deliberational influences on expert decision making
Recent work which examines group decision-making in scientific expert committees has generally viewed the decision-making process as a self-contained deliberation problem addressed by ideal rational agents who are solely concerned with reaching the truth. In this paper, we argue that a number of practical, real-world considerations interact with scientific judgements so as to challenge these assumptions in fundamental ways, necessitating important changes in how decision-making processes are designed. We base our argument upon several case studies of scientific decision making by an international financial reporting regulator, the International Accounting Standards Board (IASB), which show routine violations of these idealising assumptions.

Haris Shekeris, France, Université Grenoble-Alpes
Scientific expert committees, wicked problems and procedure
I will argue that scientific expert committee deliberation may be adequately analysed as an ideal deliberation in the sense described by deliberative democracy theorists, and that pure proceduralism about the decision-making ought to be adopted, with disciplinary and cultural diversity as well as sortition as optimizing conditions. I base my thesis on four assumptions: that the problems tackled in such situations are wicked problems, that the deliberators are ideal, that the committees are plural subjects, and finally that the deliberators are accountable and share responsibility for their decisions.