How Modelers Model: the Overlooked Social and Human Dimensions in Model Intercomparison Studies
Author
Albanito, Fabrizio
McBey, David
Harrison, Matthew
Smith, Pete
Ehrhardt, Fiona
Bhatia, Arti
Bellocchi, Gianni
Brilli, Lorenzo
Carozzi, Marco
Christie, Karen
Dorich, Christopher
Doro, Luca
Grace, Peter
Grant, Brian
Léonard, Joël
Liebig, Mark
Ludemann, Cameron
Martin, Raphael
Meier, Elizabeth
Meyer, Rachelle
De Antoni Migliorati, Massimiliano
Myrgiotis, Vasileios
Recous, Sylvie
Sándor, Renáta
Snow, Val
Soussana, Jean-François
Smith, Ward N.
Fitton, Nuala
Publication date
2022-09-02
ISSN
1520-5851
Abstract
There is a growing realization that the complexity of model ensemble studies depends not only on the models used but also on the experience and approach used by modelers to calibrate and validate results, which remain a source of uncertainty. Here, we applied a multi-criteria decision-making method to investigate the rationale applied by modelers in a model ensemble study where 12 different types of process-based biogeochemical models were compared across five successive calibration stages. The modelers shared a common level of agreement about the importance of the variables used to initialize their models for calibration. However, we found inconsistency among modelers when judging the importance of input variables across different calibration stages. The subjective weighting attributed by modelers to calibration data decreased sequentially as the extent and number of variables provided increased. In this context, the perceived importance attributed to variables such as the fertilization rate, irrigation regime, soil texture, pH, and initial levels of soil organic carbon and nitrogen stocks was statistically different when classified according to model types. The importance attributed to input variables such as experimental duration, gross primary production, and net ecosystem exchange varied significantly according to the length of the modeler’s experience. We argue that the gradual access to input data across the five calibration stages negatively influenced the consistency of the interpretations made by the modelers, introducing cognitive bias into “trial-and-error” calibration routines. Our study highlights that overlooking human and social attributes can critically affect the outcomes of modeling and model intercomparison studies. While the complexity of the processes captured in the model algorithms and parameterization is important, we contend that (1) the modeler’s assumptions on the extent to which parameters should be altered and (2) modeler perceptions of the importance of model parameters are just as critical in obtaining a quality model calibration as numerical or analytical details.
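As an illustration of the kind of analysis described in the abstract, the minimal Python sketch below aggregates hypothetical importance ratings assigned by modelers to one input variable and tests whether the ratings differ across model types. The normalization step, the group and variable names, the ratings, and the choice of a Kruskal-Wallis test are all assumptions made for illustration; they do not reproduce the paper's actual multi-criteria decision-making method, data, or statistics.

    # Illustrative sketch only: hypothetical importance ratings (1-9 scale)
    # that modelers, grouped by model type, might assign to one input
    # variable (e.g., soil pH). Not the study's data or method.
    import numpy as np
    from scipy.stats import kruskal

    ratings_by_model_type = {
        "crop_models":      [7, 8, 6, 9, 7],
        "grassland_models": [5, 4, 6, 5],
        "ecosystem_models": [3, 5, 4, 4, 2],
    }

    # Simple MCDM-style step: convert each group's raw scores into relative
    # weights that sum to one (an assumed normalization, chosen for clarity).
    def normalize(scores):
        scores = np.asarray(scores, dtype=float)
        return scores / scores.sum()

    weights = {group: normalize(scores)
               for group, scores in ratings_by_model_type.items()}

    # Test whether perceived importance differs across model types using a
    # non-parametric Kruskal-Wallis test (one plausible choice of test).
    stat, p_value = kruskal(*ratings_by_model_type.values())
    print(f"Kruskal-Wallis H = {stat:.2f}, p = {p_value:.3f}")

Running the sketch prints a test statistic and p-value for the between-group comparison; an analogous comparison grouped by length of modeling experience would follow the same pattern.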
Document Type
Article
Document version
Accepted version
Language
English
Subject (UDC)
63 - Agriculture. Forestry. Animal husbandry. Hunting. Fishing
Pages
32
Publisher
American Chemical Society
Is part of
Environmental Science & Technology
Citation
Albanito, Fabrizio, David McBey, Matthew Harrison, Pete Smith, Fiona Ehrhardt, Arti Bhatia, Gianni Bellocchi, Lorenzo Brilli, Marco Carozzi, Karen Christie, Jordi Doltra, Christopher Dorich, Luca Doro, Peter Grace, Brian Grant, Joël Léonard, Mark Liebig, Cameron Ludemann, Raphael Martin, Elizabeth Meier, Rachelle Meyer, Massimiliano De Antoni Migliorati, Vasileios Myrgiotis, Sylvie Recous, Renáta Sándor, Val Snow, Jean-François Soussana, Ward N. Smith, and Nuala Fitton. 2022. “How Modelers Model: the Overlooked Social and Human Dimensions in Model Intercomparison Studies”. Environmental Science & Technology 56. doi: 10.1021/acs.est.2c02023.
Program
Cultius Extensius Sostenibles
This item appears in the following Collection(s)
- ARTICLES CIENTÍFICS [2054]
Except where otherwise noted, this item's license is described as http://creativecommons.org/licenses/by/4.0/