Model calibration and validation via confidence sets

Raffaello Seri, Mario Martinoli*, Davide Secchi, Samuele Centorrino

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review


The issues of calibrating and validating a theoretical model are considered when one must select, among a finite number of alternatives, the parameters that best approximate the data. Based on a user-defined loss function, Model Confidence Sets are proposed as a tool to restrict the number of plausible alternatives and to measure the uncertainty associated with the preferred model. Furthermore, an asymptotically exact logarithmic approximation of the probability of choosing a model via a multivariate rate function is suggested. A simple numerical procedure for computing the latter is outlined, and it is shown that the procedure yields results consistent with Model Confidence Sets. The proposed approach is illustrated and implemented in a model of inquisitiveness in ad hoc teams, relevant for bounded rationality and organisational research.
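The general idea of a confidence set over a finite collection of models can be sketched as follows. This is a toy illustration, not the authors' procedure: it keeps every model whose mean loss is not significantly worse than that of the empirically best model, judged by a simple paired t-statistic on per-observation loss differentials with a fixed normal critical value. The loss matrix, the critical value, and the elimination rule are all simplifying assumptions for illustration.

```python
import numpy as np

def confidence_set(losses, crit=1.2816):
    """Toy model confidence set (illustrative only).

    losses : array of shape (n_obs, n_models), per-observation loss
             of each candidate model under a user-defined loss function.
    crit   : one-sided normal critical value (1.2816 ~ 10% level).

    Returns the indices of models not significantly worse than the
    empirically best one, plus the index of the best model.
    """
    n, m = losses.shape
    best = int(np.argmin(losses.mean(axis=0)))  # lowest average loss
    kept = []
    for j in range(m):
        if j == best:
            kept.append(j)  # the best model is always retained
            continue
        d = losses[:, j] - losses[:, best]      # loss differential vs. best
        t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
        if t <= crit:                           # not significantly worse
            kept.append(j)
    return kept, best

# Hypothetical data: models 0 and 1 are equally good, model 2 is clearly worse.
rng = np.random.default_rng(0)
n = 2000
base = rng.normal(1.0, 0.1, n)
losses = np.column_stack([
    base,
    base + rng.normal(0.0, 0.01, n),   # nearly identical to model 0
    rng.normal(2.0, 0.1, n),           # much larger loss
])
kept, best = confidence_set(losses)
```

Here the clearly inferior model 2 is eliminated, while the statistically indistinguishable pair survives, which is the "restrict the number of plausible alternatives" role the abstract describes.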
Original language: English
Journal: Econometrics and Statistics
Pages (from-to): 62-86
Publication status: Published - Oct 2021


  • Calibration
  • Large deviations
  • Model confidence set
  • Simulated models
  • Validation


