Model calibration and validation via confidence sets

Raffaello Seri, Mario Martinoli*, Davide Secchi, Samuele Centorrino

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review

Abstract

The issues of calibrating and validating a theoretical model are considered, in the case where the parameters that best approximate the data must be selected from a finite number of alternatives. Based on a user-defined loss function, Model Confidence Sets are proposed as a tool to restrict the number of plausible alternatives and to measure the uncertainty associated with the preferred model. Furthermore, an asymptotically exact logarithmic approximation of the probability of choosing a model, based on a multivariate rate function, is suggested. A simple numerical procedure for computing this rate function is outlined, and the procedure is shown to yield results consistent with Model Confidence Sets. The proposed approach is illustrated and implemented in a model of inquisitiveness in ad hoc teams, relevant to bounded rationality and organisational research.
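The abstract's two technical ingredients can be made concrete. The logarithmic approximation has, in its generic large-deviations form, the shape below; this is the standard asymptotically exact statement on the log scale, not necessarily the paper's precise multivariate result:

```latex
% Generic large-deviations form (assumed, for illustration): the probability
% that model j is selected decays exponentially at a rate I_j >= 0, and the
% approximation is exact on the logarithmic scale.
\lim_{n \to \infty} \frac{1}{n} \log P\left( \text{model } j \text{ is selected} \right) = -I_j .
```

For the Model Confidence Set itself, the sketch below implements the elimination loop in the spirit of Hansen, Lunde and Nason (2011), from which the Model Confidence Set concept originates. It is a minimal illustration under assumed choices (iid bootstrap, max-t statistic), and the names (`model_confidence_set`, `losses`, `alpha`) are illustrative rather than the authors' implementation:

```python
import numpy as np

def model_confidence_set(losses, alpha=0.10, n_boot=1000, seed=None):
    """Indices of models retained in the (1 - alpha) Model Confidence Set.

    losses : (T, M) array of per-observation losses, one column per
             candidate model (here: per candidate parameter vector).
    """
    rng = np.random.default_rng(seed)
    T = losses.shape[0]
    surviving = list(range(losses.shape[1]))

    while len(surviving) > 1:
        L = losses[:, surviving]                # losses of surviving models
        d = L - L.mean(axis=1, keepdims=True)   # loss relative to survivors' mean
        d_bar = d.mean(axis=0)                  # average relative loss per model

        # Bootstrap the max-t statistic under the null of equal expected
        # loss across survivors (iid resampling; a block bootstrap would
        # be needed for dependent data).
        idx = rng.integers(0, T, size=(n_boot, T))
        boot_means = d[idx].mean(axis=1)        # (n_boot, m) bootstrap means
        se = boot_means.std(axis=0, ddof=1) + 1e-12
        t_obs = np.max(d_bar / se)
        t_boot = np.max((boot_means - d_bar) / se, axis=1)
        p_value = np.mean(t_boot >= t_obs)

        if p_value >= alpha:                    # equivalence not rejected:
            break                               # the remaining set is the MCS
        surviving.pop(int(np.argmax(d_bar / se)))  # drop the worst survivor

    return surviving

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Three candidate parameterisations; the third has a clearly larger
    # expected loss and is typically eliminated from the set.
    losses = np.column_stack([
        rng.normal(1.00, 1.0, size=500),
        rng.normal(1.02, 1.0, size=500),
        rng.normal(1.50, 1.0, size=500),
    ])
    print(model_confidence_set(losses, alpha=0.10, seed=1))
```

The loop returns the set of candidates whose expected losses cannot be statistically distinguished from the best one; the two near-equivalent models above are typically retained together, which is exactly the restricted set of plausible alternatives the abstract describes.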
Original language: English
Journal: Econometrics and Statistics
ISSN: 2468-0389
Publication status: E-pub ahead of print - 18 Feb 2020

Keywords

  • Calibration
  • Large deviations
  • Model confidence set
  • Simulated models
  • Validation
