Results below are based on the search criterion 'specification uncertainty'
Total number of records returned: 3
Too Many Variables? A Comment on Bartels' Model-Averaging Proposal
Erikson, Robert S.
Wright, Gerald C.
McIver, John P.
Bayesian Information Criterion
Abstract: Bartels (1997) popularizes the procedure of model averaging (Raftery, 1995, 1997), making some important innovations of his own along the way. He offers his methodology as a technology for exposing excessive specification searches in other people's research. As a demonstration project, Bartels applies his version of model averaging to a portion of our work on state policy and purports to detect evidence of considerable model uncertainty. In response, we argue that Bartels' extensions of model-averaging methodology are ill-advised, and we show that our challenged findings hold up under the scrutiny of the original Raftery-type model averaging.
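For readers unfamiliar with the mechanics, here is a minimal sketch (in Python, not the authors' code) of the Raftery-type step referenced above: converting BIC values for candidate specifications into approximate posterior model weights. The function name and the toy BIC values are illustrative assumptions.

```python
import numpy as np

def bic_model_weights(bics):
    """Convert BIC values for candidate models into approximate
    posterior model probabilities (Raftery 1995): each model's weight
    is proportional to exp(-BIC/2), rescaled to sum to one."""
    bics = np.asarray(bics, dtype=float)
    delta = bics - bics.min()       # shift by the best BIC for numerical stability
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical BIC values for three candidate specifications
print(bic_model_weights([102.4, 100.0, 107.9]))
# The model with the smallest BIC receives the largest weight.
```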
Bayesian Model Averaging: Theoretical Developments and Practical Applications
Bayesian model averaging
Abstract: Political science researchers typically conduct an idiosyncratic search of possible model configurations and then present a single specification to readers. This approach systematically understates the uncertainty of our results, generates concern among readers and reviewers about fragile model specifications, and leads to the estimation of bloated models with huge numbers of controls. Bayesian model averaging (BMA) offers a systematic method for analyzing specification uncertainty and checking the robustness of one's results to alternative model specifications. In this paper, we summarize BMA, review important recent developments in BMA research, and argue for a different approach to using the technique in political science. We then illustrate the methodology by reanalyzing models of voting in U.S. Senate elections and international civil war onset using software that respects statistical conventions within political science.
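As a rough illustration of the mechanics this abstract describes, and not the authors' software, the sketch below enumerates all subsets of three simulated candidate regressors, fits each specification by OLS with statsmodels, weights each by its BIC-based approximate posterior probability (Raftery 1995), and reports a model-averaged coefficient and posterior inclusion probability for each regressor. The data-generating process and variable names are assumptions made for the example.

```python
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))                    # three candidate regressors
y = 1.0 + 0.8 * X[:, 0] + rng.normal(size=n)   # only x0 truly matters

names = ["x0", "x1", "x2"]
models, bics = [], []
for k in range(len(names) + 1):
    for subset in itertools.combinations(range(len(names)), k):
        Xs = sm.add_constant(X[:, subset]) if subset else np.ones((n, 1))
        fit = sm.OLS(y, Xs).fit()
        models.append((subset, fit))
        bics.append(fit.bic)

# BIC-based approximate posterior model weights (Raftery 1995)
d = np.asarray(bics) - min(bics)
w = np.exp(-0.5 * d)
w /= w.sum()

# Model-averaged coefficient (treated as zero in models that exclude
# the regressor) and posterior inclusion probability, per regressor
for j, name in enumerate(names):
    avg = sum(wi * fit.params[subset.index(j) + 1]
              for wi, (subset, fit) in zip(w, models) if j in subset)
    pip = sum(wi for wi, (subset, _) in zip(w, models) if j in subset)
    print(f"{name}: E[beta|D]={avg:.3f}  Pr(include|D)={pip:.3f}")
```

With this data-generating process, x0 should receive an inclusion probability near one and the noise regressors near zero, which is the robustness check the abstract has in mind.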
Estimating Binary Dependent Variable Models Under Conditions of Specification Uncertainty
binary dependent variable
Monte Carlo analysis
Abstract: Political scientists routinely use logit or probit models when their data involve binary dependent variables (BDVs). Yet the hypotheses we test with logit and probit are rarely specific enough to justify the claim that either model is the correct functional form for the process (or true model) generating the data. In this situation of specification uncertainty, it is reasonable to assume that the model being estimated is misspecified; the only question is the severity of the resulting distortion in results, i.e., whether logit or probit approximates the true model well enough to yield estimated effects that are acceptably close to the true ones. To study estimation in the presence of specification uncertainty, we conduct a Monte Carlo analysis using a strategy of purposeful misspecification: we fit various logit and probit models with different terms to data sets generated from a wide range of known true models involving a BDV, none of which takes the exact form of a logit or probit model. We find that a widely employed approach to using logit or probit to test the hypothesis that an independent variable has a positive (or negative) effect on the probability that some event will occur, namely estimating the effect of the variable at central values of the independent variables, is highly forgiving of specification uncertainty, yielding reasonably accurate inferences even when the true model is not logit or probit. Unfortunately, other applications of logit and probit, including a common approach to testing the hypothesis that independent variables interact in influencing the probability of event occurrence, are not nearly as forgiving of the uncertainty. In some situations of specification uncertainty, we can improve the quality of estimated effects by relying on the Akaike Information Criterion (AIC) to choose the terms to be included in a model, but even these improved estimates leave much to be desired.
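To make the purposeful-misspecification design concrete, here is a minimal sketch in its spirit rather than a reproduction of the authors' simulations: data are drawn from a true binary-response model that is neither logit nor probit (a complementary log-log link is used as an assumed stand-in), a logit is fit anyway, and the estimated effect of a one-unit change in x1 at central covariate values is compared with the true effect. Sample sizes, coefficients, and variable names are assumptions for the example.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, reps = 1000, 200
beta = np.array([-0.5, 1.0, 0.5])   # true index coefficients (const, x1, x2)

def cloglog_prob(Xc):
    """True model: complementary log-log response, so neither logit
    nor probit is the correct functional form. Xc includes a constant."""
    return 1.0 - np.exp(-np.exp(Xc @ beta))

errors = []
for _ in range(reps):
    X = rng.normal(size=(n, 2))
    Xc = sm.add_constant(X)
    y = rng.binomial(1, cloglog_prob(Xc))
    fit = sm.Logit(y, Xc).fit(disp=0)   # deliberately misspecified logit

    # Effect of a one-unit change in x1 at central (mean) covariate
    # values, under the misspecified logit and under the true model
    row_lo = np.r_[1.0, X.mean(axis=0)]
    row_hi = row_lo.copy()
    row_hi[1] += 1.0
    est = fit.predict(row_hi[None, :])[0] - fit.predict(row_lo[None, :])[0]
    tru = cloglog_prob(row_hi[None, :])[0] - cloglog_prob(row_lo[None, :])[0]
    errors.append(est - tru)

errors = np.asarray(errors)
print(f"mean error {errors.mean():+.4f}, RMSE {np.sqrt((errors**2).mean()):.4f}")
```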