
Search Results


Results below are based on the search criterion 'bayesian'
Total number of records returned: 76

1
Paper
Practical Issues in Implementing and Understanding Bayesian Ideal Point Estimation
Bafumi, Joseph
Gelman, Andrew
Park, David K.
Kaplan, Noah

Uploaded 06-11-2004
Keywords Ideal points
Bayesian
Logistic regression
Rasch model
Abstract In recent years, logistic regression (Rasch) models have been used in political science for estimating ideal points of legislators and Supreme Court justices. These models present estimation and identifiability challenges, such as improper variance estimates, scale and translation invariance, reflection invariance, and issues with outliers. We resolve these issues using Bayesian hierarchical modeling, linear transformations, informative regression predictors, and explicit modeling for outliers. In addition, we explore new ways to usefully display inferences and check model fit.
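For readers new to this literature: the two-parameter item response model that these ideal point estimators build on can be written, in standard notation (not necessarily the paper's own), as

```latex
\Pr(y_{ij} = 1) = \operatorname{logit}^{-1}(\beta_j \theta_i - \alpha_j)
```

where theta_i is legislator i's ideal point and alpha_j, beta_j are the difficulty and discrimination parameters of vote j. The reflection invariance the abstract mentions arises because replacing (theta, beta) with (-theta, -beta) leaves the likelihood unchanged, which is one of the identifiability problems the paper addresses.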

2
Paper
A Random Effects Approach to Legislative Ideal Point Estimation
Bailey, Michael

Uploaded 04-21-1998
Keywords ideal points
random effects models
Bayesian estimation
em algorithm
Abstract Conventionally, scholars use either standard probit/logit techniques or fixed-effect methods to estimate legislative ideal points. However, these methods are unsatisfactory when a limited number of votes are available: standard probit/logit methods are poorly equipped to handle multiple votes and fixed-effect models disregard serious ``incidental parameter'' problems. In this paper I present an alternative approach that moves beyond single-vote probit/logit analysis without requiring the large number of votes needed for fixed-effects models. The method is based on a random effects, panel logit framework that models ideal points as stochastic functions of legislator characteristics. Monte Carlo results and an application to trade politics demonstrate the practical usefulness of the method.

3
Paper
An Automated Method of Topic-Coding Legislative Speech Over Time with Application to the 105th-108th U.S. Senate
Quinn, Kevin
Monroe, Burt
Colaresi, Michael
Crespin, Michael
Radev, Dragomir

Uploaded 07-18-2006
Keywords legislatures
agendas
content analysis
Bayesian
time series
cluster analysis
unsupervised learning
Abstract We describe a method for statistical learning from speech documents that we apply to the Congressional Record in order to gain new insight into the dynamics of the political agenda. Prior efforts to evaluate the attention of elected representatives across topic areas have largely been expensive manual coding exercises and are generally circumscribed in one or more respects: limited time periods, high levels of temporal aggregation, or coarse topical categories. Conversely, the Congressional Record has scarcely been used for such analyses, largely because it contains too much information to absorb manually. We describe here a method for inferring, through the patterns of word choice in each speech and the dynamics of word choice patterns across time, (a) what the topics of speeches are, and (b) the probability that attention will be paid to any given topic or set of topics over time. We use the model to examine the agenda in the United States Senate from 1997-2004, based on a new database of over 70 thousand speech documents containing over 70 million words. We estimate the model for 42 topics and provide evidence that we can reveal speech topics that are both distinctive and inter-related in substantively meaningful ways. We demonstrate further that the dynamics our model captures give us leverage on important questions about the political agenda.

4
Paper
Prior distributions for Bayesian data analysis in political science
Gelman, Andrew

Uploaded 02-25-2009
Keywords Bayesian inference
hierarchical models
mixture models
prior information
Abstract Prior information is often what makes Bayesian inference work. In the political science examples of which we are aware, prior information needs to come in, whether as regression predictors or as regularization (that is, prior distributions) on parameters. We illustrate with a few examples from our own research.

5
Paper
Estimating incumbency advantage and its variation, as an example of a before/after study
Gelman, Andrew
Huang, Zaiying

Uploaded 02-07-2003
Keywords Bayesian inference
before-after study
Congressional elections
Gibbs
Abstract Incumbency advantage is one of the most studied features in American legislative elections. In this paper, we construct and implement an estimate that allows incumbency advantage to vary between individual incumbents. This model predicts that open-seat elections will be less variable than those with incumbents running, an observed empirical pattern that is not explained by previous models. We apply our method to the U.S. House of Representatives in the twentieth century: our estimate of the overall pattern of incumbency advantage over time is similar to previous estimates (although slightly lower), and we also find a pattern of increasing variation. In addition to the application to incumbency advantage, our approach represents a new method, using multilevel modeling, for estimating effects in before/after studies.

6
Paper
Rational Expectations Coordinating Voting in American Presidential and House Elections
Mebane, Walter R.

Uploaded 07-08-1998
Keywords coordinating voting
probabilistic voting
spatial voting
retrospective voting
policy moderation
presidential elections
congressional elections
ticket splitting
rational expectations
voter equilibrium
Bayesian-Nash equilibrium
generalized extreme value model
nonparametric
Monte Carlo integration
maximum likelihood
Abstract I define a probabilistic model of individuals' presidential-year vote choices for President and for the House of Representatives in which there is a coordinating (Bayesian Nash) equilibrium among voters based on rational expectations each voter has about the election outcomes. I estimate the model using data from the six American National Election Study Pre-/Post-Election Surveys of years 1976--1996. The coordinating model passes a variety of tests, including a test against a majoritarian model in which there is rational ticket splitting but no coordination. The results give strong individual-level support to Alesina and Rosenthal's theory that voters balance institutions in order to moderate policy. The estimates describe vote choices that strongly emphasize the presidential candidates. I also find that a voter who says economic conditions have improved puts more weight on a discrepancy between the voter's ideal point and government policy with a Democratic President than on a discrepancy of the same size with a Republican President.

7
Paper
Presidential Approval: The Case of George W. Bush
Beck, Nathaniel
Jackman, Simon
Rosenthal, Howard

Uploaded 07-19-2006
Keywords presidential approval
public opinion
polls
house effects
dynamic linear model
Bayesian statistics
Markov chain Monte Carlo
state space
pages of killer graphs
Abstract We use a Bayesian dynamic linear model to track approval for George W. Bush over time. Our analysis deals with several issues that have usually been addressed separately in the extant literature. First, our analysis uses polling data collected at a higher frequency than is typical, drawing on over 1,100 published national polls and data on macro-economic conditions collected at the weekly level. By combining this much poll information, we are much better poised to examine the public's reactions to events over shorter time scales than is possible in the typical analysis that utilizes monthly or quarterly approval. Second, our statistical modeling explicitly deals with the sampling error of these polls, as well as the possibility of bias in the polls due to house effects. Indeed, quite aside from the question of ``what drives approval?'', there is considerable interest in the extent to which polling organizations systematically diverge from one another in assessing approval for the president. These bias parameters are not only necessary parts of any realistic model of approval that utilizes data from multiple polling organizations, but are also easily estimated via the Bayesian dynamic linear model.
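As a rough illustration of the state-space machinery involved, here is a minimal local-level dynamic linear model in numpy: a random-walk latent approval series observed through noisy polls. The variances, sample sizes, and data are all invented, and the paper's house effects, covariates, and estimation strategy are omitted.

```python
import numpy as np

# Minimal local-level dynamic linear model, filtered with a Kalman recursion:
#   approval_t = approval_{t-1} + w_t,  w_t ~ N(0, q)    (latent approval)
#   poll_t     = approval_t + v_t,      v_t ~ N(0, r_t)  (poll with sampling error)
# House effects and covariates are omitted; all numbers are simulated.

rng = np.random.default_rng(0)
T = 52                                          # weeks
true = 55 + np.cumsum(rng.normal(0, 0.5, T))    # simulated latent approval (%)
n = rng.integers(500, 1500, T)                  # poll sample sizes
r = true * (100 - true) / n                     # sampling variance on the % scale
polls = rng.normal(true, np.sqrt(r))

q = 0.25                                        # state innovation variance (assumed known)
m, C = 55.0, 25.0                               # prior mean and variance
filtered = np.empty(T)
for t in range(T):
    a, R = m, C + q                             # predict one step ahead
    K = R / (R + r[t])                          # Kalman gain
    m = a + K * (polls[t] - a)                  # update with this week's poll
    C = (1 - K) * R
    filtered[t] = m

print(np.round(filtered[-5:], 1))
```

With weekly polls of this density, the filtered series tracks the latent approval while smoothing over each poll's sampling error, which is the basic point of the approach.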

8
Paper
Spike and Slab Prior Distributions for Simultaneous Bayesian Hypothesis Testing, Model Selection, and Prediction of Nonlinear Outcomes
Pang, Xun
Gill, Jeff

Uploaded 07-13-2009
Keywords Spike and Slab Prior
Hypothesis Testing
Bayesian Model Selection
Bayesian Model Averaging
Adaptive Rejection Sampling
Generalized Linear Model
Abstract A small body of literature has used the spike and slab prior specification for model selection with strictly linear outcomes. In this setup a two-component mixture distribution is stipulated for coefficients of interest, with one part centered at zero with very high precision (the spike) and the other a distribution diffusely centered at the research hypothesis (the slab). Through this selective shrinkage, the setup incorporates the zero-coefficient contingency directly into the modeling process to produce posterior probabilities for hypothesized outcomes. We extend the model to qualitative responses by designing a hierarchy of forms over both the parameter and model spaces to achieve variable selection, model averaging, and individual coefficient hypothesis testing. To overcome the technical challenges in estimating the marginal posterior distributions, possibly with a dramatic ratio of density heights of the spike to the slab, we develop a hybrid Gibbs sampling algorithm using an adaptive rejection approach for various discrete outcome models, including dichotomous, polychotomous, and count responses. The performance of the models and methods is assessed with both Monte Carlo experiments and empirical applications in political science.
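A toy version of the spike-and-slab calculation, for a single coefficient in a conjugate normal setting rather than the discrete-outcome models the paper develops (all numbers are invented):

```python
import numpy as np
from scipy.stats import norm

# Toy spike-and-slab posterior for one coefficient estimate:
#   beta_hat | beta ~ N(beta, se^2)
#   beta ~ 0.5 * N(0, v_spike) + 0.5 * N(0, v_slab)
# The posterior weight on the slab is the posterior probability
# of a nonzero effect.

beta_hat, se = 0.8, 0.3          # made-up estimate and standard error
v_spike, v_slab = 0.001, 1.0     # near-zero spike, diffuse slab
prior_slab = 0.5

# marginal likelihood of beta_hat under each mixture component
m_spike = norm.pdf(beta_hat, 0, np.sqrt(se**2 + v_spike))
m_slab = norm.pdf(beta_hat, 0, np.sqrt(se**2 + v_slab))

post_slab = prior_slab * m_slab / (prior_slab * m_slab + (1 - prior_slab) * m_spike)
print(f"posterior probability of a nonzero effect: {post_slab:.3f}")
```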

9
Paper
Taking the State Space Seriously: The Dynamic Linear Model and Bayesian Time Series Analysis
Buckley, Jack

Uploaded 08-02-2002
Keywords time series
bayesian
state space
think tanks
business
Abstract No abstract submitted.

10
Paper
The Problem with Quantitative Studies of International Conflict
Beck, Nathaniel
King, Gary
Zeng, Langche

Uploaded 07-15-1998
Keywords Conflict
logit
neural networks
forecasting
Bayesian analysis
Abstract Despite immense data collections, prestigious journals, and sophisticated analyses, empirical findings in the literature on international conflict are frequently unsatisfying. Statistical results appear to change from article to article and specification to specification. Very few relationships hold up to replication with even minor respecification. Accurate forecasts are nonexistent. We provide a simple conjecture about what accounts for this problem, and offer a statistical framework that better matches the substantive issues and types of data in this field. Our model, a version of a ``neural network'' model, forecasts substantially better than any previous effort, and appears to uncover some structural features of international conflict.

11
Paper
Expressive Bayesian Voters, their Turnout Decisions, and Double Probit
Achen, Christopher

Uploaded 07-17-2006
Keywords turnout
expressive
Bayesian
probit
scobit
EITM
Abstract Voting is an expressive act. Since people are not born wanting to express themselves politically, the desire to vote must be acquired, either by learning about the candidates, by using party identification as a cognitive shortcut, or by contact from a trusted source. Modeled as Bayesian updating, this simple explanatory framework has dramatic implications for the understanding of voter turnout. It mathematically implies the main empirical generalizations familiar from the literature, it predicts hitherto unnoticed patterns that appear in turnout data, it provides a better fitting statistical model (double probit) for sample surveys of turnout, and it allows researchers to forecast turnout patterns in new elections when circumstances change. Thus the case is strengthened for the Bayesian voter model as a central organizing principle for public opinion and voting behavior.

12
Paper
Joint Modeling of Dynamic and Cross-Sectional Heterogeneity: Introducing Hidden Markov Panel Models
Park, Jong Hee

Uploaded 07-14-2009
Keywords Bayesian statistics
Fixed-effects
Hidden Markov models
Markov chain Monte Carlo methods
Random-effects
Reversible jump Markov chain Monte Carlo
Abstract Researchers working with panel data sets often face situations where changes in unobserved factors have produced changes in the cross-sectional heterogeneity across time periods. Unfortunately, conventional statistical methods for panel data are based on the assumption that the unobserved cross-sectional heterogeneity is time constant. In this paper, I introduce statistical methods to diagnose and model changes in the unobserved heterogeneity. First, I develop three combinations of a hidden Markov model with panel data models using the Bayesian framework: (1) a baseline hidden Markov panel model with varying fixed effects and varying random effects; (2) a hidden Markov panel model with varying fixed effects; and (3) a hidden Markov panel model with varying intercepts. Second, I present model selection methods to diagnose the dynamic heterogeneity using the marginal likelihood method and the reversible jump Markov chain Monte Carlo method. I illustrate the utility of these methods using two important ongoing political economy debates: the relationship between income inequality and economic growth, and the effect of institutions on income inequality.

13
Paper
Unresponsive, Unpersuaded: The Unintended Consequences of Voter Persuasion Efforts
Bailey, Michael
Hopkins, Daniel
Rogers, Todd

Uploaded 08-09-2013
Keywords causal inference
field experiments
persuasion
attrition
multiple imputation
Approximate Bayesian Bootstrap
Abstract Can randomized experiments at the individual level help assess the persuasive effects of campaign tactics? In the contemporary U.S., vote choice is not observable, so one promising research design to assess persuasion involves randomizing appeals and then using a survey to measure vote intentions. Here, we analyze one such field experiment conducted during the 2008 presidential election in which 56,000 registered voters were assigned to persuasion in person, by phone, and/or by mail. Persuasive appeals by canvassers had two unintended consequences. First, they reduced responsiveness to the follow-up survey, lowering the response rate sharply among infrequent voters. Second, various statistical methods to address the resulting biases converge on a counter-intuitive conclusion: the persuasive canvassing reduced candidate support. Our results allow us to rule out even small effects in the intended direction, and illustrate the backlash that persuasion can engender.

14
Paper
Connecting Interest Groups and Congress: A New Approach to Understanding Interest Group Success
Victor, Jennifer Nicoll

Uploaded 07-16-2002
Keywords Interest Groups
Congress
Multiple Imputation
Bayesian Information Criterion
Ordinal Probit
Non-nested Models
Legislative Context
Abstract The primary challenge in explaining interest group legislative success in Congress has been methodological. The discipline requires at least two critical elements to make progress on this important question. First, we need a theory that accounts for the highly interactive spatial game between interest groups and legislators. Second, the discipline needs an empirical model that associates interest groups and their activities with specific congressional bills. In this project I begin to contribute to our understanding of the complex relationship between interest groups and Congress. I develop a theory of group success that is based upon the strategies in which groups engage, the groups' organizational capacity, and the strategic context of legislation. I predict that groups will tailor their activities (and strategically spend their resources) in Congress based upon two critical factors: whether the group supports or opposes the legislation, and the legislative environment for the bill. To test this model I develop a unique sampling procedure and survey design. I use legislative hearings to generate a sample of groups that are associated with specific issues and survey them about their activities on those issues. Then, I associate each group's issue with a specific bill in Congress. I then track the bill to discern its final status. I create a dependent variable of interest group success that is based on the group's position (favor or oppose) and the final status of the bill. This sampling procedure and dependent variable allow me to make inferences about group behavior over specific legislative proposals. I develop independent variables of group activity, group organizational capacity, and legislative context from the survey instrument and objective information about the bills. To fill in gaps in the survey data set, I use a multiple imputation method that generates plausible values based on given distributions of data. I estimate two models: one for groups in favor of legislation and one for opposition groups. The ordinal probit models generally support the theoretical expectations. In sum, I find that groups can best expend their resources in pursuit of rules that advantage their position rather than fighting for bill content.

15
Paper
Pooling Disparate Observations
Bartels, Larry M.

Uploaded 01-01-1995
Keywords induction
statistical inference
Bayesian statistics
econometrics
observations
F-test
pooling
fractional pooling
Abstract Data analysts frequently face difficult choices about whether to pool disparate observations in their statistical analyses. I explore the inferential ramifications of such choices, and propose a new technique, dubbed "fractional pooling," which provides a simple way to incorporate prior beliefs about the theoretical relevance of disparate observations. The technique is easy to implement and has a plausible rationale in Bayesian statistical theory. I illustrate the potential utility of fractional pooling by applying the technique to political data originally analyzed by Ashenfelter (1994), Powell (1982), and Alesina et al. (1993). These examples demonstrate that conventional approaches to analyzing disparate observations can be seriously misleading, and that the approach proposed here can enrich our understanding of the inferential implications of unavoidably subjective judgments about the theoretical relevance of available data.
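One way to read the proposal in a linear-regression setting is as a weighted likelihood: core observations get weight 1 and disparate observations a fractional weight lambda between 0 (no pooling) and 1 (complete pooling). A minimal sketch with simulated data, not Bartels' exact estimator:

```python
import numpy as np

# Fractional pooling as weighted least squares: the disparate block of
# observations enters the objective with weight lam in [0, 1].
# lam = 0 discards it; lam = 1 pools fully. Data are simulated.

rng = np.random.default_rng(1)
x_core = rng.normal(size=50); y_core = 2.0 * x_core + rng.normal(size=50)
x_disp = rng.normal(size=50); y_disp = 1.2 * x_disp + rng.normal(size=50)

X = np.concatenate([x_core, x_disp])[:, None]
y = np.concatenate([y_core, y_disp])

for lam in (0.0, 0.5, 1.0):
    w = np.concatenate([np.ones(50), np.full(50, lam)])
    W = np.diag(w)
    b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # minimizes sum w_i (y_i - x_i b)^2
    print(f"lambda = {lam:.1f}: slope = {b[0]:.2f}")
```

Intermediate values of lambda give estimates between the core-only slope and the fully pooled slope, which is the behavior the abstract describes.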

16
Paper
Designing and Analyzing Randomized Experiments
Horiuchi, Yusaku
Imai, Kosuke
Taniguchi, Naoko

Uploaded 07-05-2005
Keywords Bayesian inference
causal inference
noncompliance
nonresponse
randomized block design
Abstract In this paper, we demonstrate how to effectively design and analyze randomized experiments, which are becoming increasingly common in political science research. Randomized experiments provide researchers with an opportunity to obtain unbiased estimates of causal effects because the randomization of treatment guarantees that the treatment and control groups are on average equal in both observed and unobserved characteristics. Even in randomized experiments, however, complications can arise. In political science experiments, researchers often cannot force subjects to comply with treatment assignment or to provide the information necessary for the estimation of causal effects. Building on the recent statistical literature, we show how to make statistical adjustments for these noncompliance and nonresponse problems when analyzing randomized experiments. We also demonstrate how to design randomized experiments so that the potential impact of such complications is minimized.

17
Paper
Balancing Competing Demands: Position-Taking and Election Proximity in the European Parliament
Lindstaedt, Rene
Slapin, Jonathan
Vander Wielen, Ryan

Uploaded 07-31-2009
Keywords Legislative Politics
European Parliament
Comparative Politics
Bayesian IRT
Parties
Formal Theory
Abstract Parties value unity; yet members of parliament face competing demands, giving them incentives to deviate from the party. For members of the European Parliament (MEPs), these competing demands are national party and European party group pressures. Here, we look at how MEPs respond to those competing demands. We examine ideological shifts within a single parliamentary term to assess how European Parliament (EP) election proximity affects party group cohesion. Our formal model of legislative behavior with multiple principals yields the following hypothesis: when EP elections are proximate, national party delegations shift toward national party positions, thus weakening EP party group cohesion. For our empirical test, we analyze roll call data from the fifth EP (1999-2004) using Bayesian item response models. We find significant movement among national party delegations as EP elections approach, which is consistent with our theoretical model, but surprising given the existing literature on EP elections as second-order contests.

18
Paper
State-Level Opinions from National Surveys: Poststratification using Hierarchical Logistic Regression
Park, David K.
Gelman, Andrew
Bafumi, Joseph

Uploaded 07-12-2002
Keywords Bayesian Inference
Hierarchical
Logit
Poststratification
Public Opinion
States
Elections
Abstract Previous researchers have pooled national surveys in order to construct state-level opinions. However, in order to overcome the small n problem for less populous states, they have aggregated a decade or more of national surveys to construct their measures. For example, Erikson, Wright and McIver (1993) pooled 122 national surveys conducted over 13 years to produce state-level partisan and ideology estimates. Brace, Sims-Butler, Arceneaux, and Johnson (2002) pooled 22 surveys over a 25-year period to produce state-level opinions on a number of specific issues. We construct a hierarchical logistic regression model for the mean of a binary response variable conditional on poststratification cells. This approach combines the modeling approach often used in small-area estimation with the population information used in poststratification (see Gelman and Little 1997). We produce state-level estimates pooling seven national surveys conducted over a nine-day period. We first apply the method to a set of U.S. pre-election polls, poststratified by state and region as well as the usual demographic variables, and evaluate the model by comparing it to state-level election outcomes. We then produce state-level partisan and ideology estimates and evaluate them against Erikson, Wright and McIver's estimates.
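The poststratification step itself is simple once the hierarchical model has produced cell-level estimates; a minimal sketch with invented numbers:

```python
import numpy as np

# Poststratification: a state estimate is the population-weighted average
# of model-based cell estimates. In practice theta comes from the
# hierarchical logistic regression and N from census counts; both are
# made up here.

theta = np.array([0.62, 0.48, 0.55, 0.41])          # Pr(support) by demographic cell
N = np.array([120_000, 340_000, 90_000, 210_000])   # population count per cell

state_estimate = np.sum(N * theta) / np.sum(N)
print(f"poststratified state estimate: {state_estimate:.3f}")
```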

19
Paper
A Bayesian Method for the Analysis of Dyadic Crisis Data
Smith, Alastair

Uploaded 11-04-1996
Keywords Bayesian model testing
Censored data
Crisis data
Gibbs sampling
Markov chain Monte Carlo
Ordered discrete choice model
Strategic choice
Abstract This paper examines the level of force that nations use during disputes. Suppose that two nations, A and B, are involved in a dispute. Each nation chooses the level of violence that it is prepared to use in order to achieve its objectives. Since there are two opponents making decisions, the outcome of the crisis is determined by a bivariate rather than univariate process. I propose a bivariate ordered discrete choice model to examine the relationship between nation A's decision to use force, nation B's decision to use force, and a series of explanatory variables. The model is estimated in the Bayesian context using a Markov chain Monte Carlo simulation technique. I analyze Bueno de Mesquita and Lalman's (1992) dyadically coded version of the Militarized Interstate Dispute data (Gochman and Maoz 1984). Various models are compared using Bayes Factors. The results indicate that nation A's and nation B's decisions to use force cannot be regarded as independent. Bayesian model comparison shows that variables derived from Bueno de Mesquita's expected utility theory (1982, 1985; Bueno de Mesquita and Lalman 1986, 1992) provide the best explanatory variables for decision making in crises.

20
Paper
Bayesian and Likelihood Inference for 2 x 2 Ecological Tables: An Incomplete Data Approach
Imai, Kosuke
Lu, Ying
Strauss, Aaron

Uploaded 12-16-2006
Keywords Coarse data
Contextual effects
Data augmentation
EM algorithm
Missing information principle
Nonparametric Bayesian modeling
Abstract Ecological inference is a statistical problem where aggregate-level data are used to make inferences about individual-level behavior. Recent years have witnessed resurgent interest in ecological inference among political methodologists and statisticians. In this paper, we conduct a theoretical and empirical study of Bayesian and likelihood inference for 2 x 2 ecological tables by applying the general statistical framework of incomplete data. We first show that the ecological inference problem can be decomposed into three factors: distributional effects, which address the possible misspecification of parametric modeling assumptions about the unknown distribution of missing data; contextual effects, which represent the possible correlation between missing data and observed variables; and aggregation effects, which are directly related to the loss of information caused by data aggregation. We then examine how these three factors affect inference and offer new statistical methods to address each of them. To deal with distributional effects, we propose a nonparametric Bayesian model based on a Dirichlet process prior which relaxes common parametric assumptions. We also specify the statistical adjustments necessary to account for contextual effects. Finally, while little can be done to cope with aggregation effects, we offer a method to quantify the magnitude of such effects in order to formally assess their severity. We use simulated and real data sets to empirically investigate the consequences of these three factors and to evaluate the performance of our proposed methods. C code, along with an easy-to-use R interface, is publicly available for implementing our proposed methods.

21
Paper
Bayesian statistical decision theory and a critical test for substantive significance
Esarey, Justin

Uploaded 09-09-2009
Keywords inference
t-test
substantive significance
Bayesian
Abstract I introduce a new critical test statistic, c*, that uses Bayesian statistical decision theory to help an analyst determine whether quantitative evidence supports the existence of a substantively meaningful relationship. Bayesian statistical decision theory takes a rational choice perspective toward evidence, allowing researchers to ask whether it makes sense to believe in the existence of a statistical relationship given how they value the consequences of correct and incorrect decisions. If a relationship of size c* is not important enough to influence future research and policy advice, then the evidence does not support the existence of a substantively significant effect. A replication of findings from the American Journal of Political Science and Journal of Politics illustrates that statistical significance at conventional levels is neither necessary nor sufficient to accept a hypothesis of substantive significance using c*. I also make software packages available for Stata and R that allow political scientists to easily use c* for inference in their own research.

22
Paper
Moving Mountains: Bayesian Forecasting As Policy Evaluation
Brandt, Patrick T.
Freeman, John R.

Uploaded 04-24-2002
Keywords Bayesian vector autoregression
VAR
policy evaluation
conditional forecasting
Abstract Many policy analysts fail to appreciate the dynamic, complex causal nature of political processes. We advocate a vector autoregression (VAR) based approach to policy analysis that accounts for various multivariate and dynamic elements in policy formulation and for both dynamic and specification uncertainty of parameters. The model we present is based on recent developments in Bayesian VAR modeling and forecasting. We present an example based on work in Goldstein et al. (2001) that illustrates how a full accounting of the dynamics and uncertainty in multivariate data can lead to more precise and instructive results about international mediation in Middle Eastern conflict.

23
Paper
Not Asked and Not Answered: Multiple Imputation for Multiple Surveys
Gelman, Andrew
King, Gary
Liu, Chuanhai

Uploaded 10-27-1997
Keywords Bayesian inference
cluster sampling
diagnostics
hierarchical models
ignorable nonresponse
missing data
political science
sample surveys
stratified sampling
multiple imputation
Abstract We present a method of analyzing a series of independent cross-sectional surveys in which some questions are not answered in some surveys and some respondents do not answer some of the questions posed. The method is also applicable to a single survey in which different questions are asked, or different sampling methods used, in different strata or clusters. Our method involves multiply-imputing the missing items and questions by adding to existing methods of imputation designed for single surveys a hierarchical regression model that allows covariates at the individual and survey levels. Information from survey weights is exploited by including in the analysis the variables on which the weights were based, and then reweighting individual responses (observed and imputed) to estimate population quantities. We also develop diagnostics for checking the fit of the imputation model based on comparing imputed to non-imputed data. We illustrate with the example that motivated this project --- a study of pre-election public opinion polls, in which not all the questions of interest are asked in all the surveys, so that it is infeasible to impute each survey separately.

24
Paper
Modeling Foreign Direct Investment as a Longitudinal Social Network
Jensen, Nathan
Martin, Andrew
Westveld, Anton

Uploaded 07-13-2007
Keywords foreign direct investment
social network data
longitudinal data
hierarchical modeling
mixture modeling
Bayesian inference
Abstract An extensive literature in international and comparative political economy has focused on how the mobility of capital affects the ability of governments to tax and regulate firms. The conventional wisdom holds that governments are in competition with each other to attract foreign direct investment (FDI). Nation-states observe the fiscal and regulatory decisions of competitor governments, and are forced to either respond with policy changes or risk losing foreign direct investment, along with the politically salient jobs that come with these investments. The political economy of FDI suggests a network of investments with complicated dependencies. We propose an empirical strategy for modeling investment patterns in 24 advanced industrialized countries from 1985-2000. Using bilateral FDI data we estimate how increases in flows of FDI affect the flows of FDI in other countries. Our statistical model is based on the methodology developed by Westveld & Hoff (2007). The model allows the temporal examination of each nation's activity level in investing, attractiveness to investors, and reciprocity between pairs of nations. We extend the model by treating the reported inflow and outflow data as independent replicates of the true value and allowing for a mixture model for the fixed effects portion of the network model. Using a fully Bayesian approach, we also impute missing data within the MCMC algorithm used to fit the model.

25
Paper
Characterizing the variance improvement in linear Dirichlet random effects models
Kyung, Minjung
Gill, Jeff
Casella, George

Uploaded 09-11-2009
Keywords Dirichlet processes
mixture models
Bayesian nonparametrics
Abstract An alternative to the classical mixed model with normal random effects is to use a Dirichlet process to model the random effects. Such models have proven useful in practice, and we have observed a noticeable variance reduction, in the estimation of the fixed effects, when the Dirichlet process is used instead of the normal. In this paper we formalize this notion, and give a theoretical justification for the expected variance reduction. We show that for almost all data vectors, the posterior variance from the Dirichlet random effects model is smaller than that from the normal random effects model. Forthcoming: Statistics and Probability Letters.
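A quick way to see why Dirichlet process random effects behave differently from normal ones is the stick-breaking construction: effects drawn from a DP realization cluster on a few atoms rather than being all distinct. A short simulation (truncated stick-breaking with invented settings, not the paper's derivation):

```python
import numpy as np

# Truncated stick-breaking draw from a Dirichlet process,
# G = sum_k pi_k * delta(phi_k), with a N(0, 1) base measure.
# Random effects sampled from G share atoms, i.e. they cluster.

rng = np.random.default_rng(2)
alpha, K = 1.0, 100                       # concentration; truncation level
v = rng.beta(1, alpha, K)                 # stick-breaking proportions
pi = v * np.cumprod(np.concatenate([[1.0], 1 - v[:-1]]))   # atom weights
phi = rng.normal(0, 1, K)                 # atom locations

effects = rng.choice(phi, size=20, p=pi / pi.sum())        # 20 "random effects"
print(f"distinct values among 20 draws: {len(np.unique(effects))}")
```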

26
Paper
Did Illegally Counted Overseas Absentee Ballots Decide the 2000 U.S. Presidential Election?
Imai, Kosuke
King, Gary

Uploaded 02-13-2002
Keywords 2000 U.S. Presidential Election
Ecological Inference
Bayesian Model Averaging
Abstract Although not widely known until much later, Al Gore received 202 more votes than George W. Bush on election day in Florida. George W. Bush is president because he overcame his election day deficit with overseas absentee ballots that arrived and were counted after election day. In the final official tally, Bush received 537 more votes than Gore. These numbers are taken from the official results released by the Florida Secretary of State's office and so do not reflect overvotes, undervotes, unsuccessful litigation, butterfly ballot problems, recounts that might have been allowed but were not, or any other hypothetical divergence between voter preferences and counted votes. After the election, the New York Times conducted a six-month-long investigation and found that 680 of the overseas absentee ballots were illegally counted, and no partisan, pundit, or academic has publicly disagreed with their assessment. In this paper, we describe the statistical procedures we developed and implemented for the Times to ascertain whether disqualifying these 680 ballots would have changed the outcome of the election. The methods involve adding formal Bayesian model averaging procedures to King's (1997) ecological inference model. Formal Bayesian model averaging has not been used in political science but is especially useful when substantive conclusions depend heavily on apparently minor but indefensible model choices, when model generalization is not feasible, and when potential critics are more partisan than academic. We show how we derived the results for the Times so that other scholars can use these methods to make ecological inferences for other purposes. We also present a variety of new empirical results that delineate the precise conditions under which Al Gore would have been elected president, and offer new evidence of the striking effectiveness of the Republican effort to convince local election officials to count invalid ballots in Bush counties and not count them in Gore counties.

27
Paper
Testing Theories Involving Strategic Choice: The Example of Crisis Escalation
Smith, Alastair

Uploaded 07-23-1997
Keywords Strategic choice
Bayesian model testing
Markov chain Monte Carlo simulation
multi-variate probit
crisis escalation
war
Abstract If we believe that politics involves a significant amount of strategic interaction then classical statistical tests, such as Ordinary Least Squares, Probit or Logit, cannot give us the right answers. This is true for two reasons: the dependent variables under observation are interdependent (that is the essence of game-theoretic logic), and the data are censored (an inherent feature of off-the-path expectations that leads to selection effects). I explore the consequences of strategic decision making on empirical estimation in the context of international crisis escalation. I show how and why classical estimation techniques fail in strategic settings. I develop a simple strategic model of decision making during crises. I ask what this explanation implies about the distribution of the dependent variable: the level of violence used by each nation. Counterfactuals play a key role in this theoretical explanation. Yet conventional econometric techniques take no account of unrealized opportunities. For example, suppose a weak nation (B) is threatened by a powerful neighbor (A). If we believe that power strongly influences the use of force then the weak nation realizes that the aggressor's threats are probably credible. Not wishing to fight a more powerful opponent, nation B is likely to acquiesce to the aggressor's demands. Empirically, we observe A threaten B. The actual level of violence that A uses is low. However, the theoretical model suggests that B acquiesced precisely because A would use force. Although the theoretical model assumes a strong relationship between strength and the use of force, traditional techniques find a much weaker relationship. Our ability to observe whether nation A is actually prepared to use force is censored when nation B acquiesces. I develop a Strategically Censored Discrete Choice (SCDC) model which accounts for the interdependent and censored nature of strategic decision making. I use this model to test existing theories of dispute escalation. Specifically, I analyze Bueno de Mesquita and Lalman's (1992) dyadically coded version of the Militarized Interstate Dispute data (Gochman and Maoz 1984). I estimate this model using a Bayesian Markov chain Monte Carlo simulation method. Using Bayesian model testing, I compare the explanatory power of a variety of theories. I conclude that strategic choice explanations of crisis escalation far outperform non-strategic ones.

28
Paper
Bayesian Analysis of Structural Changes: Historical Changes in US Presidential Uses of Force Abroad
Park, Jong Hee

Uploaded 07-16-2007
Keywords structural changes
changepoint models
discrete time series data
use of force data
state space models
time-varying parameter models
Bayesian inference
Abstract While many theoretical models in political science are inspired by structural changes in politics, most empirical methods assume stable patterns of causal processes and fail to capture dynamic changes in theoretical relationships. In this paper, I introduce an efficient Bayesian approach to the multiple changepoint problem presented by Chib (1998) and discuss the utility of Bayesian changepoint models in the context of generalized linear models. As an illustration, I revisit the debate over whether and how U.S. presidents have used force abroad in response to domestic factors since 1890.

29
Paper
Penalized Regression, Standard Errors, and Bayesian Lassos
Kyung, Minjung
Gill, Jeff
Ghosh, Malay
Casella, George

Uploaded 02-23-2010
Keywords model selection
lassos
Bayesian hierarchical models
LARS algorithm
EM/Gibbs sampler
Geometric Ergodicity
Gibbs Sampling
Abstract Penalized regression methods for simultaneous variable selection and coefficient estimation, especially those based on the lasso of Tibshirani (1996), have received a great deal of attention in recent years, mostly through frequentist models. Properties such as consistency have been studied, and are achieved by different lasso variations. Here we look at a fully Bayesian formulation of the problem, which is flexible enough to encompass most versions of the lasso that have been previously considered. The advantages of the hierarchical Bayesian formulations are many. In addition to the usual ease-of-interpretation of hierarchical models, the Bayesian formulation produces valid standard errors (which can be problematic for the frequentist lasso), and is based on a geometrically ergodic Markov chain. We compare the performance of the Bayesian lassos to their frequentist counterparts using simulations and data sets that previous lasso papers have used, and see that in terms of prediction mean squared error, the Bayesian lasso performance is similar to and, in some cases, better than, the frequentist lasso.
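For reference, the scale-mixture representation that makes the Gibbs sampler possible (following Park and Casella 2008, on which this line of work builds) is

```latex
\beta_j \mid \sigma^2, \tau_j^2 \sim \mathrm{N}(0, \sigma^2 \tau_j^2), \qquad
\tau_j^2 \sim \mathrm{Exp}(\lambda^2 / 2).
```

Integrating out tau_j^2 gives each beta_j a Laplace (double-exponential) prior, whose posterior mode is the lasso estimate; the hierarchical form is what yields the valid standard errors the abstract mentions.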

30
Paper
Random Coefficient Models for Time-Series--Cross-Section Data: The 2001 Version
Beck, Nathaniel
Katz, Jonathan

Uploaded 07-17-2001
Keywords random coefficients
generalized least squares
empirical Bayesian
Stein-rule
TSCS
Abstract This paper considers random coefficient models (RCMs) for time-series--cross-section data. These models allow for unit-to-unit variation in the model parameters. After laying out the various models, we assess several issues in specifying RCMs. We then consider the finite sample properties of some standard RCM estimators, and show that the most common one, associated with Hsiao, has very poor properties. These analyses also show that a somewhat awkward combination of estimators based on Swamy's work performs reasonably well; this awkward estimator and a Bayes estimator with an uninformative prior (due to Smith) seem to perform best. But we also see that estimators which assume full pooling perform well unless there is a large degree of unit-to-unit parameter heterogeneity. We also argue that the various data-driven methods (whether classical or empirical Bayes or Bayes with gentle priors) tend to lead to much more heterogeneity than most political scientists would like. We speculate that fully Bayesian models, with a variety of informative priors, may be the best way to approach RCMs.

31
Paper
Too Many Variables? A Comment on Bartels' Model-Averaging Proposal
Erikson, Robert S.
Wright, Gerald C.
McIver, John P.

Uploaded 07-18-1997
Keywords Bayes Factor
Bayesian Information Criterion
Bayesian statistics
model averaging
model specification
specification uncertainty
Bartels
Abstract Bartels (1997) popularizes the procedure of model averaging (Raftery, 1995, 1997), making some important innovations of his own along the way. He offers his methodology as a technology for exposing excessive specification searches in other people's research. As a demonstration project, Bartels applied his version of model averaging to a portion of our work on state policy and purports to detect evidence of considerable model uncertainty. In response, we argue that Bartels' extensions of model averaging methodology are ill-advised, and show that our challenged findings hold up under the scrutiny of the original Raftery-type model averaging.

32
Paper
The Spatial Probit Model of Interdependent Binary Outcomes: Estimation, Interpretation, and Presentation
Franzese, Robert
Hays, Jude

Uploaded 07-20-2007
Keywords Spatial Probit
Bayesian Gibbs-Sampler Estimator
Recursive Importance-Sampling Estimator
Interdependence
Diffusion
Contagion
Emulation
Abstract We have argued and shown elsewhere the ubiquity and prominence of spatial interdependence in political science research and noted that much previous practice has neglected this interdependence or treated it solely as a nuisance, to the serious detriment of sound inference. Previously, we considered only linear-regression models of spatial and/or spatio-temporal interdependence. In this paper, we turn to binary-outcome models. We start by stressing the ubiquity and centrality of interdependence in binary outcomes of interest to political and social scientists and note that, again, this interdependence has been ignored in most contexts where it likely arises and that, in the few contexts where it has been acknowledged, the endogeneity of the spatial lag has not been recognized. Next, we explain some of the severe challenges for empirical analysis posed by spatial interdependence in binary-outcome models, and then we follow recent advances in the spatial-econometric literature to suggest Bayesian or recursive-importance-sampling (RIS) approaches for tackling estimation. In brief and in general, the estimation complications arise because among the RHS variables is an endogenous weighted spatial-lag of the unobserved latent outcome, y*, in the other units; Bayesian or RIS techniques facilitate the complicated nested optimization exercise that follows from that fact. We also advance that literature by showing how to calculate estimated spatial effects (as opposed to parameter estimates) in such models, how to construct confidence regions for those (adopting a simulation strategy for the purpose), and how to present such estimates effectively.
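In latent-variable form, the model the abstract describes is the spatial-lag probit; in standard notation,

```latex
y^* = \rho W y^* + X\beta + \varepsilon, \qquad y_i = \mathbf{1}(y_i^* > 0),
```

so that y* = (I - rho W)^{-1}(X beta + epsilon). The latent errors are therefore correlated across all units and the likelihood no longer factors into independent probit terms, which is what motivates the Bayesian and RIS estimators.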

33
Paper
No News is News: Non-Ignorable Non-Response in Roll-Call Data Analysis
Rosas, Guillermo
Shomer, Yael
Haptonstahl, Stephen

Uploaded 07-10-2010
Keywords rollcall
voting
abstention
missing
Bayesian
IRT
Abstract Roll-call votes are widely employed to infer the ideological proclivities of legislators, even though inferences based on roll-call data are accurate reflections of underlying policy preferences only under stringent assumptions. We explore the consequences of violating one such assumption, namely, the ignorability of the process that generates non-response in roll calls. We offer a reminder of the inferential consequences of ignoring certain processes of non-response, a basic estimation framework to model non-response and vote choice concurrently, and models for two theoretically relevant processes of non-ignorable missingness. We reconsider the "most liberal Senator" question that comes up at election time every four years in light of our arguments and show how inferences about ideal points can improve by incorporating a priori information about the process that generates abstentions.

34
Paper
Pre-Election Polls in Nation and State: A Dynamic Bayesian Hierarchical Model
Franklin, Charles

Uploaded 07-17-2001
Keywords campaigns
polling
aggregation
Bayesian
hierarchical models
Abstract A vast number of national trial heat polls are conducted in the months preceding a presidential election. But as was dramatically demonstrated in 2000, candidates must win states to win the presidency, not just win popular votes. The density of state level polling is much less than that for the nation as a whole. This makes efforts to track candidate support at the state level, and to estimate campaign effects in the states, very difficult. This paper develops a Bayesian hierarchical model of trial heat polls which uses state and national polling data, plus measures of campaign effort in each state, to estimate candidate support between observed state polls. At a technical level, the Bayesian approach provides not only estimates of support but also easily understood estimates of the uncertainty of those estimates. At an applied level, this method can allow campaigns to target polling in states that are most likely to be changing while being alerted to potential shifts in states that are not as frequently polled.

35
Paper
Recent Developments in Econometric Modelling: A Personal Viewpoint
Maddala, G.S.

Uploaded 07-17-1997
Keywords dynamic panel data models
dynamic models with limited dependent variables
unit roots
cointegration
VAR's
Bayesian
Abstract The quotation above (more than three thousand years ago) essentially summarizes my perception of what is going on in econometrics. Dynamic economic modelling is a comprehensive term. It covers everything except pure cross-section analysis. Hence, I have to narrow down the scope of my paper. I shall not cover duration models, event studies, count data and Markovian models. The areas covered are: dynamic panel data models, dynamic models with limited dependent variables, unit roots, cointegration, VAR’s and Bayesian approaches to all these problems. These are areas I am most familiar with. Also, the paper is not a survey of recent developments. Rather, it presents what I feel are important issues in these areas. Also, as far as possible, I shall relate the issues with those considered in the work on Political Methodology. I have a rather different attitude towards econometric methods which my own colleagues in the profession may not share. In my opinion, there is too much technique and not enough discussion of why we are doing what we are doing. I am often reminded of the admonition of the queen to Polonius in Shakespeare’s Hamlet, “More matter, less art.”

36
Paper
A default prior distribution for logistic and other regression models
Gelman, Andrew
Jakulin, Aleks
Pittau, Maria Grazia
Su, Yu-Sung

Uploaded 08-03-2007
Keywords Bayesian inference
generalized linear model
least squares
hierarchical model
linear regression
logistic regression
multilevel model
noninformative prior distribution
Abstract We propose a new prior distribution for classical (non-hierarchical) logistic regression models, constructed by first scaling all nonbinary variables to have mean 0 and standard deviation 0.5, and then placing independent Student-$t$ prior distributions on the coefficients. As a default choice, we recommend the Cauchy distribution with center 0 and scale 2.5, which in the simplest setting is a longer-tailed version of the distribution attained by assuming one-half additional success and one-half additional failure in a logistic regression. We implement a procedure to fit generalized linear models in R with this prior distribution by incorporating an approximate EM algorithm into the usual iteratively weighted least squares. We illustrate with several examples, including a series of logistic regressions predicting voting preferences, an imputation model for a public health data set, and a hierarchical logistic regression in epidemiology. We recommend this default prior distribution for routine applied use. It has the advantage of always giving answers, even when there is complete separation in logistic regression (a common problem, even when the sample size is large and the number of predictors is small) and also automatically applying more shrinkage to higher-order interactions. This can be useful in routine data analysis as well as in automated procedures such as chained equations for missing-data imputation.
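A penalized-likelihood (MAP) analogue of the recommended default is easy to sketch: rescale the predictor to standard deviation 0.5 and add independent Cauchy log-priors to the logistic log-likelihood. The paper's own implementation embeds an approximate EM step in iteratively weighted least squares in R; the sketch below, including the weaker scale-10 prior on the intercept, is an assumption in the spirit of the paper rather than its code.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# MAP logistic regression under independent Cauchy priors:
# predictor rescaled to mean 0, sd 0.5; Cauchy(0, 2.5) on the slope and a
# weaker Cauchy(0, 10) on the intercept (the intercept scale is assumed).

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)
y = rng.binomial(1, expit(1.5 * x))

x_std = 0.5 * (x - x.mean()) / x.std()        # mean 0, sd 0.5
X = np.column_stack([np.ones(n), x_std])
scale = np.array([10.0, 2.5])                 # prior scales: intercept, slope

def neg_log_post(beta):
    eta = X @ beta
    loglik = np.sum(y * eta - np.logaddexp(0, eta))       # logistic log-likelihood
    logprior = -np.sum(np.log1p((beta / scale) ** 2))     # Cauchy kernels, constants dropped
    return -(loglik + logprior)

fit = minimize(neg_log_post, np.zeros(2))
print(np.round(fit.x, 2))
```

Because the Cauchy prior has finite density everywhere, this optimization returns finite estimates even under complete separation, which is the practical advantage the abstract emphasizes.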

37
Paper
Formal Tests of Substantive Significance for Linear and Non-Linear Models
Esarey, Justin
Danneman, Nathan

Uploaded 07-16-2010
Keywords statistical decision theory
substantive significance
marginal effects
bayesian
Abstract We propose a critical statistic c^{*} for determining the substantive significance of an empirical result, which we define as the degree to which it justifies a particular decision (such as the decision to accept or reject a theoretical hypothesis), and provide software tools for calculating c^{*} for a wide variety of models. Our procedure, which is built on ideas from Bayesian statistical decision theory, helps researchers improve the objectivity, transparency, and consistency of their assessments of substantive significance.

38
Paper
Time Series Cross-Sectional Analyses with Different Explanatory Variables in Each Cross-Section
Girosi, Federico
King, Gary

Uploaded 07-11-2001
Keywords Bayesian hierarchical model
time series
cross-section
Abstract The current animosity between quantitative cross-national comparativists and area studies scholars originated in the expanding geographic scope of data collection in the 1960s. As quantitative scholars sought to include more countries in their regressions, the measures they were able to find for all observations became less comparable, and those which were available (or appropriate) for fewer than the full set were excluded. Area studies scholars appropriately complain about the violence these procedures do to the political reality they find from their in depth analyses of individual countries, but as quantitative comparativists continue to seek systematic comparisons, the conflict continues. We attempt to eliminate a small piece of the basis of this conflict by developing models that enable comparativists to include different explanatory variables, or the same variables with different meanings, in the time-series regression in each country. This should permit more powerful statistical analyses and encourage more context-sensitive data collection strategies. We demonstrate the advantages of this approach in practice by showing how out-of-sample forecasts of mortality rates in 25 countries, 17 age groups, and 17 causes of death in males and 20 in females from this model out-perform a standard regression approach.

39
Paper
Multilevel (hierarchical) modeling: what it can and can't do
Gelman, Andrew

Uploaded 01-26-2005
Keywords Bayesian inference
hierarchical model
multilevel regression
Abstract Multilevel (hierarchical) modeling is a generalization of linear and generalized linear modeling in which regression coefficients are themselves given a model, whose parameters are also estimated from data. We illustrate the strengths and limitations of multilevel modeling through an example of the prediction of home radon levels in U.S. counties. The multilevel model is highly effective for predictions at both levels of the model but could easily be misinterpreted for causal inference.
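The mechanism behind those predictions is classical partial pooling; in standard multilevel notation, the estimated intercept for county j is approximately

```latex
\hat{\alpha}_j \approx
\frac{(n_j/\sigma^2)\,\bar{y}_j + (1/\tau^2)\,\mu}{n_j/\sigma^2 + 1/\tau^2},
```

a precision-weighted compromise between the county mean \bar{y}_j and the overall mean \mu. Counties with many measurements keep estimates near their own mean; sparsely measured counties are pulled toward \mu, which is exactly why the model predicts well at both levels yet is easy to misread causally.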

40
Paper
Bayesian Model Averaging: Theoretical developments and practical applications
Montgomery, Jacob
Nyhan, Brendan

Uploaded 01-22-2008
Keywords Bayesian model averaging
BMA
model robustness
specification uncertainty
Abstract Political science researchers typically conduct an idiosyncratic search of possible model configurations and then present a single specification to readers. This approach systematically understates the uncertainty of our results, generates concern among readers and reviewers about fragile model specifications, and leads to the estimation of bloated models with huge numbers of controls. Bayesian model averaging (BMA) offers a systematic method for analyzing specification uncertainty and checking the robustness of one's results to alternative model specifications. In this paper, we summarize BMA, review important recent developments in BMA research, and argue for a different approach to using the technique in political science. We then illustrate the methodology by reanalyzing models of voting in U.S. Senate elections and international civil war onset using software that respects statistical conventions within political science.
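A minimal BIC-based flavor of the technique (one common approximation in the BMA literature, associated with Raftery; not necessarily the software the authors use) can be sketched in a few lines: enumerate candidate specifications, weight each by exp(-BIC/2), and sum weights over the models containing each predictor.

```python
import numpy as np
from itertools import combinations

# BIC-approximation Bayesian model averaging for a small linear model:
# enumerate all subsets of three predictors, compute each model's BIC,
# convert to posterior model weights, and report posterior inclusion
# probabilities. Data are simulated; only x1 matters.

rng = np.random.default_rng(4)
n, names = 200, ["x1", "x2", "x3"]
X = rng.normal(size=(n, 3))
y = 1.0 + 2.0 * X[:, 0] + rng.normal(size=n)

def bic(cols):
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    b, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ b) ** 2)
    k = len(cols) + 2                     # intercept, slopes, error variance
    return n * np.log(rss / n) + k * np.log(n)

models = [c for r in range(4) for c in combinations(range(3), r)]
bics = np.array([bic(m) for m in models])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()                              # posterior model weights

for j, name in enumerate(names):
    incl = sum(wi for wi, m in zip(w, models) if j in m)
    print(f"{name}: posterior inclusion probability = {incl:.2f}")
```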

41
Paper
Measuring Political Support and Issue Ownership Using Endorsement Experiments, with Application to the Militant Groups in Pakistan
Bullock, Will
Imai, Kosuke
Shapiro, Jacob

Uploaded 07-18-2010
Keywords endorsement experiment
survey experiment
bayesian
pakistan
militant groups
issue ownership
social desirability
Abstract To measure the levels of support for political actors (e.g., candidates and parties) and the strength of their issue ownership, survey experiments are often conducted in which respondents are asked to express their opinion about a particular policy endorsed by a randomly selected political actor. These responses are contrasted with those from a control group that receives no endorsement. This survey methodology is particularly useful for studying sensitive political attitudes. We develop a Bayesian hierarchical measurement model for such endorsement experiments, demonstrate its statistical properties through simulations, and use it to measure support for Islamist militant groups in Pakistan. Our model uses item response theory to estimate support levels on the same scale as the ideal points of respondents. The model also estimates the strength of political actors' issue ownership for specific policies as well as the relationship between respondents' characteristics and support levels. Our analysis of a recent survey experiment in Pakistan reveals three key patterns. First, citizens' attitudes towards militant groups are geographically clustered. Second, once these regional differences are taken into account, respondents' characteristics have little predictive power for their support levels. Finally, militant groups tend to receive less support in the areas where they operate.

42
Paper
Flexible Prior Specifications for Factor Analytic Models with an Application to the Measurement of American Political Ideology
Quinn, Kevin M.

Uploaded 04-20-2000
Keywords factor analysis
intrinsic autoregression
hierarchical modeling
Bayesian inference
political ideology
Abstract Factor analytic measurement models are widely used in the social sciences to measure latent variables and functions thereof. Examples include the measurement of: political preferences, liberal democracy, latent determinants of exchange rates, and latent factors in arbitrage pricing theory models and the corresponding pricing deviations. Oftentimes, the results of these measurement models are sensitive to distributional assumptions that are made regarding the latent factors. In this paper I demonstrate how prior distributions commonly used in image processing and spatial statistics provide a flexible means to model dependencies among the latent factor scores that cannot be easily captured with standard prior distributions that treat the factor scores as (conditionally) exchangeable. Markov chain Monte Carlo techniques are used to fit the resulting models. These modeling techniques are illustrated with a simulated data example and an analysis of American political attitudes drawn from the 1996 American National Election Study.

43
Paper
Validation of software for Bayesian models using posterior quantiles
Cook, Samantha
Gelman, Andrew
Rubin, Donald

Uploaded 08-16-2005
Keywords Bayesian inference
Markov chain Monte Carlo
simulation
computation
hierarchical models
Abstract We present a simulation-based method designed to establish the computational correctness of software developed to fit a specific Bayesian model, capitalizing on properties of Bayesian posterior distributions. We illustrate the validation technique with two examples. The validation method is shown to find errors in software when they exist and, moreover, the validation output can be informative about the nature and location of such errors.
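
A minimal Python sketch of the posterior-quantile check for a conjugate normal model, where the exact posterior is known: if the software is correct, the quantile of the true parameter within its own posterior should be uniform over replications. Names and model settings are illustrative:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    mu0, tau, sigma, n = 0.0, 2.0, 1.0, 10
    quantiles = []
    for _ in range(1000):
        theta = rng.normal(mu0, tau)                      # draw from the prior
        y = rng.normal(theta, sigma, n)                   # simulate data
        prec = 1 / tau**2 + n / sigma**2                  # conjugate posterior
        post_mean = (mu0 / tau**2 + y.sum() / sigma**2) / prec
        draws = rng.normal(post_mean, prec ** -0.5, 500)  # "software" output
        quantiles.append(np.mean(draws < theta))          # posterior quantile
    # Under correct software the quantiles are approximately Uniform(0,1)
    print(stats.kstest(quantiles, "uniform"))

A bug in the sampler (say, a wrong posterior variance) shows up as a non-uniform pile-up of quantiles near 0, 1, or 0.5.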

44
Paper
Democratic Compromise: A Latent Variable Analysis of Ten Measures of Regime Type
Pemstein, Daniel
Meserve, Stephen
Melton, James

Uploaded 02-07-2008
Keywords democracy
measurement
democracy measurement
regime
regime type
latent variable analysis
Bayesian latent variable analysis
UDS
Unified Democracy Scores
multi-rater ordinal probit
Abstract Using a Bayesian latent variable approach, we synthesize a new measure of democracy, the Unified Democracy Scores (UDS), from ten extant scales. We accompany this new scale with quantitative estimates of uncertainty, provide estimates of the relative reliability of the constituent indicators, and quantify what the ordinal levels of each of the existing measures mean in relationship to one another. Our method eschews the difficult -- and often arbitrary -- decision to use one existing democracy scale over another in favor of a cumulative approach that allows us to simultaneously leverage the measurement efforts of numerous scholars.
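
A minimal Python sketch of the latent-variable intuition, simplified to continuous raters: each scale reports the latent score plus rater-specific noise, and the posterior mean is a precision-weighted average with attached uncertainty. The paper's actual model is a multi-rater ordinal probit; this continuous version is illustrative only:

    import numpy as np

    def unified_score(ratings, rater_sd, prior_sd=1.0):
        """Posterior mean/sd of a latent score under a N(0, prior_sd^2) prior."""
        prec = 1 / prior_sd**2 + np.sum(1 / rater_sd**2)
        mean = np.sum(ratings / rater_sd**2) / prec
        return mean, prec ** -0.5

    # Three raters with different reliabilities score the same country
    ratings = np.array([0.8, 0.5, 1.1])
    rater_sd = np.array([0.3, 0.6, 0.4])     # smaller sd = more reliable rater
    print(unified_score(ratings, rater_sd))  # estimate with uncertainty attached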

45
Paper
Seven Deadly Sins of Contemporary Quantitative Political Analysis
Schrodt, Philip

Uploaded 08-23-2010
Keywords collinearity
prediction
explanation
Bayesian
frequentist
control variables
pedagogy
philosophy of science
logical positivists
significance test
Hempel
Thor
Abstract A combination of technological change, methodological drift and a certain degree of intellectual sloth and sloppiness, particularly with respect to philosophy of science, has allowed contemporary quantitative political analysis to accumulate a series of dysfunctional habits that have rendered a great deal of contemporary research more or less scientifically useless. The cure for this is not to reject quantitative methods -- and the cure is most certainly not a postmodernist nihilistic rejection of all systematic method -- but rather to return to some fundamentals, and take on some hard problems rather than expecting to advance knowledge solely through the ever-increasing application of fast-twitch muscle fibers to computer mice. In this paper, these "seven deadly sins" are identified as 1. Kitchen sink models that ignore the effects of collinearity; 2. Pre-scientific explanation in the absence of prediction; 3. Reanalyzing the same data sets until they scream; 4. Using complex methods without understanding the underlying assumptions; 5. Interpreting frequentist statistics as if they were Bayesian; 6. Linear statistical monoculture at the expense of alternative structures; 7. Confusing statistical controls and experimental controls. The answer to these problems is solid, thoughtful, original work driven by an appreciation of both theory and data. Not postmodernism. The paper closes with a review of how we got to this point from the perspective of 17th through 20th century philosophy of science, and provides suggestions for changes in philosophical and pedagogical approaches that might serve to correct some of these problems.

46
Paper
Post-stratification without population level information on the post-stratifying variable, with application to political polling
Gelman, Andrew
Katz, Jonathan
Riley, Cavan

Uploaded 02-10-2000
Keywords Bayesian Inference
Post-stratification
Sample surveys
State-space models
Abstract We investigate the construction of more precise estimates of a collection of population means using information about a related variable in the context of repeated sample surveys. The method is illustrated using poll results concerning presidential approval rating (our related variable is political party identification). We use post-stratification to construct these improved estimates, but since we don't have population level information on the post-stratifying variable, we construct a model for the manner in which the post-stratifier develops over time. In this manner, we obtain more precise estimates without making possibly untenable assumptions about the dynamics of our variable of interest, the presidential approval rating.
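
A minimal Python sketch of the basic post-stratified estimator: cell means weighted by population shares of the post-stratifying variable. The paper's contribution is to model these shares over time when they are unknown; here the shares are assumed known, and all names and numbers are illustrative:

    import numpy as np

    def poststratify(y, cell, pop_shares):
        """Weighted mean: sum_k N_k/N * ybar_k over cells of the stratifier."""
        return sum(share * y[cell == k].mean() for k, share in pop_shares.items())

    rng = np.random.default_rng(3)
    party = rng.choice(["dem", "rep", "ind"], size=500, p=[0.5, 0.3, 0.2])
    approve = (rng.random(500) < np.where(party == "dem", 0.7, 0.3)).astype(float)
    # Reweight the sample to the assumed population party shares
    print(poststratify(approve, party, {"dem": 0.35, "rep": 0.35, "ind": 0.30}))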

47
Paper
Making Inferences from 2x2 Tables: The Inadequacy of the Fisher Exact Test for Observational Data and a Principled Bayesian Alternative
Sekhon, Jasjeet

Uploaded 08-17-2005
Keywords Fisher exact test
randomization inference
permutation tests
Bayesian tests
difference of proportions
observational data
Abstract The Fisher exact test is the dominant method of making inferences from 2x2 tables where the number of observations is small. Although the Fisher test and approximations to it are used in a large number of studies, these tests rest on a data generating process which is inappropriate for most applications for which they are used. The canonical Fisher test assumes that both of the margins in a 2x2 table are fixed by construction---i.e., both the treatment and outcome margins are fixed a priori. If the data were generated by an alternative process, such as binomial, negative binomial or Poisson binomial sampling, the Fisher exact test and approximations to it do not have correct coverage. A Bayesian method is offered which has correct coverage, is powerful, is consistent with a binomial process and can be extended easily to other distributions. A prominent 2x2 table which has been used in the literature by Geddes (1990) and Sekhon (2004) to explore the relationship between foreign threat and social revolution (Skocpol, 1979) is reanalyzed. The Bayesian method finds a significant relationship even though the Fisher and related tests do not. A Monte Carlo sampling experiment is provided which shows that the Bayesian method dominates the usual alternatives in terms of both test coverage and power when the data are generated by a binomial process.
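
A minimal Python sketch of a Bayesian test for a 2x2 table under binomial sampling: independent Beta posteriors for the two proportions, with the posterior probability that one exceeds the other computed by simulation. This follows the binomial logic the paper argues for, not its exact procedure; the table entries are illustrative:

    import numpy as np
    from scipy import stats

    def prob_p1_greater(y1, n1, y2, n2, draws=100_000, seed=0):
        """Pr(p1 > p2 | data) under uniform Beta(1,1) priors."""
        rng = np.random.default_rng(seed)
        p1 = rng.beta(y1 + 1, n1 - y1 + 1, draws)
        p2 = rng.beta(y2 + 1, n2 - y2 + 1, draws)
        return np.mean(p1 > p2)

    # A small table: 7/8 successes under treatment, 2/7 under control
    print(prob_p1_greater(7, 8, 2, 7))                 # posterior evidence
    print(stats.fisher_exact([[7, 1], [2, 5]]))        # Fisher, for contrast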

48
Paper
Why we (usually) don't have to worry about multiple comparisons
Gelman, Andrew
Hill, Jennifer
Yajima, Masanao

Uploaded 06-01-2008
Keywords Bayesian inference
hierarchical modeling
multiple comparisons
type S error
statistical significance
Abstract The problem of multiple comparisons can disappear when viewed from a Bayesian perspective. We propose building multilevel models in the settings where multiple comparisons arise. These address the multiple comparisons problem and also yield more efficient estimates, especially in settings with low group-level variation, which is where multiple comparisons are a particular concern. Multilevel models perform partial pooling (shifting estimates toward each other), whereas classical procedures typically keep the centers of intervals stationary, adjusting for multiple comparisons by making the intervals wider (or, equivalently, adjusting the p-values corresponding to intervals of fixed width). Multilevel estimates make comparisons more conservative, in the sense that intervals for comparisons are more likely to include zero; as a result, those comparisons that are made with confidence are more likely to be valid.
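
A minimal Python sketch of the partial pooling the abstract describes: group estimates are shrunk toward the overall mean in proportion to their noise, which is how multilevel models temper comparisons. The group-level variance is fixed here for clarity; a full multilevel model would estimate it, and the numbers are illustrative:

    import numpy as np

    def partial_pool(ybar, se, mu, tau):
        """Precision-weighted compromise between each group mean and mu."""
        w = (1 / se**2) / (1 / se**2 + 1 / tau**2)
        return w * ybar + (1 - w) * mu

    ybar = np.array([28.0, 8.0, -3.0, 7.0])    # raw group estimates
    se = np.array([15.0, 10.0, 16.0, 11.0])    # their standard errors
    print(partial_pool(ybar, se, mu=ybar.mean(), tau=5.0))
    # Noisy, extreme groups move the most; comparisons become more conservative.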

49
Paper
A Statistical Method for Empirical Testing of Competing Theories
Imai, Kosuke
Tingley, Dustin

Uploaded 08-24-2010
Keywords EITM
finite mixture model
Bayesian statistics
multiple testing
false discovery rate
EM algorithm
Abstract Empirical testing of competing theories lies at the heart of social science research. We demonstrate that a very general and well-known class of statistical models, called finite mixture models, provides an effective way to test rival theories. In the proposed framework, each observation is assumed to be generated from a statistical model implied by one of the theories under consideration. Researchers can then estimate the probability that a specific observation is consistent with either of the competing theories. By directly modeling this probability with the characteristics of observations, one can also determine the conditions under which a particular theory applies. We discuss a principled way to identify a list of observations that are statistically significantly consistent with each theory. Finally, we propose several measures of the overall performance of a particular theory. We illustrate the advantages of our method by applying it to an influential study on trade policy preferences.
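
A minimal Python sketch of the finite-mixture idea: each observation is drawn from one of two theories' implied normal models, and EM recovers both the mixing weight and each observation's probability of being consistent with theory 1. Simplified to known component means and variance; all names are illustrative:

    import numpy as np
    from scipy import stats

    def em_two_theories(y, mu1, mu2, sd=1.0, iters=200):
        """EM for a two-component normal mixture with fixed means and sd."""
        pi = 0.5
        for _ in range(iters):
            d1 = pi * stats.norm.pdf(y, mu1, sd)          # theory 1 density
            d2 = (1 - pi) * stats.norm.pdf(y, mu2, sd)    # theory 2 density
            z = d1 / (d1 + d2)                            # E-step: responsibility
            pi = z.mean()                                 # M-step: mixing weight
        return pi, z

    rng = np.random.default_rng(4)
    y = np.concatenate([rng.normal(-2, 1, 70), rng.normal(2, 1, 30)])
    pi, z = em_two_theories(y, mu1=-2.0, mu2=2.0)
    print(pi)          # ~0.7: share of observations consistent with theory 1
    print(z[:5])       # per-observation probabilities, as in the abstract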

50
Paper
The Insignificance of Null Hypothesis Significance Testing
Gill, Jeff

Uploaded 02-06-1999
Keywords hypothesis testing
inverse probability
Fisher
Neyman-Pearson
Bayesian approaches
confidence sets
meta-analysis
Abstract The current method of hypothesis testing in the social sciences is under intense criticism yet most political scientists are unaware of the important issues being raised. Criticisms focus on the construction and interpretation of a procedure that has dominated the reporting of empirical results for over fifty years. There is evidence that null hypothesis significance testing as practiced in political science is deeply flawed and widely misunderstood. This is important since most empirical work in political science argues the value of findings through the use of the null hypothesis significance test. In this article I review the history of the null hypothesis significance testing paradigm in the social sciences and discuss major problems, some of which are logical inconsistencies while others are more interpretive in nature. I suggest alternative techniques to convey effectively the importance of data-analytic findings. These recommendations are illustrated with examples using empirical political science publications.

