
Search Results

Below are the results for the search criterion 'direct and indirect e'
Total number of records returned: 968

Congress and the Environment: A Longitudinal Analysis
Shipan, Charles R.
Lowry, William R.

Uploaded 08-01-1997
Keywords Congress
interest group ratings
Abstract Each year since 1970 the League of Conservation Voters has identified a series of key votes on public health, energy, wilderness, and other environmental issues and used these votes to calculate "ratings" for members of Congress. While these ratings are useful for comparing the level of environmental support by members within a given year and a given chamber, they are less useful for making cross-chamber or longitudinal comparisons, as they are based on different votes and thus computed on a different scale in each year and in each chamber. In this paper we make use of an innovative methodological approach developed by Groseclose, Levitt, and Snyder (1997) to adjust these scores in a way that allows for interchamber and intertemporal comparisons. We then use these adjusted scores to examine patterns in congressional voting on environmental issues. In addition to presenting these trends, we propose and evaluate some explanations for the most striking trend: the finding that the parties have diverged on environmental issues over the past twenty-five years.

Non-Parametric Analysis of Binary Choice Data
Poole, Keith T.

Uploaded 06-16-1997
Keywords discrete choice analysis
non-parametric unfolding
Abstract This paper presents a general non-parametric technique for maximizing the correct classification of binary choice or two-category data. Two general classes of data are analyzed. The first consists of binary choice matrices such as congressional roll calls or preferential rank orderings of stimuli gathered from individuals. For this class of data a general non-parametric unfolding procedure is developed. To unfold binary choice data, two subproblems must be solved. First, given a set of chooser or legislator points, a cutting plane through the space for the binary choice must be found such that it divides the legislators into two sets that reproduce the actual choices as closely as possible. Second, given a set of cutting planes for the binary choices, a point for each chooser or legislator must be found that reproduces the actual choices as closely as possible. Solutions to these two problems are presented in this paper. The second class of data analyzed consists of a two-category dependent variable and a set of independent variables. This class of data is a subset of the binary choice unfolding problem. The cutting plane procedure can be used to estimate a cutting plane through the space of the independent variables that maximizes the number of correct classifications. The normal vector to this cutting plane closely corresponds to the beta vector from a standard probit, logit, or linear probability analysis.
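
The first subproblem can be sketched in one dimension: given fixed legislator positions, search candidate cuts for the one that classifies the most votes correctly. The positions and votes below are hypothetical, and this brute-force search only illustrates the idea; it is not Poole's actual multidimensional cutting-plane procedure.

```python
import numpy as np

def best_cutpoint(points, votes):
    """points: 1-D legislator ideal points; votes: 1 = yea, 0 = nay."""
    order = np.argsort(points)
    pts, v = points[order], votes[order]
    cuts = (pts[:-1] + pts[1:]) / 2          # midpoints between adjacent legislators
    best_cut, best_correct = cuts[0], -1
    for c in cuts:
        left = pts < c
        # try both polarities: yea on the left, or yea on the right
        correct = max(np.sum(v[left] == 1) + np.sum(v[~left] == 0),
                      np.sum(v[left] == 0) + np.sum(v[~left] == 1))
        if correct > best_correct:
            best_cut, best_correct = c, correct
    return best_cut, best_correct

points = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])   # hypothetical ideal points
votes = np.array([1, 1, 1, 0, 0, 0])                    # perfectly spatial roll call
cut, n_correct = best_cutpoint(points, votes)
```

With perfectly spatial voting the cut separating the yeas from the nays classifies every legislator correctly.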

Antitrust and Markets
Hayes, Jeffery W.

Uploaded 03-18-1997
Keywords antitrust
principal-agent theory
financial markets
Abstract I study the Antitrust Division of the Department of Justice in order to test a theory of differential information asymmetry which posits that different political actors are capable of controlling different kinds of bureaucratic performance. Information asymmetry between the bureaucracy and political principals is an essential but infrequently tested foundation of principal--agent theory. I address the problem by generating a novel outcome measure of performance based upon how financial markets react to antitrust agency actions. By comparing measures of internal agency processes to an outcome-based market measure, we can shed light on the problem of information asymmetry when we realize that controlling processes requires fewer informational resources than shaping outcomes. It is thus possible to make inferences not only about who controls the bureaucracy but also about what control means to the President and Congress. My preliminary findings suggest that a critical perspective on the meaning of bureaucratic performance illuminates interesting aspects of regulatory change. Examining the new market measure, I demonstrate that the entrenchment of Chicago school antitrust ideas resulted in a massive decline in enforcement against major firms in the U.S. economy. I then argue that this decline in government intervention in the economy had stabilized by the late 1970s, well before Ronald Reagan's so-called antitrust revolution. Finally, I show that this second claim is not inconsistent with the notion of political control of the bureaucracy. By modeling a mid-range theory of differential information asymmetry, I find both a long-run equilibrium relationship between presidential ideology and agency outcomes as well as a consistent effect of Congressional ideology on agency processes. I conclude with a discussion of the implications of these findings for assessments of antitrust policy and bureaucracy research more generally.

Efficiency, Equity, and Timing in Voting Mechanisms
Battaglini, Marco
Palfrey, Thomas
Morton, Rebecca

Uploaded 06-19-2005
Keywords sequential voting
simultaneous voting
costly voting
Abstract In many voting situations some participants know the choices of earlier voters. We show that in such cases, when voting is costly, later voters' decisions depend on both the choices of previous voters and the cost of voting, and differ significantly from the choices made when voting is simultaneous. Using experiments we find support for our predictions. We also find that increasing the cost of voting decreases both informational and economic efficiency, and that subsidizing voting can increase efficiency. We find a tradeoff between efficiency and equity in sequential voting: although sequential voting is generally more advantageous for all voters than simultaneous voting, there are significant additional advantages to later voters in sequential voting even when early voters are theoretically predicted to benefit.

Understanding Interaction Models: Improving Empirical Analyses
Brambor, Thomas
Clark, William
Golder, Matt

Uploaded 07-26-2005
Abstract Multiplicative interaction models are common in the quantitative political science literature. This is so for good reason. Institutional arguments frequently imply that the relationship between political inputs and outcomes varies depending on the institutional context. Models of strategic interaction typically produce conditional hypotheses as well. Although conditional hypotheses are ubiquitous in political science and multiplicative interaction models have been found to capture their intuition quite well, a survey of the top three political science journals from 1998 to 2002 suggests that the execution of these models is often flawed and inferential errors are common. We believe that considerable progress in our understanding of the political world can occur if scholars follow the simple checklist of dos and don'ts for using multiplicative interaction models presented in this article. Only 10% of the articles in our survey followed the checklist.
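
The core quantity in such models is the marginal effect dy/dx = b1 + b3*z, whose standard error depends on z through the coefficient variances and covariance. A minimal sketch on simulated data (all numbers below are hypothetical, not from the article's survey):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x, z = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.5*x + 0.3*z + 0.8*x*z + rng.normal(size=n)   # hypothetical DGP

# OLS with a multiplicative interaction: y = b0 + b1*x + b2*z + b3*x*z + e
X = np.column_stack([np.ones(n), x, z, x*z])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
V = np.linalg.inv(X.T @ X) * (resid @ resid) / (n - X.shape[1])

# Marginal effect of x at a chosen z: dy/dx = b1 + b3*z,
# Var(dy/dx) = V[1,1] + z^2 * V[3,3] + 2*z*V[1,3]
for z0 in (-1.0, 0.0, 1.0):
    me = beta[1] + beta[3]*z0
    se = np.sqrt(V[1, 1] + z0**2*V[3, 3] + 2*z0*V[1, 3])
    print(f"z = {z0:+.1f}: dy/dx = {me:.2f} (se = {se:.2f})")
```

Note that the coefficient on x alone (b1) is only the marginal effect at z = 0, one of the points the article's checklist emphasizes.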

Covariate Functional Form in Cox Models
Keele, Luke

Uploaded 10-25-2005
Keywords Cox model
event history
survival models
duration models
Abstract In most event history models, the effect of a covariate on the hazard is assumed to have a log-linear functional form. For continuous covariates, this assumption is often violated as the effect is highly nonlinear. Assuming a log-linear functional form when the nonlinear form applies causes specification errors leading to erroneous statistical conclusions. Scholars can, instead of ignoring the presence of nonlinear effects, test for such nonlinearity and incorporate it into the model. I review methods to test for and model nonlinear functional forms for covariates in the Cox model. Testing for such nonlinear effects is important since such nonlinearity can appear as nonproportional hazards, but time-varying terms will not correct the misspecification. I investigate the consequences of nonlinear functional forms using data on international conflicts from 1950-1985. I demonstrate that the conclusions drawn from these data depend on fitting the correct functional form for the covariates.
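
One way to test for such nonlinearity, sketched below on simulated data, is a likelihood-ratio comparison of a log-linear against a quadratic covariate specification in the Cox partial likelihood (Breslow form, assuming no tied times). This illustrates the general idea only; it is not the specific battery of tests the paper reviews, and the data are made up rather than the conflict data used there.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def neg_cox_loglik(beta, X, time, event):
    """Negative Cox partial log-likelihood (Breslow form, no tied times)."""
    order = np.argsort(-time)                 # sort by survival time, descending
    eta, ev = (X @ beta)[order], event[order]
    log_risk = np.logaddexp.accumulate(eta)   # log of the risk-set sum at each time
    return -np.sum(ev * (eta - log_risk))

rng = np.random.default_rng(1)
n = 400
x = rng.normal(size=n)
# True log-hazard is quadratic in x, violating the log-linear assumption
time = rng.exponential(np.exp(-(0.5*x + 1.0*x**2)))
event = np.ones(n)

ll_lin = -minimize(neg_cox_loglik, np.zeros(1), args=(x[:, None], time, event)).fun
ll_quad = -minimize(neg_cox_loglik, np.zeros(2),
                    args=(np.column_stack([x, x**2]), time, event)).fun

# Likelihood-ratio test for the added quadratic term (1 degree of freedom)
lr = 2*(ll_quad - ll_lin)
pval = chi2.sf(lr, df=1)
```

When the true effect is nonlinear, the quadratic specification fits markedly better and the test rejects the log-linear form.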

Conditional Partisanship: Looking for Partisan Effects on Roll Call Votes in the U.S. House
Patty, John

Uploaded 07-15-2006
Keywords Roll call voting
House Journal
Abstract In this paper, I examine a simple procedure in the United States House of Representatives, approving the Journal, and its implications for legislative business. In particular, following a suggestion made by Sinclair (1995), I examine the hypothesis that such votes are more than simply pro forma motions or dilatory tactics by the minority party. Rather, the taking of such a vote represents a signal (perhaps to members of the House, but at least to the analyst) that the day’s ensuing business is important to at least one party’s leadership and that it is expected to be a close vote. Considering the 102nd-107th Congresses, I show that a recorded vote on the Speaker’s approval of the Journal indicates that the legislative day’s business will be both more contentious (i.e., recorded votes have a smaller margin of passage) and more partisan (i.e., recorded votes are more likely to be “party unity” votes). In addition, the fit of Poole’s Optimal Classification estimates for legislators’ preferences is higher for recorded votes taken on such days. In addition, I discuss the marginal effect of the type and timing of legislative business on these findings, as well as the identity of who calls for the vote on the Journal. Of particular interest are the differential effects for appropriations and “procedural” matters.

A 'Politically Robust' Experimental Design for Public Policy Evaluation, with Application to the Mexican Universal Health Insurance Program
King, Gary
Gakidou, Emmanuela
Ravishankar, Nirmala
Moore, Ryan
Lakin, Jason
Vargas, Manett
Tellez-Rojo, Martha María
Avila, Juan Eugenio Hernandez
Avila, Mauricio Hernandez
Llamas, Hector Hernandez

Uploaded 09-05-2006
Abstract We develop an approach to conducting large scale randomized public policy experiments intended to be more robust to the political interventions that have ruined some or all parts of many similar previous efforts. Under our proposed design, the benefits of random assignment would remain even if we lose observations; our inferences can still be unbiased even if politics disrupts two of the three steps in our analytical procedures; and other empirical checks are available to validate the overall design. We illustrate with a design and empirical validation of a planned evaluation of the Mexican Seguro Popular de Salud (Universal Health Insurance) program. Seguro Popular, which is intended to grow to provide medical care, drugs, preventative services, and financial health protection to the 50 million Mexicans without health insurance, is one of the largest health reforms of any country in the last two decades. The evaluation is also large scale, constituting one of the largest policy experiments to date and what may be the largest randomized health policy experiment ever.

Synthetic Control Methods for Comparative Case Studies: Estimating the Effect of California's Tobacco Control Program
Abadie, Alberto
Diamond, Alexis
Hainmueller, Jens

Uploaded 01-20-2007
Keywords comparative case studies
causal inference
placebo tests
program evaluation
Abstract Building on an idea in Abadie and Gardeazabal (2003), this article investigates the application of synthetic control methods to comparative case studies. We discuss the advantages of these methods and apply them to study the effects of Proposition 99, a large-scale tobacco control program that California implemented in 1988. We demonstrate that following Proposition 99 tobacco consumption fell markedly in California relative to a comparable synthetic control region. We estimate that by the year 2000 annual per-capita cigarette sales in California were about 26 packs lower than what they would have been in the absence of Proposition 99. Given that many policy interventions and events of interest in social sciences take place at an aggregate level (countries, regions, cities, etc.) and affect a small number of aggregate units, the potential applicability of synthetic control methods to comparative case studies is very large, especially in situations where traditional regression methods are not appropriate. The methods proposed in this article produce informative inference regardless of the number of available comparison units, the number of available time periods, and whether the data are individual (micro) or aggregate (macro). Software to compute the estimators proposed in this article is available at the authors' web pages.
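
The core computational step of a synthetic control is choosing nonnegative donor weights that sum to one and best reproduce the treated unit's pre-treatment path. A toy sketch with made-up donor series (not the Proposition 99 data, and matching only on pre-period outcomes rather than the full covariate set the article uses):

```python
import numpy as np
from scipy.optimize import minimize

# Toy pre-treatment outcome paths: one treated unit, four donor (control) units
rng = np.random.default_rng(2)
T0 = 10                                          # pre-treatment periods
Y0 = rng.normal(size=(T0, 4)).cumsum(axis=0)     # donor paths (random walks)
Y1 = Y0 @ np.array([0.6, 0.4, 0.0, 0.0]) + rng.normal(scale=0.05, size=T0)

# Synthetic control weights: nonnegative, summing to one, minimizing pre-period MSPE
def mspe(w):
    return np.mean((Y1 - Y0 @ w)**2)

res = minimize(mspe, np.full(4, 0.25), bounds=[(0, 1)]*4,
               constraints={'type': 'eq', 'fun': lambda w: w.sum() - 1})
w = res.x
synthetic = Y0 @ w      # the synthetic comparison path for the treated unit
```

Post-treatment, the gap between the treated path and `synthetic` is the estimated effect; the placebo tests in the article repeat this exercise for each donor unit.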

Data Experiments: Model Specifications as Treatments
Clarke, Kevin A.

Uploaded 07-16-2007
Keywords Nonnested model testing
randomized complete block designs
Abstract This paper introduces the first model discrimination test for three or more competing nonnested models. The key to the approach is treating rival model specifications as experimental treatments applied to a set of observations. Viewed in this way, competing specifications can be tested using techniques designed for analyzing multiple related samples. To this end, two such procedures are adapted for the purpose of comparing three or more nonnested model specifications. The models to be compared may be nonnested either in their covariates or in their functional forms. The tests are straightforward and can be implemented using standard statistical software. The usefulness of the tests is demonstrated through real-world applications drawn from comparative politics and international relations.
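
The specifications-as-treatments idea can be illustrated with a Friedman rank test, a standard procedure for multiple related samples: each observation is a block, each rival specification a treatment, and the per-observation squared residuals are the responses. This is a hypothetical instantiation for illustration, not necessarily the exact procedures adapted in the paper.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(3)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1 + 2*x1 + rng.normal(size=n)        # x1 is the covariate in the true model

def sq_resid(X, y):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return (y - X @ b)**2

ones = np.ones(n)
# Three rival nonnested specifications applied to the same observations
m1 = sq_resid(np.column_stack([ones, x1]), y)          # correct covariate
m2 = sq_resid(np.column_stack([ones, x2]), y)          # wrong covariate
m3 = sq_resid(np.column_stack([ones, np.exp(x2)]), y)  # wrong functional form

# Observations are blocks, specifications are treatments
stat, p = friedmanchisquare(m1, m2, m3)
```

A significant statistic indicates that at least one specification systematically fits better (or worse) than the others across observations.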

Genetic Variation in Political Participation
Fowler, James
Baker, Laura
Dawes, Christopher

Uploaded 10-12-2006
Abstract The decision to vote has puzzled scholars for decades. Theoretical models predict little or no variation in participation in large population elections and empirical models have typically explained only a relatively small portion of individual-level variance in turnout behavior. However, these models have not considered the hypothesis that part of the variation in voting behavior can be attributed to genetic effects. Matching public voter turnout records in Los Angeles to a twin registry, we study the heritability of political behavior in monozygotic and dizygotic twins. The results show that genes account for a significant proportion of the variation in voter turnout. We also replicate these results with data from the National Longitudinal Study of Adolescent Health and show that they extend to a broad class of acts of political participation. These are the first findings to suggest that humans exhibit genetic variation in their tendency to participate in political activities.

Measurement Error as a Threat to Causal Inference: Acquiescence Bias and Deliberative Polling
Weiksner, G. Michael

Uploaded 06-29-2008
Keywords Causal inference
acquiescence bias
deliberative polling
measurement error
questionnaire design
Abstract Experiments, unlike observational studies, are rarely criticized for yielding invalid causal inferences. However, I identify measurement error as a threat to causal inference in an experiment. In particular, acquiescence bias, a common and substantial source of measurement error within surveys, may be correlated with experimental manipulations. Using data from a survey experiment embedded in a Deliberative Poll, I find that acquiescence bias causes significant measurement error and that the bias differs before and after deliberation. I conclude that even experimental researchers should heed the recommendation of questionnaire design researchers to refrain entirely from asking agree/disagree questions and instead ask only construct-specific questions to avoid this threat to validity.

Causal Inference of Repeated Observations: A Synthesis of the Propensity Score Methods and Multilevel Modeling
Su, Yu-Sung

Uploaded 07-03-2008
Keywords causal inference
balancing score
multilevel modeling
propensity score
time-series-cross-sectional data
Abstract The fundamental problem of causal inference is that an individual cannot be simultaneously observed in both the treatment and control states (Holland 1986). The propensity score methods that compare the treatment and control groups by discarding the unmatched units are now widely used to deal with this problem. In some situations, however, it is possible to observe the same individual or unit of observation in the treatment and control states at different points in time. These data have the structure often referred to as time-series-cross-sectional (TSCS) data. While multilevel modeling is often applied to analyze TSCS data, this paper proposes that synthesizing the propensity score methods and multilevel modeling is preferable. The paper conducts a Monte Carlo simulation with 36 different scenarios to test the performance of the two combined methods. The results show that synthesizing propensity score matching with multilevel modeling performs better, yielding less biased and more efficient estimates. An empirical case study that reexamines the model of Przeworski et al. (2000) on democratization and development also shows the advantage of this synthesis.
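
The propensity score half of the synthesis can be sketched as follows: estimate scores with a logistic regression (fit here by iteratively reweighted least squares), then match each treated unit to its nearest control on the score. The data are simulated with a single confounder, and the multilevel-modeling step of the paper's synthesis is omitted.

```python
import numpy as np

def logit_irls(X, t, iters=25):
    """Logistic regression via iteratively reweighted least squares."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1/(1 + np.exp(-X @ b))
        b += np.linalg.solve((X.T * (p*(1 - p))) @ X, X.T @ (t - p))
    return b

rng = np.random.default_rng(4)
n = 400
x = rng.normal(size=n)                                   # confounder
t = (rng.uniform(size=n) < 1/(1 + np.exp(-x))).astype(float)
y = 2.0*t + 1.5*x + rng.normal(size=n)                   # true effect of t is 2

X = np.column_stack([np.ones(n), x])
ps = 1/(1 + np.exp(-X @ logit_irls(X, t)))               # estimated propensity score

# One-to-one nearest-neighbor matching on the score, with replacement
treated, control = np.where(t == 1)[0], np.where(t == 0)[0]
matches = control[np.abs(ps[treated][:, None] - ps[control]).argmin(axis=1)]
att = np.mean(y[treated] - y[matches])                   # matched estimate
naive = y[treated].mean() - y[control].mean()            # ignores confounding
```

Matching on the score recovers an estimate near the true effect of 2, while the naive difference in means is badly inflated by the confounder.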

Estimating Treatment Effects in the Presence of Noncompliance and Nonresponse: The Generalized Endogenous Treatment Model
Esterling, Kevin
Neblo, Michael
Lazer, David

Uploaded 02-14-2008
Keywords Average Treatment Effects
Principal Stratification
Selection on Unobservables
Latent Variable Models
Deliberation Experiment
Political Efficacy
Abstract If ignored, non-compliance with a treatment and nonresponse on outcome measures can bias estimates of treatment effects in a randomized experiment. To identify treatment effects in the case where compliance and response are conditioned on unobservables, we propose the parametric generalized endogenous treatment (GET) model. As a multilevel random effect model, GET improves on current approaches to principal stratification by incorporating behavioral responses within an experiment to measure each subject's latent compliance type. We use Monte Carlo methods to show GET has a lower MSE for treatment effect estimates than existing approaches to principal stratification that impute, rather than measure, compliance type for subjects assigned to the control. In an application, we use data from a recent field experiment to assess whether exposure to a deliberative session with their member of Congress changes constituents' levels of internal and external efficacy. Since it conditions on subjects' latent compliance type, GET is able to test whether exposure to the treatment is ignorable after balancing on covariates via matching methods. We show that internally efficacious subjects disproportionately select into the deliberative sessions, and that matching apparently does not break the latent dependence between treatment compliance and outcome. The results suggest that exposure to the deliberative sessions improves external, but not internal, efficacy.

Can October Surprise? A Natural Experiment Assessing Late Campaign Effects
Meredith, Marc
Malhotra, Neil

Uploaded 10-14-2008
Keywords Vote by mail
natural experiment
campaign effects
convenience voting
regression discontinuity
Abstract One consequence of the proliferation of vote-by-mail (VBM) in certain areas of the United States is the opportunity for voters to cast ballots weeks before Election Day. Understanding the ensuing effects of VBM on late campaign information loss has important implications for both the study of campaign dynamics and public policy debates on the expansion of convenience voting. Unfortunately, the self-selection of voters into VBM makes it difficult to causally identify the effect of VBM on election outcomes. We overcome this identification problem by exploiting a natural experiment, in which some precincts are assigned to be VBM-only based on an arbitrary threshold of the number of registered voters. We assess the effects of VBM on candidate performance in the 2008 California presidential primary via a regression discontinuity design. We show that VBM both increases the probability of selecting candidates who withdrew from the race in the interval after the distribution of ballots but before Election Day and affects the relative performance of candidates remaining in the race. Thus, we find evidence of late campaign information loss, pointing to the influence of campaign events and momentum in American politics, as well as the unintended consequences of convenience voting.
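
The regression discontinuity logic can be sketched as local linear fits on either side of the registration threshold, with the treatment effect read off as the gap between the two fitted values at the cutoff. All numbers below are hypothetical, not the authors' California data.

```python
import numpy as np

rng = np.random.default_rng(5)
n, cutoff, h = 2000, 250, 50
registered = rng.uniform(0, 500, size=n)     # running variable (hypothetical scale)
vbm = registered >= cutoff                   # precincts above the cutoff are VBM-only
# Hypothetical outcome: a 2-point jump at the cutoff plus a smooth trend
share = 5 + 0.004*registered + 2.0*vbm + rng.normal(size=n)

# Sharp RD: local linear fit within bandwidth h on each side, evaluated at the cutoff
def fit_at_cutoff(side):
    m = side & (np.abs(registered - cutoff) <= h)
    Xm = np.column_stack([np.ones(m.sum()), registered[m] - cutoff])
    b, *_ = np.linalg.lstsq(Xm, share[m], rcond=None)
    return b[0]                              # intercept = predicted value at cutoff

rd_effect = fit_at_cutoff(vbm) - fit_at_cutoff(~vbm)
```

Because assignment at the threshold is as-if random, the estimated jump identifies the local effect of VBM without modeling voter self-selection.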

We're Not Lost, But How Did We Get Here?
Jackson, John

Uploaded 07-07-2009
Keywords Society for Political Methodology
Abstract This narrative recounts the beginning and early years of the Society for Political Methodology and what an initially small group of young, naive and energetic scholars did to and for Political Science.

The MAOA Gene Predicts Credit Card Debt
De Neve, Jan-Emmanuel
Fowler, James

Uploaded 08-18-2009
Abstract This article presents the first evidence of a specific gene predicting real world economic behavior. Using data from the National Longitudinal Study of Adolescent Health, we show that individuals with a polymorphism of the MAOA gene that has lower transcriptional efficiency are significantly more likely to report having credit card debt. Having one or both MAOA alleles of the low efficiency type raises the average likelihood of having credit card debt by 7.8% and 15.9% respectively. About half of our population has one or both MAOA alleles of the low type. Prior research has linked this genetic variation to lack of conscientiousness, impulsivity, and addictive behavior.

Penalized Regression, Standard Errors, and Bayesian Lassos
Kyung, Minjung
Gill, Jeff
Ghosh, Malay
Casella, George

Uploaded 02-23-2010
Keywords model selection
Bayesian hierarchical models
LARS algorithm
EM/Gibbs sampler
Geometric Ergodicity
Gibbs Sampling
Abstract Penalized regression methods for simultaneous variable selection and coefficient estimation, especially those based on the lasso of Tibshirani (1996), have received a great deal of attention in recent years, mostly through frequentist models. Properties such as consistency have been studied, and are achieved by different lasso variations. Here we look at a fully Bayesian formulation of the problem, which is flexible enough to encompass most versions of the lasso that have been previously considered. The advantages of the hierarchical Bayesian formulations are many. In addition to the usual ease-of-interpretation of hierarchical models, the Bayesian formulation produces valid standard errors (which can be problematic for the frequentist lasso), and is based on a geometrically ergodic Markov chain. We compare the performance of the Bayesian lassos to their frequentist counterparts using simulations and data sets that previous lasso papers have used, and see that in terms of prediction mean squared error, the Bayesian lasso performance is similar to and, in some cases, better than, the frequentist lasso.
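
A compact sketch of one Gibbs sampler in this family, in the style of the Park and Casella formulation the paper builds on, with the penalty parameter lambda held fixed and simulated data; the paper's full hierarchy and the several lasso variants it encompasses go beyond this.

```python
import numpy as np

rng = np.random.default_rng(6)
n, p, lam = 100, 3, 1.0
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, 0.0])
y = X @ beta_true + rng.normal(size=n)
y = y - y.mean()                       # center the response (no intercept sampled)

XtX, Xty = X.T @ X, X.T @ y
beta, sig2, inv_tau2 = np.zeros(p), 1.0, np.ones(p)
draws = []
for it in range(2000):
    # beta | rest ~ N(A^{-1} X'y, sig2 * A^{-1}), with A = X'X + diag(1/tau2)
    Ainv = np.linalg.inv(XtX + np.diag(inv_tau2))
    beta = rng.multivariate_normal(Ainv @ Xty, sig2*Ainv)
    # sig2 | rest ~ inverse-gamma
    resid = y - X @ beta
    rate = (resid @ resid + beta @ (inv_tau2*beta)) / 2
    sig2 = 1/rng.gamma((n - 1 + p)/2, 1/rate)
    # 1/tau2_j | rest ~ inverse-Gaussian (numpy calls this the Wald distribution)
    inv_tau2 = rng.wald(np.sqrt(lam**2*sig2/beta**2), lam**2)
    if it >= 500:                      # discard burn-in draws
        draws.append(beta)
post_mean = np.mean(draws, axis=0)
```

The posterior draws provide the valid standard errors the abstract emphasizes; the null coefficients are shrunk toward zero while the strong coefficient is retained.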

Inferring Strategic Voting
Kawai, Kei
Watanabe, Yasutora

Uploaded 07-16-2010
Keywords Strategic Voting
Set Estimation
Partial Identification
Abstract We estimate a model of strategic voting and quantify the impact it has on election outcomes. Because the model exhibits multiplicity of outcomes, we adopt a set estimator. Using Japanese general-election data, we find a large fraction [75.3%, 80.3%] of strategic voters, only a small fraction [2.4%, 5.5%] of whom voted for a candidate other than the one they most preferred (misaligned voting). Existing empirical literature has not distinguished between the two, estimating misaligned voting instead of strategic voting. Accordingly, while our estimate of strategic voting is high, our estimate of misaligned voting is comparable to previous studies.

When is it rational to redistribute? A cross-national examination of attitudes toward redistribution
Dion, Michelle

Uploaded 07-22-2010
Keywords income inequality
public opinion
Abstract Much political economy work on the politics of public policy and particularly redistribution builds on an assumption that individual income is negatively related to demand for redistribution. Since aggregate income inequality is not positively related to aggregate redistribution cross-nationally, various efforts seek to understand why political institutions fail to efficiently aggregate citizen preferences. This paper approaches this puzzle from a different perspective, instead seeking to understand the ways economic, social, and political context may shape preference formation and condition the individual-level relationship between income and demand for government redistribution. Using 300 country-surveys in 50 countries between 1985 and 2008 to model the relationships among country-level characteristics, individual income and support for redistribution, this paper finds evidence to suggest that not only do political institutions, inequality, and existing redistribution shape the formation of preferences, but that social diversity and dominant cultural values do as well.

Testing Interaction Hypotheses: Determining and Controlling the False Positive Rate
Esarey, Justin
Lawrence, Jane

Uploaded 07-13-2012
Keywords interaction
hypothesis testing
Abstract When a researcher suspects that the marginal effect of x on y varies with z, the usual approach is to plot dy/dx at different values of z (along with a confidence interval) in order to assess its magnitude and statistical significance. In this paper, we demonstrate that this approach results in inconsistent false positive (Type I error) rates that can be many times larger or smaller than advertised. Conditioning inference on the statistical significance of the interaction term does not solve this problem. However, we demonstrate that the problem can be avoided by exercising qualitative caution in the interpretation of marginal effects and via simple adjustments to existing test procedures.
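
The inflation of the false positive rate can be seen by simulation: under a null in which x has no effect at any value of z, count how often at least one of the plotted marginal effects comes out "significant". A minimal sketch (the grid and sample sizes are arbitrary choices, not the paper's designs):

```python
import numpy as np

rng = np.random.default_rng(7)
n, sims = 100, 1000
z_grid = np.linspace(-2, 2, 9)       # values of z at which dy/dx is plotted/tested
false_pos = 0
for _ in range(sims):
    x, z = rng.normal(size=n), rng.normal(size=n)
    y = rng.normal(size=n)           # null DGP: x has no effect at any value of z
    X = np.column_stack([np.ones(n), x, z, x*z])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b
    V = np.linalg.inv(X.T @ X) * (e @ e) / (n - 4)
    me = b[1] + b[3]*z_grid                                  # marginal effects
    se = np.sqrt(V[1, 1] + z_grid**2*V[3, 3] + 2*z_grid*V[1, 3])
    false_pos += np.any(np.abs(me/se) > 1.96)                # any "significant"?
rate = false_pos / sims              # exceeds the nominal 5% level
```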

The 2011 Debt Ceiling Crisis and the 2012 House Elections: A Research Design
Monogan, Jamie

Uploaded 11-06-2012
Keywords registration
causal inference
coarsened exact matching
congressional elections
Abstract On August 1, 2011, the House of Representatives voted to raise the federal debt ceiling as well as to make cuts in discretionary spending. Although this vote allowed the federal government to avoid default, raising the debt ceiling was unpopular with the public and the vote cut across party lines. This paper proposes a research design for evaluating the effect of a House member's vote on the debt ceiling on two outcomes: the member's ability to retain his or her seat through the 2012 general election, and the incumbent's share of the two-party vote for members who face a general election competitor.

First Do No Harm: The Risks of Modeling Temporal Dependence
Dafoe, Allan

Uploaded 07-17-2013
Keywords temporal dependence
repeated events
event history models
lagged dependent variable
Abstract Scholars analyzing repeated-events time-series cross-sectional (TSCS) data for causal inference routinely employ event history models or temporal control variables to address temporal dependence. These methods condition on functions of lags of the outcome, f(LY), to improve causal inference. The appropriate use of such techniques for causal inference relies on assumptions about the data-generating process. The first set of assumptions concerns how one should control for temporal dependence: the functional form, f(LY), must be correctly specified. I examine the study of interstate conflict, showing some ways in which f(LY) is misspecified, and offer suggestions for improving it. The second, deeper, set of assumptions concerns whether one should control for temporal dependence: depending on the cause of temporal dependence, conditioning on f(LY) can worsen estimates. Through the analysis of non-parametric causal graphs and simulations I show that conditioning on f(LY) can reduce bias when temporal dependence arises solely from event dependence, can induce bias when there are unobserved persistent causes of the outcome, and will have ambiguous effects when both causes are present. I present results about the direction of the expected bias, including a bounding result specifying the conditions under which estimates with and without temporal controls will bound the truth. Outside of the experimental ideal, clear causal inference depends on substantive assumptions. Absent strong beliefs about the DGP which justify temporal controls, or a measurable exhaustive un-confounded mechanism generating the temporal dependence, I recommend that scholars routinely report estimates with and without temporal controls; sensitivity to temporal specification implies that a result cannot be understood without a better understanding of the temporal dynamics of the phenomenon of interest.
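
The second set of assumptions can be illustrated by simulation: in a data-generating process where an unobserved persistent cause u drives the outcome and x is exogenous, conditioning on the lagged outcome biases the coefficient on x, while omitting the lag does not. A minimal sketch with hypothetical parameter values:

```python
import numpy as np

rng = np.random.default_rng(8)
N, T, b_true = 200, 20, 1.0
u = rng.normal(scale=2.0, size=N)            # unobserved persistent cause of y
x = np.zeros((N, T))                         # exogenous AR(1) regressor
for t in range(1, T):
    x[:, t] = 0.9*x[:, t-1] + rng.normal(scale=np.sqrt(1 - 0.81), size=N)
y = b_true*x + u[:, None] + rng.normal(size=(N, T))

# Pooled OLS estimates of the effect of x, with and without the lagged outcome
yt, ylag, xt = y[:, 1:].ravel(), y[:, :-1].ravel(), x[:, 1:].ravel()
ones = np.ones_like(yt)
b_no_lag = np.linalg.lstsq(np.column_stack([ones, xt]), yt, rcond=None)[0][1]
b_lag = np.linalg.lstsq(np.column_stack([ones, ylag, xt]), yt, rcond=None)[0][2]
# Because x is independent of u here, b_no_lag is consistent for 1.0, while
# conditioning on the lagged outcome pulls b_lag well below the truth.
```

Under event dependence rather than a persistent unobserved cause, the comparison reverses, which is why the paper recommends reporting both specifications.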

Replication in Political Science: The Case of Sigobr
Filho, Dalson
Rocha, Enivaldo
Paranhos, Ranulfo
Júnior, José

Uploaded 03-19-2014
Keywords replication
political science
Abstract The principal aim of this paper is to diffuse the replication standard in Brazilian political science. We argue that replication has three dimensions: (1) substantive, since it contributes to both the quality and accumulation of scientific knowledge; (2) pedagogical, since it facilitates the understanding of basic principles of data analysis; and (3) transparency, since it protects the scientific community not only from honest mistakes but also from intentional fraud. In addition, we present the characteristics of the Government Information System Project (SIGOBR), conducted by the Political Science Department (DCP) of the Federal University of Pernambuco (UFPE), which has the replication standard as one of its key features.

Why Does the American National Election Study Overestimate Voter Turnout?
Jackman, Simon
Spahn, Bradley

Uploaded 07-23-2014
Keywords turnout
vote validation
social desirability
voter files
Abstract For decades the American National Election Studies has consistently produced large over-estimates of voter turnout. The cause of this persistent bias is poorly understood. This is profoundly awkward for a well-resourced, NSF-funded scientific enterprise that is otherwise regarded as the "gold standard" in survey-based studies of political behavior, utilizing a probability-based sampling design and in-person interviews administered by trained field staff. The face-to-face component of the ANES produced a turnout estimate of somewhere between 71.0% and 75.3%, an overestimate of somewhere between 9 and 17 points. We consider three explanations for the large over-estimate of turnout in the face-to-face component of the 2012 ANES: non-response bias (or self-selection), over-reporting, and the possibility that the ANES survey experience constitutes an inadvertent GOTV treatment. Three separate analyses, of a customized data set built from records supplied by three leading voter and consumer data providers, suggest that all three phenomena are at work in the 2012 ANES, in roughly equal magnitudes of about five percentage points each.

Reconsidering Tests for Ambivalence in Political Choice Survey Data
Glasgow, Garrett

Uploaded 03-21-2004
Keywords ambivalence
heteroskedastic discrete choice
Abstract The concept of ambivalence challenges the assumption that individuals combine their positive and negative attitudes towards objects in their choice set into unidimensional attitudes, instead maintaining that individuals can simultaneously hold conflicting attitudes. Unfortunately, most tests for ambivalence in political choice survey data are inconclusive. In particular, the empirical results of these tests could also be explained by a choice model with unidimensional attitudes. There are two related reasons for this. First, individuals who appear to be close to neutrality or indifference in a choice model with unidimensional attitudes are expected to have observed choice behavior identical to that expected from ambivalent individuals. Second, the measures of ambivalence developed and used in survey-based studies of ambivalence in political choice are closely related to measures of neutrality or indifference in a unidimensional attitude choice model. Taken together, these two observations point out the need to reconsider our empirical tests of ambivalence if we wish to determine if and how ambivalence influences individual political choice behavior.

Models of Intertemporal Choice
Wand, Jonathan

Uploaded 07-26-2004
Keywords choice
extremal process
utility maximizing
lagged dependent variable
Abstract In this paper, I consider the behavior of individuals making repeated choices over a finite set of discrete alternatives. Individuals are assumed to maximize utility each time they are faced with a choice, without affecting the utility or availability of future choices. I build on a class of models where serial correlation in choices is due to a process of learning over time about the merits of alternatives, rather than due to unobserved persistent effects. I provide new analytical results for characterizing transition probabilities between choices without imposing restrictions on how the systematic component of utilities may change over time.

Primaries and Turnout
Kanthak, Kristen
Morton, Becky

Uploaded 07-09-2003
Keywords primaries
bivariate probit
selection model
treatment effects
Abstract We consider the effects of differences in primary systems on voter turnout in primaries as well as the effect of holding primaries on general election turnout and support for candidates chosen in primaries. The analysis is based on a group majority voting model of turnout where candidates from two major parties simultaneously make strategic entry decisions and, if they choose to enter, mobilize voters strategically in primaries and general elections. We evaluate the model's predictions using data from midterm Congressional primaries and general elections in the 1980s, employing a two-stage estimation process. First, the model's predictions concerning the effects of primary system differences on whether primaries occur and on the vote totals in primaries are estimated using a maximum likelihood bivariate probit selection model. We find that primary system variables do have significant effects on whether primaries are held and to some extent affect vote totals in primaries, although there are interesting party-specific differences suggesting that Republicans see advantages from mobilizing voters in open primary systems while Democrats benefit in semi-closed primary systems. Second, the estimated vote totals in the primaries are used as treatment variables via an instrumental variables approach in a simultaneous equation system with two dependent variables: general election vote totals and the vote share of the Democratic party's candidate. We find that voting in primaries has a positive and significant effect on voting in general elections and significantly increases the vote share of the party holding the primary, suggesting that arguments that primaries by their very existence decrease voter turnout and hurt the parties holding them find no support.
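
The selection structure described in this abstract can be made concrete with a toy log-likelihood for a bivariate probit selection model, evaluated on simulated data. This is an illustrative sketch of the estimator class only, not the authors' actual specification; all variable names and parameter values are invented.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(5)
n = 2000
z = rng.normal(size=n)              # covariate driving selection (is a primary held?)
x = rng.normal(size=n)              # covariate driving the observed outcome
g0, g1, b0, b1, rho = 0.2, 1.0, -0.3, 0.8, -0.5

# Correlated latent errors are what create the selection problem
u, e = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n).T
s = (g0 + g1 * z + u > 0).astype(int)   # selection indicator
y = (b0 + b1 * x + e > 0).astype(int)   # outcome, meaningful only when s == 1

def loglik(r):
    """Bivariate probit selection log-likelihood, evaluated at the
    true slopes but an arbitrary error correlation r."""
    zi, xi = g0 + g1 * z, b0 + b1 * x
    ll = norm.logcdf(-zi[s == 0]).sum()              # not selected
    m = (s == 1) & (y == 1)                          # selected, y = 1
    ll += np.log(multivariate_normal([0, 0], [[1, r], [r, 1]])
                 .cdf(np.column_stack([zi[m], xi[m]]))).sum()
    m = (s == 1) & (y == 0)                          # selected, y = 0
    ll += np.log(multivariate_normal([0, 0], [[1, -r], [-r, 1]])
                 .cdf(np.column_stack([zi[m], -xi[m]]))).sum()
    return ll
```

In a full estimator the slopes and the correlation would all be maximized over; here the likelihood at the true correlation dominates the zero-correlation value, which is the information the selection correction exploits.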

The Ordinary Election of Adolf Hitler: A Modern Voting Behavior Approach
King, Gary
Rosen, Ori
Wagner, Alexander F.

Uploaded 08-23-2002
Keywords Voting Behavior
Ecological Inference
Abstract How did free and fair democratic elections lead to the extraordinarily anti-democratic Nazi Party winning control of the Weimar Republic? The profound implications of this question have led scholars to make the Weimar elections the most studied elections in history and "who voted for Hitler" the single most asked question in elections research. Yet, despite this overwhelming attention, mostly from historians, the Nazi voting literature has treated these elections as largely unique events and thus comparison with other democratic elections as mostly irrelevant. The literature has also ignored most voting behavior theory and research in political science, and it has only rarely used modern statistical methods. In this paper, we adapt existing political science theories and new methods and find that many of the explanations offered in the Nazi voting literature, while probably correct, do not distinguish this election from almost any other, occurring in any country. For example, the prevailing explanation in the literature, that the Nazis were a "catch-all party" because most social groups shifted in their favor by roughly the same amount, is a characteristic of the vast majority of election swings in every democracy, and so does not provide a useful explanation. We also show that a standard "retrospective voting" account of Nazi voting fits the distinctive aspects of this election well, once we recognize that the voters who were most hurt by the economic depression and hence most likely to oppose the government fall into two separate groups that have divergent interests. Those who were unemployed or at high risk of becoming unemployed shifted to the Communists, whose platform was designed to appeal mainly to this group, whereas the working poor, those at low risk of unemployment but still poor because of the economy (such as self-employed shopkeepers and professionals, domestic workers, and helping family members), shifted disproportionately towards the Nazis, and accounted for most of the unusual dynamics of this election. The consequences of the election of Hitler were extraordinary, but the voting behavior that led to it was not.

Enhancing the Validity and Cross-cultural Comparability of Survey Research
King, Gary
Murray, Christopher J. L.
Salomon, Joshua A.
Tandon, Ajay

Uploaded 07-10-2002
Keywords survey research
Abstract We offer a new approach to writing survey questions and a new statistical model that together at least partially ameliorate two long-standing problems in survey research. The first is how to measure complicated concepts, such as freedom, health, political efficacy, pornography, etc., that researchers know how to define clearly only with reference to examples. The second problem is when different respondents interpret identical survey questions in incomparable ways, as can occur when comparing respondents in different countries speaking different languages, but it also occurs frequently with different groups in the same country. Our approach to these problems is to ask respondents for self-assessments of the concept being measured along with assessments, on the same scale, of each of several hypothetical individuals described by short vignettes. The actual (but not necessarily reported) levels for the people in the vignettes are, by the design of the survey, invariant over respondents and thus provide anchors for our statistical model to transform the self-assessments to a comparable scale. With analysis, simulations, and real surveys in several countries, we show how ignoring these problems can lead to the wrong substantive conclusions and how our approach can fix them. Our methods build on insights from application-specific research on voters and legislators in political science to produce a more general measurement device.
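
The anchoring idea can be illustrated with its simplest nonparametric form: recode each respondent's self-assessment by its rank among that same respondent's ratings of the ordered vignettes. The toy function below is a sketch of the idea only; it ignores ties and vignette order violations, which the paper's statistical model is designed to handle.

```python
import numpy as np

def anchor(self_rating, vignette_ratings):
    """Rank a self-assessment among a respondent's own ratings of
    hypothetical vignettes (ordered mildest to most severe).
    Returns a value in 1..len(vignette_ratings)+1 that is comparable
    across respondents who use the raw response scale differently."""
    v = np.asarray(vignette_ratings)
    return int(np.sum(self_rating > v)) + 1

# Two respondents use the raw 1-5 scale differently, but both place
# themselves above the first two vignettes and below the third, so
# they receive the same anchored value.
assert anchor(3, [1, 2, 5]) == anchor(4, [2, 3, 5]) == 3
```

Because the vignette people are, by design, the same for every respondent, differences in how respondents use the raw scale cancel out of the anchored measure.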

Individual Choice and Ecological Analysis
McCue, Kenneth F.

Uploaded 12-02-2001
Keywords ecological regression
voter transitions
multivariate multinomial
split-ticket voting
aggregation bias
linear probability model
Abstract The use of the linear probability model in aggregate voting analysis has now received widespread attention in political science. This article shows that when the linear probability model is assumed to be consistent for the choice of the individual, it is actually a member of a general class of models for estimating individual responses from aggregate data. This class has the useful property that it defines the aggregate analysis problem as a function of the individual choice decisions, and allows the placement of most aggregate voting models into a common probabilistic framework. This framework allows the solution of such problems as inference of individual responses from aggregate data, estimation of the transition model, and the joint estimation and inference from individual and aggregate data. Examples with actual data are provided for these techniques with excellent results.
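
The simplest member of the class of models referred to above is Goodman's ecological regression, in which the linear probability model links aggregate rates to group-specific individual probabilities. A minimal simulated sketch (the precinct counts and rates are invented, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200                                   # number of aggregate units (precincts)
X = rng.uniform(0.1, 0.9, n)              # share of group A in each precinct
pA, pB = 0.8, 0.3                         # true group-specific individual rates
T = pA * X + pB * (1 - X) + rng.normal(0, 0.02, n)   # observed aggregate rate

# Goodman's ecological regression: regress T on X and (1 - X), no intercept
A = np.column_stack([X, 1 - X])
beta, *_ = np.linalg.lstsq(A, T, rcond=None)
pA_hat, pB_hat = beta
```

The regression recovers the individual-level rates only because the aggregate rate is, by construction, a population-weighted average of them; the article's contribution is to place this and richer transition models in one probabilistic framework.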

Time Series Cross-Sectional Analyses with Different Explanatory Variables in Each Cross-Section
Girosi, Federico
King, Gary

Uploaded 07-11-2001
Keywords Bayesian hierarchical model
time series
Abstract The current animosity between quantitative cross-national comparativists and area studies scholars originated in the expanding geographic scope of data collection in the 1960s. As quantitative scholars sought to include more countries in their regressions, the measures they were able to find for all observations became less comparable, and those which were available (or appropriate) for fewer than the full set were excluded. Area studies scholars appropriately complain about the violence these procedures do to the political reality they find from their in-depth analyses of individual countries, but as quantitative comparativists continue to seek systematic comparisons, the conflict continues. We attempt to eliminate a small piece of the basis of this conflict by developing models that enable comparativists to include different explanatory variables, or the same variables with different meanings, in the time-series regression in each country. This should permit more powerful statistical analyses and encourage more context-sensitive data collection strategies. We demonstrate the advantages of this approach in practice by showing how out-of-sample forecasts of mortality rates in 25 countries, 17 age groups, and 17 causes of death in males and 20 in females from this model outperform a standard regression approach.

Selection Bias in Studies of Sanctions Efficacy
Nooruddin, Irfan

Uploaded 04-05-2001
Keywords sanctions
strategic censoring
censored probit
Abstract Sanctions rarely work, but they continue to be used frequently by policymakers. Previous research on the determinants of sanctions identifies various factors that are thought to contribute to sanctions success but does not give us an answer to the original puzzle of why this ineffective policy is so commonly used. I argue that this is because studies of sanctions have ignored the problem of strategic censoring by focusing only on cases of observed sanctions. In this paper, I develop a unified model of sanction imposition and success and test it using a simultaneous equation censored probit model. The selection-corrected sanction model finds that the process by which sanctions are imposed is linked to the process by which some succeed while others fail, and that the unmeasured factors that lead to sanction imposition are negatively related to their success.

A Practical Statistical Model for Multiparty Electoral Data
Honaker, James
Katz, Jonathan
King, Gary

Uploaded 08-23-2000
Keywords compositional data
multiparty electoral data
EM algorithms
Abstract Katz and King (1999) develop a model for predicting or explaining aggregate electoral results in multiparty democracies. This model is, in principle, analogous to what least squares regression provides American politics researchers in their two-party system. Katz and King applied this model to three-party elections in England and revealed a variety of new features of incumbency advantage and where each party pulls support from. Although the mathematics of their statistical model covers any number of political parties, it is computationally very demanding, and hence slow and numerically imprecise, with more than three. The original goal of our work was to produce an approximate method that works more quickly in practice with many parties without making too many theoretical compromises. As it turns out, the method we offer here improves on Katz and King's model (in bias, variance, numerical stability, and computational speed) even when the latter is computationally feasible. We also offer easy-to-use software that implements our suggestions.

A Frailty Model of Negatively Dependent Competing Risks
Gordon, Sanford C.

Uploaded 07-02-2000
Keywords duration
competing risks
cabinet survival
Abstract "Competing Risks" is a term used to describe duration models in which an individual spell may terminate via more than one outcome. Numerous applications in political science exist: For example, the term of a cabinet may end either with or without an election; criminal investigations may terminate either with prosecution or abandonment of a case; wars persist until the loss or victory of the aggressor state. Analysts typically assume stochastic independence among risks. However, many political examples are characterized by negative risk dependence: A high hazard rate for termination via one risk implies a low rate for termination via another. Ignoring this dependence can potentially bias inference. This paper suggests a class of bivariate (i.e., two-risk), negatively dependent competing risks models. Negative risk dependence enters through an individual-specific random effect that simultaneously increases the hazard rate for one risk while decreasing the hazard for the second. Monte Carlo simulation reveals this specification to be superior to a naive model in which risks are assumed independent. Finally, I examine an application of the negative dependence model using Strom's (1985) and King et al.'s (1990) data on cabinet survival.
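
The mechanism is easy to see in simulation: give each spell a random effect that raises the hazard of one exit route while lowering the other, and spells ending via the first route will come disproportionately from high-frailty units. A minimal sketch with exponential hazards (the hazard values are invented for illustration, not taken from the cabinet data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
u = rng.normal(0, 1.0, n)        # individual-specific frailty (random effect)
lam1 = 0.5 * np.exp(u)           # hazard of exit route 1, raised by frailty
lam2 = 0.5 * np.exp(-u)          # hazard of exit route 2, lowered by frailty
t1 = rng.exponential(1 / lam1)   # latent duration until route-1 exit
t2 = rng.exponential(1 / lam2)   # latent duration until route-2 exit
t = np.minimum(t1, t2)           # observed duration
cause = np.where(t1 < t2, 1, 2)  # observed exit route (competing risks)

# Negative risk dependence: route-1 exits come from high-frailty spells,
# route-2 exits from low-frailty spells.
mean_u_route1 = u[cause == 1].mean()
mean_u_route2 = u[cause == 2].mean()
```

A naive model that treats the two risks as independent ignores exactly this sorting of spells across exit routes, which is the bias the paper's frailty specification corrects.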

Mixed Logit Models for Multiparty Elections
Glasgow, Garrett

Uploaded 02-24-2000
Keywords mixed logit
random parameters logit
multinomial probit
Abstract This is a significantly updated version of my February 24 submission, with several mathematical errors corrected and new information on multinomial probit models and IIA violations. In this paper I introduce the mixed logit (MXL), a flexible discrete choice model based on random utility maximization. Mixed logit is the most flexible discrete choice model available for the study of multiparty and multicandidate elections, even more flexible than multinomial probit (MNP), the discrete choice model currently favored for the study of elections of this type. Like MNP, MXL does not assume IIA, and can thus estimate realistic substitution patterns between alternatives. In fact, MXL can be specified to estimate the same substitution patterns as any specification of MNP. Further, since the unobserved components of MXL are not constrained to follow a normal distribution, and are not estimated as elements in a covariance matrix, MXL can include any number of random coefficients or error components that can follow any distribution. MXL is no more difficult to estimate than MNP. An empirical example using data from the 1987 British general election demonstrates the utility of MXL in the study of multicandidate and multiparty elections.
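
The core computation behind MXL is simulated probabilities: draw the random coefficient, compute ordinary logit probabilities at each draw, and average. A minimal one-person sketch with a single normally distributed coefficient (the utilities and parameters are invented, not the paper's British election model):

```python
import numpy as np

rng = np.random.default_rng(4)

def mixed_logit_probs(v_fixed, x, mu, sd, n_draws=5000):
    """Simulated mixed-logit choice probabilities for one decision maker.
    v_fixed: fixed part of utility for each of J alternatives;
    x: the attribute whose coefficient is random, beta ~ N(mu, sd^2).
    Averages plain logit probabilities over draws of beta."""
    beta = rng.normal(mu, sd, size=(n_draws, 1))
    V = v_fixed + beta * x                        # (n_draws, J) utilities
    eV = np.exp(V - V.max(axis=1, keepdims=True)) # stable softmax
    return (eV / eV.sum(axis=1, keepdims=True)).mean(axis=0)

p = mixed_logit_probs(np.array([0.0, 0.5, 1.0]),
                      np.array([1.0, 0.0, -1.0]), mu=0.5, sd=1.0)
```

With sd = 0 this collapses to ordinary multinomial logit; with sd > 0 the averaging over draws breaks IIA, because each draw shifts the relative attractiveness of the alternatives together.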

Testing for Interaction in Models with Binary Dependent Variables
Berry, William D.

Uploaded 04-08-1999
Keywords probit
binary dependent variables
Abstract Over the last decade, political scientists have proposed several strategies for testing hypotheses about interaction in models with binary dependent variables. I argue that these strategies are incomplete, and propose an alternative approach. Consistent with Nagler's (1991; 1994) and Frant's (1991) advice, this approach involves including multiplicative terms in probit, logit and scobit models to specify interaction. However, the information used to test the hypotheses about interaction depends on whether the dependent variable of conceptual interest is the observed dichotomous variable or a latent, unbounded, continuous variable which the observed dichotomy is assumed to measure. In the latter case, hypotheses about interaction are tested by examining directly the maximum likelihood estimates of the coefficients for the multiplicative terms. In the former case, the propositions are tested by analyzing how the predicted probability that the observed dependent variable equals one of its values changes with the values of the independent variables.
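
The distinction the abstract draws can be made concrete: on the latent scale, interaction is the coefficient on the multiplicative term, while on the probability scale it is the cross difference of predicted probabilities, and the two need not agree. A small sketch with invented coefficients:

```python
from scipy.stats import norm

# Probit: P(y = 1) = Phi(b0 + b1*x1 + b2*x2 + b3*x1*x2)
b0, b1, b2, b3 = -1.0, 0.8, 0.5, 0.0   # note: NO interaction on the latent scale

def p(x1, x2):
    return norm.cdf(b0 + b1 * x1 + b2 * x2 + b3 * x1 * x2)

# Interaction on the probability scale: the cross (second) difference
dd = (p(1, 1) - p(0, 1)) - (p(1, 0) - p(0, 0))
# Even with b3 = 0, dd is nonzero: the nonlinearity of Phi alone induces
# interaction in the predicted probabilities, which is why the two tests
# can reach different conclusions.
```

Which quantity to test therefore depends, as the abstract argues, on whether the latent variable or the observed dichotomy is the object of conceptual interest.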

Suspension of the Rules in the House Committee Process
Grant, J. Tobin
Hasecke, Edward B.

Uploaded 05-11-1999
Keywords congressional procedures
legislative organization
event history
Abstract Theories of legislative organization offer competing explanations for the existence and practice of committees in Congress. Not only should these theories explain how bills proceed through committees but also why some bills are considered on the floor without formal committee approval. This paper examines the suspension of the rules procedure, the most common means of considering legislation before formal approval by committees. Using data on every public bill introduced in the House of Representatives during the 105th Congress, we estimate an event history model of the Speaker's use of suspension of the rules. We test the implications of majority dominant and party dominant theories of legislative organization. We find strong support for the party dominant but not for the majority dominant theory. This finding has implications both for our understanding of suspension of the rules and for theories of legislative organization.

Age-Period-Cohort Analysis with Noisy, Lumpy Data
Brady, Henry E.
Elms, Laurel

Uploaded 07-14-1999
Keywords cohort analysis
political participation
Abstract We have developed several relatively simple methods for doing age-period-cohort analysis with noisy, lumpy data. The first method, using additional information from the Census, does not work well with our data constraints because the age composition of the population does not vary enough over relatively short periods of time. The second method, approximating APC surfaces with polynomial functions, smooths the data too much. This approach is very much a brute force curve-fitting exercise that makes a very general assumption about the functional form of the APC surface and then fits it to the data. However, a third technique we evaluate starts with a theoretically informed model of how APC effects operate for a given dependent variable. This method allows for hypothesis testing and a reasonable amount of smoothing, but probably does not smooth period effects enough. It also yields interesting results about age, period, and cohort effects. The last method we discuss briefly, combining the third technique with additional smoothing, needs more development but may improve our estimates.

The In-and-Outers Revisited: Duration Analysis and Presidential Appointee Tenure
Tomlinson, Andrew R.
Anderson, William D.

Uploaded 11-09-1999
Keywords presidency
event history
competing risks
Abstract Much has been written about the "fraying" of the Presidential appointments system (NAPA, 1985), but little research has been conducted which takes advantage of recent advancements in event history modeling. The questions posed by many authors in this field focus on two variables -- the time someone spends in office (Joyce, 1990), and the reasons they give for leaving their job (Bonafede, 1987). Not only do event history models avoid the common pitfalls of using OLS to model temporal data (Box-Steffensmeier and Jones, 2000), but certain models allow the researcher to combine temporal data with categorical choice models. We run a simple version of a competing risks model to test the effects of theoretically-chosen covariates on the likelihood of presidential appointees leaving office at a given time. We find support for some hypotheses, specifically those dealing with stress-related factors and financial pressures on staffer tenure.

Uncertainty and Candidate Personality Traits
Alvarez, R. Michael
Glasgow, Garrett

Uploaded 04-16-1998
Keywords uncertainty
direct measures of uncertainty
survey response
ordered probit
candidate evaluation
candidate traits
presidential elections
Abstract Recently, some scholars have focused attention on the role of uncertainty in elections (Alvarez 1997, Bartels 1986, Franklin 1991). They reveal that there is a great deal of uncertainty about the issue positions of candidates, and thus the costs of issue voting are burdensome for the average citizen. Further, this uncertainty affects how voters evaluate candidates in two ways. First, voters are less likely to evaluate a candidate in terms of an issue when they are uncertain about the candidate's position on that issue. Second, uncertainty about candidate issue positions has a negative impact on voter evaluations of a candidate. However, it is important to realize that for most individuals, information about the personality traits of candidates comes from the same sources as information about the issue positions of the candidates, generally media outlets. This means that information about the personalities of candidates is passed through the same noisy channels as information about their issue positions, and is thus subject to the same types of distortions and biases that contribute to the cost of issue information. Although it is likely easier to interpret than issue information, trait information is still subject to uncertainty. In this paper we introduce direct survey measures of candidate personality trait uncertainty. Using survey data drawn from the 1995 and 1996 National Election Studies, we first establish that the direct measure of uncertainty used in this paper is a valid measure. We then examine the effect of trait opinions on candidate evaluations and test the effects that uncertainty about those opinions has on the use of traits in candidate evaluation.

Sensitivity of GARCH Estimates: Effects of Model Specification on Estimates of Macropartisan Volatility
Maestas, Cherie
Gleditsch, Kristian S.

Uploaded 05-24-1998
Keywords volatility of aggregate partisanship
Abstract This paper explores the volatility of aggregate partisanship using a generalized autoregressive conditional heteroskedasticity (GARCH) model of the variance. We are particularly interested in how different specifications of the mean model affect the variance estimates. Modeling the variance of macropartisanship is theoretically interesting because such a model can capture periods of greater and lesser volatility in aggregate party identification. However, given the widespread debate over the dynamic properties of the aggregate partisanship time series, a range of plausible specifications for the mean model should be considered before drawing conclusions about variance estimates. We find similar estimates of the variance effects using ARMA-GARCH, ARFIMA-GARCH, ARIMA-GARCH and ECM-GARCH models. Weak ties to party consistently predict greater volatility in all four models, while presidential election quarters are associated with greater volatility in three of the four models. Counter to our expectations, the candidate centered era of the last few decades is associated with lower average variance. Finally, all four models indicate that volatility tends to persist beyond the duration of the shock that sparks it.
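
The variance recursion shared by all four GARCH specifications discussed above is the same. A minimal GARCH(1,1) simulation and maximum likelihood fit (a generic sketch with invented parameters, not the macropartisanship series):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulate GARCH(1,1): sigma2_t = w + a*e_{t-1}^2 + b*sigma2_{t-1}
w0, a0, b0 = 0.1, 0.15, 0.8
T = 2000
e = np.empty(T)
s2 = np.empty(T)
s2[0] = w0 / (1 - a0 - b0)                  # unconditional variance
e[0] = rng.normal(0, np.sqrt(s2[0]))
for t in range(1, T):
    s2[t] = w0 + a0 * e[t-1]**2 + b0 * s2[t-1]
    e[t] = rng.normal(0, np.sqrt(s2[t]))

def neg_loglik(params, e):
    """Gaussian GARCH(1,1) negative log-likelihood (constants dropped)."""
    w, a, b = params
    s2 = np.empty(len(e))
    s2[0] = e.var()                          # initialize at sample variance
    for t in range(1, len(e)):
        s2[t] = w + a * e[t-1]**2 + b * s2[t-1]
    return 0.5 * np.sum(np.log(s2) + e**2 / s2)

res = minimize(neg_loglik, x0=[0.05, 0.1, 0.7], args=(e,),
               bounds=[(1e-6, None), (0.0, 1.0), (0.0, 1.0)],
               method="L-BFGS-B")
w_hat, a_hat, b_hat = res.x
```

In the paper's setting the mean model (ARMA, ARFIMA, ARIMA, or ECM) would first be fit or jointly specified, and its residuals would play the role of `e` here; the sensitivity question is how much the variance estimates move as that mean model changes.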

Inferring Micro- from Macrolevel Change: Ecological Panel Inference in Surveys
Penubarti, Mohan
Schuessler, Alexander

Uploaded 07-20-1998
Keywords Ecological panel inference (EPI)
public opinion
Abstract To draw panel inferences at the microlevel from cross-sectional surveys invites an ecological inference problem. In this paper we derive from King's ecological inference solution a method of ecological panel inference (EPI) which allows researchers to estimate microlevel change from macrolevel measures of change. We verify our approach in panel data where magnitudes of microlevel change are known, and we subsequently apply and illustrate our method using public opinion data on presidential approval. EPI should be of interest to researchers seeking to explain microlevel change in the absence of microlevel data. It should equally be of interest to researchers seeking to explain macrolevel change as it makes visible to them the microlevel components that drive such aggregate-level change.

Estimating voter preference distributions from individual-level voting data (with application to split-ticket voting)
Lewis, Jeffrey B.

Uploaded 09-15-1998
Keywords split ticket voting
ideal point estimation
spatial voting models
EM algorithm
Abstract In the last decade a great deal of progress has been made in estimating spatial models of legislative roll-call voting. There are now several well-known and effective methods of estimating the ideal points of legislators from their roll-call votes. Similar progress has not been made in the empirical modeling of the distribution of preferences in the electorate. Progress has been slower, not because the question is less important, but because of limitations of data and a lack of tractable methods. In this paper, I present a method for inferring the distribution of voter ideal points on a single dimension from individual-level voting returns on ballot propositions. The statistical model and estimation technique draw heavily on the psychometric literature on test taking and, in particular, on the work of Bock and Aitkin (1981). The method yields semi-parametric estimates of the distribution of voters along an unobserved spatial dimension. The model is applied to data from the 1992 general election in Los Angeles County. I present the distribution of voter ideal points for each of 17 Congressional districts. Finally, I consider the issue of split-ticket voting, estimating for two Congressional districts the distribution of voters who split their tickets and of those who did not.

Issues, Economics and the Dynamics of Multi-Party Elections: The British 1987 General Election
Alvarez, R. Michael
Nagler, Jonathan
Bowler, Shaun

Uploaded 00-00-0000
Keywords Elections
Multinomial Probit
Economic Voting
Issue Voting
Spatial Model
Multicandidate Elections
British Elections
Abstract This paper offers a model of three-party elections which allows voters to combine retrospective economic evaluations with considerations of the positions of the parties in the issue-space as well as the issue-preferences of the voters. We describe a model of British elections which allows voters to consider simultaneously all three parties, rather than limiting voters to choices among pairs of parties as is usually done. Using this model we show that both policy issues and the state of the national economy matter in British elections. We also show how voters framed their decisions. Voters first made a retrospective evaluation of the Conservative party based on economic performance, and those voters that rejected the Conservative party chose between Labour and Alliance based on issue positions. Through simulations of the effects of issues (we move the parties in the issue space and re-estimate vote-shares) and of the economy (we hypothesize an alternative distribution of views of the economy for voters), we show that Labour has virtually no chance to win with the Alliance as a viable alternative. Even if the Alliance (or the Liberal Democrats) disappears, Labour will need to significantly moderate its policy positions to have a chance of competing with the Conservative party. We argue that the methodological technique we employ, multinomial probit, is a superior mechanism for studying three-party elections as it allows for a richer formulation of politics than do competing methods.

Perceptions of Candidate Viability: Media Effects During the Presidential Nomination Process
Paolino, Philip

Uploaded 00-00-0000
Keywords variance function
Abstract One way that the media influence the presidential nomination process is through focusing upon the horse-race (e.g. Patterson, 1980; Robinson and Sheehan, 1983). A number of studies have shown that voters' preferences are affected by their perception of candidate viability (e.g. Abramson et al. 1992; Bartels, 1988; Brady and Johnston, 1987). This means that the media's ability to communicate information about candidate viability will have a great effect upon the outcome of the nomination process. Accordingly, we need to know just how strongly the media influence voters' perceptions about candidate viability in order to better understand the media's effect upon the nomination process. In this paper, I will examine how the media influence voters' perceptions of candidate viability with respect to both the direction and the clarity of perceptions. This research is important because it helps our understanding of the factors that fuel candidate momentum during the nomination process.

Explaining the Gender Gap in U.S. Presidential Elections, 1980-1992
Chaney, Carole
Alvarez, R. Michael
Nagler, Jonathan

Uploaded 08-22-1996
Keywords presidential elections
gender gap
issue voting
economic evaluations
general-extreme value model
Abstract This paper compares the voting behavior of women and men in presidential elections since 1980 to test competing explanations for the gender gap. We show that, consistent with prior research on individual elections, women placed more emphasis on the national economy than men, and men placed more emphasis on pocketbook voting than women. We add evidence showing that women have consistently more negative assessments of the economy than do men, suggesting that a part of what has been considered a Republican-Democratic gender gap is really an anti-incumbent bias on the part of women. Our multivariate analysis demonstrates that neither differences between men's and women's preferences nor emphasis on any single issue explains the significant gender gap in vote choice, but that a combination of respondent views on the economy, social programs, military action, abortion, and ideology can consistently explain at least three-fourths of the gender gap in the 1984, 1988, and 1992 elections. We also clarify the interpretation of partisan identification in explaining the gender gap.

Primary Election Systems and Policy Divergence
Morton, Becky
Gerber, Elisabeth R.
Fowler, James

Uploaded 10-03-1997
Keywords election laws
candidate nomination procedures
policy divergence
United States Congress
Abstract We examine how differences in the institutions that regulate candidate nomination procedures (specifically, direct primary election laws) affect elite control over candidate nominations and ultimately affect candidate policy divergence. We hypothesize that in more closed primary systems, control over candidate nominations by ideological extremists will translate into a higher likelihood that extreme candidates win in the general election. We hypothesize that in more open systems, participation by a wider spectrum of the electorate means that candidates must appeal to more moderate voters, leading to the election of more moderate candidates. Using pooled cross-section time-series regression analysis, we find that US Representatives from states with closed primaries take policy positions that are furthest from their districts' estimated median voter's ideal positions. Representatives from states with semi-closed primaries are the most moderate. We conclude that the opportunities for strategic behavior created by electoral institutions have important consequences for electoral outcomes.

Partisan Strength and Uncertain Presidential Evaluations
Gronke, Paul

Uploaded 08-22-1997
Keywords presidential approval
Abstract American presidents, as do all democratic political leaders, rely on popular support in order to promote their political agenda, gain legislative victories, and succeed at the ballot box. Presidential approval, however, displays more than just a mean value; it also has a variance. And even a well-regarded political leader would prefer to avoid widely variant support. At the individual level, variance is analogous to the level of uncertainty that an individual has about presidential performance. This paper demonstrates the central role that partisan attachments play in fostering clarity in presidential approval. In general, respondents with stronger partisan attachments, combined with issue positions favorable to the president, are far more certain in their approval response. Fascinating variations in the role that party played during the Reagan years, compared to Carter, Bush, and Clinton, suggest a complex interaction between partisan ties, presidential performance, and the particular occupant of the Oval Office. The paper draws on data from the National Election Studies, 1980-1994. Ordinary least squares regression models are estimated, and clear evidence of heteroskedasticity is shown. A more general model that includes both a model for the mean and a model for the variance is presented and estimated using the same set of data. The main hypotheses regarding partisan strength and response uncertainty are confirmed.
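
A "model for the mean and for the variance" of this kind can be sketched as a joint MLE in which the log variance is itself a linear function of covariates. This is a generic heteroskedastic regression sketch on invented data, not the paper's NES analysis:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 3000
strength = rng.uniform(0, 1, n)             # stand-in for partisan strength
X = np.column_stack([np.ones(n), strength])
beta_true = np.array([2.0, 1.0])            # mean-model coefficients
gamma_true = np.array([0.5, -1.5])          # log-variance coefficients
sigma = np.exp(0.5 * (X @ gamma_true))      # stronger partisans: less variance
y = X @ beta_true + sigma * rng.normal(0, 1, n)

def neg_loglik(theta):
    """Gaussian log-likelihood with mean X@b and variance exp(X@g)."""
    b, g = theta[:2], theta[2:]
    mu, logv = X @ b, X @ g
    return 0.5 * np.sum(logv + (y - mu) ** 2 / np.exp(logv))

res = minimize(neg_loglik, np.zeros(4), method="BFGS")
beta_hat, gamma_hat = res.x[:2], res.x[2:]
```

A significantly negative coefficient on strength in the variance equation is the analogue of the paper's finding that stronger partisans answer with less uncertainty.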

Testing Theories Involving Strategic Choice: The Example of Crisis Escalation
Smith, Alastair

Uploaded 07-23-1997
Keywords Strategic choice
Bayesian model testing
Markov chain Monte Carlo simulation
multivariate probit
crisis escalation
Abstract If we believe that politics involves a significant amount of strategic interaction, then classical statistical tests, such as Ordinary Least Squares, Probit, or Logit, cannot give us the right answers. This is true for two reasons: the dependent variables under observation are interdependent (that is the essence of game-theoretic logic), and the data are censored (an inherent feature of off-the-path expectations that leads to selection effects). I explore the consequences of strategic decision making on empirical estimation in the context of international crisis escalation. I show how and why classical estimation techniques fail in strategic settings. I develop a simple strategic model of decision making during crises. I ask what this explanation implies about the distribution of the dependent variable: the level of violence used by each nation. Counterfactuals play a key role in this theoretical explanation. Yet conventional econometric techniques take no account of unrealized opportunities. For example, suppose a weak nation (B) is threatened by a powerful neighbor (A). If we believe that power strongly influences the use of force, then the weak nation realizes that the aggressor's threats are probably credible. Not wishing to fight a more powerful opponent, nation B is likely to acquiesce to the aggressor's demands. Empirically, we observe A threaten B. The actual level of violence that A uses is low. However, the theoretical model suggests that B acquiesced precisely because A would use force. Although the theoretical model assumes a strong relationship between strength and the use of force, traditional techniques find a much weaker relationship. Our ability to observe whether nation A is actually prepared to use force is censored when nation B acquiesces. I develop a Strategically Censored Discrete Choice (SCDC) model which accounts for the interdependent and censored nature of strategic decision making.
I use this model to test existing theories of dispute escalation. Specifically, I analyze Bueno de Mesquita and Lalman's (1992) dyadically coded version of the Militarized Interstate Dispute data (Gochman and Maoz 1984). I estimate this model using a Bayesian Markov chain Monte Carlo simulation method. Using Bayesian model testing, I compare the explanatory power of a variety of theories. I conclude that strategic choice explanations of crisis escalation far outperform non-strategic ones.
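The selection logic in the A-versus-B example can be illustrated with a toy simulation. Nothing here is the MID data or the SCDC estimator; all quantities are hypothetical. The point is only the mechanism: when B's acquiescence removes exactly the cases where A is strongest, the observed sample understates the true power-force relationship.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Hypothetical setup: A's relative power drives whether A would really use force.
power = rng.normal(size=n)
would_use_force = (power + 0.5 * rng.normal(size=n)) > 0

# Strategic censoring: the stronger A is, the more credible its threat,
# so B acquiesces and A's willingness to fight is never revealed.
p_acquiesce = 1.0 / (1.0 + np.exp(-1.5 * power))
observed = rng.random(n) > p_acquiesce  # we only see disputes where B resists

# The power/force relationship looks much weaker in the censored sample
full_corr = np.corrcoef(power, would_use_force)[0, 1]
obs_corr = np.corrcoef(power[observed], would_use_force[observed])[0, 1]
print(full_corr, obs_corr)
```

Running this, the correlation in the censored sample is noticeably attenuated relative to the full population, which is the pattern the abstract attributes to conventional estimators.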
