
Search Results


The results below are based on the search criteria 'Prisoner'
Total number of records returned: 909

1
Paper
Unifying Political Metrology: A Probabilistic Model of Measurement
Grant, J. Tobin

Uploaded 07-21-2004
Keywords Measurement
Abstract Political science needs an improved metrology, which includes both measurement theory and applied assessments of measurement procedures. I discuss central metrological concepts and their application to political science. I present a probabilistic model of measurement that is grounded in well-established measurement theory. The model incorporates recent work in metrology that emphasizes the uncertainty of all measurements. This model has implications for political science measures, including the criteria used to evaluate measurements, the role of qualitative measurements, and the tasks needed to improve measurements. I conclude with a discussion of how political science can improve its metrology.

2
Paper
Lagging the Dog?: The Robustness of Panel Corrected Standard Errors in the Presence of Serial Correlation and Observation Specific Effects
Kristensen, Ida
Wawro, Gregory

Uploaded 07-13-2003
Keywords time-series cross-section data
serial correlation
fixed effects
panel data
lag models
Monte Carlo experiments
Abstract This paper examines the performance of the method of panel corrected standard errors (PCSEs) for time-series cross-section data when a lag of the dependent variable is included as a regressor. The lag specification can be problematic if observation-specific effects are not properly accounted for, leading to biased and inconsistent estimates of coefficients and standard errors. We conduct Monte Carlo studies to assess how problematic the lag specification is, and find that, although the method of PCSEs is robust when there is little to no correlation between unit effects and explanatory variables, the method's performance declines as that correlation increases. A fixed effects estimator with robust standard errors appears to do better in these situations.
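
A minimal sketch of the kind of Monte Carlo comparison described here, with invented parameter values rather than the authors' design: generate time-series cross-section data whose unit effects are correlated with the regressor, then compare pooled OLS (the regression underlying PCSEs) with a fixed effects estimator.
```python
# Monte Carlo sketch: pooled OLS vs. fixed effects when unit effects are
# correlated with x. All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, T, beta, rho, n_sims = 20, 30, 1.0, 0.8, 500

pooled_err, fe_err = [], []
for _ in range(n_sims):
    alpha = rng.normal(size=(N, 1))                          # unit effects
    x = rho * alpha + np.sqrt(1 - rho**2) * rng.normal(size=(N, T))
    y = alpha + beta * x + rng.normal(size=(N, T))
    # Pooled OLS ignores the unit effects entirely.
    xr, yr = x.ravel() - x.mean(), y.ravel() - y.mean()
    pooled_err.append((xr * yr).sum() / (xr**2).sum() - beta)
    # Fixed effects: demean within each unit to sweep out alpha.
    xd, yd = x - x.mean(1, keepdims=True), y - y.mean(1, keepdims=True)
    fe_err.append((xd * yd).sum() / (xd**2).sum() - beta)

print("mean bias, pooled OLS:    ", np.mean(pooled_err))     # large when rho > 0
print("mean bias, fixed effects: ", np.mean(fe_err))         # near zero
```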

3
Paper
Estimating incumbency advantage and its variation, as an example of a before/after study
Gelman, Andrew
Huang, Zaiying

Uploaded 02-07-2003
Keywords Bayesian inference
before-after study
Congressional elections
Gibbs
Abstract Incumbency advantage is one of the most studied features in American legislative elections. In this paper, we construct and implement an estimator that allows incumbency advantage to vary between individual incumbents. This model predicts that open-seat elections will be less variable than those with incumbents running, an observed empirical pattern that is not explained by previous models. We apply our method to the U.S. House of Representatives in the twentieth century: our estimate of the overall pattern of incumbency advantage over time is similar to previous estimates (although slightly lower), and we also find a pattern of increasing variation. In addition to the application to incumbency advantage, our approach represents a new method, using multilevel modeling, for estimating effects in before/after studies.

4
Paper
Degeneracy and Inference for Social Networks
Handcock, Mark S.

Uploaded 07-15-2002
Keywords Random graph models
log-linear network model
Markov fields
Markov Chain Monte Carlo
Abstract Networks are a form of "relational data". Relational data arise in many social science fields and graph models are a natural approach to representing the structure of these relations. This framework has many applications including, for example, the structure of social networks, the behavior of epidemics, the interconnectedness of the WWW, and long-distance telephone calling patterns. We review stochastic models for such graphs, with particular focus on sexual and drug use networks. Commonly used Markov models were introduced by Frank and Strauss (1986) and were derived from developments in spatial statistics (Besag 1974). These models recognize the complex dependencies within relational data structures. To date, the use of graph models for networks has been limited by three interrelated factors: the complexity of realistic models, lack of use of simulation studies, and a poor understanding of the properties of inferential methods. In this talk we discuss these factors and the degeneracy of commonly promoted models. We also review the role of Markov Chain Monte Carlo (MCMC) algorithms for simulation and likelihood-based inference. These ideas are applied to a sexual relations network from Colorado Springs with the objective of understanding the social determinants of HIV spread. In this talk we focus on stochastic models for such graphs that can be used to represent the structural characteristics of the networks. In our applications, the nodes usually represent people, and the edges represent a specified relationship between the people.
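
The degeneracy problem can be seen in a short Gibbs simulation of an edge-and-triangle Markov graph model. A minimal sketch with invented parameters, not the authors' Colorado Springs analysis; with a positive triangle parameter the simulated graph tends to collapse toward the empty or complete graph.
```python
# Gibbs simulation of an edge + triangle ERGM: repeatedly resample one dyad
# from its full conditional distribution given the rest of the graph.
import numpy as np

rng = np.random.default_rng(1)
n, theta_edge, theta_tri = 30, -2.0, 0.5          # illustrative parameters
A = np.zeros((n, n), dtype=int)                   # adjacency matrix

for step in range(200_000):
    i, j = rng.choice(n, size=2, replace=False)
    # Change statistics for toggling tie (i, j): one edge, plus one triangle
    # for every common neighbor of i and j.
    d_tri = int(np.sum(A[i] * A[j]))
    logit = theta_edge + theta_tri * d_tri
    tie = rng.random() < 1.0 / (1.0 + np.exp(-logit))
    A[i, j] = A[j, i] = int(tie)
    if step % 50_000 == 0:
        print(step, "edges:", A.sum() // 2)       # watch for degenerate jumps
```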

5
Paper
An Automated Information Extraction Tool For International Conflict Data with Performance as Good as Human Coders: A Rare Events Evaluation Design
King, Gary
Lowe, Will

Uploaded 02-25-2002
Keywords rare events
Bayes
international conflict data
computer
Abstract Despite widespread recognition that aggregated summary statistics on international conflict and cooperation miss most of the complex interactions among nations, the vast majority of scholars continue to employ annual, quarterly, or occasionally monthly observations. Daily events data, coded from some of the huge volume of news stories produced by journalists, have not been used much for the last two decades. We offer some reason to change this practice, which we feel should lead to considerably increased use of these data. We address advances in event categorization schemes and software programs that automatically produce data by ``reading'' news stories without human coders. We design a method that makes it feasible for the first time to evaluate these programs when they are applied in areas with the particular characteristics of international conflict and cooperation data, namely event categories with highly unequal prevalences, and where rare events (such as highly conflictual actions) are of special interest. We use this rare events design to evaluate one existing program, and find it to be as good as trained human coders, but obviously far less expensive to use. For large scale data collections, the program dominates human coding. Our new evaluative method should be of use in international relations, as well as more generally in the field of computational linguistics, for evaluating other automated information extraction tools. We believe that the data created by programs similar to the one we evaluated should see dramatically increased use in international relations research. To facilitate this process, we will be releasing with this article data on 4.3 million international events, covering the entire world for the last decade.
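
The rare events evaluation problem is easy to make concrete: with highly unequal category prevalences, aggregate accuracy is nearly meaningless. A minimal sketch with invented prevalences and a hypothetical coder, not the authors' evaluation data.
```python
# Why aggregate accuracy misleads for rare event categories.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
truth = rng.random(n) < 0.01                      # 1% rare "conflict" events
naive = np.zeros(n, dtype=bool)                   # coder that never codes conflict
print("accuracy of always-cooperation coder:", (naive == truth).mean())  # ~0.99

# A hypothetical automated coder with imperfect recall and a few false alarms.
coded = (truth & (rng.random(n) < 0.7)) | (~truth & (rng.random(n) < 0.005))
tp = (coded & truth).sum()
print("recall:   ", tp / truth.sum())             # share of real events found
print("precision:", tp / coded.sum())             # share of codings that are real
```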

6
Paper
Delaying Justice(s): A Duration Analysis of Supreme Court Nominations
Shipan, Charles R.
Shannon, Megan L.

Uploaded 07-16-2001
Keywords Hazard Model
Spatial Model
Supreme Court Nominations
Abstract Presidents have great success when nominating justices to the Supreme Court, with confirmation being the norm and rejection being the rare exception. In this paper we show that while the end result of the confirmation process is usually that the nominee takes a seat on the Court, there is a great deal of variance in the amount of time it takes the Senate to approve the nominee. To derive a theoretical explanation of this underlying dynamic in the confirmation process, we draw on a spatial model of presidential nominations to the Court. We then use a hazard rate model to test this explanation, using data on all Supreme Court nominations and confirmations since the end of the Civil War. The hazard model is superior to alternative models such as probit, where information on right-censored nominations in our data would be lost. More specifically, the Cox proportional hazards model is a better fit for our data as compared to the Weibull, exponential, and log-logistic hazard models. Our paper thus makes two key contributions. First, it identifies the political factors that influence Supreme Court confirmations and the duration of the confirmation process. Second, it demonstrates the ways in which the nomination process affects the confirmation process.
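
A minimal sketch of this duration setup using the lifelines library's Cox proportional hazards model; the data frame, covariate names, and values are hypothetical stand-ins for the authors' nomination data.
```python
# Cox proportional hazards model of time to confirmation, retaining
# right-censored nominations rather than dropping them (unlike probit).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "days_to_confirm": [19, 87, 25, 99, 42, 60, 33, 71],   # Senate duration
    "confirmed":       [1, 1, 1, 0, 1, 1, 1, 0],           # 0 = right-censored
    "divided_gov":     [0, 1, 0, 1, 1, 0, 0, 1],           # hypothetical covariates
    "nominee_distance":[0.2, 0.9, 0.1, 1.3, 0.7, 0.4, 0.3, 1.1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_confirm", event_col="confirmed")
cph.print_summary()   # hazard ratios: which factors speed or slow confirmation
```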

7
Paper
The Rules of Inference
King, Gary
Epstein, Lee

Uploaded 06-25-2001
Keywords inference
empirical research
legal research
Abstract Although the term "empirical research" has become commonplace in legal scholarship over the past two decades, law professors have, in fact, been conducting research that is empirical (i.e., learning about the world using quantitative data or qualitative information) for almost as long as they have been conducting research. For just as long, however, they have been proceeding with little awareness of, much less compliance with, the rules of inference, and without paying heed to the key lessons of the revolution in empirical analysis that has been taking place over the last century in other disciplines. The sustained, self-conscious attention to the methodology of empirical analysis so present in the journals of traditional academic fields is virtually nonexistent in the nation's law reviews. As a result, readers learn considerably less accurate information about the empirical world than the studies' stridently stated, but overly confident, conclusions suggest. To remedy this situation, both for the producers and consumers of empirical work, we adapt the rules of inference used in the natural and social sciences to the special needs, theories, and data in legal scholarship, and explicate them with extensive illustrations from existing research. We also offer suggestions on how to reorganize the infrastructure of teaching and research at law schools so that it can better support the creation of first-rate empirical research without compromising other important objectives.

8
Paper
A Unified Theory and Test of Extended Immediate Deterrence
Signorino, Curtis S.
Tarar, Ahmer

Uploaded 09-05-2000
Abstract We present a unified theory and test of extended immediate deterrence --- unified in the sense that we employ our theoretical deterrence model as our statistical model in the empirical analysis. The theoretical model is a straightforward formalization of the extended immediate deterrence logic in Huth (1988), coupled with private information concerning utilities. Contrary to Huth (1988), our empirical analysis suggests that nuclear weapons, military alliances, military arms transfers, and foreign trade all affect deterrence success. Our model correctly predicts almost 97% of the potential Attacker's actions and over 91% of the crisis outcomes. Finally, we find strong evidence that the likelihood of deterrence success and of war are not monotonically related to the variables involved in the deterrence calculus. This contradicts a fundamental assumption of most previous studies.

9
Paper
Application of Panel Data Analysis to Kramer's Economic Voting Problem
Yoon, David

Uploaded 07-16-2000
Keywords economic voting
panel data
Abstract Although the health of a nation's economy has come to be seen as a reliable predictor of election outcome at the national level (e.g., Fair 1978, 1988), the corollary link between economic conditions and electoral behavior at the individual level remains less clear. Kinder and Kiewiet (1979) concluded that while the ups and downs of personal finances had negligible effect on an individual's voting behavior in national elections, the trajectory of the national economy had a significant effect. The hypothesis of the ``sociotropic'' voter was to be preferred over the ``pocketbook'' voter in thinking about whose economy mattered in elections. In an influential critique, Kramer (1983) argued that such a conclusion could not be drawn from purely cross-sectional survey data (the data type used by Kinder and Kiewiet). According to Kramer, only the analysis of aggregate-level time-series data provides unbiased estimates of the effects of economic conditions on votes. Unfortunately, the two main competing hypotheses cannot be tested since individual-level economic factors cannot be studied with aggregate-level time series data alone. In contrast to previous analyses, I employ panel data (also known as longitudinal data) and analytical methods sensitive to the individual-level time-series structure of the data to estimate the relative magnitudes of the sociotropic and pocketbook effects, and test the merits of the respective hypotheses. Others have attempted to solve the Kramer problem by pooling cross-sectional data (e.g., Markus (1988, 1992)). Although pooled cross-sectional data allow investigators to compare sociotropic and pocketbook effects, they suffer from many of the same shortcomings of purely cross-sectional data. I use the 1993-1996 NES panel study to demonstrate the robustness of the sociotropic model and the strengths of panel analysis. I explain the battery of tests, estimators, and statistical assumptions used and relate these in detail to prevalent substantive political assumptions. Finally, an uncommonly long panel from an Italian Nielsen survey is analyzed to demonstrate the utility of such

10
Paper
Bayesian Inference for Heterogeneous Event Counts
Martin, Andrew D.

Uploaded 04-20-2000
Keywords hierarchical models
Poisson
event count
heterogeneity
Abstract This paper presents a handful of Bayesian tools one can use to model heterogeneous event counts. In many political science applications we are interested in modeling the number of times a particular event takes place. While models for event count cross-sections are now widely used in political science (King, 1988, 1989b), little has been written about how to model counts when contextual factors introduce heterogeneity. I begin with a discussion of Bayesian cross-sectional count models and introduce an alternative model for counts with overdispersion. To illustrate the Bayesian framework, I model event counts of the number of discharge petitions from the 61st to the 105th House, and the number of women's rights bills cosponsored by each member in the 92nd House. I then generalize the model to allow for contextual heterogeneity and posit a hierarchical Poisson regression model, fitting this model to the number of women's rights cosponsorships for each member of the 83rd to 102nd House. I demonstrate the advantages of this approach over pooled and independent Poisson regressions. The hierarchical model allows one to explicitly model contextual factors and test alternative contextual explanations. Additionally, I discuss software one can use to easily implement these models with little start-up cost.
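
The conjugate special case of this hierarchical setup fits in a few lines. A minimal sketch, with invented counts and hyperparameters, of how a Gamma prior shrinks noisy per-unit Poisson rates toward the prior mean:
```python
# Conjugate Poisson-gamma model: a simpler relative of the hierarchical
# Poisson regression. Counts and hyperparameters are invented.
import numpy as np

y = np.array([0, 2, 1, 7, 3, 0, 12, 4])   # e.g., cosponsorships per member
a, b = 2.0, 0.5                            # Gamma(a, b) prior on each rate

# With y_i ~ Poisson(lambda_i) and lambda_i ~ Gamma(a, b), the posterior for
# each unit's rate is Gamma(a + y_i, b + 1): a shrinkage compromise between
# the unit's own count and the prior mean a / b.
post_mean = (a + y) / (b + 1)
print("raw counts:     ", y)
print("posterior means:", post_mean.round(2))
```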

11
Paper
Tests of the Validity of Complete-Unit Analysis in Surveys Subject to Item Nonresponse or Attrition
Sherman, Robert P.

Uploaded 03-12-1999
Keywords MCAR
MAR
item nonresponse
attrition
odds-ratios
Abstract Analysts of cross-sectional or panel surveys often base inferences about relationships between variables on complete units, excluding units that are incomplete due to item nonresponse or attrition. This practice is justifiable if exclusion is ignorable in an appropriate sense. This paper characterizes certain types of ignorable exclusion in surveys subject to item nonresponse and develops tests based on these characterizations. These tests are applied to data from several National Election Study (NES) panels and evidence is found of violations of assumptions of ignorable exclusion. Characterizations and tests of ignorable attrition in standard panel surveys are also presented.
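
One simple diagnostic in this spirit can be sketched directly: under MCAR-style ignorable exclusion, complete and incomplete units should look alike on fully observed variables. The variables and missingness mechanism below are invented for illustration.
```python
# Compare a fully observed covariate across respondents who did and did not
# answer an item. A small p-value is evidence against MCAR exclusion.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
age = rng.normal(45, 15, size=2000)               # always observed
# Suppose older respondents are more likely to skip an income item.
missing_income = rng.random(2000) < 1 / (1 + np.exp(-(age - 45) / 10))

t, p = stats.ttest_ind(age[missing_income], age[~missing_income])
print(f"t = {t:.2f}, p = {p:.4f}")
```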

12
Paper
Logistic Regression in Rare Events Data (revised)
King, Gary
Zeng, Langche

Uploaded 07-09-1999
Keywords rare events
logit
logistic regression
binary dependent variables
bias correction
case-control
choice-based
endogenous selection
selection bias
Abstract This paper is for the methods conference; it is a revised version of a paper that was previously sent to the paper server.

13
Paper
The Robustness of Statistical Abstractions: A Look Under the Hood of Statistical Models and Software
Altman, Micah
McDonald, Michael P.

Uploaded 07-13-1999
Keywords computational abstractions
numerical accuracy
benchmark
replication
random numbers
Abstract Models rest upon abstractions that are accepted, prima facie, as routine and robust. In the course of model development and estimation, political methodologists routinely assume the reliability of the computational abstractions that they use. All computational abstractions can fall short in their implementation; some because their implementation is complicated and the precision of computation is limited, others because they assume knowledge of solutions to mathematical problems that are inherently difficult to solve. We measure the accuracy of statistical abstractions as implemented in statistical packages popular among political methodologists, such as Gauss, Stata, SST and Excel. We evaluate the use of these abstractions in the context of evaluating complex statistical procedures, such as Jonathan Nagler's (1994) scobit estimator and Gary King's (1997) solution to ecological inference. We find that widely used implementations of common statistical abstractions are at times prone to error. We show that errors in inference in complex models can result from failures to understand the implementation and limitations of computation. We then offer tools to test statistical results to improve the accuracy of many statistical implementations and to test the implementation-robustness of many statistical results. We conclude by offering recommendations to help users of statistical software avoid the pitfalls of computational abstractions and offer guidelines to aid replication.
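
A classic instance of the kind of implementation failure benchmarked here is easy to reproduce. A minimal sketch, not one of the paper's own benchmarks, showing the one-pass "textbook" variance formula breaking down:
```python
# Catastrophic cancellation: the one-pass variance formula loses all precision
# when the data have a large mean; the two-pass formula does not.
import numpy as np

x = 1e8 + np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float64)

n = len(x)
naive = (np.sum(x**2) - n * np.mean(x)**2) / (n - 1)   # one-pass formula
two_pass = np.sum((x - np.mean(x))**2) / (n - 1)       # numerically stable

print("one-pass:", naive)      # garbage from cancellation at ~1e16 scale
print("two-pass:", two_pass)   # 1.666..., the correct sample variance
```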

14
Paper
What's Your Temperature? Thermometer Ratings and Political Analysis
Berinsky, Adam
Winter, Nicholas

Uploaded 09-09-1999
Keywords thermometer rating
measurement
NES
group evaluations
interpersonal comparability
Abstract This paper considers the measurement properties of the NES’ feeling thermometer ratings of political groups and individuals. We explore the degree to which the thermometer contains interval-level information and the degree to which the scores are interpersonally comparable, and discuss the measure’s use in political analysis.

15
Paper
Models of Monetary Policy Decision-Making: Arthur Burns and the Federal Open Market Committee
Chappell, Jr., Henry W.
McGregor, Rob Roy
Vermilyea, Todd

Uploaded 04-01-1998
Keywords monetary policy
Federal Reserve
median voter
Abstract This paper investigates decision-making within the Federal Open Market Committee of the Federal Reserve, focusing on the competing pressures of majority rule, consensus-building, and the power of the Chairman. To undertake this analysis, we have constructed a data set recording desired Federal funds rates for each member of the Committee over the 1970-1978 period. We empirically link individuals' policy preferences to adopted policies using generalized versions of the median voter model and alternative specifications. Our results confirm a persistent attraction of the median voter's ideal point; they also confirm a disproportionate influence of the Chairman in the policy process. The voting weight of the Chairman is estimated to be between 0.38 and 0.58 in preferred specifications. Results also suggest that district Federal Reserve Bank presidents have somewhat greater influence over adopted policies than Governors.
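
The weighted-influence idea can be sketched in a few lines: treat the adopted rate as a convex combination of the Chairman's desired rate and the committee median, and estimate the weight by least squares. All data below are simulated, not the authors' FOMC records.
```python
# Recover the Chairman's voting weight w in
#   adopted = w * chair + (1 - w) * median + error.
import numpy as np

rng = np.random.default_rng(4)
median = rng.normal(5.0, 1.0, size=100)           # median member's desired rate
chair = median + rng.normal(0.3, 0.5, size=100)   # Chairman's desired rate
true_w = 0.45
adopted = true_w * chair + (1 - true_w) * median + rng.normal(0, 0.1, 100)

# adopted - median = w * (chair - median) + error: regression through the
# origin recovers w.
d = chair - median
w_hat = (d * (adopted - median)).sum() / (d**2).sum()
print("estimated Chairman weight:", round(w_hat, 3))   # near 0.45
```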

16
Paper
Is Instrumental Rationality a Universal Phenomenon?
Bennett, D. Scott
Stam, Allan C.

Uploaded 04-22-1998
Keywords rational
expected utility
preferences
game theory
Abstract This paper examines whether the expected utility theory of war explains international conflict equally well across all regions and time-periods as a way of examining whether instrumental rationality is a universal phenomenon. In the rational choice literature, scholars typically assume that decision-makers are purposive and egoistic, with common preferences across various outcomes. However, critics of the assumption have suggested that preferences and decision structures vary as a function of polity type, culture and learning among state leaders. There have been few attempts to directly examine this assumption and evaluate whether it seems empirically justified. In this paper we attempt to test the assumption of common instrumental rationality, examining several competing hypotheses about the nature of decision making in international relations and expectations about where and when instrumental rationality should be most readily observable. In particular, we want to explore the effects of regional learning to discover if there is a difference by region and over time in the outbreak of war and the predictions of the expected utility model. We find important differences both over regions and over time in how the predictions of expected utility theory fit actual conflict occurrence.

17
Paper
Listwise Deletion is Evil: What to Do About Missing Data in Political Science
King, Gary
Honaker, James
Joseph, Anne
Scheve, Kenneth

Uploaded 07-13-1998
Keywords missing data
imputation
IP
EM
EMs
EMis
data augmentation
MCMC
importance sampling
item nonresponse
Abstract We address a substantial discrepancy between the way political scientists analyze data with missing values and the recommendations of the statistics community. With a few notable exceptions, statisticians and methodologists have agreed on a widely applicable approach to many missing data problems based on the concept of ``multiple imputation,'' but most researchers in our field and other social sciences still use far inferior methods. Indeed, we demonstrate that the threats to validity from current missing data practices rival the biases from the much better known omitted variable problem. This discrepancy is not entirely our fault, as the computational algorithms used to apply the best multiple imputation models have been slow, difficult to implement, impossible to run with existing commercial statistical packages, and demanding of considerable expertise on the part of the user (indeed, even experts disagree on how to use them). In this paper, we adapt an existing algorithm, and use it to implement a general-purpose, multiple imputation model for missing data. This algorithm is between 20 and 100 times faster than the leading method recommended in the statistics literature and is very easy to use. We also quantify the considerable risks of current political science missing data practices, illustrate how to use the new procedure, and demonstrate the advantages of our approach to multiple imputation through simulated data as well as via replications of existing research.
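
The core claim about listwise deletion is easy to illustrate by simulation. A minimal sketch, under invented assumptions, of deletion selecting on the dependent variable:
```python
# Listwise deletion bias: when missingness in x depends on y, dropping
# incomplete cases attenuates the regression slope.
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
x = rng.normal(size=n)
y = x + rng.normal(size=n)                        # true slope: 1.0
# Suppose x is missing more often for high-y cases, so deletion selects on y.
keep = rng.random(n) > 1 / (1 + np.exp(-2 * y))
xc, yc = x[keep], y[keep]
slope = ((xc - xc.mean()) * (yc - yc.mean())).sum() / ((xc - xc.mean())**2).sum()
print("slope after listwise deletion:", round(slope, 3))   # well below 1.0
# Multiple imputation instead retains all n cases and propagates uncertainty
# about the missing x values across several completed data sets.
```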

18
Paper
Listwise Deletion is Evil: What to Do About Missing Data in Political Science (revised)
King, Gary
Honaker, James
Joseph, Anne
Scheve, Kenneth

Uploaded 08-19-1998
Keywords missing data
imputation
IP
EM
EMs
EMis
data augmentation
MCMC
importance sampling
item nonresponse
Abstract We propose a remedy to the substantial discrepancy between the way political scientists analyze data with missing values and the recommendations of the statistics community. With a few notable exceptions, statisticians and methodologists have agreed on a widely applicable approach to many missing data problems based on the concept of ``multiple imputation,'' but most researchers in our field and other social sciences still use far inferior methods. Indeed, we demonstrate that the threats to validity from current missing data practices rival the biases from the much better known omitted variable problem. As it turns out, this discrepancy is not entirely our fault, as the computational algorithms used to apply the best multiple imputation models have been slow, difficult to implement, impossible to run with existing commercial statistical packages, and demanding of considerable expertise on the part of the user (even experts disagree on how to use them). In this paper, we adapt an existing algorithm, and use it to implement a general-purpose, multiple imputation model for missing data. This algorithm is between 65 and 726 times faster than the leading method recommended in the statistics literature and is very easy to use. We also quantify the considerable risks of current political science missing data practices, illustrate how to use the new procedure, and demonstrate the advantages of our approach to multiple imputation through simulated data as well as via replications of existing research. We also offer easy-to-use public domain software that implements our approach.

19
Paper
Correlated Disturbances in Discrete Choice Models: A Comparison of Multinomial Probit Models and Logit Models
Alvarez, R. Michael
Nagler, Jonathan

Uploaded 01-01-1995
Keywords econometrics
logit
multinomial probit
gev
discrete-choice
monte-carlo
Abstract In political science, there are many cases where individuals make discrete choices from more than two alternatives. This paper uses Monte Carlo analysis to examine several questions about one class of discrete choice models --- those involving both alternative-specific and individual-specific variables on the right-hand side --- and demonstrates several findings. First, the use of estimation techniques assuming uncorrelated disturbances across alternatives in discrete choice models can lead to significantly biased parameter estimates. This point is tempered by the observation that probability estimates based on the full choice set generated from such estimates are not likely to be biased enough to lead to incorrect inferences. However, attempts to infer the impact of altering the choice set -- such as by removing one of the alternatives -- will be less successful. Second, the Generalized Extreme Value (GEV) model is extremely unreliable when the pattern of correlation among the disturbances is not as restricted as the GEV model assumes. GEV estimates may suggest grouping among the choices that is in fact not present in the data. Third, in samples the size of many typical political science applications -- 1000 observations -- Multinomial Probit (MNP) is capable of recovering precise estimates of the parameters of the systematic component of the model, though MNP is not likely to generate precise estimates of the relationship among the disturbances in samples of this size. Paradoxically, MNP's primary benefit is its ability to uncover relationships among alternatives and to correctly estimate the effect of removing an alternative from the choice set. Thus this paper suggests the increased use of MNP by political scientists examining discrete choice problems when the central question of interest is the effect of removing an alternative from the choice set. We demonstrate that for other questions, models positing independent disturbances may be `close enough.'
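
The choice-set point is the classic red-bus/blue-bus problem, which a short simulation makes concrete. A minimal sketch with invented utilities, not the paper's Monte Carlo design:
```python
# When two alternatives have correlated disturbances, an independence (IIA)
# prediction of what happens after one is removed goes wrong.
import numpy as np

rng = np.random.default_rng(6)
n = 200_000
V = np.array([0.0, 0.0, 0.0])        # equal systematic utilities
# Errors for alternatives B and C are highly correlated (near-duplicates).
cov = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.9],
                [0.0, 0.9, 1.0]])
eps = rng.multivariate_normal(np.zeros(3), cov, size=n)
choice = (V + eps).argmax(1)
shares = np.bincount(choice, minlength=3) / n
print("full choice set shares:", shares.round(3))

# Remove alternative C. IIA would reallocate C's share proportionally, but in
# truth nearly all of it flows to its near-duplicate B.
choice2 = (V[:2] + eps[:, :2]).argmax(1)
print("true shares without C:", (np.bincount(choice2, minlength=2) / n).round(3))
print("IIA-predicted shares: ", (shares[:2] / shares[:2].sum()).round(3))
```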

20
Paper
Explaining the Gender Gap in the 1992 U.S. Presidential Election
Alvarez, R. Michael
Nagler, Jonathan
Chaney, Carole

Uploaded 00-00-0000
Keywords voting
elections
gender-gap
women
GEV
Abstract This paper compares the voting behavior of women and men in the 1992 presidential election. We show that, consistent with behavior in previous elections, women placed more emphasis on the national economy than men, and men placed more emphasis on pocketbook voting than women. We also show that while no single difference between men's and women's preferences or issue emphases explains the significant gender gap in vote choice, a combination of issues examining respondents' views on the economy, social programs, military action, abortion, and ideology can explain almost three-fourths of the gender gap.

21
Paper
Measurement Models for Time Series Analysis: Estimating Dynamic Linear Errors in Variables
McAvoy, Gregory

Uploaded 07-12-1999
Keywords measurement models
time series analysis
errors-in-variables
Rolling Thunder
Abstract This paper uses state space modelling and Kalman filtering to estimate a dynamic linear errors-in-variables model with random measurement error in both the dependent and independent variables. I begin with a general description of the dynamic errors-in-variables model, translate it into state space form, and show how it can be estimated via the Kalman filter. I then use the model in a substantive example to examine the effects of aggregate partisanship on evaluations of President Reagan's job performance using data from the 1984 primary campaign and compare the OLS estimates for this example to those derived from maximum likelihood estimates of the dynamic shock-error setup. Next, I report the results of a simulation in which the amount of random measurement error is varied and, thus, demonstrate the importance of estimating measurement error models and the superiority that Kalman filtering has over regression. Finally, I estimate a dynamic linear errors-in-variables model using multiple indicators for the latent variables.
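
The scalar Kalman filter at the heart of this setup fits in a few lines. A minimal sketch for a local-level model with measurement error; the variances and simulated series are invented, not the aggregate partisanship data.
```python
# Kalman filter for a local-level state space model: latent random walk
# observed with measurement error.
import numpy as np

rng = np.random.default_rng(7)
T, q, r = 100, 0.1, 1.0                            # state and measurement variances
alpha = np.cumsum(rng.normal(0, np.sqrt(q), T))    # latent state (random walk)
y = alpha + rng.normal(0, np.sqrt(r), T)           # noisy observations

a, P = 0.0, 10.0                                   # vague initial state
filtered = np.empty(T)
for t in range(T):
    a_pred, P_pred = a, P + q                      # prediction: random walk
    K = P_pred / (P_pred + r)                      # Kalman gain
    a = a_pred + K * (y[t] - a_pred)               # update with observation
    P = (1 - K) * P_pred
    filtered[t] = a

print("RMSE, raw y vs. state:    ", np.sqrt(np.mean((y - alpha)**2)).round(3))
print("RMSE, filtered estimate:  ", np.sqrt(np.mean((filtered - alpha)**2)).round(3))
```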

22
Paper
Iff the Assumption Fits...: A Comment on the King Ecological Inference Solution
Tam, Wendy

Uploaded 12-08-1997
Keywords ecological inference
Abstract I examine a recently proposed solution to the ecological inference problem (King 1997). It is asserted that the proposed model is able to reconstruct individual-level behavior from aggregate data. I discuss in detail both the benefits and limitations of this model. The assumptions of the basic model are often inappropriate for instances of aggregate data. The extended version of the model is able to correct for some of these limitations. However, it is difficult in most cases to apply the extended model properly.

23
Paper
Pauline, the Mainstream, and Political Elites: the place of race in Australian political ideology
Jackman, Simon

Uploaded 08-25-1997
Keywords public opinion
political ideology
political elites
race
immigration
Australian politics
factor analysis
ideological locations
density estimation
plotting highest density regions
Abstract An often heard claim in the current ``race debate'' is that Australia's major political parties are out of touch with ``mainstream'' Australia on issues related to race. Parallel surveys of the electorate and candidates in the 1996 Federal election allow this claim to be tested, with items tapping general ideological dispositions, but including questions about Aboriginal Australians, immigration, and links with Asia. I make three critical findings: (1) the electorate holds quite conservative opinions on these issues relative to the candidates, and is quite distant from ALP candidates in particular; (2) attitudes on racial issues are a powerful component of the electorate's otherwise relatively loosely organized political ideology, so much so that any categorisation of Australian political ideology ignoring race must be considered incomplete; (3) racial attitudes cut across other components of the electorate's ideology, placing all the parties under internal ideological strains, but the ALP appears particularly vulnerable on this score. While the data show the Coalition to be the net beneficiary of the ideological tensions posed by race, the formation of Pauline Hanson's One Nation party has exposed the Coalition's vulnerability to race as a cross-cutting political issue. Racial issues thus have many characteristics of a realigning dimension in Australian politics.

24
Paper
The Spatial Theory of Voting and the Presidential Election of 1824
Jenkins, Jeffery A.
Sala, Brian R.

Uploaded 08-15-1997
Keywords spatial voting theory
ideological voting
presidential selection
Nominate scores
Abstract One recent analysis claims that in at least five presidential contests since the end of World War II a relatively minor vote shift in a small number of states would have produced Electoral College deadlock, leading to a House election for president (Longley and Peirce 1996). A presidential contest in the House would raise fundamental questions from agency theory - do members "shirk" the collective preferences of their constituent-principals on highly salient votes and, if so, what explains the choices they do make? Can vote choices be rationalized in a theory of ideological voting, or are legislators highly susceptible to interest-group pressures and enticements? We apply a spatial-theoretic model of voting to the House balloting for president in 1825 in order to test competing hypotheses about how MCs would likely vote in a presidential ballot. We find that a sincere voting model based on ideal points for MCs and candidates derived from Nominate scores closely matches the choices made by MCs in 1825.

25
Paper
A Statistical Model for Multiparty Electoral Data
Katz, Jonathan
King, Gary

Uploaded 07-16-1997
Keywords multiparty elections
compositional data
multivariate-t
Abstract We propose an internally consistent and comprehensive statistical model for analyzing multiparty, district-level aggregate election data. This model can be used to explain or predict how the geographic distribution of electoral results depends upon economic conditions, neighborhood ethnic compositions, campaign spending, and other features of the election campaign or characteristics of the aggregate areas. We also provide several new graphical representations for help in data exploration, model evaluation, and substantive interpretation. Although the model applies more generally, we use it to resolve an important controversy over the size of and trend in the electoral advantage of incumbency in Great Britain. Contrary to previous analyses, which are all based on measures now known to be biased, we demonstrate that the incumbency advantage is about 1% for the major parties and 4% for the Liberal party and its successors. Also contrary to previous research, we show that these effects have not grown in recent years. Finally, we are able to estimate from which party each party's incumbency advantage is predominantly drawn.

26
Paper
Tau-b or Not Tau-b: Measuring Alliance Portfolio Similarity
Signorino, Curtis S.
Ritter, Jeffery M.

Uploaded 04-02-1997
Keywords alliances
tau-b
similarity
utility
risk
ordinal
Abstract The pattern of alliance commitments among states is commonly assumed to reflect the extent to which states have common or conflicting security interests. For the past twenty years, Kendall's tau-b has been used to measure the similarity between two nations' ``portfolios'' of alliance commitments. Widely employed indicators of systemic polarity, state utility, and state risk propensity all rely upon tau-b. We demonstrate that tau-b is inappropriate for measuring the similarity of states' alliance commitments. We develop an alternative measure of alliance portfolio similarity, S, which avoids many of the problems associated with tau-b, and we use data on alliances among European states to compare the effects of S versus tau-b in measures of utility and risk propensity. Finally, we identify several problems with inferring state interest from alliance commitments and we provide a method to overcome those problems using S in combination with data on alliances, trade, UN votes, diplomatic missions, and other types of state interaction.
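
The contrast between rank-based tau-b and a distance-based similarity can be sketched directly. Below is an unweighted, simplified stand-in for S (not the authors' exact weighted measure) on invented ordinal alliance portfolios.
```python
# tau-b sees only rank concordance; a distance-based similarity registers
# the actual levels of commitment.
import numpy as np
from scipy.stats import kendalltau

# Alliance levels with five third states: 0 = none ... 3 = defense pact.
a = np.array([3, 3, 3, 0, 0])    # state A's portfolio
b = np.array([1, 1, 1, 0, 0])    # state B's: same ordering, weaker commitments

tau, _ = kendalltau(a, b)        # perfectly concordant ranks: tau-b = 1.0
d_max = 3 * len(a)               # maximal possible total distance
s = 1 - 2 * np.abs(a - b).sum() / d_max
print("tau-b:", round(tau, 3), "  S-style similarity:", round(s, 3))
```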

27
Paper
Multilevel (hierarchical) modeling: what it can and can't do
Gelman, Andrew

Uploaded 01-26-2005
Keywords Bayesian inference
hierarchical model
multilevel regression
Abstract Multilevel (hierarchical) modeling is a generalization of linear and generalized linear modeling in which regression coefficients are themselves given a model, whose parameters are also estimated from data. We illustrate the strengths and limitations of multilevel modeling through an example of the prediction of home radon levels in U.S. counties. The multilevel model is highly effective for predictions at both levels of the model but could easily be misinterpreted for causal inference.
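
The partial-pooling mechanism can be sketched with the standard shrinkage weights. The county counts and variance components below are illustrative assumptions, not the radon data.
```python
# Partial pooling: county means shrink toward the overall mean, with small
# counties shrinking most.
import numpy as np

rng = np.random.default_rng(8)
sigma2, tau2 = 0.6, 0.2           # within-county and between-county variances
n_j = np.array([2, 5, 20, 100])   # observations per county
truth = rng.normal(1.0, np.sqrt(tau2), size=4)
ybar = truth + rng.normal(0, np.sqrt(sigma2 / n_j))   # observed county means

mu = ybar.mean()                  # stand-in for the estimated grand mean
w = (n_j / sigma2) / (n_j / sigma2 + 1 / tau2)
partial_pool = w * ybar + (1 - w) * mu
print("no pooling:     ", ybar.round(2))
print("partial pooling:", partial_pool.round(2))      # small counties shrink most
```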

28
Paper
Trade and Militarized Conflict: How Modeling Strategic Interactions Between States Makes a Difference
Rowan, Shawn E.

Uploaded 07-19-2005
Keywords trade
conflict
interdependence
asymmetry
strategic
Abstract The study of the interaction between war and foreign trade has occupied scholars from political science and economics for thousands of years. I contribute to the trade and conflict debate by accounting for the strategic interaction between states that most or all theories in international relations (IR) assume. I use a strategic statistical model (Signorino 1999, 2003b) that endogenizes the actions that lead states to militarized conflict and peace. The results of the strategic probit model reveal non-linear, asymmetric relationships between trade dependence and militarized conflict for each state in the dyad. Not only are these effects non-linear; in equilibrium, they also depend on the actions taken by the other state in the dyad. The trade dependence of one state on another can have either a pacifying or a positive effect on militarized conflict. Additionally, these effects are only realized for initial increases in trade dependence; once a threshold is reached, the effects of trade dependence are constant.

29
Paper
Demographic Forecasting
Girosi, Federico
King, Gary

Uploaded 07-10-2003
Keywords forecasting
Abstract We introduce a new framework for forecasting age-sex-country-cause-specific mortality rates that incorporates considerably more information, and thus has the potential to forecast much better, than any existing approach. Mortality forecasts are used in a wide variety of academic fields, and for global and national health policy making, medical and pharmaceutical research, and social security and retirement planning. As it turns out, the tools we developed in pursuit of this goal also have broader statistical implications, in addition to their use for forecasting mortality or other variables with similar statistical properties. First, our methods make it possible to include different explanatory variables in a time series regression for each cross-section, while still borrowing strength from one regression to improve the estimation of all. Second, we show that many existing Bayesian (hierarchical and spatial) models with explanatory variables use prior densities that incorrectly formalize prior knowledge. Many demographers and public health researchers have fortuitously avoided this problem so prevalent in other fields by using prior knowledge only as an ex post check on empirical results, but this approach excludes considerable information from their models. We show how to incorporate this demographic knowledge into a model in a statistically appropriate way. Finally, we develop a set of tools useful for developing models with Bayesian priors in the presence of partial prior ignorance. This approach also provides many of the attractive features claimed by the empirical Bayes approach, but fully within the standard Bayesian theory of inference. The latest version of this manuscript is available at http://gking.harvard.edu.

30
Paper
Network Analysis and the Law: Measuring the Legal Importance of Supreme Court Precedents
Fowler, James
Johnson, Timothy
Spriggs II, James F.
Jeon, Sangick
Wahlbeck, Paul

Uploaded 06-02-2006
Abstract We construct the complete network of 28,951 majority opinions written by the U.S. Supreme Court and the cases they cite from 1792 to 2005. We illustrate some basic properties of this network and then describe a method for creating importance scores using the data to identify the most important Court precedents at any point in time. This method yields dynamic rankings that can be used to predict the future citation behavior of state courts, the U.S. Courts of Appeals, and the U.S. Supreme Court, and these rankings outperform several commonly used alternative measures of case importance.
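
A PageRank-style power iteration conveys the flavor of such citation-based importance scores; this is a generic stand-in, not necessarily the authors' exact score, and the tiny citation matrix is invented.
```python
# Importance scores on a citation network by power iteration.
import numpy as np

# A[i, j] = 1 if opinion i cites opinion j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

d = 0.85                                    # damping factor
n = A.shape[0]
out = np.maximum(A.sum(1, keepdims=True), 1)
P = A / out                                 # row-stochastic citation matrix
r = np.ones(n) / n
for _ in range(100):
    r = (1 - d) / n + d * P.T @ r           # importance flows to cited cases
print("importance scores:", r.round(3))     # case 2 (heavily cited) ranks high
```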

31
Paper
Bargaining and Society: A Statistical Model of the Ultimatum Game
Signorino, Curtis
Ramsay, Kristopher

Uploaded 07-20-2006
Keywords bargaining
ultimatum
game theory
statistics
strategic
rationality
Abstract In this paper we derive a statistical estimator for the popular Ultimatum bargaining game. Using Monte Carlo data generated by a strategic bargaining process, we show that the estimator correctly recovers the relationship between dependent variables, such as the proposed division and bargaining failure, and the substantive variables that comprise players' utilities. We then use the model to analyze bargaining data in a number of contexts. The current example examines the effects of demographics on bargaining behavior in experiments conducted on U.S. and Russian participants.

32
Paper
Statistical Analysis of Randomized Experiments with Nonignorable Missing Binary Outcomes
Imai, Kosuke

Uploaded 07-24-2006
Keywords Causal Inference
Instrumental Variables
Intention-to-Treat Effect
Latent Ignorability
Noncompliance
Treatment Effect
Sensitivity Analysis
Abstract Missing data are frequently encountered in the statistical analysis of randomized experiments. In this article, I propose statistical methods that can be used to analyze randomized experiments with a nonignorable missing binary outcome where the missing-data mechanism may depend on the unobserved values of the outcome variable itself. I first introduce an identification strategy for the average treatment effect and compare it with the existing alternative approaches in the literature. I then derive the maximum likelihood estimator and its asymptotic properties, and discuss possible estimation methods. Furthermore, since the proposed identification assumption is not directly verifiable from the data, I show how to conduct a sensitivity analysis based on the parameterization that links the key identification assumption with the causal quantities of interest. Then, the proposed methodology is extended to the analysis of randomized experiments with noncompliance. Although the method introduced in this article may not directly apply to randomized experiments with non-binary outcomes, I briefly discuss possible identification strategies in more general situations. Finally, I apply the proposed methodology to analyze data from the German election experiment and the influenza vaccination study, which originally motivated the methodological problems addressed in this article.

33
Paper
Potential Ambiguities in a Directed Dyad Approach to State Policy Emulation
Boehmke, Frederick

Uploaded 07-10-2007
Keywords state politics
state policy
diffusion
emulation
monte carlo
health policy
dyadic
Abstract In this paper I discuss circumstances under which the dyadic model of policy diffusion can produce misleading estimates in favor of policy emulation. These circumstances arise in the context of state pain management policy, and correspond generally to policies that states are uniformly expanding. When this happens, dyadic models of policy diffusion conflate policy emulation and policy adoption: since early adopters are policy leaders, later adopters will appear to emulate them, even if they are merely stragglers acting of their own accord. I demonstrate the possibility of this ambiguity analytically and through Monte Carlo simulation. Both start with the assumption that the data are generated according to a standard, monadic model of policy adoption and then converted to a dyadic model, which can incorrectly produce evidence of emulation. I propose a simple modification of the dyadic emulation model --- conditioning on the opportunity to emulate --- and show that it is much less likely to produce inaccurate findings. I then return to the study of pain management policy and find substantial differences between the dyadic emulation model and the conditional emulation model.

34
Paper
A default prior distribution for logistic and other regression models
Gelman, Andrew
Jakulin, Aleks
Pittau, Maria Grazia
Su, Yu-Sung

Uploaded 08-03-2007
Keywords Bayesian inference
generalized linear model
least squares
hierarchical model
linear regression
logistic regression
multilevel model
noninformative prior distribution
Abstract We propose a new prior distribution for classical (non-hierarchical) logistic regression models, constructed by first scaling all nonbinary variables to have mean 0 and standard deviation 0.5, and then placing independent Student-$t$ prior distributions on the coefficients. As a default choice, we recommend the Cauchy distribution with center 0 and scale 2.5, which in the simplest setting is a longer-tailed version of the distribution attained by assuming one-half additional success and one-half additional failure in a logistic regression. We implement a procedure to fit generalized linear models in R with this prior distribution by incorporating an approximate EM algorithm into the usual iteratively weighted least squares. We illustrate with several examples, including a series of logistic regressions predicting voting preferences, an imputation model for a public health data set, and a hierarchical logistic regression in epidemiology. We recommend this default prior distribution for routine applied use. It has the advantage of always giving answers, even when there is complete separation in logistic regression (a common problem, even when the sample size is large and the number of predictors is small) and also automatically applying more shrinkage to higher-order interactions. This can be useful in routine data analysis as well as in automated procedures such as chained equations for missing-data imputation.
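
The prior described here is straightforward to sketch as a MAP fit: rescale predictors to standard deviation 0.5 and penalize coefficients by the Cauchy(0, 2.5) log density. This is a simplified approximation on simulated data, not the paper's approximate-EM-within-IRLS implementation.
```python
# MAP logistic regression with independent Cauchy(0, 2.5) priors.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
n = 200
X = rng.normal(size=(n, 2))
X = (X - X.mean(0)) / (2 * X.std(0))       # mean 0, standard deviation 0.5
X = np.column_stack([np.ones(n), X])       # intercept column
beta_true = np.array([0.5, 1.0, -2.0])
y = rng.random(n) < 1 / (1 + np.exp(-X @ beta_true))

def neg_log_post(beta, scale=2.5):
    eta = X @ beta
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))        # Bernoulli logit
    logprior = -np.sum(np.log1p((beta / scale) ** 2))       # Cauchy(0, scale)
    return -(loglik + logprior)

fit = minimize(neg_log_post, np.zeros(3))
print("MAP coefficients:", fit.x.round(2))
```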

35
Paper
Objections to Bayesian statistics
Gelman, Andrew

Uploaded 06-01-2008
Keywords comparison of methods
foundations of statistics
Abstract Bayesian inference is one of the more controversial approaches to statistics. The fundamental objections to Bayesian methods are twofold: on one hand, Bayesian methods are presented as an automatic inference engine, and this raises suspicion in anyone with applied experience. The second objection to Bayes comes from the opposite direction and addresses the subjective strand of Bayesian inference. This article presents a series of objections to Bayesian inference, written in the voice of a hypothetical anti-Bayesian statistician. The article is intended to elicit elaborations and extensions of these and other arguments from non-Bayesians and responses from Bayesians who might have different perspectives on these issues.

36
Paper
Binary and Ordinal Time Series with AR(p) Errors: Bayesian Model Determination for Latent High-Order Markovian Processes
Pang, Xun

Uploaded 07-06-2008
Keywords Autoregressive Errors
Auxiliary Particle Filter
Fixed-lag Smoothing
Markov Chain Monte Carlo (MCMC)
Political Science
Sampling Importance Resampling (SIR)
Abstract To directly and adequately correct serial correlation in binary and ordinal response data, this paper proposes a probit model with errors following a pth-order autoregressive process, and develops simulation-based methods in the Bayesian context to handle computational challenges of posterior estimation, model comparison, and lag order determination. Compared to the extant methods, such as quasi-ML, GCM, and simulation-based ML estimators, the current method does not rely on the properties of the big variance-covariance matrix or the shape of the likelihood function. In addition, the present model efficiently handles high-order autocorrelated errors that raise computationally formidable difficulties for the conventional methods. By applying a mixed sampler of the Gibbs and Metropolis-Hastings algorithms, the posterior distributions of the parameters do not depend on initial observations. The auxiliary particle filter, complemented by fixed-lag smoothing, is extended to approximate Bayes factors for models with latent high-order Markov processes. Computational methods are tested with empirical data. Energy cooperation policies of the International Energy Agency are analyzed in terms of their effects on global oil-supply security. The current model with different lag orders, together with other competitive models, is estimated and compared.

37
Paper
Investigating the Extreme Right: Between Total Immersion and Participant Observation, the Example of the French National Front (2006-2008)
Mermat, Djamel

Uploaded 07-18-2008
Keywords France
Far right
electoral campaigns
Methodology
Immersion
Participant observation
Political party.
Abstract There is a particular situation involving the NF that has been noticeably neglected to date in France: capturing live the motivations and actions of the new partisans who rallied to the movement during the last three years (Glenn, 2005: 35-43). We must also recognize that the knowledge that would enable us to understand this party in "campaigning mode" is insufficient. If we hope to remedy these two basic weaknesses, what methods could researchers employ? And what can political science methodology learn from an adjustment in the status of the researcher on the ground at the time of the inquiry? More precisely: what advantages does participant observation employed almost daily offer? What are the basic contributions of total immersion in the "Frontist" environment? Given these questions, we wanted, on the basis of comparative qualitative research, to explain what grounds the validity of the results obtained (Kent, 2001), by establishing a cost-benefit analysis of the use of two different methods and of two inherently quite distinct presentations. Published accounts very rarely mention the researcher's many ups and downs, yet the successes and inevitable failures of the ethnographic investigation condition the nature of the data collected. This paper is therefore an attempt to address several methodological deficiencies or silences, and to reverse certain epistemological biases, by returning to concepts whose substance needs clarification: "participant observation," "empathy," "total immersion," and "infiltration." The underpinnings of the research all draw attention to the manner in which the political analyst constructs his methodology and analytical categories, as well as his own approach to the subject under study. We first emphasize the difference in scale between our two research fields, since it led us to adopt another approach to the subject (I). We initially chose as our research location the North Flanders Federation, from June 2006 to the start of November 2007, from the beginning of the presidential campaign up to the presentation of the local councillors' assessment. From June 2007, without abandoning our initial site, we progressively accorded increasing attention to the "new partisans" supporting Marine Le Pen and Steeve Briois in the 14th constituency of Pas-de-Calais, in particular in the city of Henin-Beaumont. In the first week of December 2007, this led us to begin exploring the diversity of actors at the general headquarters of the "Henin-Beaumont pour Vous" list campaign. Henin-Beaumont belongs to the Federation of Mayors of Mid-Sized Cities, yet to date no study of the NF has examined its "propaganda" strategy (Kalinowski, 2005) in a mid-sized city during an election campaign, much less a municipal one. The idea was to slide, over a period of several weeks and from Flanders to Pas-de-Calais, from the status of participant observer outside the group to that of active member at the periphery of the central group, and thus integrated into the group (Strauss, Corbin, and Soulet, 2004). This process offered the researcher the opportunity to situate himself somewhere between simply "taking part" and being "uncovered"; the necessity of reacting, on the spur of the moment, when confronted with the unexpected (II) proved the most challenging aspect. Moreover, it is this absence of a recent localized investigation, conducted through direct observation over an extended period, of a political enterprise that still provokes concerns and anathemas that led us to study what NF electoral campaigns do to the researcher and his analytical tools.

38
Paper
Cosponsorship in the U.S. Senate: A Multilevel Approach to Detecting Subtle Social Predictors of Legislative Support
Gross, Justin

Uploaded 09-14-2008
Keywords Congress
cosponsorship
social network analysis
multilevel models
mixed effects
GLMM
Abstract Why do members of Congress choose to cosponsor legislation proposed by their colleagues, and what can we learn from their patterns of cosponsorship? Answering these questions properly requires models that respect the relational nature of the relevant data and the resulting interdependence among observations. We show how the inclusion of carefully selected random effects can capture network-type dependence, allowing us to more confidently investigate senators' propensity to support colleagues' proposals. To illustrate, we examine whether certain social factors such as demographic similarities, opportunities for interaction, and institutional roles are associated with varying odds of cosponsorship during the 2003-04 (108th) Senate.

39
Paper
The political consequences of transitions out of marriage in Great Britain
Kern, Holger

Uploaded 11-20-2007
Keywords causal inference
matching
Great Britain
marriage
divorce
widowhood
turnout
Abstract This paper uses British Household Panel Survey data to estimate the effects of divorce and widowhood on political attitudes and political behavior. In contrast to previous research, which mostly relied on cross-sectional data, a matched propensity score analysis does not find any effects of transitions out of marriage on policy preferences, party identification, and vote choice. The results also show that divorce (but not widowhood) substantially reduces electoral participation. Some preliminary evidence suggests that this effect of divorce on turnout is partially attributable to the increased residential mobility that accompanies divorce.
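
A minimal sketch of a matched propensity score analysis of this kind, on simulated data with a true null effect; the variable names are hypothetical, not the BHPS data.
```python
# Propensity score matching: estimate scores with logistic regression, match
# each treated unit to its nearest control, compare outcomes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(10)
n = 2000
x = rng.normal(size=(n, 2))                       # pre-treatment covariates
p = 1 / (1 + np.exp(-(x[:, 0] - 0.5 * x[:, 1])))
treat = rng.random(n) < p                         # e.g., divorce
y = x[:, 0] + 0.0 * treat + rng.normal(size=n)    # true effect: zero

ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]
t_idx, c_idx = np.where(treat)[0], np.where(~treat)[0]
matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(1)]
att = (y[t_idx] - y[matches]).mean()
print("naive difference:", (y[treat].mean() - y[~treat].mean()).round(3))
print("matched estimate:", att.round(3))          # close to the true zero
```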

40
Paper
The Power to Propose: A Natural Experiment in Politics
Loewen, Peter
Koop, Royce
Fowler, James

Uploaded 07-22-2009
Abstract In the study of democracy, an enduring question is whether citizens pay attention to what lawmakers do. Legislators frequently propose new laws, but observational studies cannot elucidate the effect such proposals have on citizen reactions to specific lawmakers, since any effects on electoral outcomes are confounded by unobserved individual differences in legislative and political skill. Here, we take advantage of a unique natural experiment in the Canadian House of Commons that allows us to estimate how the power to propose legislation affects elections. In the two most recent parliaments, the right of non-cabinet members to propose has been assigned by lottery. Comparing outcomes between those who were granted the right to propose and those who were not, we show that incumbents of the governing party enjoy a three and a half percentage point bonus in the electoral vote count following the allowed introduction of a single piece of legislation. This effect translates to a nine percent increase in the probability of winning the election. We also show that the causal effect does not result from media exposure or deterred entry of quality challengers who might otherwise have opposed the incumbent. Instead, government MPs who pass legislation receive more campaign donations, and money is associated with higher vote totals. These results are the first ever to show that what politicians do as lawmakers has a causal effect on the electorate.

41
Paper
No News is News: Non-Ignorable Non-Response in Roll-Call Data Analysis
Rosas, Guillermo
Shomer, Yael
Haptonstahl, Stephen

Uploaded 07-10-2010
Keywords rollcall
voting
abstention
missing
Bayesian
IRT
Abstract Roll-call votes are widely employed to infer the ideological proclivities of legislators, even though inferences based on roll-call data are accurate reflections of underlying policy preferences only under stringent assumptions. We explore the consequences of violating one such assumption, namely, the ignorability of the process that generates non-response in roll calls. We offer a reminder of the inferential consequences of ignoring certain processes of non-response, a basic estimation framework to model non-response and vote choice concurrently, and models for two theoretically relevant processes of non-ignorable missingness. We reconsider the "most liberal Senator" question that comes up during election times every four years in light of our arguments, and show how inferences about ideal points can improve by incorporating a priori information about the process that generates abstentions.

42
Paper
Analyzing the robustness of semi-parametric duration models for the study of repeated events
Box-Steffensmeier, Janet
Linn, Suzanna
Smidt, Corwin

Uploaded 08-25-2010
Keywords repeated events
event history analysis
Abstract Estimators within the Cox family are often used to estimate models for repeated events. Yet there is much we do not know about the performance of these estimators. In particular, we do not know how they perform given time dependence, different censoring rates, varying numbers of events experienced, and varying sample sizes. We use Monte Carlo simulations to demonstrate the performance of a variety of popular semi-parametric estimators as these conditions vary, and under event dependence, heterogeneity, both, or neither. We conclude that the conditional frailty model outperforms other standard estimators under a wide array of data-generating processes and conditions.
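A minimal sketch of the kind of data-generating process such Monte Carlo studies require, combining heterogeneity (a gamma frailty) with event dependence (the hazard shifts with each prior event); all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

n_subjects, beta, dependence = 500, 0.5, 0.3
records = []  # (id, start, stop, event, x, prior_events)
for i in range(n_subjects):
    w = rng.gamma(shape=2.0, scale=0.5)   # subject-specific frailty
    x = rng.normal()                      # a single covariate
    t, k = 0.0, 0                         # running clock and event count
    while t < 10.0:                       # administrative censoring at t = 10
        rate = 0.2 * w * np.exp(beta * x + dependence * k)
        gap = rng.exponential(1.0 / rate)
        if t + gap > 10.0:
            records.append((i, t, 10.0, 0, x, k))    # censored spell
            break
        records.append((i, t, t + gap, 1, x, k))     # observed event
        t += gap
        k += 1

print(f"{len(records)} spells, {sum(r[3] for r in records)} events")
```

Data generated this way can then be fit with each candidate estimator to compare bias, efficiency, and coverage.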

43
Paper
Computerized Adaptive Testing for Public Opinion Surveys
Montgomery, Jacob
Cutler, Josh

Uploaded 06-19-2012
Keywords surveys
item response
CAT
dynamic surveys
Abstract Survey researchers avoid using large multi-item scales to measure latent traits due to both the financial costs and the risk of driving up non-response rates. Typically, investigators select a subset of available scale items rather than asking the full battery. Reduced batteries, however, can sharply reduce measurement precision and introduce bias. In this paper, we present computerized adaptive testing (CAT) as a method for minimizing the number of questions each respondent must answer while preserving measurement accuracy and precision. CAT algorithms respond to individuals' previous answers to select subsequent questions that most efficiently reveal respondents' position on a latent dimension. We introduce the basic stages of a CAT algorithm and present the details for one approach to item-selection appropriate for public opinion research. We then demonstrate the advantages of CAT via simulation and by empirically comparing dynamic and static measures of political knowledge.
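A minimal CAT loop for a two-parameter logistic (2PL) item response model, selecting each next item by maximum Fisher information and scoring by the posterior mean (EAP); the item parameters are simulated, and this is only one of several possible item-selection rules:

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(3)

a = rng.uniform(0.8, 2.0, size=50)   # item discriminations (simulated)
b = rng.normal(0.0, 1.0, size=50)    # item difficulties (simulated)
true_theta = 0.7

def eap_theta(asked, answers, grid=np.linspace(-4, 4, 801)):
    """Posterior mean of ability under a standard normal prior."""
    resp = np.array(answers)[:, None]
    p = expit(a[asked][:, None] * (grid[None, :] - b[asked][:, None]))
    like = (p ** resp * (1 - p) ** (1 - resp)).prod(axis=0)
    post = like * np.exp(-grid ** 2 / 2)
    return float((grid * post).sum() / post.sum())

theta, asked, answers = 0.0, [], []
for _ in range(10):                   # ask ten adaptive questions
    p = expit(a * (theta - b))
    info = a ** 2 * p * (1 - p)       # 2PL Fisher information at theta
    info[asked] = -np.inf             # never repeat an item
    item = int(info.argmax())
    asked.append(item)
    answers.append(int(rng.random() < expit(a[item] * (true_theta - b[item]))))
    theta = eap_theta(asked, answers)

print(f"estimate after 10 items: {theta:.2f} (truth {true_theta})")
```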

44
Paper
Scientific Progress in the Absence of New Data: A Procedural Replication of Ross (2006)
Martel GarcĂ­a, Fernando

Uploaded 11-11-2013
Keywords replication
democracy
mortality
Abstract Estimating the effect of macro variables like democracy on aggregate outcomes like child mortality remains a formidable challenge. When new data are limited, theories imprecise, and experimentation impossible, findings are mostly driven by modeling assumptions and inadvertent errors. Here I propose procedural replication as a method of generating objective evidence capable of demonstrating new insights even in the absence of new data. I develop a simple Bayesian framework for replication studies, distinguish five different types of replication, and show how procedural replication can improve answers to existing research questions, tether inferences to data, and generate checklists for cumulative research. A procedural replication of Ross's (2006) controversial finding that democracy has no effect on child mortality shows this null finding to be an artifact of the way quinquennial averages were computed, and the static nature of the preferred model. I address other shortcomings and provide a procedural checklist to inform future studies.
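As a toy illustration of a "simple Bayesian framework" for replication (not the paper's actual framework), a single successful replication updates the prior probability that a finding is real via Bayes' rule:

```python
# All numbers are purely illustrative.
prior = 0.5                # prior probability the original finding is real
power, alpha = 0.8, 0.05   # chance a replication succeeds if real / if not

posterior = prior * power / (prior * power + (1 - prior) * alpha)
print(f"posterior after one successful replication: {posterior:.2f}")  # ~0.94
```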

45
Paper
Randomization Inference with Natural Experiments: An Analysis of Ballot Effects in the 2003 California Recall Election
Imai, Kosuke
Ho, Daniel

Uploaded 07-21-2004
Keywords causal inference
Fisher's exact test
inversion
political science
voting behavior
elections
Abstract Since the 2000 U.S. Presidential election, social scientists have rediscovered a long tradition of research that investigates the effects of ballot format on voting. Using a new dataset collected by the New York Times, we investigate the causal effects of being listed on the first ballot page in the 2003 California gubernatorial recall election. California law mandates a complex randomization procedure of ballot order that approximates a classical randomized experiment in real-world settings. The recall election also poses particular statistical challenges, with an unprecedented 135 candidates running for the office. We apply (nonparametric) randomization inference based on Fisher's exact test, which incorporates the complex randomization procedure and yields accurate confidence intervals. Conventional asymptotic model-based inferences are found to be highly sensitive to assumptions and model specification. Randomization inference suggests that roughly half of the candidates gained more votes when listed on the first page of the ballot.
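A stylized randomization-inference sketch on invented data; California's actual procedure (a randomized alphabet rotated across districts) is more involved, so this version simply re-randomizes a hypothetical page-one assignment:

```python
import numpy as np

rng = np.random.default_rng(4)

n = 135                                    # candidates in the recall election
first_page = np.zeros(n, dtype=bool)
first_page[:28] = True                     # suppose 28 names fit on page one
votes = rng.poisson(1000 + 150 * first_page)   # fake outcome with an effect

observed = votes[first_page].mean() - votes[~first_page].mean()

# Null distribution: re-draw the page-one assignment, mimicking how the
# law randomizes ballot order.
null = np.array([
    (lambda f: votes[f].mean() - votes[~f].mean())(rng.permutation(first_page))
    for _ in range(10_000)
])
p_value = (null >= observed).mean()
print(f"observed gap = {observed:.1f} votes, randomization p = {p_value:.4f}")
```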

46
Paper
Noncommutative harmonic analysis of voting in small committees
Lawson, Brian
Orrison, Michael
Uminsky, David

Uploaded 07-13-2003
Keywords spectral analysis
noncommutative harmonic analysis
voting analysis
Supreme Court
Abstract This paper introduces a new method, noncommutative harmonic analysis, as a tool for political scientists. The method is based on recent results in mathematics that systematically identify coalitions in voting data. The first section shows how this new approach, noncommutative harmonic analysis, generalizes classical spectral analysis. The second section shows how noncommutative harmonic analysis is applied to a hypothetical example. The third section uses noncommutative harmonic analysis to analyze coalitions on the Supreme Court. The final section suggests ideas for extending the approach presented here to the study of voting in legislatures and preferences over candidates in multicandidate mass elections.
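The full representation-theoretic decomposition is beyond a short example, but this sketch builds the kind of raw input the method decomposes: co-voting frequencies for every pair of justices on a simulated nine-member court, a first-order "spectrum" in the sense of classical spectral analysis:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)

# 200 simulated cases, 9 justices, votes coded 1 (majority) / 0 (dissent).
votes = rng.binomial(1, 0.5, size=(200, 9))

# First-order summary: how often does each pair of justices vote together?
together = {
    (i, j): float((votes[:, i] == votes[:, j]).mean())
    for i, j in combinations(range(9), 2)
}
top = sorted(together.items(), key=lambda kv: -kv[1])[:3]
print("most cohesive pairs:", top)
```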

47
Paper
Causal inference with general treatment regimes: Generalizing the propensity score
Imai, Kosuke
van Dyk, David A.

Uploaded 11-18-2002
Keywords causal inference
income
medical expenditure
non-random treatment
observational studies
schooling
smoking
subclassification
Abstract In this article, we develop the theoretical properties of the propensity function, which is a generalization of the propensity score of Rosenbaum and Rubin (1983). Methods based on the propensity score have long been used for causal inference in observational studies; they are easy to use and can effectively reduce the bias caused by non-random treatment assignment. Although treatment regimes are often not binary in practice, propensity score methods are generally confined to binary treatment scenarios. Two possible exceptions were suggested by Joffe and Rosenbaum (1999) and Imbens (2000) for ordinal and categorical treatments, respectively. In this article, we develop theory and methods that encompass all of these techniques and widen their applicability by allowing for arbitrary treatment regimes. We illustrate our propensity function methods by applying them to two data sets: we estimate the effect of smoking on medical expenditure and the effect of schooling on wages. We also conduct Monte Carlo experiments to investigate the performance of our methods.
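A simplified sketch of subclassification on a scalar propensity-function summary for a continuous treatment, on simulated data; this illustrates the general idea rather than the article's exact estimator:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)

n = 5000
X = rng.normal(size=(n, 4))                              # covariates
T = X @ [1.0, -0.5, 0.3, 0.0] + rng.normal(size=n)       # continuous treatment
y = 2.0 * T + X @ [3.0, 1.0, 0.0, -1.0] + rng.normal(size=n)

# Model T | X and use the fitted conditional mean as the scalar summary.
psi = LinearRegression().fit(X, T).predict(X)

# Subclassify on the summary, then estimate the T-y slope within classes.
classes = np.digitize(psi, np.quantile(psi, [0.2, 0.4, 0.6, 0.8]))
effects = [
    np.polyfit(T[classes == k], y[classes == k], 1)[0]
    for k in range(5)
]
# Five coarse strata leave some residual confounding; finer strata reduce it.
print(f"within-subclass slope estimates: {np.round(effects, 2)} (truth 2.0)")
```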

48
Paper
A Monte Carlo Analysis for Recurrent Events Data
Box-Steffensmeier, Janet M.
De Boef, Suzanna

Uploaded 07-13-2002
Keywords survival analysis
repeated events
heterogeneity
event dependence
simulations
Abstract Scholars have long known that multiple events data, which occur when subjects experience more than one event, cause problems when analyzed without taking into consideration the correlation among the events. In particular, there is no consensus on the best way to model the common occurrence of repeated events, where the subject experiences the same type of event more than once. Many event history model variations based on the Cox proportional hazards model have been proposed for the analysis of repeated events, and it is well known that these models give different results (Clayton 1994; Lin 1994; Gao and Zhou 1997; Klein and Moeschberger 1997; Therneau and Hamilton 1997; Wei and Glidden 1997; Box-Steffensmeier and Zorn 1999; Hosmer and Lemeshow 1999; Kelly and Lim 2000). Our paper focuses on the two main alternatives for modeling repeated events data, variance-corrected and frailty (also referred to as random-effects) approaches, and examines the consequences these different choices have for understanding the interrelationship between dynamic processes in multivariate models, which will be useful across disciplines. Within political science, the statistical work resulting from this project will help resolve important theoretical and policy debates about political dynamics, such as the liberal peace, by commenting on the reliability of the different modeling strategies used to test those theories and by applying those models. Specifically, the results of the project will help assess whether one of the two primary approaches is better able to account for within-subject correlation. We evaluate the various modeling strategies using Monte Carlo evidence to determine whether and under what conditions alternative modeling strategies for repeated events are appropriate. The question of the best modeling strategy for repeated events data is an important one. Our understanding of political processes, as in all studies, depends on the quality of the inferences we can draw from our models. There is currently little guidance about which approach or model is appropriate, so, not surprisingly, analysts are often unsure of the best way to analyze their data. Given the dramatic substantive differences that result from using the different models and approaches, this problem will be of interest across research communities.
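A small numeric illustration of the within-subject correlation at issue: under a shared frailty, spells from the same subject are correlated, so treating them as independent understates uncertainty (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

n_subj, n_events = 1000, 4
w = rng.gamma(3.0, 1.0 / 3.0, size=n_subj)            # subject frailties
gaps = rng.exponential(1.0 / w[:, None], size=(n_subj, n_events))

r = np.corrcoef(gaps[:, 0], gaps[:, 1])[0, 1]
print(f"correlation between a subject's first and second gap times: {r:.2f}")

# Naive SE (all spells independent) vs. clustered SE for the mean gap time.
flat = gaps.ravel()
naive_se = flat.std(ddof=1) / np.sqrt(flat.size)
cluster_se = gaps.mean(axis=1).std(ddof=1) / np.sqrt(n_subj)
print(f"naive SE = {naive_se:.4f}, cluster-robust SE = {cluster_se:.4f}")
```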

49
Paper
Did Illegally Counted Overseas Absentee Ballots Decide the 2000 U.S. Presidential Election?
Imai, Kosuke
King, Gary

Uploaded 02-13-2002
Keywords 2000 U.S. Presidential Election
Ecological Inference
Bayesian Model Averaging
Abstract Although not widely known until much later, Al Gore received 202 more votes than George W. Bush on election day in Florida. George W. Bush is president because he overcame his election day deficit with overseas absentee ballots that arrived and were counted after election day. In the final official tally, Bush received 537 more votes than Gore. These numbers are taken from the official results released by the Florida Secretary of State's office and so do not reflect overvotes, undervotes, unsuccessful litigation, butterfly ballot problems, recounts that might have been allowed but were not, or any other hypothetical divergence between voter preferences and counted votes. After the election, the New York Times conducted a six-month investigation and found that 680 of the overseas absentee ballots were illegally counted, and no partisan, pundit, or academic has publicly disagreed with their assessment. In this paper, we describe the statistical procedures we developed and implemented for the Times to ascertain whether disqualifying these 680 ballots would have changed the outcome of the election. The methods involve adding formal Bayesian model averaging procedures to King's (1997) ecological inference model. Formal Bayesian model averaging has not been used in political science but is especially useful when substantive conclusions depend heavily on apparently minor but indefensible model choices, when model generalization is not feasible, and when potential critics are more partisan than academic. We show how we derived the results for the Times so that other scholars can use these methods to make ecological inferences for other purposes. We also present a variety of new empirical results that delineate the precise conditions under which Al Gore would have been elected president, and offer new evidence of the striking effectiveness of the Republican effort to convince local election officials to count invalid ballots in Bush counties and not count them in Gore counties.
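A hedged sketch of the model-averaging step using BIC-based posterior model weights, a standard approximation that is not necessarily the one used in the paper; the ecological-inference machinery itself is omitted and all numbers are invented:

```python
import numpy as np

log_lik = np.array([-1210.4, -1208.9, -1212.7])   # three candidate models
n_params = np.array([4, 6, 5])
n_obs = 67                                        # e.g., Florida counties

# Approximate posterior model probabilities via BIC weights.
bic = -2 * log_lik + n_params * np.log(n_obs)
weights = np.exp(-0.5 * (bic - bic.min()))
weights /= weights.sum()

# Average each model's prediction of the net vote margin (invented values).
predictions = np.array([520.0, 545.0, 530.0])
print(f"model weights: {np.round(weights, 2)}, "
      f"averaged margin: {weights @ predictions:.0f} votes")
```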

50
Paper
Analyzing the dynamics of international mediation processes
Schrodt, Philip A.
Gerner, Deborah J.

Uploaded 07-16-2001
Keywords event data
cross-correlation
mediation
Cox proportional hazard
pattern recognition
Abstract This paper presents initial results from a project that will formally test a number of the hypotheses embedded in the theoretical and qualitative literatures on mediation, using automated coding of event data from news-wire sources. In contrast to most of the existing quantitative literature, which emphasizes the structural aspects of mediation, we will focus on the dynamics. The initial part of the paper focuses on two issues of design. First, we discuss the advantages of generating data using fully automated methods, which increases the transparency and replicability of the research. This transparency is extended to the development of more complex variables that cannot be captured as single events: these are defined as patterns in the underlying event data. We also suggest that these can be usefully studied using conventional inferential statistics rather than computational pattern recognition. Second, we justify the "statistical case study" approach, which focuses on a small number of cases that are limited in geographical and temporal scope. While the risk of this approach is that one will find patterns of behavior that apply only in those circumstances, we point out that the more conventional large-N time-series cross-sectional studies also carry inferential risks. The statistical tests reported in this paper look at three different issues using data on the Israel-Lebanon and Israel-Palestinian conflicts in the Levant (1979-1999), and the Serbia-Croatia and Serbia-Bosnia conflicts in the Balkans (1991-1999). First, cross-correlation is used to look at the effects of mediation on the level of violence over time. Second, we test the "sticks-or-carrots" hypothesis on whether mediation is more effective in reducing violence if accompanied by cooperative or conflictual behavior by the mediator. Finally, we estimate Cox proportional hazards models to assess the factors that influence (1) whether mediation is accepted by the parties in a conflict, (2) whether formal agreements are reached, and (3) whether the agreements reduce the level of conflict. Future work in the project involves development of a new event coding scheme specifically designed for the study of mediation, and expansion of the list of cases to include other mediated conflicts in the Middle East and West Africa.
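A minimal cross-correlation sketch on simulated weekly event counts, in the spirit of the first analysis described; the series and the built-in lag-4 effect are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(8)

weeks = 520                                   # roughly a decade of weekly data
mediation = rng.poisson(2.0, size=weeks).astype(float)

# Build in a violence-reducing effect of mediation four weeks earlier.
base = 5.0 - 0.5 * np.concatenate([np.zeros(4), mediation[:-4]])
violence = rng.poisson(np.clip(base, 0.1, None)).astype(float)

def cross_corr(x, y, lag):
    """Correlation between x_t and y_{t+lag}."""
    return np.corrcoef(x[:-lag], y[lag:])[0, 1] if lag else np.corrcoef(x, y)[0, 1]

for lag in range(0, 9, 2):
    print(f"lag {lag:d}: r = {cross_corr(mediation, violence, lag):+.2f}")
```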

