
Search Results


The results below are based on the search criteria 'direct and indirect e'
Total number of records returned: 911

1
Paper
Randomization Inference with Natural Experiments: An Analysis of Ballot Effects in the 2003 California Recall Election
Imai, Kosuke
Ho, Daniel

Uploaded 07-21-2004
Keywords causal inference
Fisher's exact test
inversion
political science
voting behavior
elections
Abstract Since the 2000 U.S. Presidential election, social scientists have rediscovered a long tradition of research that investigates the effects of ballot format on voting. Using a new dataset collected by the New York Times, we investigate the causal effects of being listed on the first ballot page in the 2003 California gubernatorial recall election. California law mandates a complex randomization procedure of ballot order that approximates a classical randomized experiment in real-world settings. The recall election also poses particular statistical challenges with an unprecedented 135 candidates running for the office. We apply (nonparametric) randomization inference based on Fisher's exact test, which incorporates the complex randomization procedure and yields accurate confidence intervals. Conventional asymptotic model-based inferences are found to be highly sensitive to assumptions and model specification. Randomization inference suggests that roughly half of the candidates gained more votes when listed on the first page of the ballot.
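
As a rough illustration of the randomization-inference logic described in this abstract (not the authors' exact procedure, which builds in California's official ballot-rotation scheme), the following Python sketch approximates the randomization distribution of a difference in mean vote shares under the sharp null; all data, and the simple complete randomization standing in for the rotation scheme, are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 80
    treat = rng.permutation(np.repeat([1, 0], n // 2))  # hypothetical draw
    share = 0.02 * treat + rng.normal(0.10, 0.03, n)    # hypothetical shares

    def stat(y, t):
        """Difference in mean vote share, first page minus later pages."""
        return y[t == 1].mean() - y[t == 0].mean()

    observed = stat(share, treat)
    # Randomization distribution under the sharp null of no ballot effect:
    # re-draw assignments the way the randomization actually worked.
    draws = np.array([stat(share, rng.permutation(treat))
                      for _ in range(10000)])
    p = np.mean(np.abs(draws) >= np.abs(observed))
    print(f"observed diff = {observed:.4f}, randomization p = {p:.3f}")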

2
Paper
Noncommutative harmonic analysis of voting in small committees
Lawson, Brian
Orrison, Michael
Uminsky, David

Uploaded 07-13-2003
Keywords spectral analysis
noncommutative harmonic analysis
voting analysis
supreme court
Abstract This paper introduces a new method, noncommutative harmonic analysis, as a tool for political scientists. The method is based on recent results in mathematics which systematically identify coalitions in voting data. The first section shows how this new approach, noncommutative harmonic analysis, generalizes classical spectral analysis. The second section shows how noncommutative harmonic analysis is applied to a hypothetical example. The third section uses noncommutative harmonic analysis to analyze coalitions on the Supreme Court. The final section suggests ideas for extending the approach presented here to the study of voting in legislatures and preferences over candidates in multicandidate mass elections.

3
Paper
Causal inference with general treatment regimes: Generalizing the propensity score
Imai, Kosuke
van Dyk, David A.

Uploaded 11-18-2002
Keywords causal inference
income
medical expenditure
non-random treatment
observational studies
schooling
smoking
subclassification
Abstract In this article, we develop the theoretical properties of the propensity function, which is a generalization of the propensity score of Rosenbaum and Rubin (1983). Methods based on the propensity score have long been used for causal inference in observational studies; they are easy to use and can effectively reduce the bias caused by non-random treatment assignment. Although treatment regimes are often not binary in practice, the propensity score methods are generally confined to binary treatment scenarios. Two possible exceptions were suggested by Joffe and Rosenbaum (1999) and Imbens (2000) for ordinal and categorical treatments, respectively. In this article, we develop theory and methods which encompass all of these techniques and widen their applicability by allowing for arbitrary treatment regimes. We illustrate our propensity function methods by applying them to two data sets; we estimate the effect of smoking on medical expenditure and the effect of schooling on wages. We also conduct Monte Carlo experiments to investigate the performance of our methods.
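
A minimal sketch of the binary-treatment special case that the propensity function generalizes: estimate the score with a logit, subclassify on its quintiles, and average within-stratum contrasts. The authors' propensity-function machinery for non-binary treatments is not shown, and all data are simulated.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 2000
    x = rng.normal(size=(n, 2))                  # hypothetical confounders
    t = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * x[:, 0] - 0.5 * x[:, 1]))))
    y = 2.0 * t + x.sum(axis=1) + rng.normal(size=n)

    # Step 1: estimate the propensity score with a logit model.
    Xl = sm.add_constant(x)
    ps = sm.Logit(t, Xl).fit(disp=0).predict(Xl)

    # Step 2: subclassify on score quintiles; average within-stratum contrasts.
    strata = np.digitize(ps, np.quantile(ps, [0.2, 0.4, 0.6, 0.8]))
    effects, sizes = [], []
    for s in np.unique(strata):
        m = strata == s
        if t[m].min() < t[m].max():              # both groups present
            effects.append(y[m][t[m] == 1].mean() - y[m][t[m] == 0].mean())
            sizes.append(m.sum())
    print("subclassification estimate:", np.average(effects, weights=sizes))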

4
Paper
A Monte Carlo Analysis for Recurrent Events Data
Box-Steffensmeier, Janet M.
De Boef, Suzanna

Uploaded 07-13-2002
Keywords survival analysis
repeated events
heterogeneity
event dependence
simulations
Abstract Scholars have long known that multiple events data, which occur when subjects experience more than one event, cause a problem when analyzed without taking into consideration the correlation among the events. In particular, there is no consensus on the best way to model the common occurrence of repeated events, where the subject experiences the same type of event more than once. Many event history model variations based on the Cox proportional hazards model have been proposed for the analysis of repeated events, and it is well known that these models give different results (Clayton 1994; Lin 1994; Gao and Zhou 1997; Klein and Moeschberger 1997; Therneau and Hamilton 1997; Wei and Glidden 1997; Box-Steffensmeier and Zorn 1999; Hosmer and Lemeshow 1999; Kelly and Lim 2000). Our paper focuses on the two main alternatives for modeling repeated events data, variance-corrected and frailty (also referred to as random-effects) approaches, and examines the consequences these different choices have for understanding the interrelationship between dynamic processes in multivariate models, which will be useful across disciplines. Within political science, the statistical work resulting from this project will help resolve some important theoretical and policy debates about political dynamics, such as the liberal peace, by commenting on the reliability of the different modeling strategies used to test those theories and applying those models. Specifically, the results of the project will help assess whether one of the two primary approaches is better able to account for within-subject correlation. We evaluate the various modeling strategies using Monte Carlo evidence to determine whether and under what conditions alternative modeling strategies for repeated events are appropriate. The question of the best modeling strategy for repeated events data is an important one. Our understanding of political processes, as in all studies, depends on the quality of the inferences we can draw from our models. There is currently little guidance about which approach or model is appropriate and so, not surprisingly, we see analysts unsure of the best way to analyze their data. Given the dramatic substantive differences that result from using the different models and approaches, this is a problem that will be of interest across research communities.
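
The frailty side of the comparison can be previewed with a toy Monte Carlo (not the authors' design): a shared subject-level multiplicative frailty leaves the mean event count unchanged but makes repeated events overdispersed, which is exactly the within-subject correlation the competing models must absorb. All settings below are hypothetical.

    import numpy as np

    rng = np.random.default_rng(2)
    n, rate, follow_up, theta = 5000, 0.5, 4.0, 1.0

    # Frailty world: each subject draws a Gamma(1/theta, theta) multiplier
    # (mean 1, variance theta) shared by all of that subject's events.
    frailty = rng.gamma(shape=1 / theta, scale=theta, size=n)
    counts_frailty = rng.poisson(frailty * rate * follow_up)

    # Homogeneous world: same marginal rate, independent events.
    counts_plain = rng.poisson(rate * follow_up, size=n)

    for name, c in [("frailty", counts_frailty), ("no frailty", counts_plain)]:
        print(f"{name:10s} mean={c.mean():.2f} var={c.var():.2f}")
    # Equal means, but the frailty counts are overdispersed: ignoring that
    # within-subject correlation understates uncertainty in event models.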

5
Paper
Did Illegally Counted Overseas Absentee Ballots Decide the 2000 U.S. Presidential Election?
Imai, Kosuke
King, Gary

Uploaded 02-13-2002
Keywords 2000 U.S. Presidential Election
Ecological Inference
Bayesian Model Averaging
Abstract Although not widely known until much later, Al Gore received 202 more votes than George W. Bush on election day in Florida. George W. Bush is president because he overcame his election day deficit with overseas absentee ballots that arrived and were counted after election day. In the final official tally, Bush received 537 more votes than Gore. These numbers are taken from the official results released by the Florida Secretary of State's office and so do not reflect overvotes, undervotes, unsuccessful litigation, butterfly ballot problems, recounts that might have been allowed but were not, or any other hypothetical divergence between voter preferences and counted votes. After the election, the New York Times conducted a six month long investigation and found that 680 of the overseas absentee ballots were illegally counted, and no partisan, pundit, or academic has publicly disagreed with their assessment. In this paper, we describe the statistical procedures we developed and implemented for the Times to ascertain whether disqualifying these 680 ballots would have changed the outcome of the election. The methods involve adding formal Bayesian model averaging procedures to King's (1997) ecological inference model. Formal Bayesian model averaging has not been used in political science but is especially useful when substantive conclusions depend heavily on apparently minor but indefensible model choices, when model generalization is not feasible, and when potential critics are more partisan than academic. We show how we derived the results for the Times so that other scholars can use these methods to make ecological inferences for other purposes. We also present a variety of new empirical results that delineate the precise conditions under which Al Gore would have been elected president, and offer new evidence of the striking effectiveness of the Republican effort to convince local election officials to count invalid ballots in Bush counties and not count them in Gore counties.
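
The generic Bayesian model averaging step (separate from King's EI model, which is not reproduced here) can be sketched with the common BIC approximation to posterior model probabilities over candidate regressions; the data, candidate models, and quantity of interest below are all hypothetical.

    import itertools
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 200
    X = rng.normal(size=(n, 3))
    y = 1.5 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)
    x0 = np.array([1.0, 1.0, 1.0])          # hypothetical covariate profile

    # Candidate models: every subset of the three regressors.
    subsets = [list(c) for k in range(4)
               for c in itertools.combinations(range(3), k)]

    bics, estimates = [], []
    for cols in subsets:
        Xm = sm.add_constant(X[:, cols]) if cols else np.ones((n, 1))
        fit = sm.OLS(y, Xm).fit()
        bics.append(fit.bic)
        # Quantity of interest: predicted y at the profile x0.
        estimates.append(float(fit.params @ np.concatenate(([1.0], x0[cols]))))

    # Posterior model probabilities via the BIC approximation (flat prior).
    w = np.exp(-(np.array(bics) - min(bics)) / 2)
    w /= w.sum()
    print("model weights:", np.round(w, 3))
    print("model-averaged prediction:", np.dot(w, estimates))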

6
Paper
Analyzing the dynamics of international mediation processes
Schrodt, Philip A.
Gerner, Deborah J.

Uploaded 07-16-2001
Keywords event data
cross-correlation
mediation
Cox proportional hazard
pattern recognition
Abstract This paper presents initial results from a project that will formally test a number of the hypotheses embedded in the theoretical and qualitative literatures on mediation, using automated coding of event data from news-wire sources. In contrast to most of the existing quantitative literature, which emphasizes the structural aspects of mediation, we will focus on the dynamics. The initial part of the paper focuses on two issues of design. First, we discuss the advantages of generating data using fully automated methods, which increases the transparency and replicability of the research. This transparency is extended to the development of more complex variables that cannot be captured as single events: these are defined as patterns in the underlying event data. We also suggest that these can be usefully studied using conventional inferential statistics rather than computational pattern recognition. Second, we justify the "statistical case study" approach, which focuses on a small number of cases that are limited in geographical and temporal scope. While the risk of this approach is that one will find patterns of behavior that apply only in those circumstances, we point out that the more conventional large-N time-series cross-sectional studies also carry inferential risks. The statistical tests reported in this paper look at three different issues using data on the Israel-Lebanon and Israel-Palestinian conflicts in the Levant (1979-1999), and the Serbia-Croatia and Serbia-Bosnia conflicts in the Balkans (1991-1999). First, cross-correlation is used to look at the effects of mediation on the level of violence over time. Second, we test the "sticks-or-carrots" hypothesis on whether mediation is more effective in reducing violence if accompanied by cooperative or conflictual behavior by the mediator. Finally, we estimate Cox proportional hazard models to assess the factors that influence (1) whether mediation is accepted by the parties in a conflict, (2) whether formal agreements are reached, and (3) whether the agreements reduce the level of conflict. Future work in the project involves development of a new event coding scheme specifically designed for the study of mediation, and expansion of the list of cases to include other mediated conflicts in the Middle East and West Africa.

7
Paper
The Binomial-Beta Hierarchical Model for Ecological Inference Revisited and Implemented via the ECM Algorithm
Mattos, Rogerio
Veiga, Alvaro

Uploaded 05-21-2001
Keywords ecological inference
hierarchical models
binomial-beta distribution
ECM Algorithm
Abstract The binomial-beta hierarchical model is a recent contribution to ecological inference. Developed for the 2x2 tables case and under a Bayesian perspective, the model is based on compounding the binomial and the beta distributions into a hierarchical structure to describe the behavior of aggregate variables. From a sample of aggregate observations, inference with this model can be made with regard to the values of the unobservable disaggregate variables. The paper discusses some issues regarding the construction of this EI model: First, previous uses of compounded binomial and beta distributions in the EI literature are reviewed; second, a faster approach to using the model in practice, based on posterior maximization implemented via the ECM algorithm, is proposed and illustrated with an application to a real dataset; finally, limitations regarding the use of marginal posteriors for binomial probabilities as elements of inference (basically, the failure to respect the accounting identity) instead of the predictive densities for the binomial proportions are pointed out, together with suggestions of principles for EI model building in general.

8
Paper
Aggregation Among Binary, Count, and Duration Models
King, Gary
Signorino, Curtis S.
Alt, James E.

Uploaded 08-28-2000
Keywords Duration
event count
binary
renewal process
aggregation
Abstract Binary, count, and duration data all code discrete events occurring at points in time. Although a single data generation process can produce all of these three data types, the statistical literature is not very helpful in providing methods to estimate parameters of the same process from each. In fact, only a single theoretical process exists for which known statistical methods can estimate the same parameters --- and it is generally used only for count and duration data. The result is that seemingly trivial decisions about which level of data to use can have important consequences for substantive interpretations. We describe the theoretical event process for which results exist, based on time-independence. We also derive a set of models for a time-dependent process and compare their predictions to those of a commonly used model. Any hope of understanding and avoiding the more serious problems of aggregation bias in events data is contingent on first deriving a much wider arsenal of statistical models and theoretical processes that are not constrained by the particular forms of data that happen to be available. We discuss these issues and suggest an agenda for political methodologists interested in this very large class of aggregation problems.
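
The time-independent case the abstract describes can be illustrated numerically: for a Poisson process, binary, count, and duration views of the same event stream all identify the same rate. A hedged sketch with simulated data (all parameter values hypothetical):

    import numpy as np

    rng = np.random.default_rng(4)
    lam, window, n = 1.6, 1.0, 20000

    # One Poisson event process viewed three ways.
    counts = rng.poisson(lam * window, size=n)      # count per window
    waits = rng.exponential(1 / lam, size=n)        # duration to an event
    binary = (counts > 0).astype(float)             # any event in window?

    lam_from_counts = counts.mean() / window
    lam_from_waits = 1 / waits.mean()
    lam_from_binary = -np.log(1 - binary.mean()) / window  # P(none)=e^-lam
    print(lam_from_counts, lam_from_waits, lam_from_binary)  # all near 1.6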

9
Paper
A Specification Test for Linear Regressions that use King-Based Ecological Inference Point Estimates as Dependent Variables
Herron, Michael C.
Shotts, Kenneth W.

Uploaded 07-14-2000
Keywords ecological inference
second stage regressions
ordinary least squares
logical consistency
Abstract Many researchers use point estimates produced by the King (1997) ecological inference technique as dependent variables in second stage linear regressions. We show, however, that this two stage procedure is at risk of logical inconsistency. Namely, the assumptions necessary to support the procedure's first stage (ecological inference via King's method) can be incompatible with the assumptions supporting the second (linear regression). We derive a specification test for logical consistency of the two stage procedure and describe options available to a researcher whose ecological dataset fails the test.

10
Paper
Ideology and U.S. Senate Candidates
Burden, Barry C.
Kenny, Christopher B.

Uploaded 04-19-2000
Keywords ideology
measurement
elite surveys
Abstract This paper reports on a pilot study for what will become the Candidate Ideology Survey (CIS). Beginning in 2000, the CIS will survey all major-party House and Senate candidates, asking them to locate themselves on the left-right ideological spectrum. Such an approach improves on existing ideology measures such as those based on roll call votes because it puts both incumbents and challengers on a common scale. Existing studies of congressional elections that include only the ideology of the incumbent in vote models are likely underestimating the importance of ideology generally; the positions of challengers are useful, if not necessary. The paper presents findings from a preliminary survey of senators and Senate challengers in 1998. It explains the unusual elite mail survey methodology used in terms of response rate and representativeness of the sample. It also examines the validity of the data in terms of partisan and regional differences and relationships with existing ideological measures. Among other substantive results, we find that the ideological "fit" of incumbents with constituents is much better than the "fit" of challengers with constituents. By improving on this design and adding the House in the 2000 CIS wave, we hope to generate data that will be of great use to researchers who study congressional elections.

11
Paper
Respondent Uncertainty of Candidate Issue Positions and Its Effects on Estimates of Issue Salience
Glasgow, Garrett

Uploaded 03-23-1999
Keywords issue salience
uncertainty
coefficient bias
spatial models
Abstract (not transcribed)

12
Paper
Learning in Campaigns: A Policy Moderating Model of Individual Contributions to House Candidates
Wand, Jonathan
Mebane, Walter R.

Uploaded 04-18-1999
Keywords FEC
campaign contributions
campaign finance
policy moderation
GLM
generalized linear model
negative binomial
time series
bootstrap
U.S. House of Representatives
1984 election
Abstract We propose a policy moderating model of individual campaign contributions to House campaigns. Based on a model that implies moderating behavior by voters, we hypothesize that individuals use expectations about the Presidential election outcome when deciding whether to donate money to a House candidate. Using daily campaign contributions data drawn from the FEC Itemized Contributions files for 1984, we estimate a generalized linear model for count data with serially correlated errors. We expand on previous empirical applications of this type of model by comparing standard errors derived from a sandwich estimator to confidence intervals produced by a nonparametric bootstrap.
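
A simplified sketch of the estimation strategy described here: a negative binomial GLM plus a nonparametric bootstrap over observations. The paper's serially correlated errors and sandwich estimator are not reproduced; the counts are simulated and the dispersion parameter is fixed for brevity.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n = 500
    X = sm.add_constant(rng.normal(size=n))
    mu = np.exp(0.5 + 0.8 * X[:, 1])
    y = rng.negative_binomial(5, 5 / (5 + mu))   # overdispersed counts

    # Negative binomial GLM (dispersion alpha fixed at 0.2 for brevity).
    fam = sm.families.NegativeBinomial(alpha=0.2)
    fit = sm.GLM(y, X, family=fam).fit()
    print("coef:", fit.params, "model-based SE:", fit.bse)

    # Nonparametric bootstrap: resample observations, refit, take the spread.
    boot = []
    for _ in range(200):
        idx = rng.integers(0, n, n)
        boot.append(sm.GLM(y[idx], X[idx], family=fam).fit().params)
    print("bootstrap SE:", np.asarray(boot).std(axis=0))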

13
Paper
Estimation and Inference by Bayesian Simulation: an on-line resource for social scientists
Jackman, Simon

Uploaded 08-30-1999
Keywords Markov chain Monte Carlo
Bayesian statistics
how-to
BUGS
ordinal probit
time series
Abstract http://tamarama.stanford.edu/mcmc is a Web-based on-line resource for Markov chain Monte Carlo (MCMC), specifically tailored for social scientists. MCMC is probably the most exciting development in statistics in the last ten years. But to date, most applications of MCMC methods are in bio-statistics, making it difficult for social scientists to fully grasp the power of MCMC methods. In providing this on-line resource I aim to overcome this deficiency, helping to put MCMC within the reach of social scientists. The resource comprises: (*) a set of worked examples (*) data and programs (*) links to other relevant web sites (*) notes and papers At the meetings in Atlanta, I will present two of the worked examples, which are part of this document: (*) Cosponsor: computing auxiliary quantities from MCMC output (e.g., percent correctly predicted in a logit/probit model of legislative behavior; cf Herron 1999). (*) Delegation: estimating a time-series model for ordinal data (e.g., changes to the U.S. president's discretionary power in trade policy, 1890-1990; cf Epstein and O'Halloran 1996).
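
For readers who want the flavor of MCMC before visiting the site, here is a minimal random-walk Metropolis sampler for a toy one-parameter posterior; this is a generic illustration, not one of the resource's worked examples, and every setting is hypothetical.

    import numpy as np

    rng = np.random.default_rng(6)
    data = rng.normal(1.0, 2.0, size=50)   # toy data, sigma = 2 known

    def log_post(mu):
        """Log-posterior for mu under a flat prior and N(mu, 4) likelihood."""
        return -0.5 * np.sum((data - mu) ** 2) / 4.0

    chain, mu, lp = [], 0.0, log_post(0.0)
    for _ in range(5000):
        prop = mu + rng.normal(0, 0.5)             # symmetric random walk
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept step
            mu, lp = prop, lp_prop
        chain.append(mu)
    draws = np.array(chain[1000:])                 # discard burn-in
    print(f"posterior mean {draws.mean():.2f}, 95% interval "
          f"[{np.quantile(draws, 0.025):.2f}, {np.quantile(draws, 0.975):.2f}]")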

14
Paper
The Influence of the Initiative Process on Interest Groups and Lobbying Techniques
Boehmke, Frederick

Uploaded 09-22-1999
Keywords Initiative
direct democracy
survey analysis
interest groups
lobbying
selection
bias
Abstract I use survey data on interest groups and their activities drawn from four state populations to test hypotheses about the implications of direct democracy for the characteristics and strategic choices of interest groups. I use this data to test predictions about direct democracy's effect for group populations, confirming previous work (Boehmke 1999b) and extending it by exploring more detailed characteristics such as membership and resources. I then link these characteristics to lobbying techniques to test if the initiative process has an impact at the group level. As expected, groups involved in initiative campaigns tend to accentuate outside lobbying strategies, but even groups not currently involved in initiatives are influenced by the possibility of its use. This is because the initiative process alters the characteristics that can be effectively used when attempting to influence policy. The analysis makes use of a technique to correct for heterogeneous response rates across group types. By gathering information about a high percentage of an additional, smaller sample, I am able to correct for this response rate differential through a weighting procedure. The correction is found to have a substantial effect on the results: its absence would leave the researcher to conclude that the initiative plays little role in state interest group activities. This data will also be used to test and correct for possible sample selection bias.

15
Paper
GEE Models of Judicial Behavior
Zorn, Christopher

Uploaded 04-02-1998
Keywords generalized estimating equations
time-series cross-sectional data
temporal dependence
heterogeneity
judicial decision making
Abstract The assumption of independent observations in judicial decision making flies in the face of our theoretical understanding of the topic. In particular, two characteristics of judicial decision making on collegial courts introduce heterogeneity into successive decisions: individual variation in the extent to which different jurists maintain consistency in their voting behavior over time, and the ability of one judge or justice to influence another in their decisions. This paper addresses these issues by framing judicial behavior in a time-series cross-section context and using the recently developed technique of generalized estimating equations (GEE) to estimate models of that behavior. Because the GEE approach allows for flexible estimation of the conditional correlation matrix within cross-sectional observations, it permits the researcher to explicitly model interjustice influence or over-time dependence in judicial decisions. I utilize this approach to examine two issues in judicial decision making: latent interjustice influence in civil rights and liberties cases during the Burger Court, and temporal consistency in Supreme Court voting in habeas corpus decisions in the postwar era. GEE estimators are shown to be comparable to more conventional pooled and TSCS techniques in estimating variable effects, but have the additional benefit of providing empirical estimates of time- and panel- based heterogeneity in judicial behavior.
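
A compact sketch of the GEE setup on simulated judge-vote data, using statsmodels' GEE with an exchangeable working correlation; the grouping structure, covariate, and effect sizes are all hypothetical.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n_judges, n_cases = 9, 60
    groups = np.repeat(np.arange(n_judges), n_cases)

    # Hypothetical votes with a judge-level shift, which induces
    # within-judge correlation across cases.
    x = rng.normal(size=n_judges * n_cases)
    judge_effect = np.repeat(rng.normal(0, 1.0, n_judges), n_cases)
    p = 1 / (1 + np.exp(-(0.7 * x + judge_effect)))
    y = rng.binomial(1, p)

    # GEE logit with an exchangeable working correlation within judges;
    # reported standard errors are robust (sandwich) by default.
    X = sm.add_constant(x)
    res = sm.GEE(y, X, groups=groups, family=sm.families.Binomial(),
                 cov_struct=sm.cov_struct.Exchangeable()).fit()
    print(res.params, res.bse)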

16
Paper
Is Instrumental Rationality a Universal Phenomenon?
Bennett, D. Scott
Stam, III, Allan C.

Uploaded 04-22-1998
Keywords rational
expected utility
preferences
game theory
Abstract This paper examines whether the expected utility theory of war explains international conflict equally well across all regions and time periods as a way of examining whether instrumental rationality is a universal phenomenon. In the rational choice literature, scholars typically assume that decision-makers are purposive and egoistic, with common preferences across various outcomes. However, critics of the assumption have suggested that preferences and decision structures vary as a function of polity type, culture, and learning among state leaders. There have been few attempts to directly examine this assumption and evaluate whether it seems empirically justified. In this paper we attempt to test the assumption of common instrumental rationality, examining several competing hypotheses about the nature of decision making in international relations and expectations about where and when instrumental rationality should be most readily observable. In particular, we want to explore the effects of regional learning to discover if there is a difference by region and over time in the outbreak of war and the predictions of the expected utility model. We find important differences both over regions and over time in how the predictions of expected utility theory fit actual conflict occurrence.

17
Paper
Aggregate Voting Data and Implied Spatial Voting
Herron, Michael C.

Uploaded 07-15-1998
Keywords spatial voting
aggregate data
ecological inference
micro-foundations
Abstract The paper draws attention to the micro-foundations of aggregate voting data by introducing the concept of an implied spatial voting model. The adjective "implied" refers to the fact that this paper's spatial theory primitives, which describe how individual-level preferences are distributed across and within voting districts, are implied by or derived from aggregate voting data. The key idea proposed here is that, given an observed distribution of aggregate voting data, it is possible to derive features of an individual-level, spatial voting model capable of generating the observed data. Thus, an implied spatial voting model is an inverse image of an observed, aggregate vote share distribution. We provide numerical examples of how spatial voting models can be implied by aggregate voting data and we then analyze aggregate data and National Election Study survey data from the 1980, 1984, and 1988 presidential elections. And, to demonstrate that implied spatial voting models can be calculated from aggregate data alone, we consider presidential elections 1928-1960 and the Chicago mayoral elections of 1983 and 1987. This paper's focus on the micro-foundations of aggregate data highlights the limitations inherent in aggregate data analyses. In particular, the paper discusses identification problems, in part a consequence of the lack of scale and location invariance in preference orderings and in part a consequence of the lack of individual-level information in aggregate data, that affect movement between individual-level theories like spatial voting theory and aggregate voting data.

18
Paper
Rivalry, Reciprocity, and the Dynamics of Presidential-Congressional Institution Building
Krause, George

Uploaded 08-20-1998
Keywords Institution Building
Presidential-Congressional Relations
Prisoner's Dilemma
Nonmyopic Equilibria
Theory of Moves
Johansen Cointegration Procedure
FIML
Vector Error Correction Mechanisms (VECMs)
Innovation Accounting
Abstract A central feature of the development of the presidential and congressional branches has been the process of institution building. This phenomenon represents the size and scope of a branch's formal institutional apparatus that is reflected by the resources it utilizes for operational and functional purposes. In this study, a simple dynamic Prisoner's Dilemma game-theoretic model, based on the Theory of Moves (Brams 1994), is set forth to explain this process. This theoretical model produces two Nonmyopic Equilibria (NMEs): (1) a Contractionary Equilibrium where both the president and Congress expend fewer resources; and (2) an Expansionary Equilibrium where each institution expends greater resources. The theoretical predictions derived from this positive model suggest that variations in the institutional expenditures by each branch will exhibit a stable long-run equilibrium relationship that is consistent with these NMEs. Using constant-dollar annual data on Executive Office of the President and Legislative Branch expenditures for the 1939-1997 period, a Vector Error Correction Mechanism (VECM) model, derived from the game-theoretic model noted above, is employed to empirically account for both short-run and long-run movements as well as long-run equilibrium relations. The statistical evidence supports the predictions of the theoretical model. Specifically, the historical evolution of presidential and congressional institution building represents a conflict situation where neither institution has a permanent advantage over the other due to their equal power and farsightedly rational behavior. Contrary to existing research on this topic, the empirical findings reveal that both presidential and congressional efforts at institution building do not just emanate from within each respective branch, but instead are very responsive to one another with respect to these activities. This, in turn, suggests that the causal (temporal) sequence of institution building stands in stark contrast to the conventional wisdom of an "opportunistic" presidency that exploits Congress.

19
Paper
Nuisance vs. Substance: Specifying and Estimating Time-Series Cross-Section Models
Beck, Nathaniel
Katz, Jonathan

Uploaded 01-01-1995
Keywords Econometrics
Time-series cross-section
GLS
FGLS
EGLS
Parks
Robust standard errors
GLS--ARMA
Abstract In a previous article we showed that ordinary least squares with panel-corrected standard errors is superior to the Parks generalized least squares approach to the estimation of time-series cross-section models. In this article we compare our proposed method to another leading technique, Kmenta's "cross-sectionally heteroskedastic and timewise autocorrelated" model. This estimator uses generalized least squares to correct for both panel heteroskedasticity and temporally correlated errors. We argue that it is best to model dynamics via a lagged dependent variable, rather than via serially correlated errors. The lagged dependent variable approach makes it easier for researchers to examine dynamics and allows for natural generalizations in a manner that the serially correlated errors approach does not. We also show that the generalized least squares correction for panel heteroskedasticity is, in general, no improvement over ordinary least squares and is, in the presence of parameter heterogeneity, inferior to it. In the conclusion we present a unified method for analyzing time-series cross-section data.
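
The panel-corrected standard errors mentioned in the opening sentence can be computed directly; below is a minimal sketch for a balanced panel stacked by unit, with all simulation settings hypothetical. The contemporaneous error covariance is estimated from the OLS residuals and plugged into the sandwich formula.

    import numpy as np

    rng = np.random.default_rng(8)
    N, T, k = 15, 40, 2                    # units, time points, regressors

    # Balanced TSCS panel, stacked by unit, with contemporaneously
    # correlated errors across units.
    X = rng.normal(size=(N * T, k))
    L = np.linalg.cholesky(0.5 * np.eye(N) + 0.5)       # error correlation
    e = (rng.normal(size=(T, N)) @ L.T).reshape(N * T, order="F")
    y = X @ np.array([1.0, -0.5]) + e

    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y                               # OLS estimates
    resid = (y - X @ b).reshape(T, N, order="F")        # T x N residuals

    # PCSE: estimated N x N contemporaneous covariance into the sandwich.
    Sigma = resid.T @ resid / T
    Omega = np.kron(Sigma, np.eye(T))                   # unit-major blocks
    V = XtX_inv @ X.T @ Omega @ X @ XtX_inv
    print("OLS coefs:", b, " PCSEs:", np.sqrt(np.diag(V)))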

20
Paper
Aggregate Economic Conditions and Individual Forecasts: A Multilevel Model of Economic Expectations
Jones, Bradford S.
Haller, H. Brandon

Uploaded 00-00-0000
Keywords random coefficient modeling
multilevel analysis
hierarchical linear models
Abstract To what extent are individual economic expectations related to actual economic conditions? This is the central question examined in this paper. Surprisingly, little research exists examining how economic expectations are formed. Moreover, even less research has been done examining the interaction between the state of the national economy and individual forecasts. Most research addressing expectation formation has resided at the aggregate level. In this paper, we utilize the methodology of random coefficient models to explore the linkage between individuals and the macroeconomic environment. We conceptualize individuals as being "nested" within time periods. Individual forecasts are treated as contextually conditioned by the state of the economy. We find evidence that aggregate economic indicators do influence the parameters predicting economic expectations. Furthermore, the relationship between the macroeconomy and individual expectations provides strong support for Katona's (1972, 1975) notion of "psychological economics." We find that individual forecasts of the future are "brighter" when aggregate economic conditions are "darkest." Additionally, we find that individuals tend to rely less on retrospective evaluations of the economy when the economy is faring poorly.
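
A random-coefficient specification of this kind can be sketched with statsmodels' MixedLM: individuals nested in periods, with the slope on retrospective evaluations allowed to vary by period. Variable names and effect sizes below are invented for illustration.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(9)
    n_periods, n_per = 30, 50

    # Individuals "nested" in survey periods; the slope linking
    # retrospective evaluations to expectations varies by period.
    period = np.repeat(np.arange(n_periods), n_per)
    retro = rng.normal(size=n_periods * n_per)
    slope = 0.6 + rng.normal(0, 0.2, n_periods)
    expect = slope[period] * retro + rng.normal(0, 1, n_periods * n_per)

    df = pd.DataFrame({"expect": expect, "retro": retro, "period": period})
    # Random intercept and random slope on retro, grouped by period.
    fit = smf.mixedlm("expect ~ retro", df, groups="period",
                      re_formula="~retro").fit()
    print(fit.summary())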

21
Paper
Polarization and Political Violence
Penubarti, Mohan
Asea, Patrick

Uploaded 07-12-1996
Keywords polarization
political violence
extreme bounds analysis
Abstract We explore the implications of a new notion of inequality --- polarization --- for the incidence and level of political violence. A society is said to be polarized when its members can be classified into different clusters, with each cluster being similar in terms of the attributes of its members (intra--group homogeneity) but with different clusters having members with dissimilar attributes (inter--group heterogeneity). The notion of polarization provides an important conceptual breakthrough in understanding inequality in societies because a society may be facing a decrease (increase) in inequality while at the same time experiencing an increase (decrease) in polarization. We conduct empirical analysis on a large sample of countries to demonstrate the positive link between polarization and political violence. In contrast, traditional measures of inequality perform poorly with the introduction of polarization in the model specification. Additionally, we conduct global sensitivity analysis to explore the robustness of the polarization measure to reasonable changes in the conditioning information set.
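
The abstract does not give its exact polarization formula; a standard formalization is the Esteban and Ray (1994) index, sketched below, which shows how polarization can diverge from simple dispersion. Group shares and positions are hypothetical.

    import numpy as np

    def er_polarization(shares, positions, alpha=1.0):
        """Esteban-Ray (1994) polarization:
        sum_i sum_j p_i^(1+alpha) * p_j * |y_i - y_j|.
        alpha > 0 rewards intra-group homogeneity, which is why
        polarization and inequality can move in opposite directions."""
        p = np.asarray(shares, float)
        y = np.asarray(positions, float)
        return float(np.sum(p[:, None] ** (1 + alpha) * p[None, :]
                            * np.abs(y[:, None] - y[None, :])))

    # Two large clusters far apart: high polarization.
    print(er_polarization([0.5, 0.5], [1.0, 3.0]))            # 0.5
    # Similar range smeared over many small groups: lower polarization.
    print(er_polarization([0.25] * 4, [1.0, 1.5, 2.5, 3.0]))  # about 0.22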

22
Paper
The Diffusion of Democracy, 1946-1994
O'Loughlin, John
Ward, Michael D.
Lofdahl, Corey L.
Cohen, Jordin S.
Brown, David S.
Reilly, David
Gleditsch, Kristian S.
Shin, Michael E.

Uploaded 11-12-1997
Keywords Spatial diffusion
exploratory spatial data analysis
spatial statistics
regional effects
democracy
measures of democracy
space-time autocorrelation
Abstract Research to date on democratization neglects the interconnections between temporal and spatial components that influence this process. This article presents research that reveals the relationship between the temporal and spatial aspects of democratic diffusion in the world-system since 1946. We provide strong and consistent evidence of temporal cascading of democratic and autocratic trends as well as strong spatial association (or autocorrelation) of authority structures. The analysis uses an exploratory data approach in a longitudinal framework to understand global and regional trends in democratization. Our work also reveals discrete changes in regimes that run counter to the dominant aggregate trends of democratic waves or sequences. We demonstrate how the ebb and flow of democracy varies between the world's regions. We conclude that further modeling of the process of regime change from autocracy to democracy as well as reversals should start from a "domain-specific" position that disaggregates the globe into its regional mosaics.
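
Spatial association of the kind reported here is commonly summarized with Moran's I; a minimal sketch on a hypothetical ring of neighboring countries with smoothly varying scores:

    import numpy as np

    def morans_i(x, W):
        """Moran's I spatial autocorrelation: (n / S0) * (z'Wz) / (z'z)."""
        z = x - x.mean()
        return len(x) / W.sum() * (z @ W @ z) / (z @ z)

    # Hypothetical democracy scores on a ring of 10 countries, each
    # bordering its two neighbors; nearby values are made similar.
    n = 10
    W = np.zeros((n, n))
    for i in range(n):
        W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0  # contiguity weights
    scores = np.sin(np.linspace(0, 2 * np.pi, n, endpoint=False))
    print(f"Moran's I = {morans_i(scores, W):.2f}")  # strongly positive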

23
Paper
Early Warning of Conflict in Southern Lebanon using Hidden Markov Models
Schrodt, Philip A.

Uploaded 08-24-1997
Keywords hidden Markov models
event data
early warning
international crisis
sequence analysis
Middle East
WEIS
BCOW
Abstract This paper extends earlier work on the application of hidden Markov models (HMMs) to the problem of forecasting international conflict. HMMs are a sequence comparison method widely used in computerized speech recognition as a computationally efficient method of generalizing a set of sequences observed in a noisy environment. The technique is easily adapted to work with sequences of international event data. The paper provides a theoretical "micro-foundation" for the use of sequence comparison in conflict early warning based on coadaptation of organizational standard operating procedures. The left-right (LR) HMM used in speech recognition is first extended to a left-right-left (LRL) model that allows a crisis to escalate and de-escalate. This model is tested for its ability to correctly discriminate between BCOW crises that do and do not involve war. The LRL model provides slightly more accurate classification than the LR model. The interpretation of the hidden states in the LRL models, however, is more ambiguous than in the LR model. The HMM is then applied to the problem of forecasting the outbreak of armed violence between Israel and Arab forces in south Lebanon during the period 1979 to 1997 (excluding 1982-1985). An HMM is estimated using six cases of "tit-for-tat" escalation, then fitted to the entire time period. The model identifies about half of the TFT conflicts, including all of the training cases, that occur in the full sequence, with only one false positive. This result suggests that HMMs could be used in an event-based monitoring system. However, the fit of the model is very sensitive to the number of days in a sequence when no events occurred, and consequently the fit measure is ineffective as an early warning indicator. Nonetheless, in a subset of models, the maximum likelihood estimate of the sequence of hidden Markov states provides a robust early warning indicator with a three- to six-month lead. These models are valid in a split-sample test, and the patterns of cross-correlation of the individual states of the model are consistent with the theoretical expectations. While this approach clearly needs further validation, it appears promising. The paper concludes with observations on the extent to which the HMM approach can be generalized to other categories of conflict, some suggestions on how the method of estimation can be improved, and the implications that sequence-based forecasting techniques have for theories of the causes of conflict.
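
The core HMM computation, the forward-algorithm likelihood of an observed event sequence, can be sketched compactly; the two-state model below is a toy stand-in for the LR/LRL models, and every probability is hypothetical.

    import numpy as np

    def forward(obs, pi, A, B):
        """Scaled HMM forward algorithm: log-likelihood of a sequence.
        pi: initial state probs; A[i, j]: i -> j transition; B[i, k]:
        probability that state i emits event category k."""
        alpha = pi * B[:, obs[0]]
        s = alpha.sum()
        loglik = np.log(s)
        alpha /= s
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
            s = alpha.sum()
            loglik += np.log(s)
            alpha /= s
        return loglik

    # Two hidden phases (calm, escalating) over event categories
    # 0 = cooperation, 1 = verbal conflict, 2 = violence; all hypothetical.
    pi = np.array([0.9, 0.1])
    A = np.array([[0.95, 0.05],
                  [0.10, 0.90]])
    B = np.array([[0.70, 0.25, 0.05],
                  [0.10, 0.40, 0.50]])
    print(forward(np.array([0, 0, 1, 0, 0, 0, 1]), pi, A, B))  # calm week
    print(forward(np.array([1, 2, 2, 1, 2, 2, 2]), pi, A, B))  # escalation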

24
Paper
Minority Representation in Multi-member Districts
Gerber, Elisabeth R.
Morton, Becky
Rietz, Thomas

Uploaded 08-13-1997
Keywords cumulative voting
multi-member districts
minority representation
laboratory elections
Abstract We present a theoretical and experimental examination of cumulative voting versus straight (non-cumulative) voting in multi-member district elections. Cumulative voting has been proposed as a method for increasing minority representation. Given the recent court rulings against racial gerrymandering to achieve minority representation in single-member districts, the effect of multi- member district elections on minority representation is an important issue. We present a model of voting in double-member district elections with two majority candidates and one minority candidate and consider the voting equilibria under the two voting systems. In straight voting, we find that while an equilibrium always exists where the two majority candidates are expected to win the two seats, equilibria also exist where minority candidates may be elected. In cumulative voting, we find that equilibrium minority candidate wins are also possible but are less likely when minority voters prefer one majority candidate over another. We then describe experimental evidence on voting behavior and outcomes in straight and cumulative voting elections. We find that minority candidates win significantly more seats in cumulative than in straight voting elections, as predicted, but win fewer elections when minority voters prefer one majority candidate over another.

25
Paper
A Theory of Nonseparable Preferences in Survey Responses
Lacy, Dean

Uploaded 07-11-1997
Keywords nonseparable-preferences
framing
experiments
question-order-effects
Abstract This paper presents a model of individual-level responses to issue questions in public opinion surveys when respondents have nonseparable preferences. The model implies two results: responses will change depending on the order of questions and vary over time. Each of these conclusions is consistent with empirical findings that are often cited to support the argument that people are irrational or lack fixed and well-formed preferences. Results from an experiment reveal that question-order effects occur on issues for which people have nonseparable preferences, and order effects do not occur on issues for which most people have separable preferences.

26
Paper
Partisan and Ideological Trends: Causality and Sophistication in the Electorate
Box-Steffensmeier, Janet M.
De Boef, Suzanna

Uploaded 04-01-1997
Keywords political sophistication
macropartisanship
macroideology
ARFIMA models
Granger causality
Abstract Studying the liberal-conservative dimensions of political competition deserves high-priority to gain insight into political change, as emphasized by Eisinga, Franses, and Ooms (1997, 3) and Inglehart and Klingemann (1976, 272). Yet the relationship between ideological and partisan movements in the American electorate has largely gone uninvestigated (but see Box-Steffensmeier, Knight, and Sigelman 1996). We investigate the relationship between trends in macropartisanship and macroideology for more and less politically sophisticated adults. We argue that only a portion of the electorate is involved with and attentive to the political environment, able to organize political debate in terms of liberal and conservative referents, and in turn, can link their ideological and partisan identifications. Using CBS and New York Times survey data on partisanship and ideology we find a causal relationship between ideology and partisanship only for the more politically sophisticated respondents. Mutual causality at both short and long lags characterizes the relationship between partisan and ideological change for adults with education beyond high school. In addition to the increased level of political sophistication that characterizes those for whom the series are linked, these respondents are more likely, by wide margins, to have claimed to have voted than less sophisticated respondents. Thus, any linkage has political implications. The incentives for politicians to link popular ideological sentiment with partisanship are strong. The people who put them in office (or kick them out) are the same folks who connect ideology and partisanship and who pay attention to politics.
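
Granger-causality testing of the kind used here is available in statsmodels; a sketch on simulated series in which ideology leads partisanship by one period (series names and coefficients are invented):

    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(10)
    n = 300

    # Hypothetical monthly series in which ideology leads partisanship.
    ideology = np.zeros(n)
    partisanship = np.zeros(n)
    for t in range(1, n):
        ideology[t] = 0.8 * ideology[t - 1] + rng.normal()
        partisanship[t] = (0.6 * partisanship[t - 1]
                           + 0.4 * ideology[t - 1] + rng.normal())

    # Tests whether the second column Granger-causes the first.
    data = np.column_stack([partisanship, ideology])
    res = grangercausalitytests(data, maxlag=2, verbose=False)
    print(res[1][0]["ssr_ftest"])   # (F statistic, p-value, df_denom, df_num)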

27
Paper
Methodology as ideology: mathematical modeling of trench warfare
Gelman, Andrew

Uploaded 01-26-2005
Keywords cooperation
First World War
game theory
prisoner's dilemma
Abstract The Evolution of Cooperation, by Axelrod (1984), is a highly influential study that identifies the benefits of cooperative strategies in the iterated prisoner’s dilemma. We argue that the most extensive historical analysis in the book, a study of cooperative behavior in First World War trenches, is in error. Contrary to Axelrod’s claims, soldiers on the Western Front were not generally in a prisoner’s dilemma (iterated or otherwise), and their cooperative behavior can be explained much more parsimoniously as immediately reducing their risks. We discuss the political implications of this misapplication of game theory.
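
The paper's claim can be restated as a check on payoff orderings: a game is a prisoner's dilemma only if temptation exceeds reward (plus the usual side conditions). A sketch with hypothetical payoffs:

    # All payoff numbers are hypothetical illustrations.
    def is_prisoners_dilemma(T, R, P, S):
        """Standard PD conditions: temptation > reward > punishment >
        sucker, plus 2R > T + S so mutual cooperation beats alternating
        exploitation."""
        return T > R > P > S and 2 * R > T + S

    # Axelrod-style payoffs: a genuine PD, so cooperation needs repetition.
    print(is_prisoners_dilemma(T=5, R=3, P=1, S=0))   # True

    # The paper's point: if firing raises a soldier's own immediate risk,
    # temptation falls below reward and no dilemma exists at all.
    print(is_prisoners_dilemma(T=2, R=3, P=1, S=0))   # False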

28
Paper
Bridging Institutions and Time: Creating Comparable Preference Estimates for Presidents, Senators, Representatives and Justices, 1950-2002
Bailey, Michael

Uploaded 07-19-2005
Keywords ideal point estimation
Supreme Court
Congress
Abstract Difficulty in comparing preferences across time and institutional contexts hinders the empirical testing of many important theories in political science. In this paper, I characterize these difficulties and provide a measurement approach that relies on inter-temporal and inter-institutional "bridge" observations and Bayesian Markov chain simulation methods. I generate preference estimates for Presidents, Senators, Representatives and Supreme Court Justices that are comparable across time and across institutions. Such preference estimates are indispensable in a variety of important research projects, including research on statutory interpretation, executive influence on the Supreme Court and Senate influence on court appointments.

29
Paper
The Dangers of Extreme Counterfactuals
King, Gary
Zeng, Langche

Uploaded 07-18-2005
Keywords propensity score
extrapolation
counterfactual
convex hull
distance
model dependence
Abstract We address the problem that occurs when inferences about counterfactuals -- predictions, "what if" questions, and causal effects -- are attempted far from the available data. The danger of these extreme counterfactuals is that substantive conclusions drawn from statistical models that fit the data well turn out to be based largely on speculation hidden in convenient modeling assumptions that few would be willing to defend. Yet existing statistical strategies provide few reliable means of identifying extreme counterfactuals. We offer a proof that inferences farther from the data are more model-dependent, and then develop easy-to-apply methods to evaluate how model-dependent our answers would be to specified counterfactuals. These methods require neither sensitivity testing over specified classes of models nor evaluating any specific modeling assumptions. If an analysis fails the simple tests we offer, then we know that substantive results are sensitive to at least some modeling choices that are not based on empirical evidence. The most recent version of this paper and software that implements the methods described is available at http://gking.harvard.edu.
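
The convex-hull check that the keywords point to can be implemented as a small linear program: a counterfactual point lies in the hull of the observed covariates iff it is a convex combination of them. A sketch (not the authors' software, which is available at the URL above); the data are simulated.

    import numpy as np
    from scipy.optimize import linprog

    def in_convex_hull(point, data):
        """`point` is in the convex hull of the rows of `data` iff
        point = data' w with w >= 0 and sum(w) = 1 is feasible."""
        n = data.shape[0]
        A_eq = np.vstack([data.T, np.ones(n)])
        b_eq = np.append(point, 1.0)
        res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * n, method="highs")
        return res.success

    rng = np.random.default_rng(11)
    X = rng.normal(size=(100, 3))               # observed covariate profiles
    print(in_convex_hull(X.mean(axis=0), X))    # True: interpolation
    print(in_convex_hull(np.full(3, 10.0), X))  # False: extrapolation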

30
Paper
Scaling regression inputs by dividing by two standard deviations
Gelman, Andrew

Uploaded 06-10-2006
Keywords regression
standardization
$z$-score
Abstract Interpretation of regression coefficients is sensitive to the scale of the inputs. One method often used to place input variables on a common scale is to divide each variable by its standard deviation. Here we propose dividing each variable by {\em two} standard deviations, so that the generic comparison is with inputs equal to the mean $\pm 1$ standard deviation. The resulting coefficients are then directly comparable for untransformed binary predictors. We have implemented the procedure as a function in R. We illustrate the method with a simple public-opinion analysis that is typical of regressions in social science.
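
The paper reports an R implementation; an equivalent Python sketch of the rescaling, with hypothetical inputs, is below.

    import numpy as np

    def standardize_2sd(x):
        """Center and divide by two standard deviations, so a one-unit
        change runs from one SD below the mean to one SD above it."""
        x = np.asarray(x, float)
        return (x - x.mean()) / (2 * x.std())

    age = np.array([23, 35, 47, 51, 62, 70], float)  # hypothetical input
    female = np.array([0, 1, 1, 0, 1, 0], float)     # binary: left as is
    print(standardize_2sd(age))  # now comparable in scale to `female`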

31
Paper
Expressive Bayesian Voters, their Turnout Decisions, and Double Probit
Achen, Christopher

Uploaded 07-17-2006
Keywords turnout
expressive
Bayesian
probit
scobit
EITM
Abstract Voting is an expressive act. Since people are not born wanting to express themselves politically, the desire to vote must be acquired, either by learning about the candidates, by using party identification as a cognitive shortcut, or by contact from a trusted source. Modeled as Bayesian updating, this simple explanatory framework has dramatic implications for the understanding of voter turnout. It mathematically implies the main empirical generalizations familiar from the literature, it predicts hitherto unnoticed patterns that appear in turnout data, it provides a better fitting statistical model (double probit) for sample surveys of turnout, and it allows researchers to forecast turnout patterns in new elections when circumstances change. Thus the case is strengthened for the Bayesian voter model as a central organizing principle for public opinion and voting behavior.

32
Paper
Negative Results in Social Science
Lehrer, David
Leschke, Janine
Lhachimi, Stefan
Vasiliu, Ana
Weiffen, Brigitte

Uploaded 11-11-2006
Keywords methodology
negative results
philosophy of science
publication bias
Abstract Do academic publication standards reflect or determine research results? The article proposes minimal criteria for distinguishing useful ‘unpublishable’ results from low-quality research, and argues that the virtues of negative results have been overlooked. We consider the fate these results have suffered thus far, review arguments for and against their publication, and introduce a new initiative—a journal to disseminate negative results and advance debate on their recognition and use.

33
Paper
Strategic Interaction and Interstate Crises: A Fixed-Effects Bayesian Quantal Response Estimator for Incomplete Information Games
Esarey, Justin
Mukherjee, Subhanan
Moore, Will

Uploaded 07-12-2007
Keywords fixed effects
quantal response
crisis bargaining
EITM
Abstract A growing literature has laid out two strategies for properly testing the hypotheses implied by a theory of strategic interaction. The first strategy focuses on conventional comparative statics and the proper specification of standard statistical models (OLS, logit or probit). The second strategy requires deriving a novel likelihood function directly from the model or theory and estimating the parameters with maximum likelihood or Bayesian methods. Both approaches have largely limited their attention to games of perfect information, though many important phenomena are studied using games of incomplete information. This study develops a statistical model for incomplete information games that we term the Fixed Effects Bayesian Quantal Response Model. Our FE-BQRE model, which lies in the domain of the second strategy, offers three advantages over existing efforts: it directly incorporates (i) Bayesian updating and (ii) signaling dynamics, and (iii) it mimics the temporal learning process that we believe takes place in international politics.

34
Paper
Estimating Party Policy Positions with Uncertainty Based on Manifesto Codings
Benoit, Kenneth
Laver, Michael
Mikhaylov, Slava

Uploaded 08-21-2007
Keywords Comparative Manifesto Project
Mapping party positions
party policy
error estimates
measurement error
Abstract Spatial models of party competition are central to modern political science. Before we can elaborate such models empirically, we need reliable and valid measurements of agents' positions on salient policy dimensions. The primary empirical time series of estimated party positions in many countries derives from the content analysis of party manifestos by the Comparative Manifesto Project (CMP). Despite widespread use of the CMP data, and despite the fact that estimates in these data arise from documents coded once, and once only, by a single human researcher, the level of error in the CMP estimates has never been estimated or even fully characterized. This greatly undermines the value of the CMP dataset as a scientific resource. It is in many ways remarkable that so much has been published in the best professional journals using data that almost certainly has substantial, but completely uncharacterized, error. We remedy this situation. We outline the process of generating CMP document codings and positional estimates. Error in this process arises, not only from the obvious source of coder unreliability, but also from fundamental variability in the stochastic process by which latent party positions are translated into observable manifesto texts. Using the quasi-sentence codings from the CMP project, we reproduce the error-generating process by simulating coder unreliability and bootstrapping analyses of coded quasi-sentences to reproduce both forms of error. Using our estimates of these errors, we suggest and demonstrate ways to correct otherwise biased inferences derived from statistical analyses of the CMP data.

35
Paper
Why we (usually) don't have to worry about multiple comparisons
Gelman, Andrew
Hill, Jennifer
Yajima, Masanao

Uploaded 06-01-2008
Keywords Bayesian inference
hierarchical modeling
multiple comparisons
type S error
statistical significance
Abstract The problem of multiple comparisons can disappear when viewed from a Bayesian perspective. We propose building multilevel models in the settings where multiple comparisons arise. These address the multiple comparisons problem and also yield more efficient estimates, especially in settings with low group-level variation, which is where multiple comparisons are a particular concern. Multilevel models perform partial pooling (shifting estimates toward each other), whereas classical procedures typically keep the centers of intervals stationary, adjusting for multiple comparisons by making the intervals wider (or, equivalently, adjusting the p-values corresponding to intervals of fixed width). Multilevel estimates make comparisons more conservative, in the sense that intervals for comparisons are more likely to include zero; as a result, those comparisons that are made with confidence are more likely to be valid.
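
The partial-pooling mechanism can be sketched in the simplest normal-normal case, with the group-level and data-level variances treated as known for illustration; all settings are hypothetical.

    import numpy as np

    rng = np.random.default_rng(12)
    J, n_j, tau, sigma = 8, 20, 0.3, 1.0

    # J group effects with modest group-level variation tau.
    theta = rng.normal(0, tau, J)
    ybar = theta + rng.normal(0, sigma / np.sqrt(n_j), J)  # group means

    # Partial pooling: shrink each group mean toward the grand mean by a
    # factor reflecting the noise of the group estimate relative to tau.
    shrink = tau**2 / (tau**2 + sigma**2 / n_j)
    pooled = ybar.mean() + shrink * (ybar - ybar.mean())

    # Comparisons among the shrunken estimates are more conservative:
    # pairwise differences are pulled toward zero rather than widened.
    print("raw group means: ", np.round(ybar, 2))
    print("partially pooled:", np.round(pooled, 2))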

36
Paper
Registration and Voting under Rational Expectations
Achen, Christopher

Uploaded 07-07-2008
Keywords turnout
registration
Heckman
Dubin-Rivers
expectations
Abstract Alone among modern democracies, the United States makes voter registration a personal responsibility rather than a governmental function. In almost all states, registration deadlines occur well before elections. Failure to register by the deadline makes the probability of voting exactly zero. This sequential feature of the registration and voting decisions has been skipped over by most researchers, who simply ignore registration. Others, notably Timpone (1998), have used the seemingly appropriate Heckman-style selection model, but have arrived at findings difficult to believe. This paper investigates the appropriate choice of a registration model under a rational expectations assumption about the desire to vote, showing that, rather surprisingly, conventional selection models will generally perform less well than ignoring the selection effect of registration entirely. However, neither is quite correct. Finally, the paper proposes and tests a flexible model for registration as a step toward substantively appropriate joint modeling of registration and voting.

37
Paper
The Persuasive Effects of Direct Mail: A Regression Discontinuity Approach
Meredith, Marc
Kessler, Daniel
Gerber, Alan

Uploaded 07-21-2008
Keywords regression discontinuity
direct mail
persuasion
turnout
Abstract During the contest for Kansas attorney general in 2006, an organization sent out 6 pieces of mail criticizing the incumbent's conduct in office. We exploit a discontinuity in the rule used to select which households received the mailings to identify the causal effect of mail on vote choice and voter turnout. We find these mailings had both a statistically and politically significant effect on the challenger's vote share. Our estimates suggest that a ten percentage point increase in the amount of mail sent to a precinct increased the challenger's vote share by approximately three percentage points. Furthermore, our results suggest that the mechanism for this increase was persuasion rather than mobilization.
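
A standard local linear regression-discontinuity estimate can be sketched as follows; this is a generic illustration, not the authors' specification, and the data, bandwidth, and effect size are invented.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(13)
    n = 4000
    score = rng.uniform(-1, 1, n)           # running variable, cutoff at 0
    mail = (score >= 0).astype(float)       # side of the cutoff gets mail
    y = 0.5 * score + 0.03 * mail + rng.normal(0, 0.1, n)  # vote share

    # Local linear RD: separate slopes on each side within a bandwidth.
    h = 0.25
    m = np.abs(score) < h
    X = np.column_stack([np.ones(m.sum()), mail[m], score[m],
                         mail[m] * score[m]])
    fit = sm.OLS(y[m], X).fit(cov_type="HC1")
    print(f"effect at the cutoff: {fit.params[1]:.3f} (SE {fit.bse[1]:.3f})")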

38
Paper
Just Plain Data Analysis: Common Statistical Fallacies in Analyses of Social Indicator Data
Klass, Gary

Uploaded 09-17-2008
Keywords Teaching
statistical fallacies
social indicators
Abstract This paper presents a short summary of the most common statistical fallacies found in public debates employing social indicator data as the evidentiary premises of arguments about politics and public affairs. The purpose is to offer students a convenient framework for evaluating, and developing their own, arguments relying on social indicator data.

39
Paper
Opium for the Masses: How Foreign Media Can Stabilize Authoritarian Regimes
Kern, Holger
Hainmueller, Jens

Uploaded 04-11-2007
Keywords instrumental variables
causal inference
local average response function
LATE
media effects
East Germany
democratization
regime legitimacy
Abstract In this case study of the impact of West German television on public support for the East German communist regime, we evaluate the conventional wisdom in the democratization literature that foreign mass media undermine authoritarian rule. We exploit formerly classified survey data and a natural experiment to identify the effect of foreign media exposure using instrumental variable estimators. Contrary to conventional wisdom, East Germans exposed to West German television were more satisfied with life in East Germany and more supportive of the East German regime. To explain this surprising finding, we show that East Germans used West German television primarily as a source of entertainment. Behavioral data on regional patterns in exit visa applications and archival evidence on the reaction of the East German regime to the availability of West German television corroborate this result.
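
The instrumental-variable logic can be sketched as just-identified two-stage least squares on simulated data, where an exogenous signal-strength instrument shifts exposure but affects support only through it; all names and coefficients are hypothetical.

    import numpy as np

    rng = np.random.default_rng(14)
    n = 5000
    signal = rng.normal(size=n)       # instrument: broadcast signal strength
    u = rng.normal(size=n)            # unobserved confounder
    exposure = 0.8 * signal + 0.6 * u + rng.normal(size=n)
    support = 0.5 * exposure - 1.0 * u + rng.normal(size=n)

    Z = np.column_stack([np.ones(n), signal])
    X = np.column_stack([np.ones(n), exposure])

    # OLS is biased by u; just-identified 2SLS is beta = (Z'X)^-1 Z'y.
    b_ols = np.linalg.lstsq(X, support, rcond=None)[0]
    b_iv = np.linalg.solve(Z.T @ X, Z.T @ support)
    print(f"OLS: {b_ols[1]:.2f}   IV: {b_iv[1]:.2f}   (truth: 0.50)")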

40
Paper
A Comparison of the Small-Sample Properties of Several Estimators for Spatial-Lag Count Models
Hays, Jude
Franzese, Robert

Uploaded 07-22-2009
Keywords Interdependence
Spatial Econometrics
Spatial-Lag Models
Count Data
Poisson
Nonlinear Least-Squares
GMM Estimation
Abstract Political scientists frequently encounter and analyze spatially interdependent count data. Applications include counts of coups in African countries, of state participation in militarized interstate disputes, and of bills sponsored by members of Congress, to name just a few. The extant empirical models for spatially interdependent counts and their corresponding estimators are, unfortunately, dauntingly complex, computationally costly, or both. They also generally tend 1) to treat spatial dependence as a nuisance, 2) to stress spatial-error or spatial-heterogeneity models over spatial-lag models, and 3) to treat all observed spatial association as arising from one undifferentiated source. Prominent examples include the Winsorized count model of Kaiser and Cressie (1997) and Griffith's spatially-filtered Poisson model (2002, 2003). Given the available options, the default approaches in most applied political-science research are either to ignore spatial interdependence in count variables or to use spatially-lagged observed-counts as exogenous regressors, either of which leads to inconsistent estimates of causal relationships. We develop alternative nonlinear least-squares and method-of-moments estimators for the spatial-lag Poisson model that are consistent. We evaluate by Monte Carlo simulation the small-sample performance of these relatively simple estimators against the naive alternatives of current practice. Our results indicate substantial consistency improvements against minimal complexity and computational costs. We illustrate the model and estimators with an analysis of terrorist incidents around the world.

41
Paper
Misspecification and the Propensity Score: When to Leave Out Relevant Pre-Treatment Variables
Clarke, Kevin A.
Kenkel, Brenton
Rueda, Miguel

Uploaded 07-14-2010
Keywords matching
propensity scores
conditioning
omitted variable bias
Abstract The popularity of propensity score matching has given rise to a robust, albeit informal, debate concerning the number of pre-treatment variables that should be included in the propensity score. The standard practice is to include all available pre-treatment variables in the propensity score. We demonstrate that this approach is not always optimal for the goal of reducing bias in the estimation of a treatment effect. We characterize conditions under which including an additional relevant variable in a propensity score increases the bias in the estimated effect of interest across a variety of implementations of the propensity score methodology. We find that matching within propensity score calipers is slightly more robust against such bias than other common methods.
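
The abstract's comparison includes matching within propensity-score calipers. A minimal sketch of that implementation (1:1 nearest-neighbor matching with replacement, logistic propensity score) follows; the caliper value and variable names are illustrative.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def caliper_match_att(X, treat, y, caliper=0.05):
        """Estimate the ATT by matching each treated unit to its nearest
        control on the estimated propensity score, discarding matches whose
        scores differ by more than the caliper."""
        ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
        controls = np.where(treat == 0)[0]
        diffs = []
        for i in np.where(treat == 1)[0]:
            j = controls[np.argmin(np.abs(ps[controls] - ps[i]))]
            if abs(ps[j] - ps[i]) <= caliper:
                diffs.append(y[i] - y[j])
        return np.mean(diffs) if diffs else float("nan")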

42
Paper
Properties of Ideal-Point Estimators
Tahk, Alexander

Uploaded 07-20-2010
Keywords ideal points
ideal-point estimation
consistency
Quinn conjecture
optimal classification
Abstract Although ideal-point estimation has become relatively commonplace in political science, fairly little is known about the properties of these estimators. Two of the most common estimators (NOMINATE and the Bayesian approach of Clinton, Jackman, and Rivers) suffer from the incidental parameters problem, implying that standard results about the consistency of maximum-likelihood and Bayes estimators do not apply. Thus, despite their widespread use, these estimators are not known to be consistent and may lead to erroneous results even in very large samples. This paper provides several theoretical results regarding ideal-point estimation. First, it demonstrates a counterexample to the consistency of common ideal-point estimators, even with regard to the rank of the ideal points. It then presents a simple estimator of the rank of unidimensional ideal points that is inefficient but consistent under a generalization of most common ideal-point models.

43
Paper
The Draw Index: A Measure of Competition for Winner-Take-All Elections
Rose, Douglas

Uploaded 08-26-2010
Keywords electoral
percentage
outcome
competition
Draw
change
measure
index
tie
winner-take-all
fractionalization
winner
vote
closeness
Abstract Winner's percentage of the vote cast, a common measure of electoral competition in winner-take-all elections, captures the shift in vote shares required to change the election outcome, and it treats shifts from higher- to lower-ranking candidates equally. A related measure, the Draw Index, more directly measures the vote shifting needed to produce tied, and then changed, outcomes, with weighting by the preceding ties. A final adjustment, yielding the Draw Plus measure, places greater emphasis on the first change in outcome. Across all cases of five or fewer candidates, five measures of competition, including closeness and Rae's Index of Fractionalization, are highly correlated. Methods of expanding these measures to include non-voters or voters with no preference are explored in an appendix.
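
The Draw Index itself is defined in the paper and is not reproduced here, but two of the comparison measures the abstract names are standard and easy to compute, as sketched below.

    import numpy as np

    def winners_percentage(shares):
        """Winner's share of the vote cast; lower values mean a closer race."""
        return float(np.max(shares))

    def rae_fractionalization(shares):
        """Rae's index of fractionalization: 1 minus the sum of squared vote
        shares; higher values mean a more fragmented field."""
        shares = np.asarray(shares, dtype=float)
        return 1.0 - float(np.sum(shares ** 2))

    # e.g. rae_fractionalization([0.48, 0.45, 0.07]) -> 0.5622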

44
Paper
Racing Horses: Constructing and Evaluating Forecasts in Political Science
Brandt, Patrick
Freeman, John R.
Schrodt, Philip

Uploaded 07-27-2011
Keywords forecasting
political conflict
scoring rules
model training
forecast density
verification rank histogram
probability integral transform
Abstract We review methods for forecast evaluation and how they can be used in political science. We show that forecast densities are more useful summaries of forecasted variables than point metrics. We also cover how continuous ranked probability scores, probability integral transforms, and verification rank histograms can be used to calibrate and evaluate forecast performance. Finally, we present two illustrations: one a simulation, the other a comparison of forecasting models for the China-Taiwan (cross-strait) conflict.
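
Of the tools the abstract lists, the probability integral transform is the simplest to illustrate: evaluate each forecast's CDF at the realized outcome and check the resulting values for uniformity. The sketch assumes Gaussian predictive densities, an illustrative choice rather than the paper's.

    import numpy as np
    from scipy import stats

    def pit_values(y_obs, means, sds):
        """Probability integral transform for Gaussian forecast densities.
        If the forecasts are well calibrated, these values are approximately
        Uniform(0, 1), so their histogram should be flat."""
        return stats.norm.cdf(y_obs, loc=means, scale=sds)

    # a U-shaped histogram of pit_values(...) suggests underdispersed
    # forecasts; a hump-shaped one suggests overdispersed forecasts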

45
Paper
Validation: What Big Data Reveal About Survey Misreporting and the Real Electorate
Hersh, Eitan
Ansolabehere, Stephen

Uploaded 07-13-2012
Keywords validation
misreporting
Catalist
election administration
turnout
registration
Abstract Social scientists rely on surveys to explain political behavior. From consistent over-reporting of voter turnout, it is evident that responses on survey items may be unreliable and lead scholars to incorrectly estimate the correlates of participation. Leveraging developments in technology and improvements in public records, we conduct the first-ever fifty-state vote validation. We parse over-reporting due to response bias from over-reporting due to inaccurate respondents. We find that non-voters who are politically engaged and equipped with politically relevant resources consistently misreport that they voted. This finding cannot be explained by faulty registration records, which we measure with new indicators of election administration quality. Respondents are found to misreport only on survey items associated with socially desirable outcomes, which we establish by validating items beyond voting, like race and party. We show that studies of representation and participation based on survey reports dramatically mis-estimate the differences between voters and non-voters.

46
Paper
On the Use of Linear Fixed Effects Regression Models for Causal Inference
Imai, Kosuke
Kim, In Song

Uploaded 07-23-2012
Keywords difference-in-differences
first difference
matching
observational data
panel data
propensity score
randomized experiments
stratification
Abstract Linear fixed effects regression models are a primary workhorse for causal inference among applied researchers. And yet, it has been shown that even when the treatment is exogenous within each unit, linear regression models with unit-specific fixed effects may not consistently estimate the average treatment effect. In this paper, we offer a simple solution. Specifically, we show that weighted linear fixed effects regression models can accommodate a number of identification strategies, including matching, stratification, first difference, propensity score weighting, and difference-in-differences. We prove the results by establishing finite-sample equivalence relationships between weighted fixed effects and these estimators. Our analysis identifies the information implicitly used by standard fixed effects models to estimate the counterfactual outcomes necessary for causal inference, highlighting the potential sources of their bias and inefficiency. In addition, we develop efficient computation strategies, model-based standard errors, and a specification test for weighted fixed effects estimators. Finally, we illustrate the proposed methodology by revisiting the controversy concerning the effects of General Agreement on Tariffs and Trade (GATT) membership on international trade. Open-source software is available for fitting the proposed weighted linear fixed effects estimators.
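
For reference, the unweighted baseline the paper generalizes is the standard within (fixed-effects) estimator, sketched below. The paper's weighted variants reweight the demeaned observations to match a chosen identification strategy; those weights are omitted here.

    import numpy as np

    def within_estimator(y, X, unit):
        """Unit fixed-effects (within) estimator: demean the outcome and
        regressors within each unit, then run OLS on the demeaned data."""
        y_d, X_d = y.astype(float), X.astype(float)
        for u in np.unique(unit):
            rows = unit == u
            y_d[rows] -= y_d[rows].mean()
            X_d[rows] -= X_d[rows].mean(axis=0)
        beta, *_ = np.linalg.lstsq(X_d, y_d, rcond=None)
        return beta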

47
Paper
Nationalism and Interstate Conflict: A Regression Discontinuity Analysis
Bertoli, Andrew

Uploaded 07-21-2013
Keywords International Relations
International Security
Methodology
Abstract Nationalism is widely viewed as a force for interstate violence, but does it really have an important effect on state aggression that cannot be explained by strategic concerns? I provide strong evidence that it does using regression discontinuity analysis. I take advantage of the fact that many countries experience a surge of nationalism when they go to the World Cup, and the World Cup qualification process from 1958 to 1998 produced a large number of countries that barely qualified or barely missed qualifying. I show that these countries are well-balanced across a wide range of factors, including past levels of aggression. However, the qualifiers experienced a significant spike in aggression during the World Cup year. I also replicate the analysis using the FIFA regional soccer championships and find similar results. In both cases, the estimated treatment effect is larger for authoritarian states than for democracies, suggesting that democratic norms may help constrain nationalistic aggression.
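
A common way to implement the design the abstract describes is local linear regression around the qualification cutoff. The sketch below is a generic sharp-RD estimator, not the author's exact specification; the bandwidth is left as a hypothetical input.

    import numpy as np
    import statsmodels.api as sm

    def sharp_rd(margin, outcome, bandwidth):
        """Within the bandwidth, regress the outcome on qualification
        (margin >= 0), the running variable, and their interaction; the
        coefficient on qualification estimates the effect at the cutoff."""
        keep = np.abs(margin) <= bandwidth
        m, y = margin[keep], outcome[keep]
        d = (m >= 0).astype(float)
        design = sm.add_constant(np.column_stack([d, m, d * m]))
        return sm.OLS(y, design).fit().params[1]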

48
Paper
Voter Persuasion in Compulsory Electorates: Evidence from a Field Experiment in Australia
Lam, Patrick
Peyton, Kyle

Uploaded 12-14-2013
Keywords voter persuasion
field experiments
causal inference
missing data
Abstract Most of the literature on grassroots campaigning focuses on mobilizing potential supporters to turn out to vote. The actual ability of partisan campaigns to boost support by changing voter preferences is unclear. We present the results of a field experiment the Australian Council of Trade Unions (ACTU) ran during the 2013 Australian Federal Election. The experiments were designed to minimize the conservative (Coalition) vote as part of one of the largest and most extensively documented voter persuasion campaigns in Australian history. Union members who were identified as undecided voters in over 30 electorates were targeted with appeals by direct mail and phone banks. Because of compulsory voting in Australia, we are able to identify the effects of voter persuasion independently of voter turnout. We find that direct mail, the most extensively used campaign strategy in Australia, has little effect on voter persuasion. Direct human contact, on the other hand, seems to be an effective tool for voter persuasion. Among undecided voters who actually received direct contact via phone call, we find a ten percentage point decrease in the Coalition vote. From a methodological standpoint, we use various methods to account for multiple treatment arms, measured treatment noncompliance in one of the treatments, and missing outcome and covariate data. The field experiment also provides a good lesson in conducting and salvaging broken experiments in the presence of planning uncertainty and implementation failures.
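
A standard way to handle the measured noncompliance the abstract mentions is the Wald (instrumental-variables) estimator of the complier average causal effect, sketched here under one-sided noncompliance. This is a generic illustration, not the authors' exact estimator.

    import numpy as np

    def wald_cace(assigned, contacted, outcome):
        """Complier average causal effect: the intention-to-treat effect
        divided by the first-stage difference in contact rates. Assumes no
        one in the control group was contacted (one-sided noncompliance)."""
        itt = outcome[assigned == 1].mean() - outcome[assigned == 0].mean()
        first_stage = (contacted[assigned == 1].mean()
                       - contacted[assigned == 0].mean())
        return itt / first_stage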

49
Paper
A model-based approach to the analysis of a large table of counts: occupational class patterns among Australians by ancestry, generation, and age group
Jones, Kelvyn
Johnston, Ron
Manley, David
Owen, Dewi
Forrest, James

Uploaded 10-06-2014
Keywords tabular analysis of counts
log-Normal Poisson model
random effects
shrinkage
precision-weighted estimate
Bayesian estimation
pooling
Abstract A novel exploratory approach to the analysis of a large table of counts is developed. It uses random-effects models in which the cells of the table (representing types of individuals) form the higher level of a multilevel model. The model includes Poisson variation and an offset to model the ratio of observed to expected values, thereby permitting the analysis of relative rates. The model is estimated as a Bayesian model through MCMC procedures, and the estimates are precision-weighted so that unreliable rates are down-weighted in the analysis. Once reliable rates have been obtained, graphical and tabular analyses can be deployed. The analysis is illustrated through a study of the occupational class distribution of people of different age, birthplace origin (ancestry), and generation in Australia. The case is also made that even where there is a full census, there is a need to move beyond descriptive analysis to a proper inferential and modelling framework. We also discuss the relative merits of Full and Empirical Bayes approaches to model estimation.
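
The fixed-effects core of the model, a Poisson regression of observed cell counts with log expected counts as an offset so that the linear predictor captures the log relative rate, can be sketched with a standard GLM. The multilevel log-Normal random effects and the Bayesian MCMC estimation the paper uses are not reproduced here.

    import numpy as np
    import statsmodels.api as sm

    def relative_rate_glm(observed, expected, X):
        """Poisson GLM with log(expected) as an offset, so exponentiated
        coefficients are ratios of observed to expected counts
        (relative rates)."""
        return sm.GLM(observed, sm.add_constant(X),
                      family=sm.families.Poisson(),
                      offset=np.log(expected)).fit()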

50
Paper
The Estimation of Time-Invariant Variables in Panel Analyses with Unit Fixed Effects
Pluemper, Thomas
Troeger, Vera E.

Uploaded 07-23-2004
Keywords Time Invariant Variables
Unit effects
Monte Carlo
Hausman-Taylor
Abstract This paper analyzes the estimation of time-invariant variables in panel data models with unit effects. We compare three procedures that have frequently been employed in comparative politics, namely pooled OLS, random effects, and the Hausman-Taylor model, to a vector decomposition procedure that allows time-invariant variables to be estimated in an augmented fixed effects approach. The procedure we suggest consists of three stages: the first stage runs a fixed-effects model without time-invariant variables; the second stage decomposes the unit-effects vector into a part explained by the time-invariant variables and an error term; and the third stage re-estimates the first stage by pooled OLS, including the time-invariant variables plus the error term of stage 2. We use Monte Carlo simulations to demonstrate that this method works better than its alternatives in estimating typical models in comparative politics. Specifically, the unit fixed effects vector decomposition technique outperforms pooled OLS, random effects, and Hausman-Taylor in estimating time-invariant variables correlated with the unit effects. Finally, we re-analyze recent work by Huber and Stephens (2001) and by Beramendi and Cusack (2004), analyses that seek to cope with the problem of time-invariant variables in panel data.
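
The three-stage procedure is concrete enough to sketch directly from the abstract. Names and data layout below are illustrative, not the authors': X_tv holds time-varying regressors (one row per observation) and Z_ti holds time-invariant regressors (one row per unit); the recovery of unit effects as unit-mean residuals is one plausible implementation detail.

    import numpy as np

    def fevd(y, X_tv, Z_ti, unit):
        """Fixed-effects vector decomposition, following the abstract's
        three stages."""
        units = np.unique(unit)
        idx = np.searchsorted(units, unit)  # unit index of each observation
        # Stage 1: within (fixed-effects) estimate of the time-varying slopes
        y_d, X_d = y.astype(float), X_tv.astype(float)
        for u in range(len(units)):
            rows = idx == u
            y_d[rows] -= y_d[rows].mean()
            X_d[rows] -= X_d[rows].mean(axis=0)
        beta, *_ = np.linalg.lstsq(X_d, y_d, rcond=None)
        # recover unit effects as unit means of y - X beta
        resid = y - X_tv @ beta
        u_eff = np.array([resid[idx == u].mean() for u in range(len(units))])
        # Stage 2: split the unit effects into a part explained by Z
        # and an unexplained error term
        gamma, *_ = np.linalg.lstsq(Z_ti, u_eff, rcond=None)
        eta = u_eff - Z_ti @ gamma
        # Stage 3: pooled OLS of y on X, Z, and the stage-2 residual
        X_full = np.column_stack([X_tv, Z_ti[idx], eta[idx]])
        theta, *_ = np.linalg.lstsq(X_full, y, rcond=None)
        return theta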

