
Search Results


Results below are based on the search criterion 'Benford'
Total number of records returned: 911

1
Paper
Treatment effects in before-after data
Gelman, Andrew

Uploaded 04-27-2004
Keywords correlation
experiments
interactions
hierarchical models
observational studies
variance components
Abstract In experiments and observations with before-after data, the correlation between "before" and "after" measurements is typically higher among the controls than among the treated units, violating the usual assumptions of equal variance and a constant treatment effect. We illustrate with three applied examples and then discuss models that could be used to fit this phenomenon, which we argue is related to the

2
Paper
Forming voting blocs and coalitions as a prisoner's dilemma: a possible theoretical explanation for political instability
Gelman, Andrew

Uploaded 10-27-2003
Keywords coalitions
cooperation
decisive vote
elections
legislatures
prisoner's dilemma
voting power
Abstract Individuals in a committee can increase their voting power by forming coalitions. This behavior is shown here to yield a prisoner's dilemma, in which a subset of voters can increase their power, while reducing average voting power for the electorate as a whole. This is an unusual form of the prisoner's dilemma in that cooperation is the selfish act that hurts the larger group. Under a simple model, the privately optimal coalition size is approximately 1.4 times the square root of the number of voters. When voters' preferences are allowed to differ, coalitions form only if voters are approximately politically balanced. We propose a dynamic view of coalitions, in which groups of voters choose of their own free will to form and disband coalitions, in a continuing struggle to maintain their voting power. This is potentially an endogenous mechanism for political instability, even in a world where individuals' (probabilistic) preferences are fixed and known.
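As a quick numerical illustration of the coalition-size approximation quoted in the abstract, the sketch below simply evaluates 1.4 times the square root of the number of voters; the constant comes from the abstract, while the example electorate sizes are arbitrary and chosen only for illustration.

    import math

    def optimal_coalition_size(n_voters, constant=1.4):
        # Privately optimal coalition size quoted in the abstract:
        # roughly 1.4 times the square root of the number of voters.
        return constant * math.sqrt(n_voters)

    for n in (101, 1001, 10001):
        print(n, round(optimal_coalition_size(n), 1))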

3
Paper
Causal Inference with General Treatment Regimes: Generalizing the Propensity Score
Imai, Kosuke
van Dyk, David A.

Uploaded 07-08-2003
Keywords causal inference
income
medical expenditure
non-random treatment
observational studies
schooling
smoking
subclassification
Abstract In this article, we develop the theoretical properties of the propensity function which is a generalization of the propensity score of Rosenbaum and Rubin (1983). Methods based on the propensity score have long been used for causal inference in observational studies; they are easy to use and can effectively reduce the bias caused by non-random treatment assignment. Although treatment regimes need not be binary in practice, the propensity score methods are generally confined to binary treatment scenarios. Two possible exceptions were suggested by Joffe and Rosenbaum (1999) and Imbens (2000) for ordinal and categorical treatments, respectively. In this article, we develop theory and methods which encompass all of these techniques and widen their applicability by allowing for arbitrary treatment regimes. We illustrate our propensity function methods by applying them to two data sets; we estimate the effect of smoking on medical expenditure and the effect of schooling on wages. We also conduct Monte Carlo experiments to investigate the performance of our methods.
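A rough sketch of subclassification on an estimated propensity function for a non-binary (here continuous) treatment, in the spirit of the abstract. The simulated data, the linear treatment model, and the use of the fitted conditional mean as the scalar summary are assumptions for illustration only; the paper's own estimators and notation may differ.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 2000
    x = rng.normal(size=n)                      # observed covariate
    t = 2.0 + 1.0 * x + rng.normal(size=n)      # continuous, non-randomly assigned treatment
    y = 1.0 + 0.5 * t + 1.0 * x + rng.normal(size=n)

    # Model the treatment given covariates; use the fitted conditional mean as a
    # scalar summary of the propensity function (an illustrative choice only).
    theta = sm.OLS(t, sm.add_constant(x)).fit().fittedvalues

    # Subclassify on the estimated summary and average the within-subclass slopes.
    df = pd.DataFrame({"y": y, "t": t, "theta": theta})
    df["block"] = pd.qcut(df["theta"], 5, labels=False)
    effects = [sm.OLS(g["y"], sm.add_constant(g["t"])).fit().params.iloc[1]
               for _, g in df.groupby("block")]

    naive = sm.OLS(y, sm.add_constant(t)).fit().params[1]
    # Subclassification moves the estimate from about 1.0 toward the true effect of 0.5.
    print(round(naive, 2), round(float(np.mean(effects)), 2))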

4
Paper
An Integrated Perspective on Party Platforms and Electoral Choice
Elff, Martin

Uploaded 08-19-2002
Keywords electoral behavior
party platforms
party manifestos
ideology
social cleavages
class voting
religious voting
comparative politics
principal curves
generalized additive models
dimensional analysis
discrete choice
Abstract There are several perspectives on voting behavior that usually constitute separate strands of research: the impact of social background on vote choice, the relation between policy positions of parties and policy preferences of voters, and the effect of party platforms on the electoral success of parties. Although they all apply to the same entities, that is, to voters and parties, these different perspectives seem to have divergent implications. Thus we are in need of a way to reconcile these perspectives. The empirical results presented in this paper suggest what such a reconciliation should look like. They can be summarized as follows: First, in party platforms, several ideological dimensions can be distinguished that are connected with different cleavages in the Lipset-Rokkan sense. Second, it is shown that individuals from different social groups differ in the way they evaluate party platforms and choose among parties. Third, the way these individuals evaluate party platforms conforms to spatial notions of voting. Fourth, a general pattern of platform evaluation established on the basis of pooled data from several countries accounts to a large degree for differences between levels of religious voting in these countries.

5
Paper
Logical Inconsistency in King-based Ecological Regressions
Herron, Michael C.
Shotts, Kenneth W.

Uploaded 07-03-2002
Keywords ecological inference
EI-R
consistency
second stage regressions
Abstract The statistical procedure EI-R, in which point estimates produced by the King (1997) ecological inference technique are used as dependent variables in a linear regression, can be logically inconsistent insofar as the assumptions necessary to support EI-R's first stage (ecological inference via King's method) can be incompatible with the assumptions supporting its second stage (linear regression). In light of this problem, we derive a specification test for logical consistency of EI-R and describe options available to a researcher who confronts test rejection. We then apply our test to the implementation of EI-R in Burden and Kimball's (1998) study of ticket splitting and find that this implementation is logically inconsistent. In correcting for this problem we show that Burden and Kimball's alleged substantive results are not results at all and instead are artifacts of a self-contradictory statistical technique.

6
Paper
Ticket-Splitting and Strategic Voting in Mixed Electoral Systems
Gschwend, Thomas

Uploaded 08-22-2001
Keywords Ticket Splitting
Strategic Voting
Mixed Electoral Systems
MNL
Multiple Imputation
Abstract This work attempts to refocus the discussion about strategic voting from its narrow focus on single-member district systems. It provides several contributions to the literature on strategic voting, ticket-splitting, and electoral systems. My first contribution is to allow the electoral institutions to vary, thereby opening up the possibility that different incentives operate at the same time for the same voter. I offer a theory that particular institutions not only determine the degree of strategic voting, but also the kind of strategies voters employ. In mixed electoral systems strategic voting has two facets. Strategic voters employ either a wasted-vote strategy or a coalition insurance strategy. My second contribution is to provide evidence that people vary in their proclivity to vote strategically, as determined by various motivational factors as well as their capability to comprehend the strategic implications that are offered by particular electoral rules. Evidence supporting these contributions stems from an appropriate choice model using individual-level data from the 1998 German National Election Study.

7
Paper
An Estimator for Some Binary-Outcome Selection Models without Exclusion Restrictions
Sartori, Anne E.

Uploaded 07-09-2001
Keywords selection bias
discrete choice
small-sample properties
Abstract This paper provides a new estimator for selection models with dichotomous dependent variables when identical factors affect the selection equation and the equation of interest. Such situations arise naturally in game-theoretic models where selection is typically nonrandom and identical explanatory variables influence all decisions under investigation. When its own identifying assumption is reasonable, the estimator allows the researcher to avoid the painful choice among identifying from functional form alone (using a Heckman-type estimator), adding a theoretically unjustified variable to the selection equation in a mistaken attempt to "boost" identification, or giving up on estimation entirely. The paper compares the small-sample properties of the estimator with those of the Heckman-type estimator and ordinary probit using Monte Carlo methods. A brief analysis of the causes of enduring rivalries and war, following Lemke and Reed (2001),

8
Paper
The Foundations of Latino Voter Partisanship
Alvarez, R. Michael
Bedolla, Lisa Garcia

Uploaded 03-08-2001
Keywords Latino
Partisanship
2000 presidential election
Abstract Traditionally, the Latino electorate has been considered to be Democratic in partisan affiliation. However, during the 2000 presidential election there were many efforts made by the Republican party to court Latino voters, suggesting that perhaps Latino voters may be becoming more Republican in orientation. Using a telephone survey of Latino likely voters conducted in the 2000 election, we examine three different sets of correlates of Latino voter partisanship: social and demographic, issue and ideological, and economic. We find that Latino voter partisanship is strongly structured by social and demographic, as well as issue and ideological, factors. We also find that while it is unlikely that changes in economic factors or abortion attitudes will significantly change which parties the different Latino national-origin groups identify with, it is possible that changes in ideological positions regarding the role of government in providing social services could result in significant changes in Latino party identification.

9
Paper
A Specification Test for Linear Regressions that use King-Based Ecological Inference Point Estimates as Dependent Variables
Herron, Michael C.
Shotts, Kenneth W.

Uploaded 08-16-2000
Keywords ecological inference
second stage regressions
ordinary least squares
logical consistency
Abstract Many researchers use point estimates produced by the King (1997) ecological inference technique as dependent variables in second stage linear regressions. We show, however, that this two stage procedure is at risk of logical inconsistency. Namely, the assumptions necessary to support the procedure's first stage (ecological inference via King's method) can be incompatible with the assumptions supporting the second (linear regression). We derive a specification test for logical consistency of the two stage procedure and describe options available to a researcher whose ecological dataset fails the test.

10
Paper
Interest Group Ratings, Measurement Error, and Regression Inconsistency
Herron, Michael C.

Uploaded 04-25-2000
Keywords interest group ratings
regressions
measurement error
party effects
Abstract This paper uses spatial voting theory to analyze the properties of errors in interest group ratings insofar as ratings are used to measure legislator policy preferences. I show that, in general, rating errors are not mean zero; that the errors in a set of ratings are correlated with the underlying legislator preferences measured by the ratings; that ordinary least squares estimation of a bivariate regression which uses ratings as independent variables produces inconsistent coefficient estimates; that instrumenting for the aforementioned interest group ratings with a second set of ratings, as proposed by Brunell et al., will not necessarily fix this problem and can actually make matters worse; that, paradoxically, biased interest ratings are sometimes better for regression estimates than are unbiased ratings; and, finally, that estimating a trivariate regression with both interest group ratings and a party indicator on its right hand side produces inconsistent estimates and, moreover, a party coefficient estimate which has an unreliable sign.
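A minimal simulation sketch of the inconsistency described above, under a stylized setup in which the rating error is not mean zero and is correlated with the underlying preference. All variable names and parameter values here are invented for illustration and are not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    pref = rng.normal(size=n)                  # true legislator preferences
    vote = 1.0 * pref + rng.normal(size=n)     # outcome driven by true preferences

    # Rating error that is neither mean zero nor independent of the preference.
    error = 0.5 - 0.4 * pref + rng.normal(scale=1.0, size=n)
    rating = pref + error

    # The OLS slope of the outcome on the observed rating is not consistent for the
    # true coefficient (1.0); with these illustrative values it converges to about 0.44.
    slope = np.cov(vote, rating)[0, 1] / np.var(rating, ddof=1)
    print(round(slope, 3))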

11
Paper
Measuring the Relative Impact of Issues and the Economy in Democratic Elections
Alvarez, R. Michael
Nagler, Jonathan
Willette, Jennifer R.

Uploaded 01-12-1999
Keywords multinomial probit
discrete choice
multiparty elections
multicandidate elections
Canadian elections
Abstract It is generally accepted that issues and economic outcomes influence elections. In this paper we analyze the relative importance of issues and the economy in Canadian elections. We estimate a model of the 1988 and 1993 Canadian elections in which we include voter evaluations of the parties on a variety of issues, and voter evaluations of the national economy and their personal finances. We demonstrate that it is possible to compare the effects of issues and the economy on election outcomes. And we put this in the context of the impact of issues and the economy in elections in several other democracies. We show that even in elections where other factors are dominant, we can still see the impact of economic voting. And we argue that given the tenuous connection between the actions of elected officials and macroeconomic outcomes, this suggests that voters may be giving elected officials undue leeway in their non-economic policy-making functions.

12
Paper
Policy, Personality, and Presidential Performance
Aldrich, John
Gronke, Paul
Grynaviski, Jeff

Uploaded 04-12-1999
Keywords Policy
Personality
Presidential Performance
Abstract The importance of personality and performance assessments for candidate evaluations and choice has been well established, most prominently in the work on presidential prototypes by Kinder and colleagues, and the social cognitive model of vote choice by Rahn and colleagues. This paper takes a revisionist look at the effect of personality assessments for understanding presidential elections. Most of the experimental and survey data were collected in a relatively brief period, particularly between 1980 and 1984. Among the unique aspects of the 1980s was the impact of the distinctive personality of Ronald Reagan, sometimes called the "teflon president," because of the degree to which the public admired him as an individual, regardless of political events. His persona therefore might reasonably be assumed to have uniquely influenced the times and thus the models and results. We examine this question primarily by replicating the Rahn, et al. models using the NES surveys from 1984 through 1996, allowing us to evaluate the structure and performance of presidential prototypes and their role in candidate assessments over a longer period of time and greater variety of candidates and presidents.

13
Paper
Federal Elections Project: A Grant Proposal
Lublin, David

Uploaded 07-09-1999
Keywords grant proposals
data
data collection
election data
EI
ecological inference
Abstract The central goal of the Federal Elections Project is to collect the 2000 federal election results at the precinct level and match them with demographic data from the 2000 U.S. Census. D. Stephen Voss of the University of Kentucky and I plan to gather data for all federal elections, specifically for president and vice president, senator, and representative. Although the focus is on federal offices, the data set will also include results from the eleven states holding gubernatorial elections in 2000. The matching of election and demographic data are especially critical now that new techniques of ecological inference allow the study of numerous political questions utilizing aggregate data. Gary King's (1997) EI makes it possible to estimate with standard errors the voting behavior of units like precincts and counties utilizing election and census data.

14
Paper
Trade and Conflict in the Cold War Era: An Empirical Analysis
Beck, Nathaniel

Uploaded 08-30-1999
Keywords trade
conflict
MIDs
generalized additive model
smooth
Abstract What is the relationship between trade and conflict in the post-World War II era? Using a dyad-year design, and studying both all dyads and politically relevant dyads, this paper uses the generalized additive model to study the relationship between dyadic trade and militarized interstate disputes (both all disputes and those involving casualties only). For all dyads, moving from no trade to a small amount of trade increases the likelihood of conflict, though that mostly reflects the fact that non-traders also are likely to have little conflict in any arena. Moving from zero to low trade decreases the likelihood of conflict among politically relevant dyads, though this may simply reflect the nature of the Cold War world where dyads made up of Cold War opponents did not trade but did fight. In any event, there is little evidence for a causal pacific impact of trade, but also little evidence that trade is inherently conflictual, other than being an obvious necessary condition for trade disputes and also signalling that dyadic partners are in some interesting relationship.

15
Paper
Cointegration and Military Rivalry: Some Evidence on 5 Modern Rivalries
Gerace, Michael P.

Uploaded 11-29-1999
Keywords cointegration
military rivalry
military expenditures
arms race
time series
Johansen Method
Abstract This article investigates the possibilities for stability in arms races, with its starting point being Richardson's discussion of stability conditions. Most discussions of stability focus on whether armaments levels become stable, but there could also be a stable relationship between the armaments of rivals. By employing a time series approach, the behavioral aspects of a model and underlying stability conditions can be related clearly to data characteristics, which clarifies the possibilities for a model. The military expenditures of 5 sets of rivals are then investigated for stationarity, the nature of the trend, and for cointegration. Whether the data are stationary and, if not, the nature of the trend, have implications for what kind of stability can exist over the long run (or whether the models are explosive). The Johansen method is used for the cointegration tests, and VEC models are evaluated for two cases. While the results are mixed and there is some support for cointegrating relationships among rivals, there is no indication of stability in the level of expenditures or of explosive instability over the long run.

16
Paper
Parties, Issue Spaces, and Voting: A Comparative Perspective
Alvarez, R. Michael
Nagler, Jonathan
Willette, Jennifer R.

Uploaded 04-20-1998
Keywords elections
parties
issues
comparative
Abstract An important property of any party system is the set of choices it presents to the electorate. In this paper we analyze the distribution of the parties in the multidimensional issue space, and introduce the notion of compactness of the party system. We show how compactness can be measured using standard survey items found on national election surveys. By measuring the spacing of the parties relative to the distribution of the voters, we are able to compute a metric-free measure of compactness of the party system. Comparing the compactness of party systems across countries allows us to determine the relative amount of issue choice afforded voters in different polities. We test the impact compactness of the party space has on voter choice in four countries: the United States, the Netherlands, Canada, and Great Britain. We demonstrate that the more compact the issue space on any issue, the less voters weight that issue in making their vote decision. Thus we provide evidence for theories of issue voting.

17
Paper
Estimating Time-Varying Parameters with Flexible Least Squares
Wood, B. Dan

Uploaded 07-02-1998
Keywords time series
time-varying parameters
stochastic parameters
flexible least squares
Abstract A common assumption among time series analysts is that estimated coefficients remain constant through time. Yet this strong assumption often has little grounds in substantive theory or empirical tests. If coefficients vary through time in an infinite time sequence, but are estimated with constant coefficient methods in a finite time sequence, then this can lead to significant information loss, as well as to errors of inference. This paper demonstrates a method for exploring the relative stability of time series coefficients, Flexible Least Squares (FLS). In particular, FLS is superior to other such methods, in that it enables the analyst to diagnose the magnitude of coefficient variation, as well as detect which particular coefficients are changing. FLS also provides an estimated vector of time-varying coefficients that can be used for exploratory or descriptive purposes. FLS properties are demonstrated through simulation analysis and an evaluation of the time-varying equilibrium between federal revenues and expenditures from 1904-1996.

18
Paper
Modeling Direction and Intensity in Ordinal Scales with Midpoints
Jones, Bradford S.
Sobel, Michael E.

Uploaded 07-21-1998
Keywords adjacent category logit
log-linear models
public opinion
Congress
Abstract Political opinion analysts frequently work with semantically balanced ordinal scales. Such survey items are frequently used to measure candidate evaluations, public spending preferences, positions on social issues, and candidate and party placement. Because of the special nature of these survey items (semantically balanced about a midpoint), researchers may be interested in understanding how both the response direction and response intensity vary over time and/or across covariate classes. That is, trends may be found in the tendency for respondents to choose categories above vs. below the midpoint (the response direction), and trends may be found in the tendency for respondents to choose between or among category labels above or below the midpoint (the response intensity). And while political analysts are commonly interested in response intensity and direction, traditional methods used to model distributions on semantically balanced ordinal scales are problematic. In this paper, we discuss a class of models originally developed by Sobel (1995, 1997, 1998) that allows researchers to simultaneously model direction and intensity in ordinal scales with midpoints. Specifically, we parameterize the model as an adjacent category logit model. Numerous parsimonious models may be arrived at that describe trends in the response direction and response intensity. Because the adjacent category logit model is linear in the logits, we estimate the model using log-linear models. We present an application of the models to data on approval ratings of House incumbents. We find that the trends in response directions (the tendency for respondents to evaluate the incumbent favorably or not favorably) increase through the 1980s, peak in the late 1980s, and decline over the 1990s. With regard to response intensity (that is, the tendency to respond in the extreme categories vs. the moderate categories), we find that intensity increases during most presidential election cycles and vanishes during midterm election years. We argue this finding is related to the different levels of political information citizens are exposed to in presidential vs. midterm election cycles.
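For reference, the adjacent category logit can be written in generic textbook notation (not necessarily the exact parameterization the authors use) as

    \log \frac{\Pr(Y = j+1 \mid x)}{\Pr(Y = j \mid x)} = \alpha_j + x^{\top}\beta, \qquad j = 1, \dots, J-1,

so the model is linear in the logits of adjacent categories and, as the abstract notes, can be estimated with standard log-linear model software.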

19
Paper
Economic Performance, Job Insecurity, and Electoral Choice
Lacy, Dean
Mughan, Anthony

Uploaded 09-17-1998
Keywords economic voting
economic insecurity
Perot
turnout
multinomial probit
1996 election
Abstract The mass political economy literature concentrates on egocentric and sociotropic evaluations of short-term economic performance. Scant attention is paid to other economic concerns people may have. In a neo-liberal economic climate characterized by a downsized labor market and the retrenchment of government welfare entitlements, one such widely-publicized concern is job insecurity. We show that job insecurity is a novel form of discontent that is independent of the retrospective evaluations of short-term performance that are the stuff of the mainstream mass political economy literature. At the same time, the political effects of job insecurity are distinctive. In a multinomial probit model of electoral choice in the 1996 U.S. presidential election, job insecurity is associated with support for the third-party candidate, Ross Perot, but, contrary to conventional wisdom, has no implications for turnout. Traditional retrospective evaluations of economic performance explain the major-party vote and abstention.

20
Paper
Survey Measures of Uncertainty
Alvarez, R. Michael

Uploaded 00-00-0000
Keywords uncertainty
surveys
National Election Studies
Abstract There have been a number of measures of voter uncertainty about candidate issue stands which have been proposed in the literature. Here I examine the use of "direct" uncertainty questions, where respondents are asked to give their subjective uncertainty about some question they have just been asked. The 1995 NES Pilot Study included two survey experiments regarding the uncertainty questions; one which examined uncertainty about candidate traits, the other looking at uncertainty of environmental issue placements using branching-format issue questions. Using these survey experiments, I conclude that these survey questions merit use in future National Election Study surveys.

21
Paper
An Empirical Model of Government Formation in Parliamentary Democracies
Martin, Lanny W.
Stevenson, Randolph T.

Uploaded 00-00-0000
Keywords coalition theory
government formation
conditional logit
econometrics
Abstract The study of coalition politics in parliamentary democracies has led to the construction of several sophisticated theories of government formation, but it has thus far failed to lead to the development of a reliable method that will permit us to verify these theories empirically. In this paper, we propose a solution to the problems plaguing the application of multivariate statistical analysis in this area. Specifically, we advocate use of the conditional logit technique to model the government formation process. We use this model to test various hypotheses from coalition theory on an original data set consisting of information on every potential government that could have formed in 285 separate instances of coalition bargaining in 14 post-war parliamentary democracies. We then illustrate further uses of this method by examining three real-world cases of government formation.

22
Paper
Politicians and the Press: Who Leads, Who Follows?
Bartels, Larry M.

Uploaded 08-23-1996
Keywords media
agenda-setting
president
Congress
Bosnia
Medicare
NAFTA
Whitewater
VAR
Abstract This paper examines the interplay between politicians and the press in setting the national policy agenda. The data for the analysis consist of daily counts of executive branch activities, congressional activities, New York Times stories, local newspaper stories, and ABC News coverage of Bosnia, Medicare, NAFTA, and Whitewater during the first three years of the Clinton administration. Vector autoregressions suggest that all three media outlets (and the politicians themselves) followed the lead of the executive branch on Bosnia and NAFTA and of Congress on Medicare and Whitewater. However, New York Times coverage led political activities even more than it followed them, with especially strong agenda-setting effects for NAFTA and Whitewater. The independent agenda-setting power of ABC News was substantially less than that of the Times, but still considerable, while local newspapers tended, by and large, to follow the lead of politicians and the national news media. Prepared for presentation at the Annual Meeting of the American Political Science Association, San Francisco, September 1996.

23
Paper
The Impact of Political Campaigns on the Effects of Political Sophistication
Fournier, Patrick P.

Uploaded 09-18-1997
Keywords campaign
sophistication
information
heterogeneity
individual deviation
aggregate deviation
Canadian politics
Abstract [none provided]

24
Paper
The Consequences of Majority-Minority Districts for Representation: Evidence of Partisan Mobilization, Countermobilization and Demobilization
Brandt, Patrick T.
Bailey, Michael

Uploaded 08-21-1997
Keywords Multinomial probit
panel data methods
simulated maximum likelihood
probability simulation
redistricting
Abstract Few analyses of the effects of race-based congressional redistricting have used survey data to analyze the implications of redistricting. This type of micro-level data can add significant intuition to aggregate data analysis. This paper looks at whether voters respond to redistricting by mobilizing, demobilizing, or countermobilizing using panel data from the 1990-1992 National Election Study. A 2-period vote choice model is estimated using a multiperiod multinomial probit model, and controlling for the effects of redistricting. Results show that the presence of black Democratic candidates in majority-minority districts after redistricting reduces turnout by white voters for the Democratic candidates.

25
Paper
Congressional Campaign Contributions, District Service and Electoral Outcomes in the United States: Statistical Tests of a Formal Game Model with Nonlinear Dynamics
Mebane, Walter R.

Uploaded 07-22-1997
Keywords congressional elections
campaign contributions
campaign finance
district service
intergovernmental transfers
formal model
game theory
Cournot-Nash equilibrium
Nash equilibrium
differential equations
dynamical system
nonlinear dynamics
Hopf bifurcation
normal form
Whitney embedding theorem
divergence theorem
Liouville's theorem
multivariate normal distribution
maximum likelihood
Wald test
stability
asymptotic stability
Abstract Using a two-stage game model of congressional campaigns, the second stage being a system of ordinary differential equations, I argue that candidates, political parties and financial contributors interact strategically in American congressional elections in a way that is inherently nonlinear. Congressional races in which the incumbent faces a challenge are generated by dynamical systems that have Hopf bifurcations: a small change in the challenger's quality or in the type of district service can change a stable incumbent advantage into an oscillating race in which the incumbent's chances are uncertain. The normal form equations for such a system inspire a statistical model that can recover qualitative features of the dynamics from cross-sectional data. I estimate and test the model using data from the 1984 and 1986 election periods for political action committee campaign contributions, intergovernmental transfers and general election vote shares.

26
Paper
Heterogeneity and Bias in Models of Vote Choice
Berinsky, Adam

Uploaded 04-21-1997
Keywords voting models
selection bias
heteroskedasticity
missing data
Abstract Voters in the United States do not behave in a homogenous manner. Voting models typically account for such heterogeneity by seeking to decompose the process of vote choice into a number of distinct components. By examining voting choice data in this way, researchers are able to ascertain reasonable estimates of the average effect of various socio-economic and political variables on the candidate selection process. Models of this sort, while plausible, may not properly reflect the true heterogeneity of the American voter. At their core, simple models assume that voters use a common and uniform decision rule when deciding between candidates. But it is possible, if not likely, that different groups and classes of citizens use differently structured processes to determine their choice of candidates. Researchers have attempted to account for this heterogeneity in a variety of ways. Rivers (1988) and Jackson (1992), for example, have accounted for differences in the voting behavior of individuals by allowing the mean effect of theoretically important variables to vary across individuals. While these approaches are extremely promising, in this paper I will take a different approach and examine three more subtle forms of heterogeneity in the vote choice process: (1) heterogeneity induced by non-random selection from the full population of citizens into the vote choice model sample; (2) heterogeneity due to the interaction of selection bias and non-constant variance; and (3) heterogeneity in the patterns of missing data across groups of the respondents. While much of the discussion in the paper is focused on the first two forms of heterogeneity, it is the third form of heterogeneity - one not typically addressed in the political science literature - that is the most important determinant of the degree of bias in vote choice models. Thus, heterogeneity within the sample of respondents affects the vote choice model estimates, just not in the way I originally envisioned. It is not just heterogeneity in the variance term, or in the selection into the vote choice process that poses a threat to accurate estimates of the power of the predictors in our vote choice models. Rather, it is the failure to preserve or account for the heterogeneity of the paths by which people answer survey questions that is the real bogeyman of vote choice models.

27
Paper
Getting the Mean Right is a Good Thing: Generalized Additive Models
Beck, Nathaniel
Jackman, Simon

Uploaded 01-30-1997
Keywords non-parametric regression
scatterplot smoothing
local fitting
splines
non-linearity
Perot
incumbency
cabinet duration
democratic peace
Abstract This is a substantial revision of the paper submitted as beck96. A shorter version of this paper is under consideration at a political science journal of note. Theory: Social scientists almost always use statistical models positing the dependent variable as a linear function of X, despite suspicions that the social and political world is not so parsimonious. Generalized additive models (GAMs) permit each independent variable to be modelled non-parametrically while requiring that the independent variables combine additively, striking a sensible balance between the flexibility of non-parametric techniques and the ease of interpretation and familiarity of linear regression. GAMs thus offer social scientists a practical methodology for improving on the extant practice of ``linearity by default''. Method: We present the statistical concepts and tools underlying GAMs (e.g., scatterplot smoothing, non-parametrics more generally, and accompanying graphical methods), and summarize issues pertaining to estimation, inference, and the statistical properties of GAMs. Monte Carlo experiments assess the validity of tests of linearity accompanying GAMs. Re-analysis of published work in American politics, comparative politics, and international relations demonstrates the usefulness of GAMs in social science settings. Results: Our re-analyses of published work show that GAMs can extract substantive mileage beyond that yielded by linear regression, offering novel insights, particularly in terms of modelling interactions. The Monte Carlo experiments show there is little danger of GAMs spuriously finding non-linear structures. All data analysis, Monte Carlo experiments, and statistical graphs were generated using S-PLUS, Version 3.3. The routines and data are available at ftp://weber.uscd.edu/pub/nbeck/gam.

28
Paper
The Authority of Supreme Court Precedent: A Network Analysis
Fowler, James
Jeon, Sangick

Uploaded 07-07-2005
Abstract We construct the complete network of 30,288 majority opinions written by the U.S. Supreme Court and the cases they cite from 1754 to 2002. Data from this network demonstrates quantitatively the evolution of the norm of stare decisis in the 19th Century and a significant deviation from this norm by the activist Warren court. We further describe a method for creating authority scores using the network data to identify the most important Court precedents. This method yields rankings that conform closely to evaluations by legal experts, and even predicts which cases they will identify as important in the future. An analysis of these scores over time allows us to test several hypotheses about the rise and fall of precedent. We show that reversed cases tend to be much more important than other decisions, and the cases that overrule them quickly become and remain even more important as the reversed decisions decline. We also show that the Court is careful to ground overruling decisions in past precedent, and the care it exercises is increasing in the importance of the decision that is overruled. Finally, authority scores corroborate qualitative assessments of which issues and cases the Court prioritizes and how these change over time.

29
Paper
Sampling people or people in places? The BES as an election study
Johnston, Ron
Harris, Rich
Jones, Kelvyn

Uploaded 08-15-2005
Keywords British Election Study
representativeness
sampling
Abstract UK general elections involve a number of separate, though complexly inter-linked, contests for support among the parties. Two of these are reflected in the main types of model of voting behaviour used by political scientists, whereas the third involves the separate contests that take place – in most cases among the main political parties – in the (now) 646 constituencies which send representatives to the House of Commons. Ideally, electoral surveys should take account of all three. In this note, we explore the extent to which that is the case with the 2005 British Election Study – with the coverage restricted to England and Wales only, for technical reasons – and explore the implications of our findings for future electoral studies.

30
Paper
The difference between "significant" and "not significant" is not itself statistically significant
Gelman, Andrew
Stern, Hal

Uploaded 12-23-2005
Keywords multilevel modeling
multiple comparisons
replication
statistical significance
Abstract A common error in statistical analyses is to summarize comparisons by declarations of statistical significance or non-significance. There are a number of difficulties with this approach. First is the oft-cited dictum that statistical significance is not the same as practical significance. Another difficulty is that this dichotomization into significant and non-significant results encourages the dismissal of observed differences in favor of the usually less interesting null hypothesis of no difference. Here, we focus on a less commonly noted problem, namely that changes in statistical significance are not themselves significant. By this, we are not merely making the commonplace observation that any particular threshold is arbitrary---for example, only a small change is required to move an estimate from a 5.1% significance level to 4.9%, thus moving it into statistical significance. Rather, we are pointing out that even large changes in significance levels can correspond to small, non-significant changes in the underlying variables. We illustrate with a theoretical and an applied example.
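A numerical illustration of the abstract's central point, using invented numbers rather than anything from the paper: one estimate is clearly "significant," another is clearly not, yet the difference between them is far from significant.

    import math

    # Illustrative numbers only (not from the paper): two independent estimates.
    est1, se1 = 25.0, 10.0   # z = 2.5, conventionally "significant"
    est2, se2 = 10.0, 10.0   # z = 1.0, "not significant"

    diff = est1 - est2
    se_diff = math.sqrt(se1 ** 2 + se2 ** 2)   # SE of the difference of independent estimates
    print(diff / se_diff)                      # ~1.06: the difference is itself not significant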

31
Paper
Election Forensics: Vote Counts and Benford's Law
Mebane, Walter R.

Uploaded 07-17-2006
Keywords election forensics
Benford's law
vote fraud
election fraud
Florida 2004
Mexico 2006
Abstract How can we be sure that the declared election winner actually got the most votes? Was the election stolen? This paper considers a statistical method based on the pattern of digits in vote counts (the second-digit Benford's Law, or 2BL) that may be useful for detecting fraud or other anomalies. The method seems to be useful for vote counts at the precinct level but not for counts at the level of individual voting machines, at least not when the way voters are assigned to machines induces a pattern I call roughly equal division with leftovers (REDWL). I demonstrate two mechanisms that can cause precinct vote counts in general to satisfy 2BL. I use simulations to illustrate that the 2BL test can be very sensitive when vote counts are subjected to various kinds of manipulation. I use data from the 2004 election in Florida and the 2006 election in Mexico to illustrate use of the 2BL tests.
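A minimal sketch of a second-digit Benford (2BL) comparison of the kind described above: expected second-digit frequencies under Benford's law and a Pearson chi-squared statistic for a set of precinct-level counts. The exact test statistic and calibration used in the paper may differ, and the simulated counts are illustrative only.

    import numpy as np

    # Expected second-digit frequencies under Benford's law.
    p_2bl = np.array([sum(np.log10(1 + 1 / (10 * d1 + d)) for d1 in range(1, 10))
                      for d in range(10)])

    def second_digit_stat(counts):
        # Pearson chi-squared statistic comparing the observed second digits of
        # vote counts (only counts >= 10 have a second digit) to the 2BL frequencies.
        counts = np.asarray(counts)
        counts = counts[counts >= 10]
        exp = np.floor(np.log10(counts)).astype(int) - 1
        second = (counts // 10 ** exp) % 10
        observed = np.bincount(second, minlength=10)
        expected = p_2bl * len(counts)
        return ((observed - expected) ** 2 / expected).sum()

    # Example with simulated precinct totals (illustrative only, not real election data).
    rng = np.random.default_rng(1)
    votes = rng.poisson(400, size=1000)
    print(second_digit_stat(votes))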

32
Paper
Should the Democrats move to the left on economic policy?
Gelman, Andrew

Uploaded 09-20-2006
Keywords median voter
Presidential election
public opinion
spatial model of voting
Abstract Could John Kerry have gained votes in the recent Presidential election by more clearly distinguishing himself from George Bush on economic policy? At first thought, the logic of political preferences would suggest not: the Republicans are to the right of most Americans on economic policy, and so in a one-dimensional space with party positions measured with no error, the optimal strategy for the Democrats would be to stand infinitesimally to the left of the Republicans. The median voter theorem suggests that each party should keep its policy positions just barely distinguishable from the opposition. In a multidimensional setting, however, or when voters vary in their perceptions of the parties' positions, a party can benefit from putting some daylight between itself and the other party on an issue where it has a public-opinion advantage (such as economic policy for the Democrats). We set up a plausible theoretical model in which the Democrats could achieve a net gain in votes by moving to the left on economic policy, given the parties' positions on a range of issue dimensions. We then evaluate this model based on survey data on voters' perceptions of their own positions and those of the candidates in 2004. Under our model, it turns out to be optimal for the Democrats to move slightly to the right while staying clearly to the left of the Republicans' current position on economic issues.

33
Paper
Authoritarian Reversals and Democratic Consolidation
Svolik, Milan

Uploaded 02-21-2007
Keywords democratic consolidation
transitions to democracy
split-population models
cure rate models
mixture models
Abstract I investigate the determinants and the process of authoritarian reversals and democratic consolidation. I employ a new empirical model that allows me to distinguish between two central dynamics: the likelihood that a democracy consolidates, and the timing of authoritarian reversals in democracies that are not consolidated. I demonstrate that existing democracies are a mixture of transitional and consolidated democracies rather than a single population. This approach leads to new insights into the causes of democratic consolidation that cannot be obtained with existing techniques. I find that the level of economic development, type of executive, and authoritarian past determine whether a democracy consolidates, but have no effect on the timing of reversals. That risk is only associated with economic recessions. I also find that the existing studies greatly underestimate the risk of early reversals while they simultaneously overestimate the risk of late reversals, and that a large number of existing democracies are in fact consolidated.

34
Paper
Statistics for Digits
Mebane, Walter

Uploaded 07-17-2007
Keywords election forensics
2BL test
Benford's Law
vote counts
outliers
anomalies
election fraud
Abstract I show how election results may be used to calibrate a test that compares the second digits of a set of precinct-level vote counts to the frequencies expected according to Benford's law. For the votes cast for two competing candidates, the calibration is accomplished by tuning a simulation mechanism that mixes normal and negative binomial distributions so that the first two moments of the simulated distribution match the moments observed in a set of precincts. I illustrate the method using data from the counties that had the ten largest values of the digit test statistic for the major party candidates in the 2000 and 2004 U.S. presidential elections. Calibration suggests that the peculiar features of the joint distribution of candidate support and precinct sizes explain several of the large test statistic values. I show that artificial manipulations can significantly increase the test statistic's value even relative to the increased distribution the tuned mechanism is producing. So the test can sometimes detect systematic distortions in vote counts even when the baseline mechanism does not produce counts that have digits that are distributed as specified by Benford's law.

35
Paper
Democratic Compromise: A Latent Variable Analysis of Ten Measures of Regime Type
Pemstein, Daniel
Meserve, Stephen
Melton, James

Uploaded 02-07-2008
Keywords democracy
measurement
democracy measurement
regime
regime type
latent variable analysis
Bayesian latent variable analysis
UDS
Unified Democracy Scores
multi-rater ordinal probit
Abstract Using a Bayesian latent variable approach, we synthesize a new measure of democracy, the Unified Democracy Scores (UDS), from ten extant scales. We accompany this new scale with quantitative estimates of uncertainty, provide estimates of the relative reliability of the constituent indicators, and quantify what the ordinal levels of each of the existing measures mean in relationship to one another. Our method eschews the difficult -- and often arbitrary -- decision to use one existing democracy scale over another in favor of a cumulative approach that allows us to simultaneously leverage the measurement efforts of numerous scholars.

36
Paper
Modeling Sample Selection for Durations with Time-Varying Covariates
Boehmke, Frederick

Uploaded 07-02-2008
Keywords selection
selection bias
duration
time-varying covariates
event history
exchange rates
Abstract We extend previous estimators for duration data that suffer from non-random sample selection to allow for time-varying covariates. Rather than a continuous-time duration model, we propose a discrete-time alternative that models the (constant) effects of sample selection at the time of selection across all years of the resulting spell. Properties of the estimator are compared to those of a naive discrete duration model through Monte Carlo analysis and indicate that our estimator outperforms the naive model when selection is non-trivial. We then apply this estimator to the question of the duration of monetary regimes.

37
Paper
Do Observational Methods Produce Reliable Results? The Use of Matching in Estimating the Treatment Effect of Class Size Reduction
Hosek, Adrienne

Uploaded 07-09-2008
Abstract Randomized experiments are the gold standard of research design. When conducted correctly, such studies produce an unbiased estimate of the treatment effect for the experimental sample. Unfortunately, randomized experiments are rarely performed in the social sciences, largely due to insufficient resources. When a randomized experiment is not an option, social scientists turn to observational research methods to study the effects of a given treatment. Several previous studies have looked at the validity of using observational techniques to determine whether they reliably provide an accurate and consistent measure of a known treatment effect. In this paper, we re-examine the work of Hollister and Wilde (2007), which did not systematically recover the experimental benchmark through propensity score analysis using data from an experimental study on class size reduction. They concluded that observational methods performed poorly based on these results. We find that they did not develop an appropriate test, and thus the inability to recover the experimental benchmark does not reflect flaws in the methodological approach but rather stems from problems in test design.

38
Paper
Public Opinion and Senate Confirmation of Supreme Court Nominees
Kastellec, Jonathan
Lax, Jeffrey
Phillips, Justin

Uploaded 08-22-2008
Keywords Supreme Court
nominations
public opinion
multilevel models
poststratification

Abstract We study the relationship between state-level public opinion and the roll call votes of senators on Supreme Court nominees. Applying recent advances in multilevel modeling, we use national polls on nine recent Supreme Court nominees to produce state-of-the-art estimates of public support for the confirmation of each nominee in all 50 states. We show that greater public support strongly increases the probability that a senator will vote to approve a nominee, even after controlling for standard predictors of roll call voting. We also find that the impact of opinion varies with context: it has a greater effect on opposition party senators, on ideologically opposed senators, and for generally weak nominees. These results establish a systematic and powerful link between constituency opinion and voting on Supreme Court nominees.

39
Paper
Foreign Media and Protest Diffusion in Authoritarian Regimes: The Case of the 1989 East German Revolution
Kern, Holger

Uploaded 11-25-2008
Keywords Germany
media
causal inference
matching
authoritarian
collective action
social movement
Abstract Does access to foreign media facilitate the diffusion of protest in authoritarian regimes? Apparently for the first time, I test this hypothesis by exploiting a natural experiment in communist East Germany. I take advantage of the fact that West German television broadcasts could be received in most but not all parts of East Germany and conduct a matched analysis in which counties without access to West German television are matched to a comparison group of counties with West German television. Comparing these two groups of East German counties, I find no evidence that West German television affected the speed or depth of protest diffusion during the 1989 East German revolution.

40
Paper
Spike and Slab Prior Distributions for Simultaneous Bayesian Hypothesis Testing, Model Selection, and Prediction, of Nonlinear Outcomes
Pang, Xun
Gill, Jeff

Uploaded 07-13-2009
Keywords Spike and Slab Prior
Hypothesis Testing
Bayesian Model Selection
Bayesian Model Averaging
Adaptive Rejection Sampling
Generalized Linear Model
Abstract A small body of literature has used the spike and slab prior specification for model selection with strictly linear outcomes. In this setup a two-component mixture distribution is stipulated for coefficients of interest with one part centered at zero with very high precision (the spike) and the other as a distribution diffusely centered at the research hypothesis (the slab). With the selective shrinkage, this setup incorporates the zero coefficient contingency directly into the modeling process to produce posterior probabilities for hypothesized outcomes. We extend the model to qualitative responses by designing a hierarchy of forms over both the parameter and model spaces to achieve variable selection, model averaging, and individual coefficient hypothesis testing. To overcome the technical challenges in estimating the marginal posterior distributions possibly with a dramatic ratio of density heights of the spike to the slab, we develop a hybrid Gibbs sampling algorithm using an adaptive rejection approach for various discrete outcome models, including dichotomous, polychotomous, and count responses. The performance of the models and methods is assessed with both Monte Carlo experiments and empirical applications in political science.

41
Paper
Characterizing the variance improvement in linear Dirichlet random effects models
Kyung, Minjung
Gill, Jeff
Casella, George

Uploaded 09-11-2009
Keywords Dirichlet processes
mixture models
Bayesian nonparametrics
Abstract An alternative to the classical mixed model with normal random effects is to use a Dirichlet process to model the random effects. Such models have proven useful in practice, and we have observed a noticeable variance reduction, in the estimation of the fixed effects, when the Dirichlet process is used instead of the normal. In this paper we formalize this notion, and give a theoretical justification for the expected variance reduction. We show that for almost all data vectors, the posterior variance from the Dirichlet random effects model is smaller than that from the normal random effects model. Forthcoming: Statistics and Probability Letters

42
Paper
"How many zombies do you know?" Using indirect survey methods to measure alien attacks and outbreaks of the undead
Gelman, Andrew

Uploaded 03-16-2010
Abstract The zombie menace has so far been studied only qualitatively or through the use of mathematical models without empirical content. We propose to use a new tool in survey research to allow zombies to be studied indirectly without risk to the interviewers.
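The title alludes to "How many X do you know?" indirect survey methods. Assuming the tool in question is the standard network scale-up estimator (the paper's actual model may differ), the arithmetic looks roughly like the sketch below, with entirely made-up numbers.

    # Made-up illustration of a network scale-up calculation (not the paper's model):
    # reported acquaintances in a group of known size calibrate personal network size,
    # and reported zombie acquaintances are then scaled up to the population.
    population = 300_000_000          # total population (assumed)
    known_group_size = 1_000_000      # size of a reference group of known size (assumed)
    mean_known_in_group = 2.0         # average reported acquaintances in that group
    mean_zombies_known = 0.01         # average reported zombie acquaintances

    network_size = mean_known_in_group * population / known_group_size   # ~600 contacts
    zombie_estimate = mean_zombies_known / network_size * population
    print(int(zombie_estimate))       # ~5,000 zombies, given these made-up inputs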

43
Paper
Measuring Political Support and Issue Ownership Using Endorsement Experiments, with Application to the Militant Groups in Pakistan
Bullock, Will
Imai, Kosuke
Shapiro, Jacob

Uploaded 07-18-2010
Keywords endorsement experiment
survey experiment
bayesian
pakistan
militant groups
issue ownership
social desirability
Abstract To measure the levels of support for political actors (e.g., candidates and parties) and the strength of their issue ownership, survey experiments are often conducted in which respondents are asked to express their opinion about a particular policy endorsed by a randomly selected political actor. These responses are contrasted with those from a control group that receives no endorsement. This survey methodology is particularly useful for studying sensitive political attitudes. We develop a Bayesian hierarchical measurement model for such endorsement experiments, demonstrate its statistical properties through simulations, and use it to measure support for Islamist militant groups in Pakistan. Our model uses item response theory to estimate support levels on the same scale as the ideal points of respondents. The model also estimates the strength of political actors' issue ownership for specific policies as well as the relationship between respondents' characteristics and support levels. Our analysis of a recent survey experiment in Pakistan reveals three key patterns. First, citizens' attitudes towards militant groups are geographically clustered. Second, once these regional differences are taken into account, respondents' characteristics have little predictive power for their support levels. Finally, militant groups tend to receive less support in the areas where they operate.

44
Paper
Beyond LATE: A Simple Method for Recovering Sample Average Treatment Effects
Aronow, Peter
Sovey, Allison

Uploaded 03-24-2011
Keywords compliance score
instrumental variables
LATE
average treatment effect
causal inference
Abstract Political scientists frequently use instrumental variables estimators to estimate the Local Average Treatment Effect (LATE), or the average treatment effect among those who comply with treatment assignment. However, the LATE is often not the causal estimand of interest; researchers may instead be interested in the Sample Average Treatment Effect (SATE), or the average treatment effect for the entire sample. We first introduce the compliance score, a pre-treatment covariate that reflects a unit's probability of treatment compliance, to researchers in political science. We posit a maximum likelihood estimation technique for predicting compliance scores even in the presence of two-sided non-compliance. We then develop a new technique, inverse compliance score weighting, that, in conjunction with a standard IV estimator, will allow researchers to easily estimate the SATE. Finally, we estimate both the LATE and SATE for a randomized experiment designed to measure the effects of media exposure and reach striking substantive conclusions.

45
Paper
Agnostic Notes on Regression Adjustments to Experimental Data: Reexamining Freedman's Critique
Lin, Winston

Uploaded 09-02-2011
Keywords Covariate adjustment
Randomization inference
Neyman's repeated sampling approach
Sandwich estimator
Social experiments
Abstract Freedman [Adv. in Appl. Math. 40 (2008a) 180–193; Ann. Appl. Stat. 2 (2008b) 176–196] critiqued OLS regression adjustment of estimated treatment effects in randomized experiments, using Neyman’s model for randomization inference. This paper argues that in sufficiently large samples, the statistical problems he raised are either minor or easily fixed. OLS adjustment improves or does not hurt asymptotic precision when the regression includes a full set of treatment-covariate interactions. Asymptotically valid confidence intervals can be constructed with the Huber-White sandwich standard error estimator. Even the traditional OLS adjustment has benign large-sample properties when subjects are randomly assigned to two groups of equal size. The strongest reasons to support Freedman’s preference for unadjusted estimates are transparency and the dangers of specification search.
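A minimal sketch, on simulated data with assumed variable names, of the adjustment the abstract recommends: OLS of the outcome on treatment, a centered covariate, and their interaction, with Huber-White sandwich standard errors (HC2 is one common choice here; the paper discusses the sandwich estimator generally rather than a specific software implementation).

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    x = rng.normal(size=n)                      # baseline covariate (illustrative)
    t = rng.integers(0, 2, size=n)              # randomly assigned binary treatment
    y = 1.0 + 2.0 * t + 0.8 * x + 0.5 * t * x + rng.normal(size=n)

    xc = x - x.mean()                           # center the covariate
    X = sm.add_constant(np.column_stack([t, xc, t * xc]))   # full treatment-covariate interaction
    fit = sm.OLS(y, X).fit(cov_type="HC2")      # sandwich (HC2) standard errors
    print(fit.params[1], fit.bse[1])            # treatment effect estimate and robust SE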

46
Paper
Who's a Directional Voter and Who's a Proximity Voter? An Application of Finite Mixture Modeling to Issue Voting in the 2008 American Presidential Election
Kropko, Jonathan

Uploaded 07-15-2012
Keywords issue voting
directional
proximity
ANES
finite mixture modeling
multiple imputation
Abstract This project uses new methodology to help settle a longstanding debate in American politics: whether the proximity or the directional distance is more appropriate for voting models in Presidential elections, whether the two distances better fit different subsets of the electorate, and, if so, what characterizes the voters for whom each distance fits best. Unlike previous attempts to judge between the directional and proximity models, which have used summary statistics generated at the level of the whole sample to make inferences, this study compares the fit of the models for each individual observation. A finite mixture model, as recently described by Imai and Tingley (2012), estimates the probability that each observation could have been generated by each competing model. These probabilities can then be modeled using other covariates. Using the 2008 American National Election Study, I estimate the probability that each voter is using each kind of issue distance, and I test the hypothesis that voters with higher levels of political sophistication are more likely to evaluate candidates using a proximity model, while voters with lower levels of sophistication are more likely to evaluate candidates using a directional model. While strong evidence suggests that some voters are directional and some are proximity voters, no evidence is found that sophistication influences the probability that a given voter is directional or proximity. In addition, as in previous studies, the relative strength of the directional and proximity models is found to depend crucially on modeling decisions, especially the use of each candidate's average placement in the sample versus each respondent's idiosyncratic placement of the candidates.
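A toy sketch of the finite-mixture idea on simulated data: each voter's candidate evaluation follows either a proximity or a directional utility, and an EM loop recovers the mixing proportion and per-observation component probabilities. The candidate position, error scale, and true mixing share are made-up assumptions, not estimates from the ANES analysis.

```python
# Sketch: two-component mixture of proximity and directional utility models, fit by EM.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
voter = rng.normal(0, 1, n)              # voter positions on a left-right scale
cand = 0.8                               # hypothetical candidate position (neutral point at 0)
prox = -(voter - cand) ** 2              # proximity utility
direc = voter * cand                     # directional utility
true_dir = rng.random(n) < 0.4           # assumed true share of directional voters
y = np.where(true_dir, direc, prox) + rng.normal(0, 0.5, n)

def norm_pdf(resid, sigma):
    return np.exp(-0.5 * (resid / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

share_dir, sigma = 0.5, 1.0
for _ in range(200):
    # E-step: responsibility that each observation came from the directional model
    num = share_dir * norm_pdf(y - direc, sigma)
    gamma = num / (num + (1 - share_dir) * norm_pdf(y - prox, sigma))
    # M-step: update the mixing weight and the shared error SD
    share_dir = gamma.mean()
    resid2 = gamma * (y - direc) ** 2 + (1 - gamma) * (y - prox) ** 2
    sigma = np.sqrt(resid2.mean())

print(f"estimated share of directional voters: {share_dir:.2f}")
```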

47
Paper
How can soccer improve statistical learning?
Filho, Dalson
Rocha, Enivaldo
Paranhos, Ranulfo
Júnior, José

Uploaded 03-19-2014
Keywords quantitative methods
linear regression
soccer
Abstract This paper presents an active classroom exercise focused on the interpretation of ordinary least squares regression coefficients. Methodologically, students analyze data on Brazilian soccer matches and formulate and test classical hypotheses regarding home-team advantage. Technically, our framework is easily adapted to other sports and has no implementation cost. In addition, the exercise is easy for the instructor to conduct and highly enjoyable for the students. The intuitive approach also facilitates understanding of the practical application of linear regression.
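A minimal sketch of the classroom exercise with made-up match data (the coefficients and sample size are illustrative, not the paper's Brazilian data): regress the home team's goal difference on a quality gap, so that the intercept is interpreted as the home advantage between equally matched teams.

```python
# Sketch: OLS interpretation exercise for home advantage on simulated matches.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_matches = 380
quality_diff = rng.normal(0, 1, n_matches)            # home minus away team quality
goal_diff = 0.4 + 0.7 * quality_diff + rng.normal(0, 1.3, n_matches)

X = sm.add_constant(quality_diff)
res = sm.OLS(goal_diff, X).fit()
# Intercept: expected goal difference between equally strong teams (home advantage).
print(res.params, res.pvalues)
```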

48
Paper
The Most Liberal Senator: Analyzing and Interpreting Congressional Roll Calls
Clinton, Joshua
Jackman, Simon
Rivers, Doug

Uploaded 05-12-2004
Keywords ideal points
roll call voting
2004 presidential election
Abstract The non-partisan National Journal recently declared Senator John Kerry to be the "top liberal" in the Senate based on an analysis of 62 roll calls in 2003. Although widely reported in the media (and the subject of a debate among the Democratic presidential candidates), we argue that this characterization of Kerry is misleading in at least two respects. First, when we account for the "margin of error" in the voting scores -- which is considerable for Kerry given that he missed 60% of the National Journal's key votes while campaigning -- we discover that the probability that Kerry is the "top liberal" is only .30, and that we cannot reject the conclusion that Kerry is only the 20th most liberal senator. Second, we compare the position of President Bush on these key votes; including the President's announced positions on these votes reveals the President to be just as conservative as Kerry is liberal (i.e., both candidates are extreme relative to the 108th Senate). A similar conclusion holds when we replicate the analysis using all votes cast in the 107th Senate. A more comprehensive analysis than that undertaken by National Journal (including an accounting of the margins of error in voting scores) shows that although Kerry belongs to the most liberal quintile of the contemporary Senate, Bush belongs to the most conservative quintile.
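A small sketch of the "margin of error" point: given posterior draws of senators' ideal points (simulated here with made-up means and uncertainties, larger for senators who missed votes), the probability that a given senator is the most liberal is simply the share of draws in which that senator has the lowest ideal point.

```python
# Sketch: rank probabilities from simulated posterior draws of ideal points.
import numpy as np

rng = np.random.default_rng(5)
n_draws, n_senators = 4000, 100
means = rng.normal(0, 1, n_senators)            # hypothetical posterior means
sds = rng.uniform(0.1, 0.6, n_senators)         # wider for senators with many missed votes
draws = rng.normal(means, sds, (n_draws, n_senators))

k = means.argmin()                              # senator with the most liberal posterior mean
pr_top = (draws.argmin(axis=1) == k).mean()
print(f"Pr(senator {k} is the most liberal) = {pr_top:.2f}")
```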

49
Paper
Unanticipated Delays: A Unified Model of Position Timing and Position Content
Boehmke, Frederick

Uploaded 12-09-2003
Keywords duration
discrete choice
seemingly unrelated
position taking
NAFTA
Abstract On potentially contentious votes or when the margin of an upcoming vote is expected to be small, public position announcements by elected representatives may be strategically linked to position content and ultimately, to vote choice. Strategic position timing may occur when legislators announce early in order to sway others' vote choice; it may occur late when legislators stall in order to gain more information or hope that a close margin will make their vote valuable to participants willing to make side payments. Since intentions behind delay may often be unobserved or even unobservable, existing empirical analyses are unable to capture them. In this paper I argue that unobserved factors that influence position timing are related to unobserved factors influencing position content. To test this prediction, I develop a seemingly unrelated discrete-choice duration model that estimates the relationship between unobserved factors in the two processes. I then estimate this model using data on position timing and position content from the vote for the North American Free Trade Agreement. The results provide clear evidence that the two processes are linked and are consistent with my arguments about the sources of unanticipated delay.
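A toy simulation of the paper's core idea, under the assumption of jointly normal unobservables with an arbitrary correlation (rho = 0.6 is illustrative): the same unobserved factor enters both the duration to a position announcement and the announced position, so timing and content are correlated even after conditioning on observables. This is not the authors' seemingly unrelated discrete-choice duration estimator, only a demonstration of the mechanism.

```python
# Sketch: correlated unobservables linking announcement timing and position content.
import numpy as np

rng = np.random.default_rng(6)
n = 5000
rho = 0.6                                        # assumed correlation of unobservables
cov = [[1.0, rho], [rho, 1.0]]
e_time, e_vote = rng.multivariate_normal([0, 0], cov, n).T

x = rng.normal(0, 1, n)                          # observed legislator covariate
log_duration = 1.0 - 0.5 * x + e_time            # time until the announcement
vote_yes = (0.3 + 0.4 * x + e_vote) > 0          # announced position

# Early announcers vote yes more often because e_time and e_vote are linked.
early = log_duration < np.median(log_duration)
print(vote_yes[early].mean(), vote_yes[~early].mean())
```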

50
Paper
Presidential Elections and the Stock Market: Comparing Markov-Switching and (FIE)GARCH Models of Stock Volatility
Leblang, David
Mukherjee, Bumba

Uploaded 07-07-2003
Keywords Volatility
Markov-Switching
ARCH
Political Business Cycle
Stock Markets
Abstract Existing theoretical research on electoral politics and financial markets predicts that when investors expect left parties (the Democrats in the US, Labour in the UK) to win elections, market volatility increases. In addition, current econometric research on stock market volatility suggests that Markov-Switching models provide more accurate volatility forecasts and fit stock price volatility data better than linear or non-linear GARCH (Generalized Autoregressive Conditional Heteroscedasticity) models. We take issue with both of these claims. We construct a formal model which predicts that if traders anticipate that the Democratic candidate will win the Presidential election, stock market volatility decreases. Using two data sets from the 2000 Presidential election, we test our claim by estimating several GARCH, Exponential-GARCH (EGARCH), Fractionally Integrated Exponential-GARCH (FIEGARCH) and Markov-Switching models. We also conduct extensive out-of-sample forecasting tests to evaluate these competing statistical models. Results from the out-of-sample forecasts show, in contrast to prevailing claims, that GARCH and EGARCH models provide substantially more accurate forecasts than the Markov-Switching models. Estimates from all the competing statistical models support the predictions from our formal model.
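An illustrative sketch in the spirit of the abstract, on a simulated daily returns series rather than the 2000 election data, fitting GARCH and EGARCH with the third-party arch package and producing one-step-ahead variance forecasts; the FIEGARCH and Markov-Switching comparisons from the paper are omitted here.

```python
# Sketch: simulate a GARCH(1,1) return series, then fit GARCH and EGARCH and forecast.
import numpy as np
from arch import arch_model   # third-party 'arch' package

rng = np.random.default_rng(7)
n = 1500
omega, alpha, beta = 0.05, 0.08, 0.9             # illustrative GARCH(1,1) parameters
sigma2 = np.empty(n)
r = np.empty(n)
sigma2[0] = omega / (1 - alpha - beta)
r[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
for t in range(1, n):                            # volatility clustering by construction
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

for vol in ("GARCH", "EGARCH"):                  # two of the volatility models discussed
    res = arch_model(r, vol=vol, p=1, q=1).fit(disp="off")
    fcast = res.forecast(horizon=1)
    print(vol, float(fcast.variance.iloc[-1, 0]))  # one-step-ahead variance forecast
```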

