
Search Results

The results below are based on the search criterion 'Kullback'
Total number of records returned: 874

What Divides Us?: The Image and Organization of Political Science
Grant, Tobin J.

Uploaded 05-25-2004
Keywords political science
multidimensional scaling
sociology of science
Abstract The dominant image of political science presented in the literature is one of a field divided along a methodological dimension. According to this hard-soft model, political scientists vary primarily in their use of "hard" scientific methods and approaches to politics and "soft" humanistic means of understanding politics. I test whether this image accurately depicts the organization of political science. I estimate multidimensional scalings of political science based on the organization of political scientists within APSA and the organization of the APSA annual meeting. These models locate the primary division within the discipline as one based on the types of political phenomena examined, with international relations scholars studying state actors standing at one end of a continuum and American politics scholars, particularly those studying local politics, at the other. The methodological differences are important but secondary to this first dimension. This model is consistent for the organization of both political scientists and their scholarship.

Erratum for 'The Method Behind the Madness: Presidential Electoral College Strategies, 1988-96,' JOP Vol. 61, No. 4, November 1999: 893-913.
Shaw, Daron

Uploaded 07-30-2003
Keywords elections
Abstract A recent analysis of my 1999 JOP article on Electoral College strategies identifies two important errors. In both cases, tables containing critical multivariate analyses erroneously report preliminary and methodologically inappropriate estimations that I conducted for an earlier version of the paper. The corrected results are presented here, along with some further clarifications about the variables and estimation techniques. These corrected results continue to support the central arguments of the article, although one discrepancy between the old and new findings is worth noting.

Agglomerative Clustering of Rankings Data, with an Application to Prison Rodeo Events
Zorn, Christopher

Uploaded 07-03-2003
Keywords Cluster analysis
ordinal data
Abstract This paper considers the problem of assessing item similarity on the basis of rankings data, that is, data on ordinal outcomes. I discuss a modification to the standard dissimilarity measure used in agglomerative clustering which addresses the ordinal nature of ranking data. I then apply this alternative to cluster nine events comprising the Angola, Louisiana prison rodeo.
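The ordinal-dissimilarity idea can be sketched in a few lines of Python. The measure below is the plain Spearman footrule between two items' rank profiles, a stand-in for (not necessarily identical to) the paper's modified dissimilarity, and the judges, events, and rankings are invented:

```python
# Sketch: agglomerative clustering of ranked items using an ordinal
# dissimilarity (Spearman footrule between rank profiles), with a simple
# average-linkage merge loop. Illustrative only; not the paper's exact measure.

def footrule(a, b):
    """Sum of absolute rank differences across judges (ordinal dissimilarity)."""
    return sum(abs(x - y) for x, y in zip(a, b))

def agglomerate(ranks, k):
    """Average-linkage agglomerative clustering down to k clusters.
    ranks: {item: [rank from judge 1, rank from judge 2, ...]}"""
    clusters = [[item] for item in ranks]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = sum(footrule(ranks[a], ranks[b])
                        for a in clusters[i] for b in clusters[j])
                d /= len(clusters[i]) * len(clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Three hypothetical judges rank four events; A/B and C/D get similar ranks.
ranks = {"A": [1, 2, 1], "B": [2, 1, 2], "C": [3, 4, 4], "D": [4, 3, 3]}
print(agglomerate(ranks, 2))
```

With these toy rankings, the events with similar rank profiles (A with B, C with D) merge first.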

Rational Voting
Gelman, Andrew
Kaplan, Noah
Edlin, Aaron

Uploaded 08-02-2002
Keywords elections
rational choice
sociotropic voting
Abstract By separating the assumptions of "rationality" and "selfishness," we show that it can be rational to vote if one is motivated by the effects of the election on society as a whole. For voters with "social" preferences the expected utility of voting is approximately independent of the size of the electorate, suggesting that rational voter turnout can be substantial even in large elections. Less important elections are predicted to have lower turnout, but a feedback mechanism keeps turnout at a reasonable level under a wide range of conditions. We show how this feedback mechanism distinguishes voting from other free-rider problems. Our theory is consistent with several empirical findings in political science, including survey results suggesting that people vote based on perceived social benefit, the positive relation between turnout and (anticipated) closeness of the election, other forms of political participation, and declining response rates in opinion polls. Since our "social" theory of rational voting is instrumental, it creates a rich foundation to study how people vote as well as why. A rational person should make voting decisions almost entirely based on the perceived social benefits of the election outcome.
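The core expected-utility claim can be illustrated numerically. This is a stylized stand-in, not the paper's formal model: pivot probability is taken to decay like 1/N while the social benefit of a better outcome scales with N, so their product is flat in electorate size.

```python
# Stylized illustration: with "social" preferences, expected utility of voting
# is roughly independent of electorate size. The 1/N pivotality and the
# linear-in-N benefit are simplifying assumptions, not the paper's derivation.

def p_decisive(n, closeness=1.0):
    # Toy pivot probability, proportional to closeness / n.
    return closeness / n

def expected_utility(n, per_capita_benefit=1.0, cost=0.0):
    social_benefit = per_capita_benefit * n  # benefit to society scales with n
    return p_decisive(n) * social_benefit - cost

for n in (10_000, 1_000_000, 100_000_000):
    print(n, expected_utility(n))
```

Every electorate size yields the same expected utility here because the stylized 1/N pivot probability exactly offsets the N-fold social benefit.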

Moving Mountains: Bayesian Forecasting As Policy Evaluation
Brandt, Patrick T.
Freeman, John R.

Uploaded 04-24-2002
Keywords Bayesian vector autoregression
policy evaluation
conditional forecasting
Abstract Many policy analysts fail to appreciate the dynamic, complex causal nature of political processes. We advocate a vector autoregression (VAR) based approach to policy analysis that accounts for various multivariate and dynamic elements in policy formulation and for both dynamic and specification uncertainty of parameters. The model we present is based on recent developments in Bayesian VAR modeling and forecasting. We present an example based on work in Goldstein et al. (2001) that illustrates how a full accounting of the dynamics and uncertainty in multivariate data can lead to more precise and instructive results about international mediation in Middle Eastern conflict.

Practical Maximum Likelihood
Altman, Micah
McDonald, Michael P.

Uploaded 07-22-2001
Keywords maximum likelihood
statistical computation
numerical stability
Abstract Maximum likelihood estimation is now widely used in political science, providing a general statistical framework in which we build and test increasingly complex models of politics. The modern development of maximum likelihood is attributable to Fisher, and the approach dominated mathematical statistics during the twentieth century. More attention has been paid to the development of complex statistical models than to the necessary details of their estimation. In this article we discuss some of the art and practice of MLE. Estimation: we discuss how to choose algorithms for ML estimation, methods for setting algorithm parameters appropriately, and how to formulate likelihood functions for efficient and accurate estimation. Tests of estimation: methods of statistical inference assume that a global maximum of the likelihood function has been found. There are, however, few general guarantees that likelihood functions are single-peaked. Furthermore, no MLE software currently in use by political scientists verifies that a global maximum of the likelihood function has been reached. We provide tests of global optimality, drawing from current research in statistics, econometrics, and computer science. MLE-based inference: standard errors produced by MLEs can be misleading, and lead to unreliable inferences, when the likelihood function is not well behaved around its maximum. We illustrate the consequences of unreliable methods, and discuss more robust methods of calculating standard errors.
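One of the global-optimality checks described, restarting the optimizer from dispersed starting values and comparing the optima found, can be sketched as follows. The bimodal "log-likelihood" and the crude hill-climber are invented for illustration; real applications would use a proper optimizer.

```python
# Multistart check for a multi-peaked likelihood: if restarts from dispersed
# starting values do not all converge to the same optimum, a single run may
# have stopped at a local, non-global peak. Toy objective, for illustration.

def loglik(b):
    # Two local maxima, one near b = -0.96 (local) and one near b = 1.04 (global).
    return -((b * b - 1.0) ** 2) + 0.3 * b

def hill_climb(b, step=0.1, tol=1e-6):
    """Crude derivative-free local maximizer: step, else shrink the step."""
    while step > tol:
        if loglik(b + step) > loglik(b):
            b += step
        elif loglik(b - step) > loglik(b):
            b -= step
        else:
            step /= 2.0
    return b

starts = [-3.0, -1.0, 0.0, 1.0, 3.0]
optima = sorted({round(hill_climb(s), 3) for s in starts})
print(optima)  # more than one entry: stopping at the first optimum would mislead
```

Two distinct optima appear, so inference based on whichever peak an optimizer happened to find first would be unreliable.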

Using Auxiliary Data to Estimate Selection Bias Models
Boehmke, Frederick

Uploaded 07-06-2001
Keywords selection bias
two-stage estimation
survey design
interest groups
Abstract Recent work has made progress in estimating models involving selection bias of a particularly strong nature: all nonrespondents are unit nonresponders, meaning that no data are available for them. These models are reasonably successful in circumstances where the dependent variable of interest is continuous, but they are less practical empirically when it is latent and only discrete outcomes or choices are observed. In this paper I develop a method to estimate these models that is much more practical in terms of estimation. The model uses a small amount of auxiliary information to estimate the selection-equation parameters, which are then held fixed to estimate the parameters of the equation of interest in a maximum likelihood setting. After presenting Monte Carlo analysis to support the model, I apply the technique to a substantive problem: which interest groups are likely to be involved in support of potential initiatives to achieve their policy goals.

Comparing GEE and Robust Standard Errors, with an Application to Judicial Voting
Zorn, Christopher

Uploaded 11-27-2000
Keywords GEE
panel data
robust variance
Abstract Implicit in most statistical analyses is the assumption that observations are conditionally independent; this claim has important implications, both statistical and substantive, for the conclusions we draw. I outline and compare two alternatives for addressing heterogeneity due to correlated data: the use of "robust" (or "heteroskedasticity-corrected") standard errors, and application of the method of generalized estimating equations ("GEEs"). I provide an example, based on an earlier study of judicial voting in search and seizure cases (Segal 1986), and use the example to discuss practical considerations in choosing among the various variance estimators in the presence of correlated data.
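The "robust standard error" half of this comparison can be sketched with a toy one-regressor model; the data are invented, not the judicial-voting application, and the GEE half would need a full estimating-equations routine, so it is omitted here.

```python
# Toy contrast between classical OLS standard errors and White's
# heteroskedasticity-robust (HC0 "sandwich") standard errors for the slope
# in a one-regressor model with intercept. Invented data, for illustration.

def ols_with_ses(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    beta = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    alpha = my - beta * mx
    resid = [yi - alpha - beta * xi for xi, yi in zip(x, y)]
    # Classical: sigma^2 / Sxx, assuming homoskedastic, independent errors.
    classical = (sum(e * e for e in resid) / (n - 2) / sxx) ** 0.5
    # Robust (HC0): sum of (xi - mx)^2 * e_i^2, divided by Sxx^2.
    robust = (sum(((xi - mx) ** 2) * e * e
                  for xi, e in zip(x, resid)) / sxx ** 2) ** 0.5
    return beta, classical, robust

# Errors whose magnitude grows with x, so the homoskedasticity assumption fails.
x = list(range(1, 11))
y = [0.5 * xi + (0.3 * xi if xi % 2 else -0.3 * xi) for xi in x]
beta, se_c, se_r = ols_with_ses(x, y)
print(round(beta, 3), round(se_c, 3), round(se_r, 3))
```

With heteroskedastic errors the two estimators disagree noticeably; which one is larger depends on how the error variance aligns with the leverage of each observation, which is exactly why the choice among variance estimators deserves the care the abstract describes.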

Bayesian and Frequentist Inference for Ecological Inference: The RxC Case
Rosen, Ori
King, Gary
Jiang, Wenxin
Tanner, Martin A.

Uploaded 07-25-2000
Keywords EM
least squares
ecological inference
Abstract In this paper we propose Bayesian and frequentist approaches to ecological inference, based on RxC contingency tables, including a covariate. The proposed Bayesian model extends the binomial-beta hierarchical model developed by King, Rosen and Tanner (1999) from the 2x2 case to the RxC case. As in the 2x2 case, the inferential procedure employs Markov chain Monte Carlo (MCMC) methods. As such, the resulting MCMC analysis is rich but computationally intensive. The frequentist approach, based on first moments rather than on the entire likelihood, provides quick inference via nonlinear least-squares, while retaining good frequentist properties. The two approaches are illustrated with simulated data, as well as with real data on voting patterns in Weimar Germany. In the final section of the paper we provide an overview of a range of alternative inferential approaches which trade off computational intensity for statistical efficiency.

Anatomy of a Third-Party Victory: Electoral Support for Jesse Ventura in the 1998 Minnesota Gubernatorial Election
Lacy, Dean
Monson, Quin

Uploaded 04-24-2000
Keywords vote-stealing
Condorcet winner
multinomial probit
Abstract [not transcribed]

Gender and Tax
Alvarez, R. Michael
McCaffery, Edward J.

Uploaded 02-17-1999
Keywords taxation
gender gap
marriage penalty
issue preferences
Abstract This paper addresses two empirical questions in the literature on taxation and law. The first is whether women support redistributive tax policies more than men do. The second is why there exists a strong bias in the current tax code against modern, two-earner families, a bias which is usually understood as falling primarily on women. Our analysis of data from the 1996 presidential election suggests a single answer to both questions. Women and men often answer direct survey questions about their attitudes towards matters of tax similarly, while continuing to show a marked gender gap in their actual voting behavior when tax is one of several issues to be considered.

Time Series Models for Compositional Data
Brandt, Patrick T.
Monroe, Burt L.
Williams, John T.

Uploaded 04-13-1999
Keywords compositional data
vector autoregression
Abstract Who gets what? When? How? Data that tell us who got what are compositional data - they are proportions that sum to one. Political science is, unsurprisingly, replete with examples: vote shares, seat shares, budget shares, survey marginals, and so on. Data that also tell us when and how are compositional time series data. Standard time series models are often used, with detrimental consequences, to model compositional time series. We examine methods for modeling compositional data generating processes using vector autoregression (VAR). We then use such a method to reanalyze aggregate partisanship in the United States.
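A common way to respect the sum-to-one constraint, used here as a generic illustration rather than as the paper's own VAR specification, is to model log-ratios of the shares rather than the shares themselves:

```python
# Sketch: map compositional data onto an unconstrained scale with the
# additive log-ratio (ALR) transform, model the transformed series (e.g.,
# with a VAR), then map predictions back to valid proportions.
import math

def alr(shares):
    """Additive log-ratio: log(p_i / p_last) for the first K-1 components."""
    base = shares[-1]
    return [math.log(p / base) for p in shares[:-1]]

def alr_inverse(y):
    """Inverse ALR: recover proportions that are positive and sum to one."""
    expy = [math.exp(v) for v in y] + [1.0]
    total = sum(expy)
    return [v / total for v in expy]

shares = [0.5, 0.3, 0.2]      # e.g., three parties' vote shares, summing to 1
y = alr(shares)               # unconstrained coordinates, safe for VAR modeling
back = alr_inverse(y)         # round trip recovers the original composition
print([round(p, 6) for p in back])
```

Whatever values a model predicts on the log-ratio scale, the inverse transform always returns a valid composition, which is the constraint standard time series models ignore.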

Inference from Response-Based Samples with Limited Auxiliary Information
King, Gary
Zeng, Langche

Uploaded 07-09-1999
Keywords rare events
logistic regression
binary dependent variables
bias correction
endogenous selection
selection bias
Abstract This paper is for the methods conference; it is related to "Logistic Regression in Rare Events Data," also by us; the conference presentation will be based on both papers. We address a disagreement between epidemiologists and econometricians about inference in response-based (a.k.a. case-control, choice-based, retrospective, etc.) samples. Epidemiologists typically make the rare event assumption (that the probability of disease is arbitrarily small), which makes the relative risk easy to estimate via the odds ratio. Econometricians do not like this assumption since it is false and implies that attributable risk (a.k.a. a first difference) is zero, and they have developed methods that require no auxiliary information. These methods produce bounds on the quantities of interest that, unfortunately, are often fairly wide and always encompass a conclusion of no treatment effect (relative risks of 1 or attributable risks of 0) no matter how strong the true effect is. We simplify the existing bounds for attributable risk, making it much easier to estimate, and then suggest one possible resolution of the disagreement by providing a method that allows researchers to include easily available information (such as that the fraction of the population with the disease falls within at most [.001,.05]); this method considerably narrows the bounds on the quantities of interest. We also offer software to implement the methods suggested. We would very much appreciate any comments you might have!
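The rare-event assumption the abstract describes can be checked numerically: with a true relative risk of 2, the odds ratio approaches 2 as the baseline probability of the outcome shrinks. The numbers below are illustrative, not the paper's data.

```python
# Illustration of the epidemiologists' rare-event assumption: as the outcome
# probability shrinks, the odds ratio converges to the relative risk
# (true relative risk fixed at 2 here).

def odds(p):
    return p / (1 - p)

for p0 in (0.2, 0.02, 0.002):   # baseline risk; treated-group risk is 2 * p0
    p1 = 2 * p0
    print(p0, round(odds(p1) / odds(p0), 4))  # odds ratio approaches 2 as p0 shrinks
```

At a baseline risk of 0.2 the odds ratio badly overstates the relative risk; at 0.002 the two are nearly identical, which is why the assumption is harmless for rare diseases and contested elsewhere.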

Coordination, Moderation and Institutional Balancing in American House Elections at Midterm
Mebane, Walter R.
Sekhon, Jasjeet

Uploaded 09-02-1999
Keywords congressional elections
rational expectations
voter equilibrium
midterm cycle
stochastic choice model
Abstract Individuals' turnout decisions and vote choices for the House of Representatives have been coordinated in recent midterm election years, with each eligible voter (each elector) using a strategy that features policy moderation. Coordination is defined as a rational expectations equilibrium among electors, in which each elector has both common knowledge and private information about the election outcome. Stochastic choice models estimated using individual-level data from the American National Election Study Post-Election Surveys of years 1978-1998 support coordination, but a model in which electors act non-strategically to moderate policy has very similar behavioral implications and also works well. The empirical coordinating model satisfies the fixed point condition that defines the common knowledge expectation electors have about the election outcome in the equilibrium of the theoretical model. Both the coordinating and non-strategic models are capable of generating a midterm cycle in which the President's party usually loses vote share at midterm. Both models correctly flag 1998 as an exception to that pattern: the Republican party had policy positions that were too conservative for most electors. Moderation at midterm has usually been based on electors' expectations that the House will dominate the President in determining post-election policy.

Regression Analysis and the Philosophy of Social Sciences -- a Critical Realist View
Ron, Amit

Uploaded 12-20-1999
Keywords Regression analysis
critical realism
philosophy of social science
Abstract This paper challenges the connection conventionally made between regression analysis and the empiricist philosophy of science and offers an alternative explication of the way regression analysis is practiced. The alternative explication is based on critical realism, a competing approach to empiricism in the field of philosophy of science. The paper argues that critical realism can better explicate the way in which scientists 'play' with the data as part of the process of inquiry. The practice of regression analysis is understood by the critical realist explication as a post hoc attempt to identify a restricted closed system. The gist of successful regression analysis is not to offer a law-like statement but to bring forth evidence of an otherwise hidden mechanism. Through the study of methodological debates regarding regression analysis, it is argued that critical realism can offer conceptual tools for better understanding the core issues that are at stake in these debates.

Duration Models and Proportional Hazards in Political Science
Box-Steffensmeier, Janet M.

Uploaded 04-20-1998
Keywords (none submitted)
Abstract In recent years political scientists have increasingly adopted a wide range of techniques for modeling duration data. A key assumption of all these approaches is that the hazard ratios (i.e., the conditional relative risks across substrata) are proportional to one another, and that this proportionality is maintained over time. Estimation of proportional hazards (PH) models when in fact hazards are non-proportional results in coefficient biases and decreased power of significance tests. In particular, misspecified PH models will overestimate the impact of variables whose associated hazards are increasing, while coefficient estimates for covariates in which the hazards are converging will be biased towards zero. We investigate the proportionality assumption of two widely used duration models, the Weibull parametric model and Cox's (1972) semiparametric approach, in the context of a duration model of Supreme Court retirements. We address the potential problems with incorrectly assuming proportionality, illustrate a range of techniques for testing the proportionality assumption, and conclude with a number of means for accurately and efficiently estimating these models in the presence of non-proportional hazards.
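The proportionality assumption is easy to see in the Weibull case the abstract mentions: two Weibull hazards are proportional exactly when they share a shape parameter. The parameter values below are invented for illustration.

```python
# Weibull hazard h(t) = (shape/scale) * (t/scale)^(shape-1).
# Two groups satisfy proportional hazards only if their shape parameters
# match: then h1(t)/h2(t) is constant in t; otherwise the ratio drifts.

def weibull_hazard(t, shape, scale):
    return (shape / scale) * (t / scale) ** (shape - 1)

for t in (1.0, 5.0, 10.0):
    same = weibull_hazard(t, 1.5, 2.0) / weibull_hazard(t, 1.5, 3.0)  # constant in t
    diff = weibull_hazard(t, 1.5, 2.0) / weibull_hazard(t, 0.8, 2.0)  # varies with t
    print(t, round(same, 4), round(diff, 4))
```

The first ratio is identical at every t, while the second grows with t, which is precisely the non-proportionality that biases PH coefficient estimates.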

Modeling Time Series Count Data: A State-Space Approach to Event Counts
Brandt, Patrick T.
Williams, John T.
Fordham, Benjamin

Uploaded 07-08-1998
Keywords Poisson models
event counts
state-space models
Kalman filter
non-normal time series
Abstract This is a revised version, dated July 16, 1998. Time series count data is prevalent in political science. We argue that political scientists should employ time series methods to analyze time series count data. A simple state-space model is presented that extends the Kalman filter to count data. The properties of this model are outlined and further evaluated by a Monte Carlo study. We then show how time series of counts present special problems by turning to two replications: the number of hospital deaths that are the subject of a recent criminal court case, and the Pollins (1996) MIDs data from international relations.

The Number of Parties: New Evidence from Local Elections
Benoit, Kenneth

Uploaded 08-11-1998
Keywords electoral systems
regression analysis
political parties
Abstract Theory: Duverger's "Law" concerning the structural and psychological consequences of electoral rules has been much studied in both single cases and in multinational samples, but these studies suffer from several common theoretical and empirical shortcomings that make their estimates suspect. Besides resorting to experimental data, another solution is to select a carefully controlled election dataset where the precise nature of the processes generating the data is understood. Local elections provide a means to control social cleavages as well as to provide a potentially large number of observations. Hypotheses: The size of electoral districts, as well as the type of electoral formula, will influence the number of parties that compete, the concentration of support for these parties, and the number of parties that win seats, even when the elections are confined to one country at the subnational level. In addition, the greater number of observations should provide very precise estimates of these effects. Methods: Regression analysis of district magnitude with an interactive term characterizing rules as proportional or plurality. The data come from 8,377 Hungarian local elected bodies consisting of municipal councils, county councils, town councils, and mayors. Results: The results extend previous research on Duverger's effects, providing more precise estimates that may be compared directly to previous results. In addition, the analysis of rare multi-member plurality elections reveals a counter-intuitive result about candidate and party entry in response to these rules, suggesting several directions for future investigation of MMP rules.

Do Majority-Minority Districts Maximize Black Representation in Congress?
Epstein, David
O'Halloran, Sharyn
Cameron, Charles

Uploaded 01-01-1995
Keywords districting
voting rights act
minority representation
electoral systems
semi-parametric estimation
Abstract This paper investigates the question of whether or not concentrated minority districts, which increase the probability that minorities are elected to office but decrease minority influence elsewhere, maximize overall black representation in Congress. We address this question in a three-step process: we first estimate representation equations that link constituency preferences to the actions of their representative; then electoral equations that link constituency characteristics to the type of representative elected; and finally combine these two effects to simulate the districting strategies that maximize substantive minority representation. We find that outside of the South, dividing minority voters equally across districts maximizes representation, while in the South the optimal scheme creates concentrated districts on the order of 47% black voting age population. We also conclude that minority candidates have substantial chances of being elected from districts with less than 50% minority voters, and that in the face of a national Republican tide, optimal districting schemes will concentrate minority voters less, rather than more.

Information and American Attitudes Toward Bureaucracy
Alvarez, R. Michael
Brehm, John

Uploaded 00-00-0000
Keywords discrete choice
ordered logit
Internal Revenue Service
Abstract The exploration of American attitudes towards the Internal Revenue Service joins an unusual pair of research domains: public opinion and public administration. Public administration scholars contend that the hostility Americans show towards "bureaucracy" stems from the contradictory expectations Americans have for bureaucratic performance. Drawing upon a survey commissioned by the IRS and conducted in 1987 just after the passage of the Tax Reform Act, we explore attitudes towards the performance of the IRS in eight categories. Using a new heteroskedastic ordinal logit technique, we demonstrate (1) that it is overwhelmingly a single expectation of flexibility that governs attitudes towards the IRS; (2) that these expectations are not in contradiction; and (3) that domain-specific information sharply focuses respondent attitudes towards bureaucracy.

Effectiveness in the U.S. House of Representatives: Struggle, Strategy, and Success
Box-Steffensmeier, Janet M.
Sinclair, Valeria

Uploaded 00-00-0000
Keywords (none submitted)
Abstract Of the thousands of bills sponsored in any given Congress, only a small percentage actually become public law. Despite this fact, members of Congress continue to enter the legislative battlefield, some better prepared for battle than others. We examine what distinguishes House members who are most active in bill sponsorship from those who are least active. Furthermore, we investigate why some members are successful in pursuing their legislative agendas when others' initiatives fall by the wayside. Much of the literature points to the importance of institutional factors, such as party affiliation, leadership, and seniority in determining a member's degree of legislative effectiveness. Extending this primarily institutional focus, we investigate the determinants of House members' activity and the effect that their individual strategic choices have on improving the probability of their bills surviving as they wind through the legislative maze. Using data collected from the 103rd Congress (1993-1994), we construct a multivariate model estimating the impact of House members' decisions about a number of strategic issues, including legislative specialization in the multiple referral environment of today's Congress, floor speaking, networking, geographical focus, and timing, on the number of bills they sponsor and the success of their legislative agendas.

Generic Tests for a Nonlinear Model of Congressional Campaign Dynamics
Mebane, Walter R.

Uploaded 08-25-1996
Keywords Congress
differential equations
Hopf bifurcation
non-nested hypothesis tests
Cox tests
nonlinear models
Abstract I develop a statistical model based on a generic third-order Taylor series approximation for differential equation systems that exhibit Hopf bifurcation in order to use district-level cross-sectional data to test a nonlinear dynamic formal model of campaign contributions, district service and voting during and after a U.S. House election. The statistical model represents the key nonlinearities of the formal model's Cournot-Nash equilibrium in a highly robust fashion. For data from the years 1984-85 and 1986-87, non-nested hypothesis tests (implemented using a calibrated, parametric bootstrap method) show that under assumptions of multivariate normality, the nonlinear model is vastly superior to the generic linear alternative defined by the sample mean vector and covariance matrix.

Precise, Second-Order Correct Estimates of the Natural Rate of Unemployment via Bootstrap Calibration
Sekhon, Jasjeet

Uploaded 09-11-1997
Keywords Natural Rate of Unemployment
Confidence Interval
Marginal Likelihood
Conditional Likelihood
Profile Likelihood
Abstract The natural rate of unemployment, which is usually considered to be the nonaccelerating inflation rate of unemployment (NAIRU), is an important and often used economic variable (Gordon 1997). In this paper I present second-order correct $O_{p}(n^{-2})$ confidence regions for estimates of the NAIRU, obtained via bootstrap calibration. These confidence regions are three times smaller than those provided in recent econometric work (Staiger, Stock and Watson 1997a, 1997b). The confidence regions are sufficiently precise to support use of the NAIRU for a variety of analytical and policy purposes, including monetary policy, as determined by a criterion suggested by Krueger (1997).
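The building block of the calibration described here is the ordinary bootstrap interval. The sketch below shows a plain percentile bootstrap for a mean on invented stand-in data; the paper adds a second bootstrap layer to calibrate such intervals to second-order correctness.

```python
# Single-level percentile bootstrap interval for a mean. Toy data and the
# one-level bootstrap are illustrative; bootstrap calibration (as in the
# paper) would re-bootstrap to adjust the nominal coverage level.
import random

random.seed(7)
data = [5.2, 4.9, 6.1, 5.5, 4.7, 5.8, 5.0, 6.3, 5.4, 4.8]  # invented estimates

def mean(xs):
    return sum(xs) / len(xs)

boots = []
for _ in range(2000):
    resample = [random.choice(data) for _ in data]  # resample with replacement
    boots.append(mean(resample))
boots.sort()

# 2.5th and 97.5th percentiles of the bootstrap distribution.
lo, hi = boots[int(0.025 * len(boots))], boots[int(0.975 * len(boots)) - 1]
print(round(lo, 2), round(hi, 2))
```

Calibration asks, via a nested bootstrap, what nominal level this procedure must target so that its actual coverage is 95%, which is how the paper tightens the published NAIRU regions.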

Economic Conditions and Presidential Elections
Nagler, Jonathan
Willette, Jennifer R.

Uploaded 08-21-1997
Keywords Elections
economic voting
Abstract One of the more robust findings over the last 50 years in research on elections has been the effect of macroeconomic conditions on voting in U.S. presidential elections. An important contribution to that literature was made by Steven Weatherford in a 1978 article demonstrating that working-class voters are more sensitive to economic conditions than are middle-class voters in their vote choice. Weatherford's result was based on the 1956 through 1960 elections. We replicate Weatherford's result for 1960, and show that the substantive finding is extremely sensitive to the definition of class. When using occupation groups as the measure of class, we are able to essentially replicate Weatherford's result. However, using income as the measure of class we do not find any evidence to support the same finding for 1960. We then extend the analysis to cover the period 1956 through 1996 using both an income-based measure of class and an occupation-based measure of class. We show that there does not appear to be a clear pattern distinguishing levels of economic voting between working-class and middle-class voters, though with the occupation-based measure working-class voters appear more sensitive to the economy in recent elections. Finally, we offer a new theory of economic voting. We propose that voters vote based on the economic performance of their economic reference group - rather than on their own personal finances or on the state of the national economy.

Legislative Entrepreneurship and Campaign Finance
Wawro, Gregory

Uploaded 07-21-1997
Keywords campaign finance
fixed effects
panel data
selection bias
Abstract Drawing on models of service-induced and investor PAC campaign contributions, I analyze the role that legislative entrepreneurship plays in PACs' contribution decisions. I explore the possibility that PACs use campaign contributions to invest in members of Congress with the expectation that members will reciprocate by engaging in entrepreneurial behavior to the benefit of PACs. To determine whether a relationship exists between legislative entrepreneurship and PAC contributions I compute measures of entrepreneurial behavior for individual members of the U.S. House using detailed data on bill sponsorship and congressional hearings from the 97th through the 101st Congress. In order to cleanly estimate the effects of legislative entrepreneurship, we need to account for unobservable member-specific factors that enter into the PAC contribution calculus. To account for such factors I employ panel data methods which require very few assumptions about the data and provide a way to test whether the manipulations of the data that are required for a panel analysis introduce bias.

Voter Turnout and the Life Cycle: A Latent Growth Curve Analysis
Plutzer, Eric

Uploaded 04-09-1997
Keywords turnout
random effects model
latent growth models
Abstract The distinctive relationship between age and voter turnout has intrigued students of electoral behavior since at least the early 1960s. Nevertheless, political scientists actually know little about how individuals acquire the habit of voting during young adulthood. Moreover, previous speculations and explanations are all questionable because they are based on data and models that are inappropriate for what is essentially a developmental process. Problems include confounding age with generational effects, assumptions of reversibility of gains in participation from key life events, and a failure to account for the fact that an individual's probability of turnout at any particular age is a function of two distinct latent variables: their turnout rate in the very first elections, and their subsequent rate of increase. Theory construction is muddled because these two variables are negatively correlated and have different predictors. This study uses longitudinal data covering young voters over their first four presidential elections and uses latent growth curve models -- a special case of multi-level or Hierarchical Linear Models which are finding wide applicability in the social sciences. Given appropriate data, this approach permits statistical models that better correspond to life-cycle hypotheses. The findings clarify the role of parental influence, marriage and parenthood, while raising questions about the costs of mobility.

Generalized Substantively Reweighted Least Squares Regression
Gill, Jeff

Uploaded 01-29-1997
Keywords Linear Models
Robust Procedures
Data Analysis
Outlier Identification
Abstract Linear modeling often employs robust and resistant techniques to compensate for undesirable properties in the data. In contrast, Substantive Weighted Least Squares differs from these techniques since it seeks to analyze what makes the outliers distinguishable in their use of resources. SWLS does not see outliers as becoming potentially unbounded or even as necessarily undesirable elements of the data. SWLS runs consecutive weighted OLS models, downweighting each case whose jackknifed residual is less than a specific threshold. Final-iteration significant variables are identified as those which have a greater effect on higher-performing cases and therefore provide prescriptive recommendations. GSRLS generalizes the SWLS technique by using transformations relating the jackknifed residuals to a common tabular distribution. This allows alpha-level positive outlier identification. Here, GSRLS is first placed in a theoretical context and further explored through Monte Carlo simulation. In general, GSRLS can be seen as a data-analytic tool that exploits certain characteristics of the linear model to find variable influence on successful cases.

Testing the Pooling Assumption with Cross-Sectional Time Series Data: A Proposal and an Assessment with Simulation Experiments
Stanig, Piero

Uploaded 07-17-2005
Keywords Cross-Sectional Time Series Data
heterogeneity of coefficients
Abstract I propose to use the loss of fit of the cross-validated predictions relative to the fit of the predictions from a pooled regression to test the assumption of constant betas across countries in a CSTS setting. The performance of this measure is a) evaluated in several simulation experiments that reproduce research situations common in comparative politics, and b) compared to the “cross-validated standard error of the regression” proposed by Franzese (2002). I show that the measure I propose depends much less on the stochastic component in the DGP, and is better able to detect the country-specificity of the betas. I calculate the critical values that can be used to test the pooling assumption in some typical comparative politics CSTS situations. Finally, to evaluate the behavior of the measure with an actual dataset, I replicate the results of Alvarez et al. (1991) as replicated in Beck et al. (1993), calculate the proposed measure, and show that the pooling assumption does not seem to be inappropriate for the model they estimate.
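The diagnostic can be illustrated with a leave-one-country-out scheme on invented data with heterogeneous betas; this is a sketch of the idea of comparing cross-validated to pooled fit, not Stanig's exact measure or critical values.

```python
def ols(xs, ys):
    # Simple-regression fit: returns (intercept, slope)
    n = len(xs)
    xb, yb = sum(xs) / n, sum(ys) / n
    b = (sum((x - xb) * (y - yb) for x, y in zip(xs, ys))
         / sum((x - xb) ** 2 for x in xs))
    return yb - b * xb, b

data = {  # invented CSTS data: two countries with different betas
    "A": ([1, 2, 3, 4], [1.0, 2.0, 3.0, 4.0]),   # beta near 1
    "B": ([1, 2, 3, 4], [2.0, 4.0, 6.0, 8.0]),   # beta near 2
}

all_x = [x for xs, _ in data.values() for x in xs]
all_y = [y for _, ys in data.values() for y in ys]
a, b = ols(all_x, all_y)
pooled_mse = sum((y - (a + b * x)) ** 2
                 for x, y in zip(all_x, all_y)) / len(all_x)

# Leave-one-country-out: predict each country from a fit on the others
cv_sq, n = 0.0, 0
for left_out in data:
    rest_x = [x for c, (xs, _) in data.items() if c != left_out for x in xs]
    rest_y = [y for c, (_, ys) in data.items() if c != left_out for y in ys]
    ar, br = ols(rest_x, rest_y)
    xs, ys = data[left_out]
    cv_sq += sum((y - (ar + br * x)) ** 2 for x, y in zip(xs, ys))
    n += len(xs)
cv_mse = cv_sq / n
loss_of_fit = cv_mse / pooled_mse   # well above 1 flags country-specific betas
```

Here the heterogeneity in betas makes the cross-validated loss several times the pooled in-sample loss, which is the signal the proposed test exploits.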

Diagnostics for multivariate imputation
Abayomi, Kobi
Gelman, Andrew
Levy, Marc

Uploaded 08-16-2005
Keywords missing data
multiple imputation
regression diagnostics
Abstract We consider three sorts of diagnostics for random imputations: (a) displays of the completed data, intended to reveal unusual patterns that might suggest problems with the imputations, (b) comparisons of the distributions of observed and imputed data values, and (c) checks of the fit of observed data to the model used to create the imputations. We formulate these methods in terms of sequential regression multivariate imputation [Van Buuren and Oudshoorn 2000; Raghunathan, Van Hoewyk, and Solenberger 2001], an iterative procedure in which the missing values of each variable are randomly imputed conditional on all the other variables in the completed data matrix. We also consider a recalibration procedure for sequential regression imputations. We apply these methods to the 2002 Environmental Sustainability Index (ESI), a linear aggregation of 68 environmental variables on 142 countries, with 22% missing values.

The Swing Voter's Curse in the Laboratory
Battaglini, Marco
Morton, Rebecca
Palfrey, Thomas

Uploaded 01-12-2006
Abstract This paper reports the first laboratory study of the swing voter's curse and provides insights on the larger theoretical and empirical literature on 'pivotal voter' models. Our experiment controls for different information levels of voters, as well as the size of the electorate, the distribution of preferences, and other theoretically relevant parameters. The design varies the share of partisan voters and the prior belief about a payoff relevant state of the world. Our results support the equilibrium predictions of the Feddersen-Pesendorfer model, and clearly reject the notion that voters in the laboratory use naive decision-theoretic strategies. The voters act as if they are aware of the swing voter's curse and adjust their behavior to compensate. While the compensation is not complete and there is some heterogeneity in individual behavior, we find that aggregate outcomes, such as efficiency, turnout, and margin of victory, closely track the theoretical predictions.

Election Forensics: Vote Counts and Benford's Law
Mebane, Walter R.

Uploaded 07-18-2006
Keywords election fraud
vote fraud
Benford's Law
election forensics
Abstract How can we be sure that the declared election winner actually got the most votes? Was the election stolen? This paper considers a statistical method based on the pattern of digits in vote counts (the second-digit Benford's Law, or 2BL) that may be useful for detecting fraud or other anomalies. The method seems to be useful for vote counts at the precinct level but not for counts at the level of individual voting machines, at least not when the way voters are assigned to machines induces a pattern I call roughly equal division with leftovers (REDWL). I demonstrate two mechanisms that can cause precinct vote counts in general to satisfy 2BL. I use simulations to illustrate that the 2BL test can be very sensitive when vote counts are subjected to various kinds of manipulation. I use data from the 2004 election in Florida and the 2006 election in Mexico to illustrate use of the 2BL tests.
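The 2BL test compares observed second digits with the Benford frequencies P(d) = sum over first digits k = 1..9 of log10(1 + 1/(10k + d)). A minimal sketch in pure Python; the chi-squared statistic is one standard way to score the comparison, and the vote counts below are invented:

```python
import math

def benford_second_digit():
    # Expected relative frequency of each second digit d = 0..9 under
    # Benford's Law: P(d) = sum_k log10(1 + 1/(10k + d)), k = 1..9
    return [sum(math.log10(1 + 1 / (10 * k + d)) for k in range(1, 10))
            for d in range(10)]

def second_digit(n):
    s = str(n)
    return int(s[1]) if len(s) >= 2 else None   # skip one-digit counts

def chi2_2bl(counts):
    # Pearson chi-squared of observed second digits against 2BL
    digits = [d for d in (second_digit(c) for c in counts) if d is not None]
    n = len(digits)
    expected = [p * n for p in benford_second_digit()]
    observed = [digits.count(d) for d in range(10)]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

probs = benford_second_digit()       # probs[0] is about 0.12, probs[9] about 0.085
stat = chi2_2bl([123, 456, 789, 234])  # invented precinct vote counts
```

With real data one would compare the statistic to a chi-squared distribution with 9 degrees of freedom; large values flag departures from the 2BL pattern.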

Does Private Money Buy Public Policy? Campaign Contributions and Regulatory Outcomes in Telecommunications
de Figueiredo, Rui

Uploaded 10-06-2006
Keywords campaign contributions
selection bias
omitted variable bias
Abstract To what extent can market participants affect the outcomes of regulatory policy? In this paper, we study the effects of one potential source of influence – campaign contributions – from competing interests in the local telecommunications industry, on regulatory policy decisions of state public utility commissions. Using a unique new data set, we find, in contrast to much of the literature on campaign contributions, that there is a significant effect of private money on regulatory outcomes. This result is robust to numerous alternative model specifications. We also assess the extent of omitted variable bias that would have to exist to obviate the estimated result. We find that for our result to be spurious, omitted variables would have to explain more than five times the variation in the mix of private money as is explained by the variables included in our analysis. We consider this to be very unlikely.

Direct Democracy and Social Issues
Matsusaka, John

Uploaded 05-29-2007
Keywords Direct democracy
social issues
Abstract This paper explores the connection between the initiative process -- the most potent form of direct democracy -- and social issues by examining laws on seven social issues in all 50 American states. Initiative states are 18 percent more likely than noninitiative states to choose a conservative rather than a liberal policy on the median issue after controlling for public opinion, demographic, and regional variables. The conservative shift is majoritarian: initiative states are 8 percent more likely than noninitiative states to choose laws that reflect the majority's preference. The initiative effect does not appear to depend on the institutional features that scholars and reformers often discuss.

Back to the Future: Modeling Time Dependence in Binary Data
Carter, David
Signorino, Curtis S.

Uploaded 07-17-2007
Abstract Since Beck, Katz, and Tucker (1998), the use of time dummies or splines has become the standard method to model temporal dependence in binary data. There are potential problems with both of these approaches, especially in the case of time dummies. We propose a simpler alternative: using t, t^2, and t^3 to approximate the hazard. This cubic polynomial is trivial to implement and avoids problems with time dummies such as quasi-complete separation and issues with splines such as interpretation or knot selection. It also accommodates non-proportional hazards in a more straightforward way than either time dummies or splines. Monte Carlo analysis and reanalysis of numerous published empirical results are used to show that our method performs as well as splines and better than time dummies. Non-proportional hazards are also simple to model with a cubic polynomial. We present new results with data from Crowley and Skocpol (2001) to demonstrate how to model and interpret a non-proportional hazard.
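The duration counter behind t, t^2, and t^3 can be built directly from a unit's binary outcome series. A minimal sketch (variable names invented): t counts periods since the last event and resets after each event, and its square and cube enter the model alongside the substantive regressors.

```python
def time_polynomials(events):
    # events: binary outcomes for one unit over time (1 = event occurred).
    # Returns rows of (y, t, t^2, t^3), where t is the number of periods
    # since the last event -- the same spell counter that time dummies
    # or splines would otherwise be built from.
    rows, t = [], 1
    for y in events:
        rows.append((y, t, t ** 2, t ** 3))
        t = 1 if y == 1 else t + 1   # reset the clock after an event
    return rows

rows = time_polynomials([0, 0, 1, 0, 0, 0, 1])
```

Including the three columns as regressors in an ordinary logit then gives the smooth cubic approximation to the hazard that the paper proposes; interacting them with a covariate accommodates non-proportional hazards.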

MPs for Sale? Estimating Returns to Office in Post-War British Politics
Eggers, Andrew
Hainmueller, Jens

Uploaded 03-22-2008
Keywords regression discontinuity design
political economy
Abstract While the role of money in policymaking is a central question in political economy research, surprisingly little attention has been given to the rents politicians actually derive from politics. We use both matching and a regression discontinuity design to analyze an original dataset on the estates of recently deceased British politicians. We find that serving in Parliament roughly doubled the wealth at death of Conservative MPs but had no discernible effect on the wealth of Labour MPs. We argue that Conservative MPs profited from office in a lax regulatory environment by using their political positions to obtain outside work as directors, consultants, and lobbyists, both while in office and after retirement. Our results are consistent with anecdotal evidence on MPs' outside financial dealings but suggest that the magnitude of Conservatives' financial gains from office was larger than has been appreciated.

Matching for Causal Inference Without Balance Checking
Iacus, Stefano
King, Gary
Porro, Giuseppe

Uploaded 06-26-2008
Keywords Matching
causal inference
observational data
missing data

Abstract We address a major discrepancy in matching methods for causal inference in observational data. Since these data are typically plentiful, the goal of matching is to reduce bias and only secondarily to keep variance low. However, most matching methods seem designed for the opposite problem, guaranteeing sample size ex ante but limiting bias by controlling for covariates through reductions in the imbalance between treated and control groups only ex post and only sometimes. (The resulting practical difficulty may explain why many published applications do not check whether imbalance was reduced and so may not even be decreasing bias.) We introduce a new class of "Monotonic Imbalance Bounding" (MIB) matching methods that enables one to choose a fixed level of maximum imbalance, or to reduce maximum imbalance for one variable without changing it for the others. We then discuss a specific MIB method called "Coarsened Exact Matching" (CEM) which, unlike most existing approaches, also explicitly bounds through ex ante user choice both the degree of model dependence and the causal effect estimation error, eliminates the need for a separate procedure to restrict data to common support, meets the congruence principle, is approximately invariant to measurement error, works well with modern methods of imputation for missing data, is computationally efficient even with massive data sets, and is easy to understand and use. This method can improve causal inferences in a wide range of applications, and may be preferred for simplicity of use even when it is possible to design superior methods for particular problems. We also make available open source software which implements all our suggestions.
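The coarsen-then-exact-match idea can be sketched in a few lines. The bins and data below are invented, and the real CEM software does considerably more (weighting, multiple coarsening choices, diagnostics); this only shows the core stratification step.

```python
def cem(units, bins):
    # units: (treated, covariate, ...) tuples
    # bins:  list of cutpoint lists, one per covariate
    # Coarsen each covariate into bins, exact-match on the coarsened
    # signature, and keep only strata with both treated and control units.
    def coarsen(value, cuts):
        return sum(value >= c for c in cuts)

    strata = {}
    for t, *covs in units:
        key = tuple(coarsen(v, c) for v, c in zip(covs, bins))
        strata.setdefault(key, []).append((t, *covs))
    return {k: v for k, v in strata.items()
            if any(u[0] == 1 for u in v) and any(u[0] == 0 for u in v)}

units = [(1, 23), (0, 25), (1, 41), (0, 70), (0, 28)]  # (treated, age)
matched = cem(units, bins=[[30, 50]])  # age coarsened to <30, 30-49, 50+
```

Only the under-30 stratum contains both a treated and a control unit, so the lone treated 41-year-old and the lone control 70-year-old are pruned; the user's choice of cutpoints is what bounds the maximum imbalance ex ante.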

"The Size and Scope of International Unions: A Coalition-Theoretic Approach"
Konstantinidis, Nikitas

Uploaded 07-10-2008
Keywords international unions
coalition theory
size and scope
flexible integration
Abstract This paper examines the endogenous strategic considerations in simultaneously creating, enlarging, and deepening an international union of countries within a framework of variable geometry. We introduce a coalition-theoretic model to examine the equilibrium relationship between union size and scope. What is the equilibrium (stable) size and scope of an international union and how do these variables interact? When should we expect countries to take advantage of more flexible modes of integration and how does that possibility affect the pace and depth of integration? In tackling these questions, we characterize the various policy areas of cooperation with respect to their cross-country and cross-policy spillovers, their efficiency scales, the heterogeneity of preferences, and the general cost structure. We then go on to show that the enlargement of a union and the widening of its policy scope are too symbiotic and mutually reinforcing dynamic processes under certain conditions. This is an exciting research puzzle given that current game-theoretic predictions have been at odds with the empirical reality of European integration.

Circular Data in Political Science and How to Handle It
Gill, Jeff
Hangartner, Dominik

Uploaded 08-25-2008
Keywords circular data
von Mises distribution
clock and calendar effects
directional data
radial measures
Iraq casualties
party movement model
Abstract There has been no attention to circular (purely cyclical) data in political science research. We show that such data exist and are generally mishandled by models that do not take into account the inherently cyclical nature of some phenomena. Clock and calendar effects are the obvious cases, but directional data exist as well. We develop a modeling framework based on the von Mises distribution and apply it to two datasets: casualties in the second Iraq war and party movement in a two-dimensional ideological space. Results clearly demonstrate the importance of circular regression models for handling periodic and directional data.
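A minimal illustration of why circular data need special handling: the mean direction of a set of angles comes from the averaged sines and cosines, not from the raw values. The clock-time example below is invented, but the circular-mean formula is standard (it is also the maximum-likelihood location estimate for the von Mises distribution).

```python
import math

def circular_mean(angles):
    # Mean direction of angles in radians: atan2 of the average sine
    # and average cosine. A naive arithmetic mean fails badly for
    # values that wrap around, e.g. times just before and after midnight.
    s = sum(math.sin(a) for a in angles) / len(angles)
    c = sum(math.cos(a) for a in angles) / len(angles)
    return math.atan2(s, c)

# Events at 23:00 and 01:00, mapped onto a 24-hour clock face
hours = [23, 1]
angles = [h / 24 * 2 * math.pi for h in hours]
mean_hour = (circular_mean(angles) * 24 / (2 * math.pi)) % 24
# The circular mean lands at midnight; the arithmetic mean of the raw
# hours would wrongly put it at noon.
```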

The 2008 Presidential Primaries through the Lens of Prediction Markets
Malhotra, Neil
Snowberg, Erik

Uploaded 01-16-2009
Abstract To explore the influence of primary results during the 2008 nomination process, we leverage a previously unused methodology --- the analysis of prediction market contracts. The unique structure of prediction markets allows us to address two unexplored questions. First, we analyze whether primary results affect candidates' chances in the general election, as candidates who take strong positions during the nomination contest may be unable to easily appeal to centrist voters in the general election. We also assess whether states with early primaries, such as Iowa and New Hampshire, have a disproportionate effect on the nominating process. We show that the length of the primary season has a minimal impact on the electability of candidates in the general election, and that some states have a disproportionate impact on the nominating process. However, the states that have the largest impact are not necessarily New Hampshire and Iowa, the states that have often been assumed to be the most influential because of their early position on the primary calendar.

An Observational Study of Ballot Initiatives and State Outcomes
Keele, Luke

Uploaded 07-17-2009
Keywords causal inference
ballot initiatives
voter turnout
Abstract It has long been understood that the presence of the ballot initiative process leads to different outcomes among states. In general, extant research has found that the presence of ballot initiatives tends to increase voter turnout and depress state revenues and expenditures. I reconsider this possibility and demonstrate that past findings are an artifact of incorrect research design. Failure to account for differences in states often leads to a confounding association between ballot initiatives and voter turnout and fiscal policy. Here, I conduct an observational study based on a counterfactual model of inference to analyze the effects of ballot initiatives. The resulting research design leads to two analyses. First, I utilize the synthetic case control method, which allows me to compare outcomes over time in states with initiatives to outcomes in states without initiatives, while accounting for pretreatment baseline differences across states. Second, I use matching to assess voter turnout differences across metro areas along state boundaries with and without ballot initiatives. In both analyses, I find that ballot initiatives rarely have spillover effects on voter turnout and state fiscal policy.

From Nature to the Lab: The Methodology of Experimental Political Science and the Study of Causality
Morton, Rebecca
Williams, Kenneth

Uploaded 09-18-2009
Keywords experiments
Abstract In this manuscript we review the methodology of experimental political science and the study of causality.

Unpacking the Black Box: Learning about Causal Mechanisms from Experimental and Observational Studies
Imai, Kosuke
Keele, Luke
Tingley, Dustin
Yamamoto, Teppei

Uploaded 07-01-2010
Keywords causal inference
direct and indirect effects
potential outcomes
sensitivity analysis
media cues
incumbency effects
Abstract Understanding causal mechanisms is a fundamental goal of social science research. Demonstrating whether one variable causes a change in another is often insufficient, and researchers seek to explain why such a causal relationship arises. Nevertheless, little is understood about how to identify causal mechanisms in empirical research. Many researchers either informally talk about possible causal mechanisms or attempt to quantify them without explicitly stating the required assumptions. Often, some assert that process tracing in detailed case studies is the only way to evaluate causal mechanisms. Others contend the search for causal mechanisms is so elusive that we should instead focus on causal effects alone. In this paper, we show how to learn about causal mechanisms from experimental and observational studies. Using the potential outcomes framework of causal inference, we formally define causal mechanisms, present general identification and estimation strategies, and provide a method to assess the sensitivity of one's conclusions to the possible violations of key identification assumptions. We also propose several alternative research designs for both experimental and observational studies that may help identify causal mechanisms under less stringent assumptions. The proposed methodology is illustrated using media framing experiments and observational studies of incumbency advantage.

Multiple Overimputation: A Unified Approach to Measurement Error and Missing Data
Blackwell, Matthew
Honaker, James
King, Gary

Uploaded 07-19-2010
Keywords measurement error
missing data
multiple imputation
Abstract Social scientists typically devote considerable effort to reducing measurement error during data collection and then ignore the issue during data analysis. Although many statistical methods have been proposed for reducing measurement error-induced biases, few have been widely used because of implausible assumptions, high levels of model dependence, difficult computation, or inapplicability with multiple mismeasured variables. We develop an easy-to-use alternative that generalizes the popular multiple imputation (MI) framework by treating missing data problems as a special case of extreme measurement error and correcting for both. Like MI, the proposed "multiple overimputation" (MO) framework is a simple two-step procedure. First, multiple (around 5) completed copies of the data set are created where cells measured without error are held constant, those missing are imputed from the distribution of predicted values, and cells (or entire variables) with measurement error are "overimputed," that is, imputed from a predictive distribution with observation-level priors defined by the mismeasured values and available external information, if any. In the second step, analysts can then run whatever statistical method they would have run on each of the overimputed data sets as if there had been no missingness or measurement error; the results are then combined via a simple procedure. We also offer open source software that implements all the methods described herein.
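The "simple procedure" for combining results across completed data sets is, as in ordinary multiple imputation, presumably Rubin's rules; a sketch with made-up coefficient estimates and variances from m = 5 overimputed data sets:

```python
def combine(estimates, variances):
    # Rubin's rules: pool a point estimate and its variance across the
    # m completed (here, overimputed) data sets.
    m = len(estimates)
    qbar = sum(estimates) / m                              # pooled estimate
    ubar = sum(variances) / m                              # within-imputation var
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between-imputation var
    total_var = ubar + (1 + 1 / m) * b
    return qbar, total_var

# Made-up estimates of one coefficient from m = 5 overimputed data sets
est, tvar = combine([0.48, 0.52, 0.50, 0.47, 0.53], [0.010] * 5)
```

The between-imputation term inflates the pooled variance to reflect uncertainty due to the missingness and measurement error, so standard errors from a single completed data set would be too small.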

Should I Use Fixed or Random Effects?
Clark, Tom
Linzer, Drew

Uploaded 03-26-2012
Keywords Fixed effects
Random effects
Panel data
Abstract Empirical analyses in political science very commonly confront data that are grouped---multiple votes by individual legislators, multiple years in individual states, multiple conflicts during individual years, and so forth. Modeling these data presents a series of potential challenges, of which accounting for differences across the groups is perhaps the most well-known. Two widely-used methods are the use of either "fixed" or "random" effects models. However, how best to choose between these approaches remains unclear in the applied literature. We employ a series of simulation experiments to evaluate the relative performance of fixed and random effects estimators for varying types of datasets. We further investigate the commonly-used Hausman test, and demonstrate that it is neither a necessary nor sufficient statistic for deciding between fixed and random effects. We summarize the results into a typology of datasets to offer practical guidance to the applied researcher.
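One standard way to compute the fixed-effects estimator the abstract refers to is the within (demeaning) transformation: subtract each group's means, then run pooled OLS on the deviations, which sweeps out the group intercepts. A minimal sketch on invented grouped data:

```python
def within_slope(groups):
    # Fixed-effects ("within") estimator for y = b*x + group intercepts:
    # demean x and y within each group, then pool the deviations.
    dx, dy = [], []
    for xs, ys in groups:
        xb = sum(xs) / len(xs)
        yb = sum(ys) / len(ys)
        dx += [x - xb for x in xs]
        dy += [y - yb for y in ys]
    return sum(x * y for x, y in zip(dx, dy)) / sum(x * x for x in dx)

# Two groups with very different intercepts but a common slope of 2
groups = [([1, 2, 3], [12, 14, 16]),    # intercept near 10
          ([1, 2, 3], [52, 54, 56])]    # intercept near 50
b_fe = within_slope(groups)
```

A random-effects estimator would instead partially pool the group intercepts toward a common distribution; the paper's simulations concern when each choice performs better, which this sketch does not adjudicate.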

Dynamic Bayesian Forecasting of Presidential Elections in the States
Linzer, Drew

Uploaded 07-16-2012
Keywords President
Public Opinion
Abstract I present a dynamic Bayesian forecasting model that enables early and accurate prediction of U.S. presidential election outcomes at the state level. The method systematically combines information from historical forecasting models in real time with results from the large number of state-level opinion surveys that are released publicly during the campaign. The result is a set of forecasts that are initially as good as the historical model, then gradually increase in accuracy as Election Day nears. I employ a hierarchical specification to overcome the limitation that not every state is polled on every day, allowing the model to borrow strength both across states and, through the use of random-walk priors, across time. The model also filters away day-to-day variation in the polls due to sampling error and national campaign effects, which enables daily tracking of voter preferences towards the presidential candidates at the state and national levels. Simulation techniques are used to estimate the candidates' probability of winning each state and, consequently, a majority of votes in the Electoral College. I apply the model to pre-election polls from the 2008 presidential campaign and demonstrate that the victory of Barack Obama was never realistically in doubt. The model is currently ready to be deployed for forecasting the outcome of the 2012 presidential election. Project website: votamatic.org

On the Validity of the Regression Discontinuity Design for Estimating Electoral Effects: New Evidence from Over 40,000 Close Races
Eggers, Andrew
Folke, Olle
Fowler, Anthony
Hainmueller, Jens
Hall, Andrew B.

Uploaded 05-15-2013
Keywords regression discontinuity
Abstract Many papers use regression discontinuity (RD) designs that exploit "close" election outcomes in order to identify the effects of election results on various political and economic outcomes of interest. Several recent papers critique the use of RD designs based on close elections because of the potential for imbalance near the threshold that distinguishes winners from losers. In particular, for U.S. House elections during the post-war period, lagged variables such as incumbency status and previous vote share are significantly correlated with victory even in very close elections. This type of sorting naturally raises doubts about the key RD assumption that the assignment of treatment around the threshold is quasi-random. In this paper, we examine whether similar sorting occurs in other electoral settings, including the U.S. House in other time periods, statewide, state legislative, and mayoral races in the U.S., and national and/or local elections in a variety of other countries, including the U.K., Canada, Germany, France, Australia, India, and Brazil. No other case exhibits sorting. Evidently, the U.S. House during the post-war period is an anomaly.

Aliu, Armando
Aliu, Dorian

Uploaded 08-18-2013
Keywords TNNs
Abstract Can there be a world order equivalent to the supranational model of Europe, with the same legitimacy and the same effectiveness? This study argues that a Civilizing Global Order (CGO) built through Transnational Norm-Building Networks (TNNs) should have the legitimacy and effectiveness of the European Union's supranational order. In this context, it examines the concept of decentration (supra: centralization and infra: decentralization), which includes the nexus of voice (democratic participation) and entitlement (legal-social rights and duties). Methodologically, the study draws on published secondary data and online resources to support its hypothesis.

How can soccer improve statistical learning?
Filho, Dalson
Rocha, Enivaldo
Paranhos, Ranulfo
Júnior, José

Uploaded 03-19-2014
Keywords quantitative methods
linear regression
Abstract This paper presents an active classroom exercise focusing on the interpretation of ordinary least squares regression coefficients. Methodologically, students analyze data on Brazilian soccer matches, formulating and testing classical hypotheses regarding home-team advantage. Technically, our framework is easily adapted to other sports and has no implementation cost. In addition, the exercise is easily conducted by the instructor and highly enjoyable for the students. The intuitive approach also facilitates understanding of the practical application of linear regression.
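The pedagogical point lends itself to a tiny worked example: with a binary home-field regressor, the OLS slope is exactly the home-away difference in mean goals, which makes the coefficient concrete for students. The match data below are invented.

```python
def ols_slope(x, y):
    # OLS slope for a single regressor, via centered cross-products
    xb = sum(x) / len(x)
    yb = sum(y) / len(y)
    num = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
    den = sum((xi - xb) ** 2 for xi in x)
    return num / den

# Invented data: home = 1 / away = 0, and goals scored by the team
home = [1, 1, 1, 0, 0, 0]
goals = [2, 3, 1, 1, 0, 2]
slope = ols_slope(home, goals)
home_mean = sum(g for h, g in zip(home, goals) if h == 1) / 3
away_mean = sum(g for h, g in zip(home, goals) if h == 0) / 3
# With a binary regressor, the OLS slope equals the gap in group means
```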

Practical Issues in Implementing and Understanding Bayesian Ideal Point Estimation
Bafumi, Joseph
Gelman, Andrew
Park, David K.
Kaplan, Noah

Uploaded 06-11-2004
Keywords Ideal points
Logistic regression
Rasch model
Abstract In recent years, logistic regression (Rasch) models have been used in political science for estimating ideal points of legislators and Supreme Court justices. These models present estimation and identifiability challenges, such as improper variance estimates, scale and translation invariance, reflection invariance, and issues with outliers. We resolve these issues using Bayesian hierarchical modeling, linear transformations, informative regression predictors, and explicit modeling for outliers. In addition, we explore new ways to usefully display inferences and check model fit.

Estimation of Equations with Ordered Categorical Variables
Franklin, Charles
Jackson, John

Uploaded 07-16-2003
Keywords ordered variables
measurement error
Abstract Ordered categorical variables frequently appear on both the right- and left-hand side of statistical models. We discuss the problems that measurement error, induced by categorization of continuous latent variables, introduces to these models. We show the nature of the bias and demonstrate its consequences by both Monte Carlo simulation and applications to data on partisanship at the individual and macro levels.
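The attenuation that categorization induces can be seen in a toy example: even a variable perfectly correlated with the latent scale loses correlation once the latent values are collapsed into ordered categories. The values and cutpoints below are invented.

```python
def corr(x, y):
    # Pearson correlation via centered sums
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    sxy = sum((a - xb) * (b - yb) for a, b in zip(x, y))
    sxx = sum((a - xb) ** 2 for a in x)
    syy = sum((b - yb) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

latent = [0.3, 1.1, 1.9, 2.4, 3.6, 4.2, 5.8, 6.5]   # continuous latent scores
x = latent[:]                                        # perfectly correlated predictor
# Collapse the latent variable into 3 ordered categories at cutpoints 2 and 4
categorized = [sum(v >= c for c in (2.0, 4.0)) for v in latent]

r_latent = corr(x, latent)        # 1.0 by construction
r_cat = corr(x, categorized)      # strictly less than 1: attenuation
```

Treating the categorized variable as if it were the latent one therefore biases regression coefficients toward zero, which is the measurement-error problem the paper analyzes on both sides of the equation.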
