
Search Results

The results below are based on the search criterion 'Benford'
Total number of records returned: 967

Prior Distributions for Variance Parameters in Hierarchical Models
Gelman, Andrew

Uploaded 03-28-2004
Keywords Bayesian inference
hierarchical model
multilevel model
noninformative prior distribution
weakly informative prior distribution
Abstract Various noninformative prior distributions have been suggested for scale parameters in hierarchical models. We construct a new folded-noncentral-$t$ family of conditionally conjugate priors for hierarchical standard deviation parameters, and then consider noninformative and weakly informative priors in this family. We use an example to illustrate serious problems with the inverse-gamma family of "noninformative" prior distributions. We suggest instead to use a uniform prior on the hierarchical standard deviation, using the half-$t$ family when the number of groups is small and in other settings where a weakly informative prior is desired.
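The half-$t$ family recommended in the abstract is straightforward to simulate: take the absolute value of a scaled (central) Student-$t$ draw. A minimal numpy sketch, with illustrative degrees of freedom and scale rather than values from the paper:

```python
import numpy as np

def half_t_draws(df, scale, size, rng):
    """Sample from a half-t prior for a hierarchical standard deviation:
    the absolute value of a scale * t_df variate (a folded central t)."""
    return np.abs(scale * rng.standard_t(df, size=size))

rng = np.random.default_rng(0)
draws = half_t_draws(df=4, scale=25.0, size=10_000, rng=rng)
# Every draw is a valid (nonnegative) standard deviation, with a heavy
# right tail that keeps the prior weakly informative.
```

Note that the degrees of freedom interpolate between familiar cases: df = 1 gives the half-Cauchy, and df → ∞ recovers the half-normal.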

An Experimental Test of Proximity and Directional Voting
Paolino, Philip
Lacy, Dean

Uploaded 07-27-2004
Keywords experiment
issue voting
Abstract Lewis and King (2000) discuss difficulties in evaluating the proximity hypothesis about issue voting versus the directional hypothesis. In this paper, we propose as a solution asking individuals to evaluate candidates experimentally generated to represent particular issue positions. Such an approach controls candidates' positions while holding other features constant, presents these fictitious candidates to randomly assigned subjects, and examines whether the relationship between subjects' evaluations of these candidates and their ideological beliefs is consistent with proximity or directional theory. Our results provide slightly more support for proximity theory, but our data are not entirely conclusive on this point.

States as Policy Laboratories: Experimenting with the Children's Health Insurance Program
Volden, Craig

Uploaded 07-09-2003
Keywords diffusion
Abstract For more than a decade, scholars of policy diffusion across the states have relied on state-year event history analyses. Such work has been limited by: (1) focusing mainly on neighbor-to-neighbor diffusion paths, rather than other similarities across states; (2) neglecting the role of policies' success or failure in their diffusion; (3) studying single specific policy adoptions rather than the choice among policy variants; and (4) setting aside questions about how diffusion mechanisms vary across different policies and different political processes. This paper proposes the alternative approach of dyad-year event history analysis, commonly used in international relations, and applies it to the study of policy diffusion in the Children's Health Insurance Program from 1998 to 2001. This approach uncovers strong evidence of the emulation of states with similar political, demographic, and budgetary characteristics, and of those with successful policies. Moreover, the diffusion mechanisms differ substantially across different policy types and political processes.

Taking the State Space Seriously: The Dynamic Linear Model and Bayesian Time Series Analysis
Buckley, Jack

Uploaded 08-02-2002
Keywords time series
state space
think tanks
Abstract No abstract submitted.

The Importance of Statistical Methodology for Analyzing Data from Field Experimentation: Evaluating Voter Mobilization Strategies
Imai, Kosuke

Uploaded 07-08-2002
Keywords field experiments
causal inference
instrumental variables
Abstract We introduce a set of new Markov chain Monte Carlo algorithms for Bayesian analysis of the multinomial probit model. Our Bayesian representation of the model places a new, and possibly improper, prior distribution directly on the identifiable parameters and thus is relatively easy to interpret and use. Our algorithms, which are based on the method of marginal data augmentation, involve only draws from standard distributions and dominate other available Bayesian methods in that they are as quick to converge as the fastest methods but with a more attractive prior specification.

Stochastic Dependence in Competing Risks
Gordon, Sanford C.

Uploaded 09-05-2001
Keywords Competing risks
duration models
survival models
event history
random effects
frailty models
unobserved heterogeneity
Monte Carlo simulation
legislative position-taking
cabinet survival
numeric integration
Markov Chain Monte Carlo
Abstract The term "Competing Risks" describes duration models in which spells may terminate via multiple outcomes: The term of a cabinet, for example, may end with or without an election; wars persist until the loss or victory of the aggressor. Analysts typically assume stochastic independence among risks, the duration modeling equivalent of independence of irrelevant alternatives. However, many political examples violate this assumption. I review competing risks as a latent variables approach. After discussing methods for modeling dependence that place restrictions on the nature of association, I introduce a parametric generalized dependent risks model in which inter-risk correlation may be estimated and its significance tested. The method employs risk-specific random effects drawn from a multivariate normal distribution. Estimation is conducted using numerical methods and/or Bayesian simulation. Monte Carlo simulation reveals desirable large sample properties of the estimator. Finally, I examine two applications using data on cabinet survival and legislative position taking.

A Seemingly Unrelated Regression Model for Analyzing Multiparty Elections
Jackson, John

Uploaded 07-11-2001
Keywords elections
Abstract This paper presents a model for analyzing the returns in multiparty elections. The dependent variable is the log of the ratio of each party's vote to the vote share of a base party, as in the King-Katz model. The estimator is a version of the seemingly unrelated regression model, thereby taking advantage of the properties and computational convenience of the linear model. The error structure is composed of two elements. The first is the conventional SUR type errors that are homoscedastic across voting districts. The second is a district specific error structure that is derived by treating the observed votes as a sample from a multivariate normal distribution of true party support. The paper derives the small sample properties of the estimator, which are important in many applications where there are not a large number of districts. The model is applied to the 1993 Polish parliamentary elections. The results from this analysis form the basis for Monte Carlo simulations comparing several different estimators.
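The estimation strategy described, equation-by-equation OLS residuals feeding a cross-equation covariance and then GLS, can be sketched in a few lines of numpy. This is a generic one-step feasible GLS for a SUR system, not the paper's full estimator with its district-specific error component; the simulated data are purely illustrative:

```python
import numpy as np

def block_diag(mats):
    """Stack design matrices into a block-diagonal system matrix."""
    rows, cols = sum(m.shape[0] for m in mats), sum(m.shape[1] for m in mats)
    out, r, c = np.zeros((rows, cols)), 0, 0
    for m in mats:
        out[r:r + m.shape[0], c:c + m.shape[1]] = m
        r, c = r + m.shape[0], c + m.shape[1]
    return out

def sur_fgls(ys, Xs):
    """One-step feasible GLS for a seemingly unrelated regression system:
    ys[j] is the (n,) outcome and Xs[j] the (n, k_j) design of equation j."""
    n = ys[0].shape[0]
    # Equation-by-equation OLS residuals estimate the error covariance.
    resid = np.column_stack(
        [y - X @ np.linalg.lstsq(X, y, rcond=None)[0] for y, X in zip(ys, Xs)]
    )
    sigma = resid.T @ resid / n                      # cross-equation covariance
    omega_inv = np.kron(np.linalg.inv(sigma), np.eye(n))
    X_big, y_big = block_diag(Xs), np.concatenate(ys)
    beta = np.linalg.solve(X_big.T @ omega_inv @ X_big,
                           X_big.T @ omega_inv @ y_big)
    return beta, sigma

# Two equations with correlated errors (illustrative data, not the paper's).
rng = np.random.default_rng(1)
n = 200
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
X2 = np.column_stack([np.ones(n), rng.normal(size=n)])
e = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=n)
y1 = X1 @ np.array([1.0, 2.0]) + e[:, 0]
y2 = X2 @ np.array([-0.5, 1.5]) + e[:, 1]
beta, sigma = sur_fgls([y1, y2], [X1, X2])
```

The efficiency gain over OLS comes entirely from the off-diagonal elements of `sigma`; with identical regressors across equations, FGLS collapses back to equation-by-equation OLS.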

Public Opinion During The Vietnam War: A Revised Measure Of The Public Will
Berinsky, Adam

Uploaded 04-02-2001
Keywords non-response
public opinion
Abstract The conception of opinion polls as "broadly representative" of public sentiment has long pervaded academic and popular discussions of polls. In 1939, polling pioneer George Gallup advanced the virtues of surveys as a means for political elites to assess the collective "mandate of the people." If properly designed and conducted, Gallup argued, polls would act as a "sampling referendum" and provide a more accurate measure of popular opinion than more traditional methods, such as reading mail from constituents and attending to newspapers (see also Gallup and Rae 1940; for a contrary view, see Ginsberg 1986, Herbst 1993). More recently, Verba has argued, "sample surveys provide the closest approximation to an unbiased representation of the public because participation in a survey requires no resources and because surveys eliminate the bias inherent in the fact that participants in politics are self-selected -- surveys produce just what democracy is supposed to produce -- equal representation of all citizens" (1996, 3). Thus, while surveys may be limited in several respects, they appear to provide a requisite egalitarian complement to traditional forms of political participation. Through opinion polls, the voice of "the people," writ broadly, may be heard. Or maybe not. In this paper, I reconsider this conventional wisdom. Specifically, I demonstrate that the imbalance in political rhetoric surrounding the Vietnam War disadvantaged those groups who were the natural opponents of the War. I investigate the effect of accounting for "don't know" responses on the shape of public opinion on the Vietnam issue using a number of datasets from the 1960s and early 1970s and find that analyses that use very different data sources converge to the same conclusion. The process of collecting opinion on Vietnam excluded a dovish segment of the population from the collective opinion signal in the early part of the war.
However, this bias shrank over time as anti-war messages became more common in the public sphere. To use the language of Verba, Schlozman, and Brady, the "voice" of those who abstained from the Vietnam questions was different from those who responded to such items. So while there may indeed have been a "silent majority" -- as President Nixon maintained during the early years of his presidency -- it was a majority that opposed, rather than supported, the war.

Ticket-splitting and Strategic Voting
Gschwend, Thomas

Uploaded 08-22-2000
Keywords Ticket Splitting
Strategic voting
Germany
EI
Multiple imputation
Abstract Germany provides an especially interesting case for the study of strategic voting because a two-ballot system is used. Voters are encouraged to split their votes using different strategies. I disentangle different types of strategic voting that have been mixed in the literature so far: on the first vote there is "tactical" voting, and on the second vote there is "loan" voting. Therefore, I focus particularly on ticket-splitting patterns. The data set I use contains official election results of first and second votes for all German districts from the federal election of 1998. To obtain estimates that determine the quantity of straight- and split-ticket voting between political parties, I employ King's EI for a first-stage analysis and use these estimates as independent variables in second-stage models. In order to account for the uncertainty in first-stage EI point estimates I use a multiple imputation approach. I show that tactical and loan voters secured the representation of the FDP and the Greens in the German Parliament. Several validation attempts of the second-stage prediction results show that not every second-stage analysis based on first-stage EI point estimates is doomed to fail.
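The multiple-imputation step, carrying first-stage EI uncertainty into the second stage, typically combines the m completed-data estimates with Rubin's rules: the pooled variance adds the average within-imputation variance to the between-imputation spread. A generic sketch of the combining rules (not the author's exact implementation; the inputs are made up):

```python
import numpy as np

def combine_mi(estimates, variances):
    """Rubin's combining rules for m point estimates and their variances:
    total variance = within + (1 + 1/m) * between."""
    est = np.asarray(estimates, dtype=float)
    var = np.asarray(variances, dtype=float)
    m = len(est)
    qbar = est.mean()              # pooled point estimate
    within = var.mean()            # average sampling variance
    between = est.var(ddof=1)      # variance across imputations
    total = within + (1.0 + 1.0 / m) * between
    return qbar, total

# Three hypothetical first-stage estimates of the same quantity.
qbar, total = combine_mi([0.52, 0.48, 0.50], [0.010, 0.012, 0.011])
```

The key property is that `total` is always at least the naive within-imputation variance, so second-stage inferences cannot understate the first-stage uncertainty.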

Supplement to 'Democracy and Markets: The Case of Exchange Rates'
Freeman, John R.
Hays, Jude
Stix, Helmut

Uploaded 06-30-2000
Keywords Markov switching model
specification testing
Abstract This is a methods supplement to "Democracy and Markets: The Case of Exchange Rates," American Journal of Political Science July 2000.

A Positive Theory of Bureaucratic Discretion as Agency Choice
Krause, George

Uploaded 02-24-2000
Keywords bureaucratic discretion
administrative decision-making
policy implementation
formal theory
Abstract Existing research on the positive theory of bureaucratic discretion views this phenomenon as a "supply-side" concept that elected officials determine without considering bureaucratic preferences at all, or by merely treating them as exogenous to the optimization problem confronting politicians. It has been well established by bureaucracy scholars that agencies have preferences concerning bureaucratic discretion and are proactive in trying to get these preferences met (e.g., Rourke 1984; Wilson 1989). In this essay, I set forth a "demand-side" theory of bureaucratic discretion in which an administrative agency's preferences for this commodity under conditions of uncertainty are determined through the relationship between its utility and (a) bureaucratic discretion, and (b) policy (implementation) outcome uncertainty, separately. Moreover, I argue that the discretionary context confronting the agency will matter, and thus incorporate this into the theoretical model. Hypotheses concerning the discretionary context by which administrative agencies will view bureaucratic discretion are generated from this model. Finally, I propose a statistical test that could be employed to empirically test the theoretical predictions of the "demand-side" model of bureaucratic discretion set forth in this paper.

The Timing of Voting Decisions in Presidential Campaigns
Box-Steffensmeier, Janet M.
Kimball, David

Uploaded 04-12-1999
Keywords heteroskedastic probit
time of vote decision
presidential elections
1988 election
Abstract Voting analysts often make a distinction between "long-term" and "short-term" forces that influence the voting decision in presidential elections (Campbell et al. 1960). Long-term forces reflect information and considerations that are available to voters before the presidential campaign starts, such as party identification, demographic attributes, and the record each candidate has established previously in government. In contrast, short-term forces refer more specifically to the campaign. We posit that there is variation in the way voters integrate the long- and short-term forces into a voting decision. Furthermore, the long-term forces are smaller in number and thus easier for researchers to identify and measure. For example, much attention has been devoted to conceptualization and measurement of party identification. However, short-term forces are nearly infinite in number and are much harder to measure and link up to the voting decision. This means that voting models should perform well when predicting the choices of voters who are guided primarily by long-term forces. In contrast, voting models should not perform as well for citizens who are strongly influenced by short-term forces. In statistical terms, there will be heteroskedastic error variance in common vote models due to the differing influence of short- and long-term forces. We examine the variation among voters by using the standard NES question that asks citizens how long before the election they made their voting decisions and test our expectations using the heteroskedastic probit technique (Brehm and Alvarez 1995), which is like a standard probit model except that there is a separate equation to model the error variance (the errors in prediction). By using the timing of the vote decision to help model the error variance, we produce unbiased estimates and improve our ability to explain voting behavior and the impact of campaigns.
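The heteroskedastic probit the authors use replaces the fixed unit error variance of standard probit with a modeled one: P(y=1 | x, z) = Φ(x'β / exp(z'γ)). A minimal numpy sketch of the log-likelihood (variable names and data are illustrative, not the paper's specification):

```python
import numpy as np
from math import erf

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))

def hetprobit_loglik(beta, gamma, X, Z, y):
    """Heteroskedastic probit: the choice index X @ beta is scaled by a
    variance equation exp(Z @ gamma); gamma = 0 recovers ordinary probit."""
    p = Phi(X @ beta / np.exp(Z @ gamma))
    p = np.clip(p, 1e-12, 1 - 1e-12)   # guard the logs
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Illustrative data: Z could hold timing-of-decision indicators.
rng = np.random.default_rng(2)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Z = rng.normal(size=(n, 1))
y = (X @ np.array([0.2, 1.0])
     + np.exp(0.5 * Z[:, 0]) * rng.normal(size=n) > 0).astype(float)
ll = hetprobit_loglik(np.array([0.2, 1.0]), np.array([0.5]), X, Z, y)
```

In practice both β and γ would be estimated jointly by maximizing this function with a numerical optimizer; the sketch only evaluates it.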

Political Methodology - A Welcoming Discipline
Beck, Nathaniel

Uploaded 05-11-1999
Keywords political methodology
data collection
Abstract This article discusses, from my own perspective, political methodology at the age of twenty-five. In particular, I look at the relationship of political methodology to other methodological subdisciplines and to statistics, focusing on the division of labor among the various methodological disciplines. I also briefly discuss some issues in data collection.

Policies, Prototypes, and Presidential Approval
Gronke, Paul

Uploaded 08-20-1999
Keywords presidency
presidential approval
variance modeling
Abstract American presidents, like all democratic political leaders, rely on popular support in order to promote their political agenda, gain legislative victories, and succeed at the ballot box. Extant studies of approval, however, focus resolutely on aggregate values, while individual-level determinants, and variation, have been ignored. This paper redresses this imbalance, and in doing so speaks to the outstanding questions in studies of presidential approval. Individual-level presidential approval is a product of three dimensions of evaluation: prospective policy judgements (what are you going to do for me tomorrow?), retrospective assessments (what did you do for me yesterday?), and personality assessments (what kind of leader are you?). In addition, the model draws on new models of uncertainty in the survey response, testing the hypothesis that weaker partisan attachments and lower levels of chronic political information will lead to greater uncertainty about presidential performance. The model is tested using NES studies from 1980--1996. The overall performance is superior (explaining 40-70% of the variance in individual scores), and the primary hypotheses are confirmed. Retrospective, but not prospective, judgements drive individual-level presidential approval, thus challenging the "bankers" model of approval, and personality assessments play a central role in approval. Finally, strong evidence is found of heteroskedasticity in the approval models, and political information, interest in the presidential contest, and strength of partisan attachments all lead to lower levels of uncertainty. Variations in the role played by party during the Reagan years, compared to the presidencies of Carter, Bush, and Clinton, suggest a complex interaction between partisan ties, presidential performance, and the personality of the particular individual occupying the Oval Office.

A Spectral Analysis of Military Expenditures: Implications for Data and Theory
Gerace, Michael P.

Uploaded 11-13-1999
Keywords spectral analysis
military expenditures
defense economics
arms race
Abstract This paper employs spectral analysis on the military expenditures of 7 countries across two broad time periods. The countries are the United States, Britain, France, Germany, Italy, Russia and Japan and the periods are 1872-1913 and 1950-1991 (less Russia in the second period). Periodograms of the 13 military expenditure variables are estimated in order to evaluate the variance structure of each variable. This procedure is conducted on each variable in its trending form and after detrending (26 times in all). While the trend in the data accounts for most of the variance in the levels of the data, the importance of the trend and the length of the period defining the trend seem to be influenced by the presence of war in the data. Despite these differences, however, the trend remains the most important feature of the data. The detrended data indicate that numerous influences converge on military expenditures, as is indicated by the large number of periods with significant waves. The large number of periods in the data attest to the difficulty of estimating a parsimonious model in the time domain. The idea of extracting certain portions from the data to estimate relationships in the time domain is briefly explored.
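The periodogram underlying this kind of spectral decomposition is essentially one line of numpy: the squared magnitude of the Fourier transform, whose peaks identify the periods carrying most of a series' variance. A toy sketch on a synthetic series (not the paper's expenditure data):

```python
import numpy as np

def periodogram(x):
    """Periodogram of a series: squared FFT magnitudes at the Fourier
    frequencies, proportional to the variance carried by each period."""
    x = np.asarray(x, dtype=float) - np.mean(x)   # drop the zero frequency
    return np.abs(np.fft.rfft(x)) ** 2 / len(x)

# A pure cycle completing 5 periods over the sample peaks at frequency bin 5,
# i.e. a period of n / 5 observations.
n = 128
t = np.arange(n)
x = np.sin(2 * np.pi * 5 * t / n)
spec = periodogram(x)
peak = int(np.argmax(spec))
```

With a trending series, as the abstract notes, the low-frequency bins dominate; detrending first is what exposes the shorter periodic structure.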

Who Votes By Mail? A Dynamic Model of the Individual-Level Consequences of Vote-by-Mail Systems
Berinsky, Adam
Burns, Nancy
Traugott, Michael

Uploaded 04-17-1998
Keywords turnout
duration analysis
continuous-time multistate duration model
Abstract Throughout the years, a number of changes have been proposed to electoral laws with the aim of increasing voter turnout and altering the composition of the electorate to make it more reflective of the voting age population. The most recent of these innovations is voting-by-mail (VBM). While the use of VBM has spread through the United States, little empirical evaluation of the impact of VBM has been undertaken to date. The analysis presented here fills this gap in our knowledge by assessing the impact of VBM on the Oregon electorate through a multistate duration analysis (Heckman and Singer, 1984; Heckman and Walker, 1986, 1991) that takes into account other factors associated with election administration and characteristics of individual voters. This methodology has the added advantage of providing a reasonable basis for extrapolation of these effects to other jurisdictions. The results of our research suggest that VBM does increase voter turnout in the aggregate, although its effects are not uniform across all groups in the electorate. More importantly, it does not seem to exert any influence on the partisan composition of the electorate. From a methodological perspective, the use of a multistate duration analysis provides a promising approach to extrapolating the impact of a policy change from one jurisdiction to another when appropriate data are available in each.

Binomial-Beta Hierarchical Models for Ecological Inference
King, Gary
Rosen, Ori
Tanner, Martin A.

Uploaded 05-28-1998
Keywords ecological inference
hierarchical models
iterative simulation
Abstract We develop a binomial-beta hierarchical model for ecological inference, using insights from King's (1997) ecological inference model and from the literature on hierarchical models based on Markov chain Monte Carlo algorithms (Tanner, 1996). Models in the framework we provide appear to scale up well, to have few numerical difficulties, and to recognize and avoid automatically problems with multiple modes and some other statistical issues.
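The binomial-beta building block is the beta-binomial marginal likelihood: integrating a binomial count over a Beta(a, b) prior on its probability gives a closed form in Beta functions. A minimal stdlib sketch of its log pmf (the hyperparameter values are illustrative, not from the paper):

```python
from math import lgamma, exp

def log_beta(p, q):
    """log of the Beta function via log-gamma."""
    return lgamma(p) + lgamma(q) - lgamma(p + q)

def log_betabinom(y, n, a, b):
    """Log pmf of the beta-binomial: y successes in n trials, with the
    success probability integrated over a Beta(a, b) prior."""
    log_choose = lgamma(n + 1) - lgamma(y + 1) - lgamma(n - y + 1)
    return log_choose + log_beta(y + a, n - y + b) - log_beta(a, b)

# Sanity check: the pmf sums to one over y = 0..n.
probs = [exp(log_betabinom(y, 10, 2.0, 3.0)) for y in range(11)]
```

In a hierarchical EI setting, each precinct would contribute one such term, with the (a, b) pair itself given a hyperprior and the whole posterior explored by MCMC.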

Time Series Models for Discrete Data: solutions to a problem with quantitative studies of international conflict
Jackman, Simon

Uploaded 07-21-1998
Keywords categorical time series
dependent binary data
Markov regression models
latent autoregressive process
Markov Chain Monte Carlo
international conflict
democratic peace
Abstract Discrete dependent variables with a time series structure occupy something of a statistical limbo for even well-trained political scientists, prompting awkward methodological compromises and dubious substantive conclusions. An important example is the use of binary response models in the analysis of longitudinal data on international conflict: researchers understand that the data are not independent, but lack any way to model serial dependence in the data. Here I survey methods for modeling categorical data with a serial structure. I consider a number of simple models that enjoy frequent use outside of political science (originating in biostatistics), as well as a logit model with an autoregressive error structure (the latter model is fit via Bayesian simulation using Markov chain Monte Carlo methods). I illustrate these models in the context of international conflict data. Like other re-analyses of these data addressing the issue of serial dependence (e.g., Beck, Katz, and Tucker), I find that economic interdependence does not lessen the chances of international conflict. Other findings include a number of interesting asymmetries in the effects of covariates on transitions from peace to war (and vice versa). Any reasonable model of international conflict should take into account the high levels of persistence in the data; the models I present here suggest a number of methods for doing so.
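The transition asymmetries mentioned at the end presuppose treating the binary conflict series as a first-order Markov chain, with separate probabilities for entering and leaving conflict. A toy sketch of estimating those two transition probabilities from a 0/1 sequence (illustrative only; the paper's Markov regression models condition these probabilities on covariates):

```python
def transition_probs(seq):
    """Estimate first-order Markov transition probabilities from a binary
    sequence: P(1 | previous 0), onset, and P(0 | previous 1), termination."""
    counts = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 0}
    for prev, curr in zip(seq, seq[1:]):
        counts[(prev, curr)] += 1
    from0 = counts[(0, 0)] + counts[(0, 1)]
    from1 = counts[(1, 0)] + counts[(1, 1)]
    p_onset = counts[(0, 1)] / from0 if from0 else float("nan")
    p_exit = counts[(1, 0)] / from1 if from1 else float("nan")
    return p_onset, p_exit

# Mostly peace with one short war spell: rare onset, quick termination.
p_onset, p_exit = transition_probs([0, 0, 0, 1, 1, 0, 0, 0, 0, 0])
```

A Markov regression replaces each of these two constants with its own logit in the covariates, which is exactly where asymmetric covariate effects on peace-to-war versus war-to-peace transitions become estimable.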

Measuring the Electoral and Policy Impact of Majority-Minority Voting Districts: Candidates of Choice, Equal Opportunity, and Representation
Epstein, David
O'Halloran, Sharyn

Uploaded 09-15-1998
Keywords voting rights act
ecological regression
Abstract The Voting Rights Act guarantees minority voters an "equal opportunity to elect the candidate of their choice." Yet the implementation of this requirement is beset with technical difficulties: first, current case law provides no clear definition as to who qualifies as a candidate of choice of the minority community; second, traditional techniques for estimating equal opportunity rely heavily on ecological regression, which is prone to statistical bias; and third, no attempt is made to systematically evaluate the impact of alternative districting strategies on the substantive representation of minority interests, rather than just descriptive representation. This paper offers an alternative approach to majority-minority districting that 1) explicitly defines the term "candidate of choice;" 2) determines the point of equal opportunity without relying on ecological regression; and 3) estimates the expected impact of competing districting schemes on substantive representation. It then applies this technique to a set of alternative districting plans for the South Carolina State Senate.

Elections and the National Election Studies
King, Gary

Uploaded 00-00-0000
Keywords surveys
aggregate data
ecological inference
Abstract This paper, which was written for the National Election Studies, Congressional Elections Conference, argues that the National Election Studies can best contribute knowledge about American politics, and best ensure that the organization prospers, by a data collection strategy that includes a creative combination of detailed aggregate election data with traditional survey research. A sampling design similar to, but considerably less expensive than, the voter validation studies could produce a bounty of information about real precinct-level electoral returns from numerous electoral offices, along with valuable demographic and economic data.

Getting the Mean Right: Generalized Additive Models
Beck, Nathaniel
Jackman, Simon

Uploaded 00-00-0000
Keywords non-parametric regression
non-linear regression
Monte Carlo analysis
interaction effects
cabinet duration
Abstract We examine the utility of the generalized additive model as an alternative to the common linear model. Generalized additive models are flexible in that they allow the effect of each independent variable to be modelled non-parametrically while requiring that the effects of the independent variables combine additively. GAMs are common in the statistics literature but are conspicuously absent in political science. The paper presents the basic features of the generalized additive model. Through Monte Carlo experimentation we show that there is little danger of the generalized additive model finding spurious structures. We use GAMs to reanalyze several political science data sets. These applications show that generalized additive models can be used to improve standard analyses by guiding researchers as to the parametric shape of response functions. The technique also provides interesting insights about data, particularly in terms of modelling interactions.
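The additive fit at the heart of a GAM can be illustrated with the classic backfitting loop: each component smooth is re-estimated against the partial residuals of the others until the components stabilize. The sketch below uses a crude moving-average smoother in place of a proper spline, so it is a toy illustration of the idea rather than the estimator used in the paper:

```python
import numpy as np

def smooth(x, r, k=15):
    """Crude smoother: moving average of the partial residuals r taken in
    the sort order of x (edges are biased toward zero; fine for a toy)."""
    order = np.argsort(x)
    out = np.empty_like(r)
    out[order] = np.convolve(r[order], np.ones(k) / k, mode="same")
    return out

def backfit(y, xs, n_iter=20):
    """Backfitting for an additive model y = alpha + sum_j f_j(x_j) + error."""
    alpha = y.mean()
    fs = [np.zeros_like(y) for _ in xs]
    for _ in range(n_iter):
        for j, x in enumerate(xs):
            partial = y - alpha - sum(f for i, f in enumerate(fs) if i != j)
            fs[j] = smooth(x, partial)
            fs[j] -= fs[j].mean()   # center each component for identifiability
    return alpha + sum(fs), fs

# Additive truth with two nonlinear components (illustrative data).
rng = np.random.default_rng(3)
n = 400
x1, x2 = rng.uniform(-3, 3, n), rng.uniform(-2, 2, n)
y = np.sin(x1) + x2 ** 2 + 0.3 * rng.normal(size=n)
fitted, fs = backfit(y, [x1, x2])
```

Plotting each recovered `fs[j]` against its `x_j` is what "guiding researchers as to the parametric shape of response functions" looks like in practice: the sine and the quadratic shapes emerge without being specified in advance.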

Modelling Space and Time: The Event History Approach
Beck, Nathaniel

Uploaded 08-22-1996
Keywords duration analysis
event history analysis
time-series--cross-section data
discrete duration data
duration dependence
Abstract This is an elementary exposition of duration modelling prepared for a volume in celebration of the 30th anniversary of the Essex Summer School (Research Strategies in the Social Sciences, Elinor Scarbrough and Eric Tanenbaum, editors). The approach is non-mathematical. The running example used is the King et al. model of cabinet durations with particular attention paid to detecting and interpreting duration dependence in that model. There is some new discussion of ascertaining duration dependence using discrete methods and the relationship between discrete duration data and binary time-series--cross-section data.
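The correspondence the chapter discusses between discrete duration data and binary time-series--cross-section data is mechanical: each completed spell expands into one binary observation per period, 0 while the spell survives and 1 when it ends. A minimal sketch of that expansion:

```python
def spells_to_binary(durations):
    """Expand completed spells into (unit, period, terminated) rows: a spell
    of length d contributes d rows, only the last with terminated = 1."""
    rows = []
    for unit, d in enumerate(durations):
        for t in range(1, d + 1):
            rows.append((unit, t, int(t == d)))
    return rows

# A 3-period cabinet and a 1-period cabinet become four binary observations.
rows = spells_to_binary([3, 1])
```

A right-censored spell would simply omit the final 1; a logit or cloglog fit to the expanded data, with period dummies or a function of t, then estimates the discrete-time hazard and any duration dependence directly.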

The Resurgence of Nativism in California? The Case of Proposition 187 and Illegal Immigration
Alvarez, R. Michael
Butterfield, Tara L.

Uploaded 09-25-1997
Keywords two-stage probit
discrete choice
binary probit
propositions and initiatives
economic voting
illegal immigration
immigration reform
California politics
Abstract We argue that support among California voters for Proposition 187 in 1994 was an example of cyclical nativism. This nativism was provoked primarily by California's economic downturn during the early 1990s. We develop four specific hypotheses to explain how poor economic conditions in California and the consequent nativistic sentiments would result in support for Proposition 187: 1) voters who believe that California's economic condition is poor will be more likely to support Proposition 187; 2) voters who perceive themselves as being economically threatened by illegal immigrants will be more likely to support Proposition 187; 3) voters with lower levels of education are more economically vulnerable and will be more likely to support Proposition 187; 4) voters in Southern California feel more directly affected by illegal immigration and will be more likely to support Proposition 187. To test these hypotheses, we analyze voter exit poll data from the 1994 California election. We utilize a two-stage probit model to allow for the endogeneity which results from the politicization of illegal immigration during this election. We find support for our hypotheses in the data. These findings cause us to conclude that nativism, fueled by economic conditions, was a salient factor leading many Californians to support Proposition 187.

Concordance, Projection, and Citizen Perceptions of Roll Call Voting
Wilson, J. Matthew
Gronke, Paul

Uploaded 08-21-1997
Keywords roll call votes
Abstract Democratic government is premised on some level of representation of constituent opinion by elected legislators. Our work examines this relationship from the perspective of the individual constituent. We assess the degree and causes of constituent knowledge about particular roll call votes. In the current work, we select a high profile domestic policy issue -- the 1994 Crime Bill -- as our focus. We show that citizen misperception of their representative's position on this piece of legislation was rampant, with false positives (erroneously attributing support) far outweighing false negatives (erroneously attributing opposition). These descriptive results alone stand in contrast to previous findings regarding the 1991 Gulf War vote and the 1992 vote to confirm Clarence Thomas to the Supreme Court (Alvarez and Gronke, 1996a, 1996b). Next, in a series of causal models, we show how concordance and projection dominate citizen attributions for this particular bill. These processes may work to the benefit of incumbents, since people who are unable to recall a particular vote are most likely to project their own attitudes onto the representative.

Testing Theories Involving Strategic Choice: The Example of Crisis Escalation
Smith, Alastair

Uploaded 07-23-1997
Keywords Strategic choice
Bayesian model testing
Markov chain Monte Carlo simulation
multi-variate probit
crisis escalation
Abstract If we believe that politics involves a significant amount of strategic interaction then classical statistical tests, such as Ordinary Least Squares, Probit or Logit, cannot give us the right answers. This is true for two reasons: The dependent variables under observation are interdependent -- that is the essence of game theoretic logic -- and the data is censored -- that is an inherent feature of off-the-path expectations that leads to selection effects. I explore the consequences of strategic decision making on empirical estimation in the context of international crisis escalation. I show how and why classical estimation techniques fail in strategic settings. I develop a simple strategic model of decision making during crises. I ask what this explanation implies about the distribution of the dependent variable: the level of violence used by each nation. Counterfactuals play a key role in this theoretical explanation. Yet, conventional econometric techniques take no account of unrealized opportunities. For example, suppose a weak nation (B) is threatened by a powerful neighbor (A). If we believe that power strongly influences the use of force then the weak nation realizes that the aggressor's threats are probably credible. Not wishing to fight a more powerful opponent, nation B is likely to acquiesce to the aggressor's demands. Empirically, we observe A threaten B. The actual level of violence that A uses is low. However, the theoretical model suggests that B acquiesced precisely because A would use force. Although the theoretical model assumes a strong relationship between strength and the use of force, traditional techniques find a much weaker relationship. Our ability to observe whether nation A is actually prepared to use force is censored when nation B acquiesces. I develop a Strategically Censored Discrete Choice (SCDC) model which accounts for the interdependent and censored nature of strategic decision making.
I use this model to test existing theories of dispute escalation. Specifically, I analyze Bueno de Mesquita and Lalman's (1992) dyadically coded version of the Militarized Interstate Dispute data (Gochman and Maoz 1984). I estimate this model using a Bayesian Markov chain Monte Carlo simulation method. Using Bayesian model testing, I compare the explanatory power of a variety of theories. I conclude that strategic choice explanations of crisis escalation far out-perform non-strategic ones.

Structural Shifts And Deterministic Regime Switching in Aggregate Data Analysis
Tam, Wendy

Uploaded 06-04-1997
Keywords ecological inference
Abstract It is common for the only available data to be aggregated at a level above the microeconomic unit in question. Analyzing aggregate data with ecological regression implicitly assumes constancy of parameters across aggregate units. This assumption is rarely tenable in aggregate data analysis since the aggregation process often generates new macro-level observations where the parameters vary across the aggregate units. Standard ecological regression estimates are not useful when they are employed on data with changing structural parameters. A switching regressions context is proposed where the state-defining variable is deterministic and measures homogeneity between the macro-level units.
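The switching-regressions idea in the abstract above can be sketched as follows: when a deterministic state variable marks which structural regime each aggregate unit belongs to, separate regressions within each regime recover the regime-specific parameters that a pooled ecological regression blurs together. The regime labels, slopes, and sample sizes below are hypothetical illustrations, not Tam's data or estimator.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical aggregate units in two structural regimes: the slope
# linking x to y differs across regimes, which a single pooled
# ecological regression ignores.
n = 200
state = np.repeat([0, 1], n // 2)          # deterministic state-defining variable
x = rng.uniform(0, 1, n)
slopes = np.where(state == 0, 0.2, 0.8)    # regime-specific parameters
y = slopes * x + rng.normal(0, 0.01, n)

def ols_slope(x, y):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1]

pooled = ols_slope(x, y)                   # blends the two regimes together
by_regime = [ols_slope(x[state == k], y[state == k]) for k in (0, 1)]
```

The pooled slope lands between the two regime slopes and describes neither regime, while the within-regime fits recover each structural parameter.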

Appendix to 'A Computable Equilibrium Model for the Study of Political Economy'
Freeman, John R.
Houser, Daniel

Uploaded 03-14-1997
Keywords political economy
computable political economic equilibrium
Abstract This is an appendix to 'Freeman, John R. and Daniel Houser: A Computable Equilibrium Model for the Study of Political Economy.' [cf. note 9]

Dynamic Models for Dynamic Theories: The Ins and Outs of Lagged Dependent Variables
Keele, Luke
Kelly, Nathan

Uploaded 06-28-2005
Keywords time series
lagged dependent variables
Abstract A lagged dependent variable in an OLS regression is often used as a means of capturing dynamic effects in political processes and as a method for ridding the model of autocorrelation. But recent work contends that the lagged dependent variable specification is too problematic for use in most situations. More specifically, if residual autocorrelation is present, the lagged dependent variable causes the coefficients for explanatory variables to be biased downward. We use a Monte Carlo analysis to assess empirically how much bias is present when a lagged dependent variable is used under a wide variety of circumstances. In our analysis, we compare the performance of the lagged dependent variable model to several other time series models. We show that while the lagged dependent variable is inappropriate in some circumstances, it remains an appropriate model for the dynamic theories often tested by applied analysts. From the analysis, we develop several practical suggestions on when and how to use lagged dependent variables on the right-hand side of a model.
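The bias mechanism the abstract describes can be reproduced in a minimal Monte Carlo sketch: with white-noise errors, OLS with a lagged dependent variable recovers the true parameters, but when the errors are themselves autocorrelated, the lagged term absorbs that persistence, inflating its own coefficient and deflating the explanatory variable's. The data-generating process and parameter values below are illustrative assumptions, not the paper's simulation design.

```python
import numpy as np

def simulate(n, rho, beta, phi, rng):
    """y_t = rho*y_{t-1} + beta*x_t + u_t, with an AR(1) regressor x
    and AR(1) errors u (phi = 0 gives white-noise errors)."""
    x = np.zeros(n)
    u = np.zeros(n)
    y = np.zeros(n)
    v = rng.standard_normal(n)
    e = rng.standard_normal(n)
    for t in range(1, n):
        x[t] = 0.5 * x[t - 1] + v[t]
        u[t] = phi * u[t - 1] + e[t]
        y[t] = rho * y[t - 1] + beta * x[t] + u[t]
    return x, y

def ols_ldv(x, y):
    """Regress y_t on a constant, y_{t-1}, and x_t; return (rho_hat, beta_hat)."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1], x[1:]])
    coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return coef[1], coef[2]

rng = np.random.default_rng(0)
n, rho, beta = 20000, 0.5, 1.0
rho0, beta0 = ols_ldv(*simulate(n, rho, beta, phi=0.0, rng=rng))  # near truth
rho1, beta1 = ols_ldv(*simulate(n, rho, beta, phi=0.5, rng=rng))  # biased
```

With residual autocorrelation (phi = 0.5), the estimated coefficient on the lagged dependent variable rises above its clean-error counterpart while the explanatory variable's coefficient falls, matching the downward bias the abstract describes.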

Publication, Publication
King, Gary

Uploaded 07-26-2005
Keywords replication
data sharing
class assignments
Abstract I show herein how to write a publishable paper by beginning with the replication of a published article. This strategy seems to work well for class projects in producing papers that ultimately get published, helping to professionalize students into the discipline, and teaching them the scientific norms of the free exchange of academic information. I begin by briefly revisiting the prominent debate on replication our discipline had a decade ago and some of the progress made in data sharing since. (This paper is forthcoming in PS: Political Science and Politics. The current version is available at http://gking.harvard.edu.)

A Bayesian analysis of the multinomial probit model using marginal data augmentation
Imai, Kosuke
van Dyk, David A.

Uploaded 08-21-2002
Keywords Bayesian analysis
Data augmentation
Prior distributions
Probit models
Rate of convergence
Abstract We introduce a set of new Markov chain Monte Carlo algorithms for Bayesian analysis of the multinomial probit model. Our Bayesian representation of the model places a new, and possibly improper, prior distribution directly on the identifiable parameters and thus is relatively easy to interpret and use. Our algorithms, which are based on the method of marginal data augmentation, involve only draws from standard distributions and dominate other available Bayesian methods in that they are as quick to converge as the fastest methods but with a more attractive prior specification.

A Hierarchical Bayesian Framework for Item Response Theory Models with Applications in Ideal Point Estimation
Lu, Ying
Wang, Xiaohui

Uploaded 07-15-2006
Keywords item response theory
testlet response theory
random and fixed effect models
vote cast data
roll call analysis
Abstract Ideal point estimation, a variation of item response theory models, has been widely used by political scientists to analyze legislative behavior. However, much existing ideal point estimation research is based on unrealistic assumptions: the independence of different individuals' decisions on all cases/bills, and the independence of one individual's decisions across different cases/bills. The violation of such assumptions leads to bias and inefficiency in parameter estimation. More importantly, failing to address these assumptions has kept ideal point estimation research from offering intuitive and concise explanations of complex legislative behaviors such as multidimensionality, strategic voting, and temporary coalitions. In this paper, we extend the testlet response theory model of Bradlow, Wainer and Wang (1999) to a comprehensive hierarchical Bayesian statistical framework that allows researchers to model inter-individual and intra-individual correlations through random effects and/or fixed effects. Through simulations and an analysis of US Supreme Court vote cast data, we show that the proposed framework holds good promise for tackling many unsettled issues in ideal point estimation. As a companion to this paper, we also offer an easy-to-use R package with C code that implements the methods discussed herein.

Electoral Outcomes, Economic Expectations and the 'Ethic of Self-Reliance'
Glasgow, Garrett
Weber, Roberto

Uploaded 08-18-2002
Keywords economic individualism
election outcomes
economic expectations
Abstract This paper examines how election outcomes affect individual economic expectations. In particular, we are interested in how differences in economic individualism change the relationship between election outcomes and individual expectations for personal economic well-being. We hypothesize that economic individualists will not link electoral outcomes to expectations for their personal economic well-being, while individuals who are not economic individualists will link the two. We confirm this hypothesis empirically using a postelection survey from the 1994 German Bundestag election.

Alternative Balance Metrics for Bias Reduction in Matching Methods for Causal Inference
Sekhon, Jasjeet

Uploaded 07-18-2006
Keywords Matching
Causal Inference
Genetic Matching
Balance Metrics
Abstract Sekhon (2006; 2004a) and Diamond and Sekhon (2005) propose a matching method, called Genetic Matching, which algorithmically maximizes the balance of covariates between treatment and control observations via a genetic search algorithm (Sekhon and Mebane 1998). The method is neutral as to what measures of balance one wishes to optimize. By default, cumulative probability distribution functions of a variety of standardized statistics are used as balance metrics and are optimized without limit. The statistics are not used to conduct formal hypothesis tests, because no measure of balance is a monotonic function of bias in the estimand of interest and because we wish to maximize balance. Descriptive measures of discrepancy generally ignore key information related to bias which is captured by probability distribution functions of standardized test statistics. For example, using several descriptive metrics, one is unable reliably to recover the experimental benchmark in a testbed dataset for matching estimators (Dehejia and Wahba 1999). And these metrics, unlike those based on optimized distribution functions, perform poorly in a series of Monte Carlo sampling experiments just as one would expect given their properties.

The Essential Role of Pair Matching in Cluster-Randomized Experiments, with Application to the Mexican Universal Health Insurance Evaluation
Imai, Kosuke
King, Gary
Nall, Clayton

Uploaded 07-17-2007
Keywords causal inference
community intervention trials
field experiments
group-randomized trials
health policy
matched-pair design
Abstract A basic feature of many field experiments is that investigators are only able to randomize clusters of individuals -- such as households, communities, firms, medical practices, schools, or classrooms -- even when the individual is the unit of interest. To recoup some of the resulting efficiency loss, many studies pair similar clusters and randomize treatment within pairs. Other studies (including almost all published political science field experiments) avoid pairing, in part because some prominent methodological articles claim to have identified serious problems with this 'matched-pair cluster-randomized' design. We prove that all such claims about problems with this design are unfounded. We then show that the estimator for matched-pair designs favored in the literature is appropriate only in situations where matching is not needed. To address this problem without modeling assumptions, we generalize Neyman's (1923) approach and propose a simple new estimator with much improved statistical properties. We also introduce methods to cope with individual-level noncompliance, which most existing approaches incorrectly assume away. We show that from the perspective of, among other things, bias, efficiency, or power, pairing should be used in cluster-randomized experiments whenever feasible; failing to do so is equivalent to discarding a considerable fraction of one's data. We develop these techniques in the context of a randomized evaluation we are conducting of the Mexican Universal Health Insurance Program.

Bayesian Model Averaging: Theoretical developments and practical applications
Montgomery, Jacob
Nyhan, Brendan

Uploaded 01-22-2008
Keywords Bayesian model averaging
model robustness
specification uncertainty
Abstract Political science researchers typically conduct an idiosyncratic search of possible model configurations and then present a single specification to readers. This approach systematically understates the uncertainty of our results, generates concern among readers and reviewers about fragile model specifications, and leads to the estimation of bloated models with huge numbers of controls. Bayesian model averaging (BMA) offers a systematic method for analyzing specification uncertainty and checking the robustness of one's results to alternative model specifications. In this paper, we summarize BMA, review important recent developments in BMA research, and argue for a different approach to using the technique in political science. We then illustrate the methodology by reanalyzing models of voting in U.S. Senate elections and international civil war onset using software that respects statistical conventions within political science.

A Spatial Model of Electoral Platforms
Elff, Martin

Uploaded 07-01-2008
Keywords Parties
party families
electoral platforms
party manifestos
spatial models
unobserved data
latent trait models
EM algorithm
Monte Carlo integration
Monte Carlo EM
importance sampling
SIR algorithm
ideological dimensions
Abstract The reconstruction of political positions of parties, candidates and governments has made considerable headway during the last decades, not least due to the efforts of the Manifesto Research Group and the Comparative Manifestos Project, which compiled and published a data set on the electoral platforms of political parties from most major democracies for most of the post-war era. A central assumption underlying the coding of electoral platforms into quantitative data as done by the MRG/CMP is that parties take positions by selective emphasis of policy objectives, which put their accomplishments in a most positive light (Budge 2001) or are representative of their current political/ideological positions. Consequently, the MRG/CMP data consist of percentages of the respective manifesto texts that refer to various policy objectives. As a consequence both of this underlying assumption and of the structure of the CMP data, methods of classical multivariate analysis are not well suited to these data, due to the requirements these methods place on the data for an appropriate application (van der Brug 2001; Elff 2002). The paper offers an alternative method for reconstructing positions in political spaces based on latent trait modelling, which reflects both the assumptions underlying the coding of the texts and the peculiar structure of the data. Finally, the validity of the proposed method is demonstrated with respect to the average position of party families within reconstructed policy spaces. It turns out that communist, socialist, and social democratic parties differ clearly from 'bourgeois' parties with regard to their positions on an economic left/right dimension, while British and Scandinavian conservative parties can be distinguished from Christian democratic parties by their respective positions on a libertarian/authoritarian and a traditionalist/modernist dimension. Similarly, the typical political positions of green (or 'New Politics') parties can be distinguished from the positions of other party families.

Problematic Choices: Testing for Correlated Unit Specific Effects in Panel Data
Troeger, Vera

Uploaded 07-07-2008
Abstract The (generalized) Hausman specification test (Hausman 1978) is the gold standard for political scientists using time-series cross-section data to check whether unit specific effects are correlated with right-hand-side variables. More than 500 articles (published in SSCI journals) over the last 20 years in Economics and Political Science used the Hausman test to justify the model choice, e.g. whether to employ a fixed effects or random effects/pooled OLS specification. The asymptotic properties of the Hausman test and its variants are well known and formal power analyses have shown that the Hausman test performs reasonably well. Yet, the differences in the estimates of fixed effects and random effects models in finite samples can originate from two different sources: On the one hand, the Hausman test might rightly pick up differences that are caused by the inconsistency of the random effects estimator if unit specific effects are correlated with any of the explanatory variables and the random effects model therefore produces biased coefficients. On the other hand, differences might also stem from the inefficiency of the fixed effects estimator if explanatory variables are rarely changing and therefore only have a very small within variation. This inefficiency leads not only to large standard errors but also to very unreliable point estimates that might be far away from the true relationship. While the Hausman test (and especially more recent variants and augmentations of the specification test) acknowledges the inefficiency of the fixed effects model and controls for the differences in the asymptotic variances of the two estimators, this inefficiency in combination with correlated unit effects might still lead to unreliable test results.
In International Relations and International and Comparative Political Economy where many of our explanatory variables measure institutions which do not change much over time this result might be especially harmful since the fixed effects model in this case produces very unreliable point estimates. This paper analyses the finite sample properties and power of the Hausman specification test by using Monte Carlo experiments. It shows under what conditions, e.g. the size of the correlation between unit specific effects and explanatory variables, and the between-within variance ratio of right-hand-side variables, the Hausman test generates misleading results.

Congressional Careers, Committee Assignments, and Seniority Randomization in the U.S. House of Representatives
Kellermann, Michael
Shepsle, Kenneth

Uploaded 02-01-2008
Keywords Congress
Abstract This paper estimates the effects of initial committee seniority on the career outcomes of Democratic members of the House of Representatives from 1949 to 2006. When more than one freshman representative is assigned to a committee, positions in the seniority queue are established by lottery. This ensures that queue positions are uncorrelated in expectation with other legislator characteristics within these groups. This natural experiment allows us to estimate the causal effect of seniority on a variety of outcomes. Lower ranked committee members are less likely to serve as subcommittee chairs on their initial committee, are more likely to transfer to other committees, and have fewer sponsored bills passed in the jurisdiction of their initial committee. On the other hand, there is little evidence that the seniority randomization has a net effect on reelection, terms of service in the House, or the total number of sponsored bills passed.

What is the probability your vote will make a difference?
Gelman, Andrew
Silver, Nate
Edlin, Aaron

Uploaded 10-27-2008
Abstract One of the motivations for voting is that one vote can make a difference. In a presidential election, the probability that your vote is decisive is equal to the probability that your state is necessary for an electoral college win, times the probability the vote in your state is tied in that event. We compute these probabilities for each state in the 2008 presidential election, using state-by-state election forecasts based on the latest polls. The states where a single vote is most likely to matter are New Mexico, Virginia, New Hampshire, and Colorado, where your vote has an approximate 1 in 10 million chance of determining the national election outcome. On average, a voter in America has a 1 in 60 million chance of being decisive in the presidential election.
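The decomposition in the abstract above is simple enough to sketch directly: the probability a single vote is decisive is the probability the state is pivotal in the Electoral College times the probability the state's popular vote is exactly tied given that. The numbers below are hypothetical placeholders chosen only to land near the 1-in-10-million scale the abstract reports, not the paper's state-level estimates.

```python
def decisive_vote_probability(p_state_pivotal, p_state_tied_given_pivotal):
    """P(your vote decides the election) =
    P(your state is necessary for an electoral college win)
    * P(your state's popular vote is exactly tied, given that)."""
    return p_state_pivotal * p_state_tied_given_pivotal

# Hypothetical battleground state: pivotal 30% of the time, and tied
# about 1 in 3 million conditional on being pivotal.
p = decisive_vote_probability(0.30, 1 / 3_000_000)
```

For these illustrative inputs, p is 1e-7, i.e. roughly a 1 in 10 million chance, matching the order of magnitude the abstract cites for the closest states.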

We're Not Lost, But How Did We Get Here? Appended Documents
Jackson, John

Uploaded 07-07-2009
Keywords Society for Political Methodology
Abstract These are various documents to be appended to the narrative on the beginning of the Society for Political Methodology.

Quantitative Discovery from Qualitative Information: A General-Purpose Document Clustering Methodology
King, Gary
Grimmer, Justin

Uploaded 07-19-2009
Keywords unsupervised learning
content analysis
Abstract Many people attempt to discover useful information by reading large quantities of unstructured text, but because of known human limitations even experts are ill-suited to succeed at this task. This difficulty has inspired the creation of numerous automated cluster analysis methods to aid discovery. We address two problems that plague this literature. First, the optimal use of any one of these methods requires that it be applied only to a specific substantive area, but the best area for each method is rarely discussed and usually unknowable ex ante. We tackle this problem with mathematical, statistical, and visualization tools that define a search space built from the solutions to all previously proposed cluster analysis methods (and any qualitative approaches one has time to include) and enable a user to explore it and quickly identify useful information. Second, in part because of the nature of unsupervised learning problems, cluster analysis methods are not routinely evaluated in ways that make them vulnerable to being proven suboptimal or less than useful in specific data types. We therefore propose new experimental designs for evaluating these methods. With such evaluation designs, we demonstrate that our computer-assisted approach facilitates more efficient and insightful discovery of useful information than either expert human coders using qualitative or quantitative approaches or existing automated methods. We (will) make available an easy-to-use software package that implements all our suggestions.

The Democracy Paradox
Gagnon, Jean-Paul

Uploaded 02-24-2010
Keywords democracy
political science
what is democracy
Abstract This paper argues that democracy is a governing method endemic to human nature. It also argues that since democracy's growth and stylization (for example by the Mycenaeans, Greeks, Ottomans, and later the modern post-colonial world) it has been misunderstood and incorrectly defined. At present, many scholars (such as Beetham, Breton, Dahl, Diamond, Huntington, Keane, and to a certain extent Touraine) seek to explain democracy as theorists and philosophers have been trying to do for millennia. The failure to express the general laws of democracy in a universally accepted definition is a major crux in political theory. The current political science focus on indices which assign performance scores for how 'democratic' a country is reveals another example of how, increasingly, mainstream political thinking seeks to define democracy with general criteria (evincing a desire to assign universal laws to democracy). This paper will show that there is, and has been for well over 3500 years, a democracy paradox by explaining what it is and how it came about. This will be done firstly by revealing the history of the paradox; then discussing how it came to the modern era without being solved; finishing with the answer to the paradox derived from the author's doctoral thesis.

Multivariate Matching Methods That are Monotonic Imbalance Bounding
King, Gary
Iacus, Stefano
Porro, Giuseppe

Uploaded 01-03-2011
Keywords Matching
Causal Inference
Abstract We introduce a new ``Monotonic Imbalance Bounding'' (MIB) class of matching methods for causal inference with a surprisingly large number of attractive statistical properties. MIB generalizes and extends in several new directions the only existing class, ``Equal Percent Bias Reducing'' (EPBR), which is designed to satisfy weaker properties and only in expectation. We also offer strategies to obtain specific members of the MIB class, and analyze in more detail a member of this class, called Coarsened Exact Matching, whose properties we analyze from this new perspective. We offer a variety of analytical results and numerical simulations that demonstrate how members of the MIB class can dramatically improve inferences relative to EPBR-based matching methods.

Reasoning about Interference Between Units
Bowers, Jake
Fredrickson, Mark
Panagopoulos, Costas

Uploaded 07-13-2012
Keywords interference
randomization inference
randomized experiments
Fisher's sharp null hypothesis
causal inference
Abstract If an experimental treatment is experienced by both treated and control group units, tests of hypotheses about causal effects may be difficult to conceptualize let alone execute. In this paper, we show how counterfactual causal models may be written and tested when theories suggest spillover or other network-based interference among experimental units. We show that the ``no interference'' assumption need not constrain scholars who have interesting questions about interference. We offer researchers the ability to model theories about how treatment given to some units may come to influence outcomes for other units. We further show how to test hypotheses about these causal effects, and we provide tools to enable researchers to assess the operating characteristics of their tests given their own models, designs, test statistics, and data. The conceptual and methodological framework we develop here is particularly applicable to social networks, but may be usefully deployed whenever a researcher wonders about interference between units. Interference between units need not be an untestable assumption; instead, interference is an opportunity to ask meaningful questions about theoretically interesting phenomena.

Causal Inference in Conjoint Analysis: Understanding Multi-Dimensional Choices via Stated Preference Experiments
Hainmueller, Jens
Hopkins, Daniel
Yamamoto, Teppei

Uploaded 12-12-2012
Keywords potential outcomes
average marginal component effects
fractional factorial design
orthogonal design
randomized design
survey experiments
public opinion
vote choice
Abstract For decades, market researchers have used conjoint analysis to understand how consumers make decisions when faced with multi-dimensional choices. In such analyses, respondents are asked to score or rank a set of alternatives, where each alternative is defined by multiple attributes which are varied randomly or intentionally. Political scientists are frequently interested in parallel questions about decision-making, yet to date conjoint analysis has seen little use within the field. In this manuscript, we demonstrate the potential value of conjoint analysis in political science, using examples about vote choice and immigrant admission to the United States. In doing so, we develop a set of statistical tools for drawing causal conclusions from stated preference data based on the potential outcomes framework of causal inference. We discuss the causal estimands of interest and provide a formal analysis of the assumptions required for identifying those quantities. Prior conjoint analyses have typically used designs which limit the number of unique conjoint profiles. We employ a survey experiment to compare this approach to a fully randomized approach. Both our formal analysis of the causal estimands and our empirical results highlight the potential biases of common approaches to conjoint analysis which restrict the number of profiles.

Cluster analysis for political scientists
Filho, Dalson
Rocha, Enivaldo
Silva, Mariana
Paranhos, Ranulfo
Alexandre, José

Uploaded 03-19-2014
Keywords cluster analysis
Q analysis
political regimes
Abstract This paper provides an intuitive introduction to cluster analysis. Our target audience is both undergraduate and graduate students in their initial training stage. Methodologically, we use basic simulation to illustrate the underlying logic of cluster analysis. In addition, we replicate data from Coppedge, Alvarez and Maldonado (2008) to classify political regimes according to Dahl's (1971) polyarchy dimensions: contestation and inclusiveness. With this paper we hope to diffuse the cluster analysis technique in Political Science and help novice scholars not only to understand cluster analysis but also to apply it in their own research designs.
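The underlying logic the abstract refers to can be sketched with a hand-rolled k-means on two dimensions mirroring Dahl's contestation/inclusiveness space. The regime scores below are simulated placeholders, not the Coppedge, Alvarez and Maldonado data, and plain k-means is only one of many clustering techniques such a paper might cover.

```python
import numpy as np

def kmeans(points, centers, iters=50):
    """Plain k-means: assign each point to its nearest center, then move
    each center to the mean of its assigned points. Deterministic given
    the initial centers."""
    centers = centers.astype(float).copy()
    for _ in range(iters):
        # pairwise distances, shape (n_points, n_centers)
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(len(centers)):
            if np.any(labels == k):
                centers[k] = points[labels == k].mean(axis=0)
    return labels, centers

# Hypothetical regime scores on (contestation, inclusiveness), scaled 0-1:
# one low-low group and one high-high group.
rng = np.random.default_rng(1)
low = rng.normal([0.2, 0.2], 0.05, size=(20, 2))
high = rng.normal([0.8, 0.8], 0.05, size=(20, 2))
X = np.vstack([low, high])

labels, centers = kmeans(X, centers=np.array([[0.0, 0.0], [1.0, 1.0]]))
```

The two recovered centers sit near the group means, and the labels separate the low-scoring from the high-scoring regimes, which is the classification task the abstract describes.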

A False Discovery Framework for Mitigating Publication Bias
Spahn, Bradley
Franco, Annie

Uploaded 07-27-2015
Keywords Multiple Comparisons
Hypothesis Testing
Statistical Inference
Publication Bias
Abstract The social science and biomedical fields are increasingly concerned about the replicability of published results. Preregistration has been proposed as a means of addressing these concerns, but faces criticism for possibly stifling unexpected scientific discovery. We propose a system for experimental inference that mitigates the returns to p-hacking in non-preregistered and observational settings. Our method controls the weighted rate of type-I errors, subjecting unimportant hypotheses (e.g. higher-order interactions) to more stringent null hypothesis rejection thresholds than important ones. We show that this method can be applied to studies in which many p-values go unreported by estimating an upper bound on the number of missing statistics such that a particular null hypothesis would still be rejected. We show that preregistration is still advantageous in this framework, as researchers adhering to planned analyses would only be held accountable for the specified statistical tests. Finally, we reanalyze several existing studies using these methods to demonstrate their usefulness.

Bayesian Measures of Explained Variance and Pooling in Multilevel (Hierarchical) Models
Gelman, Andrew
Pardoe, Iain

Uploaded 04-16-2004
Keywords adjusted R-squared
Bayesian inference
hierarchical model
multilevel regression
partial pooling
Abstract Explained variance (R2) is a familiar summary of the fit of a linear regression and has been generalized in various ways to multilevel (hierarchical) models. The multilevel models we consider in this paper are characterized by hierarchical data structures in which individuals are grouped into units (which themselves might be further grouped into larger units), and there are variables measured on individuals and each grouping unit. The models are based on regression relationships at different levels, with the first level corresponding to the individual data, and subsequent levels corresponding to between-group regressions of individual predictor effects on grouping unit variables. We present an approach to defining R2 at each level of the multilevel model, rather than attempting to create a single summary measure of fit. Our method is based on comparing variances in a single fitted model rather than comparing to a null model. In simple regression, our measure generalizes the classical adjusted R2. We also discuss a related variance comparison to summarize the degree to which estimates at each level of the model are pooled together based on the level-specific regression relationship, rather than estimated separately. This pooling factor is related to the concept of shrinkage in simple hierarchical models. We illustrate the methods on a dataset of radon in houses within counties using a series of models ranging from a simple linear regression model to a multilevel varying-intercept, varying-slope model.

The Macro Mechanics of Social Capital
Keele, Luke

Uploaded 10-15-2003
Keywords social capital
time series
public opinion
Abstract Interest in social capital has grown as it has become apparent that it is an important predictor of collective well-being. Recently, however, attention has shifted to how levels of social capital have changed over time. But focusing on how a society moves from one level of social capital to another requires that we alter current theory. In particular, by moving to the context of temporal change, we must not treat it as a lumpy concept with general causes and effects. Instead, we need a theory that explains the macro mechanics between civic activity and interpersonal trust. In the following analysis, I develop a macro theory of social capital through a careful delineation of the social capital aggregation process which demonstrates that we should expect civic engagement to affect interpersonal trust over time with the reverse not being true. Then, I develop and use new longitudinal measures of civic engagement and interpersonal trust to test the direction of causality between the two components of social capital. Finally, I model civic engagement as a function of resources and demonstrate how the decline in civic engagement has adversely affected levels of interpersonal trust over the last thirty years.

Political Preference Formation: Competition, Deliberation, and the (Ir)relevance of Framing Effects
Druckman, Jamie

Uploaded 07-09-2003
Keywords framing effects
rational choice theory
political psychology
Abstract A framing effect occurs when different, but logically equivalent, words or phrases (such as 95% employment or 5% unemployment) cause individuals to alter their preferences. Framing effects challenge the foundational assumptions of much of the social sciences (e.g., the existence of coherent preferences or stable attitudes), and raise serious normative questions about democratic responsiveness. Many scholars and pundits assume that framing effects are highly robust in political contexts. Using a new theory and an experiment with more than 550 participants, I show that this is not the case: framing effects do not occur in many political settings. Elite competition and citizens' interpersonal conversations often vitiate and eliminate framing effects. However, I also find that when framing effects persist, they can be even more pernicious than often thought: not only do they suggest incoherent preferences, but they also stimulate increased confidence in those preferences. My results have broad implications for preference formation, rational choice theory, political psychology, and experimental design.
