
Search Results

Below are results based on the search criterion 'Benford'
Total number of records returned: 874

The Most Liberal Senator: Analyzing and Interpreting Congressional Roll Calls
Clinton, Joshua
Jackman, Simon
Rivers, Doug

Uploaded 05-12-2004
Keywords ideal points
roll call voting
2004 presidential election
Abstract The non-partisan National Journal recently declared Senator John Kerry to be the "top liberal" in the Senate based on analysis of 62 roll calls in 2003. Although widely reported in the media (and the subject of a debate among the Democratic presidential candidates), we argue that this characterization of Kerry is misleading in at least two respects. First, when we account for the "margin of error" in the voting scores -- which is considerable for Kerry given that he missed 60% of the National Journal's key votes while campaigning -- we discover that the probability that Kerry is the "top liberal" is only .30, and that we cannot reject the conclusion that Kerry is only the 20th most liberal senator. Second, we compare the position of President Bush on these key votes; including the President's announced positions on these votes reveals the President to be just as conservative as Kerry is liberal (i.e., both candidates are extreme relative to the 108th Senate). A similar conclusion holds when we replicate the analysis using all votes cast in the 107th Senate. A more comprehensive analysis than that undertaken by National Journal (including an accounting of the margins of error in voting scores) shows that although Kerry belongs to the most liberal quintile of the contemporary Senate, Bush belongs to the most conservative quintile.

Unanticipated Delays: A Unified Model of Position Timing and Position Content
Boehmke, Frederick

Uploaded 12-09-2003
Keywords duration
discrete choice
seemingly unrelated
position taking
Abstract On potentially contentious votes or when the margin of an upcoming vote is expected to be small, public position announcements by elected representatives may be strategically linked to position content and ultimately, to vote choice. Strategic position timing may occur when legislators announce early in order to sway others' vote choice; it may occur late when legislators stall in order to gain more information or are hoping that a close margin will make their vote valuable to participants willing to make side payments. Since intentions behind delay may often be unobserved or even unobservable, existing empirical analyses are unable to capture them. In this paper I argue that unobserved factors that influence position timing are related to unobserved factors influencing position content. To test this prediction, I develop a seemingly unrelated discrete-choice duration model that estimates the relationship between unobserved factors in the two processes. I then estimate this model using data on position timing and position content from the vote for the North American Free Trade Agreement. The results provide clear evidence that the two processes are linked and are consistent with my arguments about the sources of unanticipated delay.

Presidential Elections and the Stock Market: Comparing Markov-Switching and (FIE)GARCH Models of Stock Volatility
Leblang, David
Mukherjee, Bumba

Uploaded 07-07-2003
Keywords Volatility
Political Business Cycle
Stock Markets
Abstract Existing theoretical research on electoral politics and financial markets predicts that when investors expect left parties (Democrats in the US, Labour in the UK) to win elections, market volatility increases. In addition, current econometric research on stock market volatility suggests that Markov-Switching models provide more accurate volatility forecasts and fit stock price volatility data better than linear or non-linear GARCH (Generalized Autoregressive Conditional Heteroscedasticity) models. We take issue with both of these claims. We construct a formal model which predicts that if traders anticipate that the Democratic candidate will win the Presidential election, stock market volatility decreases. Using two data sets from the 2000 Presidential election, we test our claim by estimating several GARCH, Exponential-GARCH (EGARCH), Fractionally Integrated Exponential-GARCH (FIEGARCH) and Markov-Switching models. We also conduct extensive out-of-sample forecasting tests to evaluate these competing statistical models. Results from the out-of-sample forecasts show -- in contrast to prevailing claims -- that GARCH and EGARCH models provide substantially more accurate forecasts than the Markov-Switching models. Estimates from all the competing statistical models support the predictions from our formal model.
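The GARCH family compared in this abstract builds on a simple conditional-variance recursion. A minimal GARCH(1,1) sketch (all parameter values and returns are illustrative, not from the paper):

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1) model:
    h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}."""
    h = np.empty(len(returns))
    h[0] = np.var(returns)  # a common initialization choice
    for t in range(1, len(returns)):
        h[t] = omega + alpha * returns[t - 1] ** 2 + beta * h[t - 1]
    return h

# Toy daily returns; omega/alpha/beta are hypothetical estimates.
r = np.array([0.01, -0.02, 0.015, -0.03, 0.005])
h = garch11_variance(r, omega=1e-5, alpha=0.05, beta=0.90)
```

EGARCH and FIEGARCH generalize this recursion (asymmetric and long-memory responses, respectively), while Markov-Switching models instead let the variance jump between discrete regimes.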

Have Turnout Effects Really Declined? Testing the Partisan Implications of Marginal Voters
Gill, Jeff
Martinez, Michael

Uploaded 08-09-2002
Keywords voting
partisan effects
multinomial logit
Abstract In this paper, we review the theoretical foundations of the debate about whether higher election turnout advantages left parties, suggest a method of assessing the effects of turnout within a single election, and provide evidence from four U.S. elections that the partisan effects of turnout are contingent on the strength and polarity of the short-term forces. Our methodological approach to addressing whether the Democrats would have benefited from higher turnout (and whether the Republicans would have benefited from lower turnout) in a given election is to employ a new type of simulation based on multinomial logit estimates of the choices made by individual citizens. Our substantive approach is similar to Lacy and Burden (1999), in that we posit that U.S. citizens have three unordered choices in each election: vote Democratic, vote Republican, or abstain. We first estimate vote choice (including the abstention category) as an unordered multinomial logit function of standard variables associated with both candidate preference and the likelihood of voting. From that estimation, we derive probabilities for each respondent's selection of each of the three choices (abstain, vote Democratic, or vote Republican). From those probabilities, we simulate several levels of turnout. Higher turnout is simulated by progressively adding to the pool of voters actual abstainers with the lowest probability of abstaining of those remaining in the pool of abstainers; lower turnout is simulated by progressively subtracting from the electorate actual voters with the highest probability of abstaining. Our results across the four elections provide partial support for both the conventional SES-based model and the alternative defection-based model, though neither model's predictions are completely borne out empirically.
As predicted by the conventional model, we find that the electorate has a greater Democratic tilt at higher levels of turnout, although that relationship has significantly weakened over time.
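The turnout simulation the authors describe can be sketched roughly as follows, with toy probabilities standing in for fitted multinomial-logit estimates (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for fitted multinomial-logit probabilities:
# columns are P(abstain), P(vote Democratic), P(vote Republican).
n = 8
p = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=n)
p_abstain = p[:, 0]

# Toy indicator of observed behavior (in the paper, actual turnout).
actually_voted = rng.random(n) < 1 - p_abstain

# Higher turnout: move actual abstainers into the electorate in
# order of *lowest* predicted probability of abstaining.
abstainers = np.where(~actually_voted)[0]
add_order = abstainers[np.argsort(p_abstain[abstainers])]

# Lower turnout: remove actual voters in order of *highest*
# predicted probability of abstaining.
voters = np.where(actually_voted)[0]
drop_order = voters[np.argsort(-p_abstain[voters])]
```

Walking along `add_order` (or `drop_order`) and re-tallying the two-party split at each step traces out the partisan composition of the electorate as turnout rises or falls.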

What to do When Your Hessian is Not Invertible: Alternatives to Model Respecification in Nonlinear Estimation
Gill, Jeff
King, Gary

Uploaded 05-14-2002
Keywords Hessian
generalized inverse
maximum likelihood
statistical computing
importance sampling
generalized linear model
singular normal
Abstract What should a researcher do when statistical analysis software terminates before completion with a message that the Hessian is not invertible? The standard textbook advice is to respecify the model, but this is another way of saying that the researcher should change the question being asked. Obviously, however, computer programs should not be in the business of deciding what questions are worthy of study. Although noninvertible Hessians are sometimes signals of poorly posed questions, nonsensical models, or inappropriate estimators, they also frequently occur when information about the quantities of interest does exist in the data, through the likelihood function. We explain the problem in some detail and lay out two preliminary proposals for ways of dealing with noninvertible Hessians without changing the question asked.
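One of the keyword'd alternatives, the generalized inverse, can be illustrated with a deliberately singular matrix. This is only a sketch of the idea, not the authors' full procedure (the illustrative matrix is hypothetical):

```python
import numpy as np

# Hypothetical singular Hessian from a nonlinear estimation;
# the first two rows are proportional, so it is rank-deficient.
H = np.array([[4.0, 2.0, 2.0],
              [2.0, 1.0, 1.0],
              [2.0, 1.0, 2.0]])

rank = np.linalg.matrix_rank(H)  # 2 < 3: ordinary inversion fails

# The Moore-Penrose generalized inverse always exists and can stand
# in for the inverse when forming a (pseudo-)variance matrix.
H_pinv = np.linalg.pinv(H)

# Defining property of a generalized inverse: H @ H+ @ H == H.
assert np.allclose(H @ H_pinv @ H, H)
```

The paper's keywords suggest pairing such a pseudo-variance matrix with importance sampling to propagate the remaining uncertainty correctly.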

Distortion magnified: New Labour and the British
Johnston, Ron
Rossiter, David
Charles, Pattie
Dorling, Danny

Uploaded 07-26-2001
Keywords electoral bias
Abstract UK election results show not only the characteristic disproportionality associated with plurality systems but also considerable bias in the allocation of seats relative to votes for the two main political parties (Conservative and Labour). Over the period 1950-1997 (the 1950 election was the first using constituencies defined by independent Boundary Commissions) this bias both increased and shifted from favouring the Conservatives to favouring Labour. By 1997, Labour would have won 82 more seats than Conservative with equal vote shares - the largest bias recorded for the period; and then in 2001 the pro-Labour bias increased to 141. This paper explores the reasons for this shift, using a procedure developed by Brookes for measuring and decomposing bias. Labour benefited because of the geography of its successful campaigns in 1997 and 2001.

Mixed Logit Models in Political Science
Glasgow, Garrett

Uploaded 07-08-2001
Keywords mixed logit
discrete choice
Abstract Mixed logit (MXL) is a general discrete choice model that is applicable to a wide range of political science problems. Mixed logit assumes the unobserved portions of utility are a mixture of an IID extreme value term and another multivariate distribution selected by the researcher. This general specification allows MXL to avoid imposing the independence of irrelevant alternatives (IIA) property on the choice probabilities. Further, and more importantly, MXL is a flexible tool for examining heterogeneity in individual behavior through random-coefficients specifications. Three empirical examples are presented. Two are drawn from studies of voting behavior. The first uses data from the 1987 British general election and examines heterogeneity in the impact of social class on voting, and the second uses data from the 1992 U.S. presidential election and examines heterogeneity in the impact of union membership on voting. A third example examines heterogeneity in the factors that lead to various Congressional career decisions. These empirical examples demonstrate the utility of mixed logit in political science research. This paper has both a methodological and substantive contribution for political science. Methodologically, it expands the tool set available to researchers for studying various phenomena in political science. More importantly, this paper contributes substantively by allowing for more realistic models of individual behavior. Most models currently used in political science assume the independent variables have a homogeneous effect on the dependent variable. This assumption is usually made to keep models tractable, even though few believe it is an accurate description of behavior. MXL is a tractable way to relax this assumption and study heterogeneity in a variety of settings.
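Mixed-logit choice probabilities have no closed form and are typically simulated. A rough sketch, under the simplifying assumption of independent normal random coefficients (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def mixed_logit_prob(X, mu, sigma, n_draws=2000):
    """Simulated mixed-logit choice probabilities for one decision maker.

    X     : (n_alternatives, n_vars) attributes of each alternative
    mu    : means of the random coefficients
    sigma : std. devs. of the (independent normal) random coefficients
    """
    # Draw one coefficient vector per simulation, then average the
    # resulting logit probabilities over the draws.
    betas = rng.normal(mu, sigma, size=(n_draws, len(mu)))
    v = betas @ X.T                                  # (n_draws, n_alts)
    ev = np.exp(v - v.max(axis=1, keepdims=True))    # stabilized softmax
    return (ev / ev.sum(axis=1, keepdims=True)).mean(axis=0)

# Three hypothetical alternatives described by two attributes.
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
p = mixed_logit_prob(X, mu=np.array([0.5, -0.2]), sigma=np.array([1.0, 0.5]))
```

Because the coefficients vary across draws, the averaged probabilities are not constrained by IIA, which is the property the abstract emphasizes.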

An Individual-Level Approach to Health Inequality: Child Survival in 50 Countries
King, Gary
Gakidou, Emmanuela

Uploaded 11-27-2000
Keywords beta-binomial
health inequality
survey research
Abstract BACKGROUND: Reducing health inequalities is an important part of the agenda of health policymakers globally. Studies of health inequalities have revealed large variations in average health status across social, economic, and other _groups_. However, no studies have been conducted on the distribution of the risk of ill-health across _individuals_. METHODS: We use an extended beta-binomial model to estimate the distribution of the risk of death in children under the age of two in the 50 developing countries where data from a Demographic and Health Survey are available. Inequality in these distributions is measured by the WHO health inequality index. FINDINGS: At the same level of average child mortality, inequality in the risk of death across children can vary considerably across countries. Representing the entire distribution of risk with a single measure of inequality involves normative choices that we delineate and then formalise with quantitative measures. The results are not very sensitive to the choice of measure. Liberia, Mozambique and the Central African Republic have the largest inequalities in child survival, while Colombia, the Philippines and Kazakhstan have the lowest among the 50 countries measured. Exploratory analyses indicate that health inequality is predicted by low GDP, low health expenditures, and poverty, but not by income inequality or democratization. INTERPRETATION: Inequality estimates should routinely be reported alongside average levels of health, as they reveal important information about the distribution of health across individuals within populations. Measuring inequality with individual level data, rather than quantifying differences in average levels of health across social groups, enables meaningful comparisons of inequality across countries and analyses of the determinants of inequality.
This approach should be extended to the measurement of inequalities in health expectancy (i.e., life expectancy discounted by expected disabilities).
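The beta-binomial idea, a binomial whose underlying risk varies across individuals, can be sketched as follows; the shape parameters are chosen for illustration, not estimated from the surveys:

```python
from math import lgamma, exp

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def betabinom_pmf(k, n, a, b):
    """Beta-binomial pmf: a binomial whose success probability is itself
    Beta(a, b) distributed -- here, the per-child risk of death."""
    log_choose = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return exp(log_choose + log_beta(k + a, n - k + b) - log_beta(a, b))

# Two hypothetical countries with the same mean risk a/(a+b) = 0.1
# but very different spread of risk across children: small shape
# parameters imply highly unequal risk.
pmf_unequal = [betabinom_pmf(k, 2, 0.2, 1.8) for k in range(3)]
pmf_equal = [betabinom_pmf(k, 2, 2.0, 18.0) for k in range(3)]
```

Both distributions imply the same expected number of deaths, which is why, as the abstract stresses, average mortality alone cannot reveal the inequality in underlying risk.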

Is There a Gender Gap in Fiscal Political Preferences
Alvarez, R. Michael
McCaffery, Edward J.

Uploaded 08-12-2000
Keywords Gender gap
fiscal politics
budget surplus
multinomial logit
missing data
survey experiments
Abstract This paper examines the relationship between attitudes on potential uses of the budget surplus and gender. Survey results show relatively weak support overall for using a projected surplus to reduce taxes, with respondents much likelier to prefer increased social spending on education or social security. There is a significant gender gap with men being far more likely than women to support tax cuts or paying down the national debt. Given a menu of particular types of tax cuts, women are marginally more likely to favor child-care relief or working poor tax credits whereas men are marginally more likely to favor capital gains reduction or tax rate cuts. When primed that the tax laws are biased against two-worker families, men significantly change their preferences, moving from support for general tax rate cuts to support for working poor tax relief, but not to child-care relief. One of the strongest results to emerge is that women are far more likely than men not to express an opinion or to confess ignorance about fiscal matters. Both genders increase their ``no opinion'' answer in the face of priming, but men more so than women. Further research will explore this no opinion/uncertainty aspect.

Stability and Change in State Electorates, Carter through Clinton
Erikson, Robert S.
Wright, Gerald C.
McIver, John P.
Holian, David B.

Uploaded 04-25-2000
Keywords partisanship
political ideology
public opinion
state electorates
Abstract This paper extends the time series and advances the argument presented in _Statehouse Democracy_, which provided a public opinion basis for the study of state politics. The analysis covers the dynamics of partisanship and ideology in state electorates from 1977 through 1999. Incorporating the Bush and Clinton years allows for a number of conclusions. In the aggregate, state partisanship changed over the course of the last two presidential administrations, but state ideology did not. However, this change was not uniform across the country, but differed by region and resulted in higher levels of polarization between party and ideological identifications. Finally, consistent with the findings in _Statehouse Democracy_, state partisanship and

The Insignificance of Null Hypothesis Significance Testing
Gill, Jeff

Uploaded 02-06-1999
Keywords hypothesis testing
inverse probability
Bayesian approaches
confidence sets
Abstract The current method of hypothesis testing in the social sciences is under intense criticism yet most political scientists are unaware of the important issues being raised. Criticisms focus on the construction and interpretation of a procedure that has dominated the reporting of empirical results for over fifty years. There is evidence that null hypothesis significance testing as practiced in political science is deeply flawed and widely misunderstood. This is important since most empirical work in political science argues the value of findings through the use of the null hypothesis significance test. In this article I review the history of the null hypothesis significance testing paradigm in the social sciences and discuss major problems, some of which are logical inconsistencies while others are more interpretive in nature. I suggest alternative techniques to convey effectively the importance of data-analytic findings. These recommendations are illustrated with examples using empirical political science publications.

Statistical Estimation in the Presence of Multiple Causal Paths
Braumoeller, Bear F.

Uploaded 04-13-1999
Keywords multiple
Abstract Large-N statistical methodology has in the past been criticized for its inability to model a phenomenon believed to be fundamental to social science research: equifinality, or multiple causal paths. I examine this claim and demonstrate that, although multiple causal paths are often hypothesized in political science research, tests rarely if ever reflect their logic. I then describe a procedure for constructing likelihood functions directly from the logic of multiple causal path theories so that the truth-status of such theories can be correctly evaluated with maximum likelihood techniques.
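One way such a likelihood can be built -- assuming, for illustration, two logit-form paths combined by a Boolean "or" -- is sketched below. This is a generic construction in the spirit of the abstract, not necessarily the paper's exact specification:

```python
import numpy as np

def loglik_two_paths(theta, X1, X2, y):
    """Log-likelihood where the outcome occurs if *either* causal path
    fires: P(y=1) = 1 - (1 - p1)(1 - p2), with each p_j a logit in its
    own covariates (an illustrative two-path specification)."""
    k1 = X1.shape[1]
    b1, b2 = theta[:k1], theta[k1:]
    p1 = 1 / (1 + np.exp(-X1 @ b1))
    p2 = 1 / (1 + np.exp(-X2 @ b2))
    p = 1 - (1 - p1) * (1 - p2)        # equifinality: path 1 OR path 2
    p = np.clip(p, 1e-12, 1 - 1e-12)   # numerical safety
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
```

Maximizing this function over `theta` (e.g., with a numerical optimizer) evaluates the two-path theory directly, rather than forcing both paths into a single additive index.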

Statistical Analysis of Finite Choice Models in Extensive Form
Signorino, Curtis S.

Uploaded 07-09-1999
Keywords random utility
discrete choice
finite choice
game theory
Abstract Social scientists are often confronted with theories where one or more actors make choices over finite sets of options leading to a finite set of outcomes. Such theories have addressed everything from whether states go to war, to how citizens or senators vote, to the form of transportation taken by commuters. Over the last thirty years, the most common way to analyze finite (or discrete) choice data has been to use nonstrategic random utility models, even when the theory posited as generating the data is explicitly strategic. Moreover, the source of uncertainty --- what makes the utility random --- is often paid little attention. In this paper, I generalize an entire class of statistical finite choice models, with both well-known and new nonstrategic and strategic special cases. I demonstrate how to derive statistical models from theoretical finite choice models and, in doing so, I address the statistical implications of three sources of uncertainty: agent error, private information about payoffs, and unobserved variation in regressors. I provide conditions for the types of choice structures that result in observationally equivalent statistical models. For strategic choice models, the type of uncertainty matters, resulting in observationally nonequivalent statistical models. Moreover, misspecifying the type of uncertainty in strategic models leads to biased and inconsistent estimates. Version: June 22, 1999

Candidate Viability and Voter Learning in the Presidential Nomination Process
Paolino, Philip

Uploaded 08-30-1999
Keywords beta distribution
maximum likelihood
Abstract Candidates' viability and momentum are important features of the presidential nomination process in the United States, and much work has examined how both influence the outcome of the nomination campaign (e.g., Aldrich 1980a, Aldrich 1980b, Bartels 1988, Brady and Johnston 1987). Previous treatments, however, have focused upon candidates' expectations of winning or losing the nomination. A critical feature that has been mentioned, but not addressed directly, is the volatility of these expectations. In this paper, I use a view of viability and momentum that considers both expectations and the variance of the public's perceptions about candidates' viability. This view allows us to examine how voters use new information to update their beliefs about both elements of candidates' viability and provides a basis for assessing candidates' potential momentum.

The Government Agenda in Parliamentary Democracies
Martin, Lanny W.

Uploaded 12-06-1999
Keywords agenda politics
survival analysis
nonproportional hazard
Abstract In this paper, I examine the effects of party ideology and legislative institutions on the organization of the government agenda in parliamentary democracies. Analyzing data on the timing and policy content of over eight hundred government bills from the 1980s for four European democracies, I show that cabinets schedule bills earlier on the legislative agenda the greater their saliency to the prime minister and to those ministers responsible for their formulation and implementation. Moreover, I show that cabinets tend to delay bills that impose ideological compromise on cabinet members or that create conflict between the cabinet and parties in the opposition, particularly in periods of minority government. I find that all of these effects are greater at the beginning of a government’s tenure and decline by varying degrees over time.

Mandate Elections and Policy Change in Congress
Peterson, David A.M.
Stimson, James
Gangl, Amy E.
Grossback, Lawrence J.

Uploaded 04-20-1998
Keywords duration
discrete time
fractional polynomial
Abstract We postulate that members of Congress react to occasional elections which are unusual in outcome, unexpected, and carry a clear message about the direction of voter preferences. Our cases are 1964/65, 1980/81, and 1994/95. That reaction is movement in the direction of the perceived mandate, where both aggregate outcomes and individual behavior deviate from long-term norms under the temporary influence of perception of dramatic movements in voter preferences. We undertake analyses that show aggregate shifts in Congressional outcomes and individual departures from personal equilibria. We conclude with an event history analysis designed to capture the duration of this temporary phenomenon.

A Monte Carlo Comparison of Methods for Compositional Data Analysis
Brehm, John
Gates, Scott
Gomez, Brad

Uploaded 07-08-1998
Keywords Compositional data
Additive Logistic
Monte Carlo
Police Behavior
Abstract This paper offers an explication of two techniques for compositional data analysis, which involve non-negative data belonging to mutually exclusive and exhaustive categories. The Dirichlet distribution is a multivariate generalization of the beta distribution that offers considerable flexibility, and ease of use, but requires a strong form of an ``independence of irrelevant alternatives'' (IIA) assumption. The second application, proposed by Aitchison (1986) and applied to political data by Katz and King (1997), is the additive logistic method. This approach addresses the strong IIA assumption, but cannot handle strong forms of independence (Rayens and Srinivasen 1994). Monte Carlo simulations are employed on compositional data to explore the limits of applications of the two methods. Data on police officers' allocation of time across a variety of tasks (Ostrom et al. 1988) are used in this analysis, comparing both common covariates and unique covariates. When the composites are influenced by common covariates, there appears to be no advantage in the use of additive logistic methods over the Dirichlet. Similarly, the additive logistic and Dirichlet methods appear to be equally successful at estimating the effects of the unique covariates on composites. From these simulation results we conclude that the additive logistic method offers little advantage over the Dirichlet, and suffers from several disadvantages.
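Aitchison's additive log-ratio transform, the core of the additive logistic method, can be sketched as follows (the time shares are illustrative, not the Ostrom et al. data):

```python
import numpy as np

# A composition: shares of an officer's time across four tasks,
# non-negative and summing to one (hypothetical numbers).
x = np.array([0.50, 0.25, 0.15, 0.10])

# The additive log-ratio (alr) transform maps the simplex to
# unconstrained R^{D-1}, where standard multivariate (e.g., normal)
# models can be applied. The last component serves as the reference.
y = np.log(x[:-1] / x[-1])

# The inverse transform recovers the original composition.
e = np.append(np.exp(y), 1.0)
x_back = e / e.sum()

assert np.allclose(x_back, x)
```

The Dirichlet approach instead models `x` directly on the simplex, which is what imposes the strong independence structure the abstract discusses.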

Making the Most of Statistical Analyses: Improving Interpretation and Presentation
King, Gary
Tomz, Michael
Wittenberg, Jason

Uploaded 08-10-1998
Keywords presentation
Abstract We demonstrate that social scientists rarely take full advantage of the information available in their statistical results. As a consequence, they miss opportunities to present quantities that are of greatest substantive interest for their research, and to express their degree of certainty about these quantities. In this paper, we offer an approach, built on the technique of statistical simulation, to extract the currently overlooked information from any statistical method, no matter how complicated, and to interpret and present it in a reader-friendly manner. Using this technique requires some sophistication, which we try to provide herein, but its application should make the results of quantitative articles more informative and transparent to all. To illustrate our recommendations, we replicate the results of several published works, showing in each case how the authors' own conclusions can be expressed more sharply and informatively, and how our approach reveals important new information about the research questions at hand. We also offer very easy-to-use software that implements our suggestions.
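The simulation technique described can be sketched generically: draw coefficients from their estimated sampling distribution, then push each draw through to a quantity of interest such as a predicted probability (all numbers below are illustrative, not from the replicated works):

```python
import numpy as np

rng = np.random.default_rng(1)

# Suppose a logit model produced these point estimates and this
# covariance matrix (hypothetical values).
beta_hat = np.array([-1.0, 0.8])
V_hat = np.array([[0.04, 0.00],
                  [0.00, 0.09]])

# Simulate coefficient vectors from the asymptotic sampling
# distribution of the estimator.
draws = rng.multivariate_normal(beta_hat, V_hat, size=5000)

# Convert each draw into the quantity of interest: the predicted
# probability at a chosen covariate profile x (intercept + one value).
x = np.array([1.0, 2.0])
probs = 1 / (1 + np.exp(-draws @ x))

point = np.median(probs)
lo, hi = np.percentile(probs, [2.5, 97.5])
```

Reporting `point` with the interval `[lo, hi]` conveys both the substantive quantity and its uncertainty, which is the presentational gain the abstract argues for.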

Representative Bureaucracy and Distributional Equity: Addressing the Hard Question
Wrinkle, Robert D.
Meier, Kenneth J.
Polinard, J.L.

Uploaded 07-21-1998
Keywords representative bureaucracy
organizational outputs
education policy
Abstract Research on representative bureaucracy has failed to deal with whether or not representative bureaucracies produce minority gains at the expense of nonminorities. Using a pooled time series analysis of 350 school districts over six years, this study examines the relationship between representative bureaucracy and organizational outputs for minorities and nonminorities. Far from finding that representative bureaucracy produces minority gains at the expense of nonminorities, this study finds both minority and nonminority students perform better in the presence of a representative bureaucracy. This finding suggests an alternative hypothesis to guide research, that representative bureaucracies are more effective than their nonrepresentative counterparts.

When Politics and Models Collide: Estimating Models of Multi-Party Elections
Alvarez, R. Michael
Nagler, Jonathan

Uploaded 00-00-0000
Keywords elections
multinomial logit
spatial model
multinomial probit
Abstract Theory: The spatial model of elections can better be represented by using conditional logit than by multinomial logit. The spatial model, and random utility models in general, suffer from a failure to adequately consider the substitutability of candidates sharing similar or identical issue positions. Hypotheses: Multinomial logit is not much better than successive applications of binomial logit. Conditional logit allows for considering more interesting political questions than does multinomial logit. The spatial model may not correspond to voter decision-making in multiple-candidate settings. Multinomial probit allows for a relaxation of the IIA condition and this should improve estimates of the effect of adding or removing parties. Methods: Comparisons of binomial logit, multinomial logit, conditional logit, and multinomial probit on simulated data and survey data from a three-party election. Results: Multinomial logit offers almost no benefits over binomial logit. Conditional logit is capable of examining movements by parties, whereas multinomial logit is not. Multinomial probit performs better than conditional logit when considering the effects of altering the set of choices available to voters.

Unit Roots and Causal Inference in Political Science
Freeman, John R.
Williams, John T.
Houser, Daniel
Kellstedt, Paul

Uploaded 01-01-1995
Keywords Time series
unit roots
Abstract In the 1980s political scientists were introduced to vector autoregression (Sims, 1980). In the years that followed, they used this method to evaluate competing theories (Goldstein and Freeman, 1990, 1991; Freeman and Alt, 1994; Williams, 1990) and to test the validity of the restrictions in their regression models (MacKuen, Erikson, and Stimson, 1992). In the process, important empirical anomalies came to light. At about this same time, econometricians identified and began to evaluate the problems which unit roots and cointegration produced in vector autoregression and related time series methods. These problems had to do with nothing less than the validity of Granger causality tests and other inferential tools which are the heart of the approach. This research was important because econometricians had discovered years before that many economic time series are first-order integrated (Nelson and Plosser, 1982). Studying the trend properties of economic time series therefore is considered essential in time series econometrics. Recently political scientists (Ostrom and Smith, 1993; Durr, 1993) have argued that certain political time series contain unit roots as well. Yet, to date, no political scientist has made any such demonstration, let alone explained what should be done to put our results on sounder footings if, in fact, our level VARs are faulty. This is the purpose of this paper. In it, we explain the problems which unit roots and cointegration produce in level VARs--why it is so important to take into account the trend properties of one's data. We then review several approaches to solving these problems. One of these approaches, Phillips's (1995) Fully Modified Vector Autoregression (FM-VAR), is singled out for closer study. The theoretical nature of FM-VAR is briefly explained and some practical difficulties in implementing the associated estimation techniques and hypothesis tests are discussed.
Finally, the usefulness of FM-VAR is explored in several analyses which parallel the main uses of level VARs mentioned above. These are a stylized Monte Carlo analysis; a reanalysis of Freeman's (1983) study of arms races; a retest of the specifications of MacKuen, Erikson, and Stimson's (1992) model of approval; and a reexamination of the exogeneity-of-vote intentions anomaly in Freeman, Williams and Lin's (1989) study of British government spending.

Generic Tests for a Nonlinear Model of Congressional Campaign Dynamics
Mebane, Walter R.

Uploaded 08-25-1996
Keywords Congress
differential equations
Hopf bifurcation
non-nested hypothesis tests
Cox tests
nonlinear models
Abstract I develop a statistical model based on a generic third-order Taylor series approximation for differential equation systems that exhibit Hopf bifurcation in order to use district-level cross-sectional data to test a nonlinear dynamic formal model of campaign contributions, district service and voting during and after a U.S. House election. The statistical model represents the key nonlinearities of the formal model's Cournot-Nash equilibrium in a highly robust fashion. For data from the years 1984--85 and 1986--87, non-nested hypothesis tests (implemented using a calibrated, parametric bootstrap method) show that under assumptions of multivariate normality, the nonlinear model is vastly superior to the generic linear alternative defined by the sample mean vector and covariance matrix.

Generally Speaking... The Temporal Dimensions of Party Identification
Timpone, Richard J.
Neely, Francis K.

Uploaded 09-17-1997
Keywords party identification
structural equation models
Abstract The measurement and stability of party identification has been an area of important debate for decades. We hypothesize that a systematic problem may exist in the traditional 7-point scale that has not yet been fully examined. We posit that the follow- up question asking independents the direction of their leaning is asking respondents a fundamentally different question than the other questions of general allegiance and strength. In short, the direction of leaning may be providing more direct evaluation of short-term factors such as candidate evaluation and vote choice. If this is the case, the direction of causality for these individuals may be mis-specified in some traditional studies of voting behavior. We examine this question by using the NES Gulf War Panel Study to investigate the systematic determinants of change in individual party identification through structural equation modeling. While short-term forces do not appear to significantly influence the strength or direction of partisanship for individuals identifying with the parties, they are significantly related to direction of leaning for partisans. This differential endogeneity between short-term forces and party identification has important implications for the measurement and use of the concept of partisanship. It also helps illuminate earlier work into 'paradoxical' relationships such as the intransitivity of candidate support and comparisons of different measures.

Linking Representation and House Member Behavior to Constituents' Voting Behavior
Box-Steffensmeier, Janet M.
Kimball, David
Tate, Katherine

Uploaded 08-21-1997
Keywords representation
legislative activity
public approval
Abstract How one is represented should shape one's opinion about the incumbent's performance in office, one's opinion about Congress as an institution, and ultimately, one's underlying attitude toward democratic practices. In this paper, we develop and test a model in which voters evaluate members of Congress on the basis of their performance in office in addition to other factors. Adding contextual data to the 1994 National Election Study, we analyzed the effects of legislative activity, descriptive characteristics, legislative positions, and constituency variables on constituency recognition of the incumbent and vote choice. We found that the activities of legislators, such as speech making and bill sponsorship, committee membership, and campaign spending, have a significant impact on the attitudes of constituents toward their representatives. We also found party affiliation of House members as well as their gender and race to be important cues that influence their constituents' evaluations. Our research identifies the type of legislative activities that benefit House members the most politically. It also establishes that while voters continue to rely on informational shortcuts, such as party membership, gender, and information gathered through campaigns, they also appear responsive to the representative's legislative behavior. Overall, active legislators have more knowledgeable, satisfied, and loyal constituents.

Coordinating Voting in American Presidential and House Elections
Mebane, Walter R.

Uploaded 07-21-1997
Keywords coordinating voting
moderating voting
probabilistic voting
spatial voting
retrospective voting
presidential elections
congressional elections
split-ticket voting
pivotal voter theorem
beta distribution
multinomial logit
maximum likelihood
Abstract I describe and estimate a probabilistic voting model designed to test whether individuals' votes for President and for the House of Representatives are coordinated with respect to two cutpoints on a single spatial dimension, in the way that Alesina and Rosenthal's pivotal voter theorem suggests they should be. In my model the cutpoints are random variables about which each individual has a subjective probability distribution. Each person's probabilistic coordinating voting behavior occurs relative to the cutpoints' expected values under the distribution. The model implements the idea that the pattern of coordination depends on an individual's evaluation of the economy. The economic bias in the coordinating pattern implies that voters punish a Democratic President for success in improving the economy. The economically successful Democratic President can avoid losses only if the voters who rate the economy as having improved also believe that the policy position of the Democratic party has shifted to the right.

Economic Voting: Enlightened Self-Interest and Economic Reference Groups
Nagler, Jonathan
Willette, Jennifer R.
Jackman, Simon

Uploaded 04-09-1997
Keywords elections
presidential elections
economic voting
Abstract One of the more robust findings over the last 50 years in research on elections has been the influence of macroeconomic conditions on voting in U.S. presidential elections. An important finding in that research was made by Steven Weatherford in a 1978 article demonstrating that working class voters are more sensitive to economic conditions than are middle class voters in their vote choice. Weatherford's result was based on the 1956 through 1960 elections. We extend Weatherford's analysis to the 1956 through 1992 elections. We are unable to produce evidence that poor voters are consistently more sensitive to the economy than are middle class and rich voters in their electoral behavior. We also offer a new theory of economic voting. We propose that voters vote based on the economic performance of their economic reference group - rather than on their own personal finances or on the state of the national economy. We offer a very preliminary and very crude initial test of this theory using NES data for 1956 to 1992.

Recovering a Basic Space From a Set of Issue Scales
Poole, Keith T.

Uploaded 01-29-1997
Keywords predictive dimensions
basic space
issue scales
Eckart-Young Decomposition
alternating least squares
Abstract This paper develops a procedure for estimating the basic dimensions underlying a set of issue or attribute scales. A simple Hinich-Ordeshook spatial theory of voting is used to model Converse's fundamental insight that individuals' positions on issues are bundled together, and the knowledge of one or two issue positions makes the remaining positions very predictable. The model assumes that individuals' positions on a set of issue or attribute dimensions are determined by the individuals' positions on a small number of underlying evaluative or basic dimensions. The procedure developed in this paper for estimating these basic dimensions is, in effect, a method of performing singular value decomposition of a matrix with missing elements. Monte Carlo testing shows that the procedure reliably reproduces the missing elements. Because of this reliability, the estimation procedure can be used to produce Eckart-Young lower rank approximations. A number of applications to political data are shown and discussed.
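The core computational idea here -- a singular value decomposition of a matrix with missing elements -- can be illustrated with a simple iterated truncated-SVD imputation. This is a generic sketch, not Poole's actual estimator; the matrix, rank, and iteration count are all illustrative:

```python
import numpy as np

def svd_impute(X, observed, rank=1, iters=100):
    """Iterated truncated SVD: repeatedly replace unobserved cells with the
    current rank-`rank` (Eckart-Young) approximation until convergence."""
    Z = X.copy()
    col_means = np.nanmean(np.where(observed, X, np.nan), axis=0)
    Z[~observed] = col_means[np.where(~observed)[1]]   # start at column means
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # best rank-k approximation
        Z[~observed] = approx[~observed]               # update missing cells only
    return Z

# An exactly rank-1 "issue scale" matrix with one entry held out.
X = np.outer([1.0, 2.0, 3.0, 4.0], [1.0, 0.5, 2.0])
observed = np.ones_like(X, dtype=bool)
observed[2, 1] = False          # true value X[2, 1] = 1.5, treated as missing
Z = svd_impute(X, observed, rank=1)
```

At a fixed point the missing cells agree with the Eckart-Young lower-rank approximation of the completed matrix, which is the sense in which a lower-rank approximation can be recovered despite missing data.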

Death by Survey: Estimating Adult Mortality without Selection Bias
King, Gary
Gakidou, Emmanuela

Uploaded 07-14-2005
Keywords surveys
selection bias
mortality data
international relations
Abstract The widely used methods for estimating adult mortality rates from sample survey responses about the survival of siblings, parents, spouses, and others depend crucially on an assumption that we demonstrate does not hold in real data. We show that when this assumption is violated -- so that the mortality rate varies with sibship size -- mortality estimates can be massively biased. By using insights from work on the statistical analysis of selection bias, survey weighting, and extrapolation problems, we propose a new and relatively simple method of recovering the mortality rate with both greatly reduced potential for bias and increased clarity about the source of necessary assumptions.

Validation of software for Bayesian models using posterior quantiles
Cook, Samantha
Gelman, Andrew
Rubin, Donald

Uploaded 08-16-2005
Keywords Bayesian inference
Markov chain Monte Carlo
hierarchical models
Abstract We present a simulation-based method designed to establish the computational correctness of software developed to fit a specific Bayesian model, capitalizing on properties of Bayesian posterior distributions. We illustrate the validation technique with two examples. The validation method is shown to find errors in software when they exist and, moreover, the validation output can be informative about the nature and location of such errors.
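The logic of the validation technique can be sketched with a toy conjugate-normal model in which the "software under test" is an exact posterior sampler, so the check should pass. All model choices and constants here are illustrative, not taken from the paper:

```python
import numpy as np

# Toy model: theta ~ N(0, 1); y_1..y_n | theta ~ N(theta, 1).
# The exact conjugate posterior N(n*ybar/(n+1), 1/(n+1)) stands in for
# the software being validated.
rng = np.random.default_rng(0)
n_reps, n_obs, n_post = 500, 10, 1000
q = np.empty(n_reps)
for r in range(n_reps):
    theta = rng.normal()                        # draw parameter from the prior
    y = rng.normal(theta, 1.0, size=n_obs)      # simulate data given theta
    post = rng.normal(n_obs * y.mean() / (n_obs + 1),
                      (1.0 / (n_obs + 1)) ** 0.5, size=n_post)
    q[r] = (post < theta).mean()                # posterior quantile of true theta
# Correct software implies q is (approximately) Uniform(0, 1).
```

If the posterior sampler were buggy (wrong posterior variance, say), the quantiles q would depart visibly from uniformity, which is how the method detects and helps localize errors.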

Spatial Econometrics and Political Science
Darmofal, David

Uploaded 01-10-2006
Keywords Spatial econometrics
Galton's problem
spatial autocorrelation
Abstract Many theories in political science predict the spatial clustering of similar behaviors among neighboring units of observation. This spatial autocorrelation poses implications for both inference and modeling that are distinct from the more familiar serial dependence in time series analysis. In this paper, I examine how political scientists can diagnose and model the spatial dependence that our theories predict. This diagnosis and modeling entails three simple sequential steps. First, univariate spatial autocorrelation is diagnosed via global and local measures of spatial autocorrelation. Next, diagnostics are applied to a model with covariates to determine whether any spatial dependence diagnosed in the first step persists after the behavior has been modeled. If it does, the researcher simply chooses the spatial econometric specification indicated by the diagnostics. I present Monte Carlo results that demonstrate the importance of diagnosing and modeling spatial dependence in our data. I conclude by discussing how researchers can easily apply spatial econometric models in their research.
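The first step the abstract describes -- diagnosing univariate spatial autocorrelation with a global measure -- can be sketched with Moran's I on a toy four-unit chain. The weight matrix and data are illustrative, not from the paper:

```python
import numpy as np

def morans_i(y, W):
    """Global Moran's I for values y under spatial weight matrix W."""
    z = np.asarray(y, dtype=float)
    z = z - z.mean()
    return (len(z) / W.sum()) * (z @ W @ z) / (z @ z)

# Contiguity weights for four units arranged in a line: 0-1-2-3.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

clustered = [1.0, 1.0, -1.0, -1.0]       # similar values adjoin
alternating = [1.0, -1.0, 1.0, -1.0]     # dissimilar values adjoin
i_clustered = morans_i(clustered, W)     # positive: spatial clustering
i_alternating = morans_i(alternating, W) # negative: spatial dispersion
```

Positive values indicate that similar behaviors cluster among neighbors, the pattern many political science theories predict; negative values indicate dispersion.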

Modeling Structural Changes: Bayesian Estimation of Multiple Changepoint Models and State Space Models
Park, Jong Hee

Uploaded 07-17-2006
Keywords Multiple changepoint model
State space model
Markov chain Monte Carlo methods
Bayes factors
Data augmentation
Abstract While theoretical models in political science are inspired by structural changes in politics, most empirical methods assume stable patterns of causal relationships. Static models with constant parameters do not properly capture dynamic changes in the data and lead to incorrect parameter estimates. In this paper, I introduce two Bayesian time series models, which can detect and estimate possible structural changes in temporal data: multiple changepoint models and state space models. To emphasize the utility of the models to political scientists, I show some examples in the context of discrete dependent variables. Then, I apply these models to an important debate in international politics over U.S. use of force abroad. The findings of the multiple changepoint and state space models demonstrate that the predictors of presidential use of force have shifted dramatically.

Statistical Backwards Induction: A Simple Method for Estimating Statistical Strategic Models
Bas, Muhammet
Signorino, Curtis
Walker, Robert

Uploaded 09-22-2006
Keywords discrete choice
statistical backwards induction
limited information estimation
Abstract We present a simple method for estimating regressions based on extensive-form games. Our procedure, which can be implemented in most standard statistical packages, involves sequentially estimating standard logits (or probits) in a manner analogous to backwards induction. We demonstrate that the technique produces consistent parameter estimates and show how to calculate consistent standard errors using model-dependent analytical and general simulation techniques. To illustrate the method, we replicate Leblang’s (2003) study of speculative attacks by financial markets and government responses to these attacks.

Testing for Interaction in Binary Logit and Probit Models: Is a Product Term Essential?
Berry, William
Esarey, Justin
DeMeritt, Jacqueline

Uploaded 05-06-2007
Keywords interaction
Abstract Political scientists presenting binary dependent variable (BDV) models often offer hypotheses that independent variables interact in their influence on the probability that an event Y occurs, Pr(Y). A consensus appears to have evolved on how to test such hypotheses: (i) estimate a logit or probit model including product terms to specify the interaction, (ii) test the hypothesis by determining whether the coefficients for these terms are statistically significant, and (iii) if they are, describe the nature of the interaction by estimating how the marginal effect of one independent variable on Pr(Y) varies with the value of the other independent variables. We contend that in the BDV context, statistically significant product term coefficients are neither necessary nor sufficient for concluding that there is substantively meaningful interaction among variables in their influence on Pr(Y). Even when no product terms are included in a logit or probit model, if the marginal effect of one variable on Pr(Y) is related to another independent variable then substantively meaningful interaction is present, and describing such interaction is essential to an accurate portrayal of the data generating process at work. We propose a strategy for studying interaction in the BDV context that is consistent with the recent emphasis in the discipline on casting hypotheses in terms of effects on the probability of an event's occurrence and reporting estimated marginal effects on this probability.
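The paper's central point -- that a logit with no product term still implies interaction on the probability scale -- follows from the marginal effect formula dPr/dx1 = b1 * p * (1 - p), where p depends on every covariate. A minimal sketch with illustrative coefficients:

```python
import numpy as np

def pr_y(x1, x2, b0=-2.0, b1=1.0, b2=1.0):
    """Pr(Y = 1) from a logit with NO x1*x2 product term (coefficients illustrative)."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * x1 + b2 * x2)))

def me_x1(x1, x2, b1=1.0):
    """Marginal effect of x1 on Pr(Y = 1): b1 * p * (1 - p)."""
    p = pr_y(x1, x2)
    return b1 * p * (1.0 - p)

me_low = me_x1(0.0, 0.0)   # at x2 = 0, p is about 0.12, so the effect is small
me_high = me_x1(0.0, 2.0)  # at x2 = 2, p = 0.5 and the effect peaks at 0.25
```

Because me_high differs from me_low even though no product term was estimated, the influence of x1 on Pr(Y) varies with x2 -- exactly the substantively meaningful interaction the abstract argues a product-term significance test can miss.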

Models of Path Dependence with an Empirical Application
Jackson, John
Kollman, Ken

Uploaded 07-17-2007
Keywords Path dependence
non-linear least squares
Abstract It is now commonplace in the social sciences to describe an outcome or process as path dependent. By path dependence, researchers generally mean that the sequence of events prior to the observation of the outcome has explanatory power. The paper develops models that have both path dependent and non-path dependent properties, depending upon the value of a particular parameter. The paper then uses non-linear least squares and a Monte Carlo simulation to explore how well this parameter can be estimated, meaning how well scholars can discriminate between the two processes. The methodology is applied to the evolution of attitudes on aid to minorities and partisanship between 1956 and 2000. The results are consistent with the path dependent model.

Estimating Binary Dependent Variable Models Under Conditions of Specification Uncertainty
Berry, William
DeMeritt, Jacqueline
Esarey, Justin

Uploaded 01-25-2007
Keywords logit
binary dependent variable
specification uncertainty
Monte Carlo analysis
Abstract Political scientists routinely use logit or probit models when their data involve binary dependent variables (BDVs). Yet the hypotheses we test with logit and probit are rarely specific enough to justify that one of these models is the correct functional form for the process (or true model) generating the data. In this situation of specification uncertainty, it is reasonable to assume that the model being estimated is misspecified. The only issue is the severity of the resulting distortion in results, i.e., whether logit or probit approximates the true model well enough to yield estimated effects that are acceptably close to the true ones. To study estimation in the presence of specification uncertainty, we conduct Monte Carlo analysis using a strategy of purposeful misspecification: we use various logit and probit models with different terms on data sets generated from a wide range of known true models involving a BDV, none of which takes the exact form of a logit or probit model. We find that a widely-employed approach for using logit or probit to test the hypothesis that an independent variable has a positive (or negative) effect on the probability that some event will occur -- by estimating the effect of the variable at central values of the independent variables -- is highly forgiving of specification uncertainty, yielding reasonably accurate inferences even when the true model is not logit or probit. Unfortunately, other applications of logit and probit -- including a common approach to testing a hypothesis that independent variables interact in influencing the probability of event occurrence -- are not nearly as forgiving of the uncertainty. In some situations of specification uncertainty, we can improve the quality of estimated effects by relying on the Akaike Information Criterion [AIC] to choose the terms to be included in a model, but even these improved estimates leave much to be desired.

Model Specification in Instrumental-Variables Regression
Dunning, Thad

Uploaded 07-03-2008
Keywords Instrumental-Variables Least Squares (IVLS) regression
model specification
specification error
homogenous partial effects
Abstract In many applications of instrumental-variables regression, researchers seek to defend the plausibility of a key assumption: the instrumental variable is independent of the error term in a linear regression model. Although fulfilling this exogeneity criterion is necessary for a valid application of the instrumental variables approach, it is not sufficient. In the regression context, the identification of causal effects depends not just on the exogeneity of the instrument but also on the validity of the underlying model. In this paper, I focus on one feature of such models: the assumption that variation in the endogenous regressor that is related to the instrumental variable has the same effect as variation that is unrelated to the instrument. In many applications, this assumption may be quite strong, but relaxing it can limit our ability to estimate parameters of interest. After discussing two substantive examples, I develop analytic results (simulations are reported elsewhere). I also present a specification test that may be useful for determining the relevance of these issues in a given application.

Estimating Interdependent Duration Models with an Application to Government Formation and Survival
Hays, Jude
Kachi, Aya

Uploaded 07-09-2008
Keywords Government Formation
Seemingly Unrelated Regressions
Simultaneous Equation Models
Weibull Distributions
Abstract This paper is part of a larger project in which we develop methods for estimating the causal effects of variables on (1) the duration of bargaining processes, broadly defined, and (2) the survival of bargained outcomes when both are jointly determined. There are many potential applications in political science including, but not limited to, the duration of war and survival of cease-fire agreements, coalition formation and government survival, and negotiations over and enforcement of international agreements. Our primary claim is that, in most cases, it is inappropriate to estimate the effects of variables on these two durations -- the bargaining and the outcome -- in isolation. Our argument is motivated by game theoretic models that show bargaining duration is correlated with the survival of bargained outcomes because players incorporate their beliefs about the survival of bargained outcomes into their decision-making at the bargaining stage. To address this problem, we develop, and examine the properties of, two maximum likelihood estimators -- a seemingly unrelated regressions (SUR) estimator and a limited information maximum likelihood (LIML) estimator. We use both estimators to analyze the duration of government formation and survival in a sample of European parliamentary democracies over the period 1945 to 1998. We conclude that estimated effects based on single equation models of either government formation or survival, the predominant method of analysis in the existing literature, are likely biased because they fail to capture significant indirect effects generated by strategic and other forms of interdependence that link the two durations.

Survey Context Effects in Anchoring Vignettes
Buckley, Jack

Uploaded 08-22-2008
Keywords anchoring vignettes
survey research
differential item functioning
Abstract Methodologists (King et al. 2004; King and Wand 2007) have recently proposed a novel approach to adjusting for bias in interpersonal and cross- cultural comparisons in survey research. The method centers on the use of anchoring vignettes to allow the statistical correction of differential usage of ordinal response scales at the individual or group level. Using data from a randomized survey experiment I investigate whether analyses based on these vignettes may be vulnerable to the introduction of survey artifacts due to vignette ordering or the placement of the self-assessment item relative to the vignettes. I find several patterns of bias due to context effects. Researchers using anchoring vignettes should consider randomization or other methods to mitigate these problems.
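The basic vignette recode at the heart of this literature -- placing a respondent's self-assessment relative to their own vignette ratings -- can be sketched as follows. This is a simplified version of King et al.'s nonparametric correction, assuming the respondent rates the vignettes in the correct order with no inconsistencies:

```python
def vignette_rank(self_assessment, vignette_ratings):
    """Recode a self-assessment onto a DIF-corrected scale with 2k+1
    categories for k vignettes (below the lowest / tied / between / ... / above
    the highest)."""
    z = sorted(vignette_ratings)
    strictly_above = sum(self_assessment > v for v in z)
    ties = sum(self_assessment == v for v in z)
    return 2 * strictly_above + ties + 1

low = vignette_rank(0, [1, 3, 5])   # below all vignettes -> category 1
mid = vignette_rank(3, [1, 3, 5])   # tied with the middle vignette -> category 4
high = vignette_rank(6, [1, 3, 5])  # above all vignettes -> category 7
```

Context effects of the kind the paper documents would show up as this recode shifting with vignette order or with the placement of the self-assessment item, even though the respondent's true position is fixed.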

How Prediction Markets can Save Event Studies
Snowberg, Erik
Wolfers, Justin
Zitzewitz, Eric

Uploaded 01-16-2009
Abstract Event studies have been used to address a variety of political questions -- from the economic effects of party control of government to the importance of complex rules in congressional committees. However, the results of event studies are notoriously sensitive to both choices made by researchers and external events. Specifically, event studies will generally produce different results depending on three interrelated things: which event window is chosen, the prior probability assigned to an event at the beginning of the event window, and the presence or absence of other events during the event window. In this paper we show how each of these may bias the results of event studies, and how prediction markets can mitigate these biases.

Regression Adjustments to Experimental Data: Do David Freedman's Concerns Apply to Political Science?
Green, Donald

Uploaded 07-15-2009
Keywords Experiments
Analysis of Covariance
Abstract One of David Freedman's important legacies was to raise awareness of the assumptions that underlie everyday statistical practice, such as regression analysis. His recent papers (Freedman 2008a, 2008b) offer stern warnings to those who offer regression analysis as an appropriate way to analyze experimental results. In particular, Freedman demonstrates that including pre-treatment covariates as controls leads to bias in finite samples and inaccurate standard errors. Freedman advises researchers against using regression adjustments for experiments involving fewer than 500 observations (2008a, p.191), a recommendation that has gained increasing attention and acceptance among social scientists. This paper argues that the ever-cautious Freedman was probably too cautious in his recommendations. After explicating the special features of Freedman's model, I use a combination of simulated and actual examples to show that as a practical matter the biases that Freedman pointed out tend to be negligible for N > 20. Pathological cases that could generate biases for larger experiments involve extreme outliers that would be readily detected through visual inspection.
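The kind of simulation evidence described here can be sketched with a small Monte Carlo comparing the unadjusted difference-in-means estimator to covariate-adjusted OLS in N = 20 experiments. The constant treatment effect, covariate structure, and replication count are illustrative assumptions, not Green's actual designs:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, tau = 20, 2000, 1.0   # small experiments, constant treatment effect tau
est_dim = np.empty(reps)       # difference-in-means estimates
est_adj = np.empty(reps)       # OLS covariate-adjusted estimates
for r in range(reps):
    x = rng.normal(size=n)            # pre-treatment covariate
    y0 = x + rng.normal(size=n)       # potential outcome under control
    t = np.zeros(n)
    t[rng.permutation(n)[: n // 2]] = 1.0   # complete randomization, 10 vs 10
    y = y0 + tau * t                  # observed outcome
    est_dim[r] = y[t == 1].mean() - y[t == 0].mean()
    X = np.column_stack([np.ones(n), t, x])
    est_adj[r] = np.linalg.lstsq(X, y, rcond=None)[0][1]   # coefficient on t
# Both estimators center near tau; adjusting for x shrinks the variance.
```

Under these benign conditions the adjusted estimator is essentially unbiased and more precise even at N = 20; generating Freedman-style bias requires more pathological effect heterogeneity or extreme outliers, consistent with the abstract's argument.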

When Mayors Matter: Estimating the Impact of Mayoral Partisanship on City Policy
Gerber, Elisabeth
Hopkins, Daniel

Uploaded 09-18-2009
Keywords Regression discontinuity design
urban fiscal policy
Abstract U.S. cities are limited in their ability to set policy. Can these constraints mute the impact of mayors' partisanship on policy outcomes? We hypothesize that mayoral discretion--and thus partisanship's influence--will be more pronounced in policy areas where there is less shared authority between local, state, and federal governments. To test this hypothesis, we create a novel data set combining U.S. mayoral election returns from 1990 to 2006 with urban fiscal data. Using regression discontinuity design, we find that cities that elect a Democratic mayor spend less on public safety, a policy area where local discretion is high, than otherwise similar cities that elect a Republican or Independent. We find no differences on tax policy, social policy, and other areas that are characterized by significant overlapping authority. These results have important implications for political accountability: mayors may not be able to influence the full range of policies that are nominally local responsibilities.

Causality and Statistical Learning

Uploaded 03-16-2010
Keywords causal inference
Abstract We review some approaches and philosophies of causal inference coming from sociology, economics, computer science, cognitive science, and statistics.

Using Legislative Districting Simulations to Measure Electoral Bias in Legislatures
Chen, Jowei
Rodden, Jonathan

Uploaded 07-19-2010
Keywords redistricting
Abstract When one of the major parties in the United States wins a substantially larger share of the seats than its vote share would seem to warrant, the conventional explanation lies in overt partisan or racial gerrymandering. Yet this paper uses a unique data set from Florida to demonstrate a common mechanism through which substantial partisan bias can emerge purely from residential patterns. When partisan preferences are spatially dependent and partisanship is highly correlated with population density, any districting scheme that generates relatively compact, contiguous districts will tend to produce bias against the urban party. We apply automated districting algorithms driven solely by compactness and contiguity parameters, building winner-take-all districts out of the precinct-level results of the tied Florida presidential election of 2000. The simulation results demonstrate that with 50 percent of the votes statewide, the Republicans can expect to win around 59 percent of the seats without any “intentional” gerrymandering. This is because urban districts tend to be homogeneous and Democratic while suburban and rural districts tend to be moderately Republican. Thus in Florida and other states where Democrats are highly concentrated in cities, the seemingly apolitical practice of requiring compact, contiguous districts will produce systematic pro-Republican electoral bias.

Modeling Electoral Coordination: Voters, Parties and Legislative Lists in Uruguay
Levin, Ines
Katz, Gabriel

Uploaded 04-20-2011
Keywords electoral coordination
number of parties
Bayesian estimation
multilevel modeling
strategic voting
Abstract During each electoral period, the strategic interaction between voters and political elites determines the number of viable candidates in a district. In this paper, we implement a hierarchical seemingly unrelated regression model to explain electoral coordination at the district level in Uruguay as a function of district magnitude, previous electoral outcomes and electoral regime. Elections in this country are particularly useful to test for institutional effects on the coordination process due to the large variations in district magnitude, to the simultaneity of presidential and legislative races held under different rules, and to the reforms implemented during the period under consideration. We find that district magnitude and electoral history heuristics have substantial effects on the number of competing and voted-for parties and lists. Our modeling approach uncovers important interaction effects between the demand and supply side of the political market that were often overlooked in previous research.

Multiparty Government, Fiscal Institutions, and Public Spending
Martin, Lanny
Vanberg, Georg

Uploaded 03-12-2012
Keywords public spending
fiscal crisis
Abstract In the wake of the 2008 global financial crisis, the size of the public sector has been a central, and often controversial, item on the political agenda, as governments from Europe to the United States have embarked on new campaigns to reduce public spending. Previous research on the political factors underlying public spending has naturally focused on the characteristics of the governments that make budgetary decisions. Most recently, scholars have argued, and shown empirically, that spending tends to be larger when cabinets are composed of multiple political parties, and larger still when those coalitions include more members. The key theoretical insight is that spending constitutes a "common pool resource" problem, which is more difficult to solve for multiparty governments than for single-party administrations because doing so requires the cooperation of actors who are electorally accountable to separate constituencies. In this study, drawing on recent research on the impact of institutions on coalition policymaking, we challenge the prevailing wisdom in this area. Specifically, we argue that rules that reduce the influence of individual government parties in budget formulation, and increase their incentives to oppose the spending demands of their partners, significantly mitigate the common pool resource problem and thus reduce the expansionary effect of coalition governance on spending. Our empirical analysis of public spending in fifteen European democracies over a thirty-five year period supports our argument. Our findings demonstrate that in certain institutional environments, multiparty governments will spend no more than their single-party counterparts. Our conclusions also offer hope that appropriate institutional reforms may be part of a political solution to the financial woes currently confronting multiparty governments across Europe.

Using Campaign Contributions to Estimate the Political Ideology of Individual Public Bureaucrats Across Time
Chen, Jowei

Uploaded 07-15-2012
Abstract Over the past decade, political scientists have devised various methods to measure the political ideologies of administrative agencies and high-ranking public bureaucrats. This paper uses political campaign contributions to estimate public bureaucrats’ political ideologies. Bureaucrat ideal points estimated via our method vary across time, compare meaningfully with ideological estimates in other branches of government, cover employees across a wide range of agencies, yield insight into intra-agency ideological variation, and can be updated with minimal labor. To demonstrate our method, we estimate the political ideologies of politically appointed administrators in the U.S. federal government. We then use those estimates to test hypotheses about how U.S. presidents strategically manage the process of appointing individuals to federal bureaucratic posts requiring Senate confirmation.

Definition and Diagnosis of Problematic Attrition in Randomized Controlled Experiments
Martel GarcĂ­a, Fernando

Uploaded 04-25-2013
Keywords attrition
randomized controlled experiments
field experiments
causal diagrams
directed acyclic graphs
average treatment effect
Abstract Attrition is the Achilles' Heel of the randomized experiment: It is fairly common, and it can completely unravel the benefits of randomization. Using the structural language of causal diagrams I demonstrate that attrition is problematic for identification of the average treatment effect (ATE) if -- and only if -- it is a common effect of the treatment and the outcome (or a cause of the outcome other than the treatment). I also demonstrate that whether the ATE is identified and estimable for the full population of units in the experiment, or only for those units with observed outcomes, depends on two d-separation conditions. One of these is testable ex-post under standard experimental assumptions. The other is testable ex-ante so long as adequate measurement protocols are adopted. Missing at Random (MAR) assumptions are neither necessary nor sufficient for identification of the ATE.

The Perils of Failed Randomization: Investigating Regression Adjustment of Regionally Confounded Cross-National Data
Paine, Jack

Uploaded 07-18-2013
Keywords Natural experiment
Causal Inference
Political Regimes
Abstract Many important papers studying cross-national outcomes such as political regime type or economic development exploit treatment variables generated by either geological or pre-modern historical processes. A general and major problem with these treatments, however, derives from their heavy regional concentration. Although variables such as oil or settler mortality are claimed to be exogenous -- they are not caused by other variables that independently affect the dependent variable -- geological or historical accident leaves them highly correlated with potential confounders that impede drawing causal inferences. With the goal of eliminating bias by controlling for observables, many papers studying variables such as these use parametric procedures to control for regional dummies. While estimation techniques such as ordinary least squares (OLS) provide a seemingly straightforward methodological fix, OLS also obscures particular shortcomings of the data and imposes strong assumptions to combine information across regions. The current paper takes a closer look at these assumptions and provides examples from top political science and economics journals to show how disaggregating the data can either help to support or to severely qualify existing results.

Unresponsive, Unpersuaded: The Unintended Consequences of Voter Persuasion Efforts
Bailey, Michael
Hopkins, Daniel
Rogers, Todd

Uploaded 08-09-2013
Keywords causal inference
field experiments
multiple imputation
Approximate Bayesian Bootstrap
Abstract Can randomized experiments at the individual level help assess the persuasive effects of campaign tactics? In the contemporary U.S., vote choice is not observable, so one promising research design to assess persuasion involves randomizing appeals and then using a survey to measure vote intentions. Here, we analyze one such field experiment conducted during the 2008 presidential election in which 56,000 registered voters were assigned to persuasion in person, by phone, and/or by mail. Persuasive appeals by canvassers had two unintended consequences. First, they reduced responsiveness to the follow-up survey, lowering the response rate sharply among infrequent voters. Second, various statistical methods to address the resulting biases converge on a counter-intuitive conclusion: the persuasive canvassing reduced candidate support. Our results allow us to rule out even small effects in the intended direction, and illustrate the backlash that persuasion can engender.

Cluster analysis for political scientists
Filho, Dalson
Rocha, Enivaldo
Silva, Mariana
Paranhos, Ranulfo
Alexandre, José

Uploaded 03-19-2014
Keywords cluster analysis
Q analysis
political regimes
Abstract This paper provides an intuitive introduction to cluster analysis. Our target audience is undergraduate and graduate students in the initial stages of their training. Methodologically, we use basic simulation to illustrate the underlying logic of cluster analysis. In addition, we replicate data from Coppedge, Alvarez and Maldonado (2008) to classify political regimes according to Dahl's (1971) polyarchy dimensions: contestation and inclusiveness. With this paper we hope to disseminate the cluster analysis technique in political science and help novice scholars not only to understand cluster analysis but also to apply it in their own research designs.
