
Search Results

The results below are based on the search criteria 'direct and indirect e'
Total number of records returned: 915

Macro vs. Micro-Level Perspectives on Economic Voting: Is the Micro-Level Evidence Endogenously Induced?
Erikson, Robert S.

Uploaded 07-10-2004
Keywords economic voting
vote choice
Abstract Many of the findings regarding economic voting derive from micro-level analyses of survey data, in which respondents' survey evaluations of the economy are shown to predict the vote. This paper investigates the causal nature of this relationship and argues that cross-sectional consistency between economic evaluations and vote choice is mainly, if not entirely, due to vote choice influencing the survey response. Moreover, the evidence suggests that apart from this endogenously induced partisan bias, almost all of the cross-sectional variation in survey evaluations of the economy is random noise rather than actual beliefs about economic conditions. In surveys, the mean evaluations reflect the economic signal that predicts the aggregate vote. Following Kramer (1983), economic voting is best studied at the macro-level rather than the micro-level.

Discriminating Methods: Tests for Nonnested Discrete Choice Models
Clarke, Kevin A.
Signorino, Curtis S.

Uploaded 07-15-2003
Keywords discrete choice
nonnested testing
strategic choice
Vuong test
nonparametric test
Abstract We consider the problem of choosing between rival models that are nonnested in terms of their functional forms. We discuss both a parametric and a distribution-free procedure for making this choice, and demonstrate through a Monte Carlo simulation that discrimination is possible. The results of the simulation also allow us to compare the relative power of the two tests.
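The parametric procedure referred to here is in the family of Vuong-style likelihood-ratio tests for nonnested models. A minimal sketch of that idea, run on hypothetical per-observation log-likelihoods rather than the authors' actual simulation design:

```python
import numpy as np

def vuong_statistic(ll1, ll2):
    """Vuong-style statistic for comparing two nonnested models.

    ll1, ll2: per-observation log-likelihood contributions under each
    model (hypothetical inputs from previously fitted models).
    Positive values favor model 1, negative values model 2; under the
    null of model equivalence the statistic is asymptotically N(0, 1).
    """
    m = ll1 - ll2                      # pointwise log-likelihood ratios
    n = len(m)
    return np.sqrt(n) * m.mean() / m.std(ddof=0)

# Toy example: model 1 fits each observation slightly better on average
rng = np.random.default_rng(0)
ll1 = rng.normal(-1.0, 0.3, size=500)
ll2 = ll1 - 0.1 + rng.normal(0.0, 0.05, size=500)
z = vuong_statistic(ll1, ll2)
```

A two-sided comparison against the standard normal quantiles then decides whether either model is significantly closer to the data.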

Empirical Social Inquiry and Models of Causal Inference
Yang, David

Uploaded 03-05-2003
Keywords causal inference
method nesting
small-N research
Abstract This essay examines several alternative theories of causality from the philosophy of science literature and considers their implications for methods of empirical social inquiry. In particular, I argue that the epistemology of counterfactual causality is not the only logic of causal inference in social inquiry, and that different methods of research appeal to different models of causal inference. As these models are often philosophically inter-dependent, a more eclectic understanding of causation in empirical research may afford greater methodological versatility and provide a more complete understanding of causality. Some common statistical critiques of small-N research are then considered from the perspective of mechanistic causal theories, and alternative strategies of strengthening causal arguments in small-N research are discussed.

Models of Causal Inference: Going Beyond the Neyman-Rubin-Holland Theory
Brady, Henry E.

Uploaded 07-17-2002
Keywords causality
Abstract This paper explores various statistical and philosophical theories of causality, including the Neyman-Rubin-Holland (NRH) theory that is widely used in statistics. Although the NRH theory is increasingly well-known in the social sciences, philosophical theories are mostly unknown, partly because philosophers so seldom concern themselves with the everyday problems of practicing scientists, much less social scientists. Part of my goal is to bring these theories to a wider audience. I argue that the NRH theory is sometimes too "thin" a theory for reliable causal inferences (especially when it is naively extrapolated to all research situations without an understanding of the difficulty of supporting its assumptions) and that it would be better to develop a "thick" theory of causality that requires researchers to verify a number of different conditions before claiming causal relationships. Ideally, I argue, researchers should verify that any purported causal relationship satisfies the Humean conditions of the temporal precedence of causes before effects and the constant conjunction, or association, between causes and their effects. In addition, if a cause is observed in conjunction with its effect, then it should also satisfy the counterfactual condition that when the cause is absent in the same situation, it leads to the absence of the effect as well. Researchers should strive to find observable substitutes for this counterfactual situation in order to verify this condition. Causes should also be actively manipulated (and manipulatable) in ways that produce effects before claims are made about a causal relationship. Causes that cannot be manipulated or whose manipulation cannot be definitively described should be discounted. 
Finally, cause-effect relationships should be describable in terms of micro-mechanisms, typically employing theories at a lower level than the cause-effect relationship itself, that help to explain the details of why a particular cause leads to a particular effect. These mechanisms should themselves produce hypotheses that can be tested with data. I justify these conditions by an eclectic appeal to the philosophical and statistical literatures which have developed theories of causality based upon them, and I argue that methodologists should not be dogmatic about the "correct" theory of causality. Instead, methodologists should consider any philosophical position about causality to be useful if its ideas can improve our inferences. I also justify these conditions by showing that the very popular Neyman-Rubin-Holland theory implicitly requires some of these conditions, even though the theory is not always explicit about them. And where these conditions go beyond the Neyman-Rubin-Holland theory, there are good reasons to argue for their consideration in making causal claims. Ultimately, I suggest that we have been too cavalier about causality, and we must think harder about research design and theory in order to develop better causal claims.

Optimal Campaigning in Presidential Elections: The Probability of Being Florida
Stromberg, David

Uploaded 03-07-2002
Keywords elections
political campaigns
public expenditures
Abstract This paper delivers a precise recommendation for how presidential candidates should allocate their resources to maximize the probability of gaining a majority in the Electoral College. A two-candidate, probabilistic-voting model reveals that more resources should be devoted to states which are likely to be decisive in the Electoral College and, at the same time, have very close state elections. The optimal strategies are empirically estimated using state-level opinion polls available in September of the election year. The model's recommended campaign strategies closely resemble those used in actual campaigns. The paper also analyses how the allocation of resources would change under the alternative electoral rule of a direct national vote for president.

Detection of Multinomial Voting Irregularities
Mebane, Walter R.
Sekhon, Jasjeet
Wand, Jonathan

Uploaded 07-17-2001
Keywords outlier detection
robust estimation
overdispersed multinomial
generalized linear model
2000 presidential election
voting irregularities
Abstract We develop a robust estimator for an overdispersed multinomial regression model that we use to detect vote count outliers in the 2000 presidential election. The count vector we model contains vote totals for five candidate categories: Buchanan, Bush, Gore, Nader and ``other.'' We estimate the multinomial model using county-level data from Florida. In Florida, the model produces results for Buchanan that are essentially the same as in a binomial model: Palm Beach County has the largest positive residual for Buchanan. The multinomial model shows additional large discrepancies that almost always hurt Gore or Nader and help Bush or Buchanan.
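The outlier-detection logic can be illustrated with a simplified, non-robust stand-in for the authors' estimator: Pearson residuals of multinomial counts against fitted shares, rescaled by a crude moment estimate of overdispersion (toy data, not the Florida county counts):

```python
import numpy as np

def scaled_pearson_residuals(counts, probs):
    """Overdispersion-scaled Pearson residuals for multinomial counts.

    counts: (n_units, n_cats) observed vote counts (hypothetical data);
    probs:  (n_units, n_cats) fitted category probabilities.
    Large positive residuals flag counts well above model predictions.
    """
    totals = counts.sum(axis=1, keepdims=True)
    expected = totals * probs
    resid = (counts - expected) / np.sqrt(expected * (1 - probs))
    sigma2 = (resid ** 2).mean()       # crude overdispersion estimate
    return resid / np.sqrt(sigma2)

# Toy example: four units, one with an inflated third-party count
probs = np.tile([0.48, 0.47, 0.05], (4, 1))
counts = np.array([[480, 470, 50],
                   [485, 465, 50],
                   [475, 475, 50],
                   [430, 420, 150]])   # third category inflated
r = scaled_pearson_residuals(counts, probs)
```

The inflated cell stands out with by far the largest positive residual, which is the qualitative pattern the paper reports for Buchanan in Palm Beach County.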

Monitoring conflict using automated coding of newswire reports
Schrodt, Philip A.
Gerner, Deborah J.
Simpson Gerner, Erin M.

Uploaded 06-28-2001
Keywords event data
natural language processing
Abstract This paper discusses the experience of the Kansas Event Data System (KEDS) project in developing event data sets for monitoring conflict levels in five geographical areas: the Levant (Arab-Israeli conflict), Persian Gulf, former Yugoslavia, Central Asia (Afghanistan, Armenia-Azerbaijan, former Soviet republics), and West Africa (Liberia, Sierra Leone). These data sets were coded from commercial news sources using the KEDS and TABARI automated coding systems. The paper discusses our experience in developing the dictionaries required for this coding and the problems with the number of reported events in the various areas, and provides examples of the statistical summaries that can be produced from event data. We also compare the coverage of the Reuters and Agence France Presse news services for selected years in the Levant and former Yugoslavia. We conclude with suggestions for four topics where additional efforts could be usefully undertaken by multiple research projects.

Issue Voting and Ecological Inference
Thomsen, Soren R.

Uploaded 09-14-2000
Keywords issue voting
ecological inference
electoral geography
multinomial logit
Abstract This article proposes a unifying framework for individual and aggregate voting behavior. The proposed individual-level model is a version of the multinomial logit model that applies to issue voting, ideological voting, and normative voting, providing a close fit to survey data. The aggregate model is derived by using the binary logit model as an approximation to the multinomial logit model. The aggregate model is useful for modeling electoral change and for identifying homogeneous political regions. Further, the unifying framework yields a method for ecological inference that applies to large tables and gives estimates of voter transitions close to survey results.
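The individual-level building block here is the multinomial logit. A minimal sketch of its choice probabilities in a one-dimensional spatial setting, with hypothetical voter and party positions rather than the article's estimated model:

```python
import numpy as np

def mnl_probs(utilities):
    """Multinomial logit choice probabilities from party utilities.

    utilities: (n_voters, n_parties) systematic utilities, here taken
    as negative squared issue distances in a toy spatial model.
    """
    u = utilities - utilities.max(axis=1, keepdims=True)  # numerical stability
    expu = np.exp(u)
    return expu / expu.sum(axis=1, keepdims=True)

# Toy example: voters at 0.2 and 0.8; parties at 0.0, 0.5, 1.0
voters = np.array([0.2, 0.8])
parties = np.array([0.0, 0.5, 1.0])
util = -(voters[:, None] - parties[None, :]) ** 2
p = mnl_probs(util)
```

Each voter's probabilities sum to one and peak at the spatially closest party, which is the behavior the binary-logit aggregate approximation then builds on.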

A System Model of American Macro Politics
Erikson, Robert S.
MacKuen, Michael
Stimson, James

Uploaded 07-18-2000
Keywords macro
Abstract The paper develops a single macro-level model of American politics based upon the research program of The Macro Polity (Cambridge, in press). The multi-equation simulation model is constructed from individual regressions on core elements such as Presidential Approval, Macropartisanship, Public Policy Mood, Elections (House, Senate, and Presidency), Policy Activity (House, Senate, and Presidency), Policy (laws), and the lagged feedbacks involved in the full structure of relationships. (Economic performance indicators are both caused by and causes of the political variables.) In trial analyses, not implementing the full power of simulation methods, the model is tweaked to examine the impact of changing historical outcomes and to observe system behavior. Two such trials are a shock to unemployment and a reversal of the outcome of the 1980 presidential election. These are seen to produce unexpected outcomes when the full range of interconnections is allowed to work.

Estimating King's ecological inference normal model via the EM algorithm
Mattos, Rogerio
Veiga, Alvaro

Uploaded 04-20-2000
Keywords ecological inference
disaggregate data
exponential families
truncated normal
EM Algorithm
Abstract Recently, Gary King introduced a new model for ecological inference, based on a truncated bivariate normal, which he estimates by maximum likelihood and uses to simulate the predictive densities of the disaggregate data. This paper reviews King's model and its assumption of truncated normality, with the aim of implementing maximum likelihood estimation of his model and disaggregate data prediction in an alternative fashion via the EM algorithm. In addition, we highlight and discuss important modeling issues related to the possible non-existence of maximum likelihood estimates, and the degree to which corrections for this non-existence by means of suitably chosen priors are effective. Finally, a Monte Carlo simulation study is run in order to compare the two approaches.
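The E-step/M-step mechanics at issue can be shown on a much simpler latent-data problem than King's truncated bivariate normal. The sketch below runs EM on a toy two-component Gaussian mixture purely to illustrate the iteration structure (responsibilities, then weighted parameter updates); it is not the paper's estimator:

```python
import numpy as np

def em_gaussian_mixture(x, iters=200):
    """EM for a two-component Gaussian mixture (toy illustration of the
    E-step / M-step alternation; not King's truncated-normal model)."""
    mu = np.array([x.min(), x.max()], dtype=float)   # crude initialization
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        dens = (pi / (sigma * np.sqrt(2 * np.pi))
                * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted updates of weights, means, and standard deviations
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(3, 0.5, 300)])
pi, mu, sigma = em_gaussian_mixture(x)
```

In King's setting the latent "complete data" are the unobserved disaggregate quantities rather than mixture labels, but the alternation between an expectation over the latent variables and a closed-form maximization is the same.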

Detecting United States Mediation Styles in the Middle East, 1979-1998
Schrodt, Philip A.

Uploaded 03-04-1999
Keywords event data
Middle East
time series
hidden Markov models
Abstract This research is part of the "Multiple Paths to Knowledge Project" sponsored by the James A. Baker III Institute for Public Policy, Rice University, and the Program in Foreign Policy Decision Making, Texas A&M University. The paper deals with the problem of determining whether the mediation styles used by four U.S. Secretaries of State -- George Schultz, James Baker, Warren Christopher and Madeleine Albright -- are sufficiently distinct that they can be detected in event data. The mediation domain is the Israel-Palestinian conflict from April 1979 to December 1998, the event data are coded from the Reuters news service reports using the WEIS event coding scheme, and the classification technique is hidden Markov models. The models are estimated for each of the four Secretaries based on 16 randomly chosen 32-event sequences of USA>ISR and USA>PAL events during the term of the Secretary. Each month in the data set is then assigned to one of the four Secretarial styles based on the best-fitting model. The models differentiate the mediation styles quite distinctly, and this method of detecting styles yields quite different results when applied to ISR-PAL data or random data. The "Baker" and "Albright" styles are most distinctive; the "Schultz" style is least distinctive; both results are consistent with many qualitative characterizations of these periods. A series of t-tests is then done on Goldstein-scaled scores to determine whether the mediation styles translate into statistically distinct interactions in the ISR>USA, ISR>PAL, PAL>USA and PAL>ISR dyads. While there are a number of statistically significant differences when the full sample is used, these may be due simply to the overall changes in Israel-Palestinian relations over the course of the time series.
When tests are done on months that are out-of-term -- in other words, where the style of one Secretary is being employed during the term of another -- few statistically significant differences are found, though there is some indication of a lag of a month or so between the change in style and the behavioral response. It appears that the effects of the differing styles are not captured by changes in aggregated data, possibly because these scales force behavior into a single conflict-cooperation dimension. Consistent with other papers in the "Multiple Paths to Knowledge" project, the paper contains commentary on how the research project was actually done, as well as the conventional presentation of results. The file includes the paper in Postscript and PDF formats, the event data (Levant, April 1979 to December 1998) used in the analysis, and the C source code for estimating the hidden Markov models. This paper was presented at the International Studies Association meetings, Washington, 16-21 February 1999.
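The classification step described above -- assign each window of events to whichever model fits it best by likelihood -- can be sketched with the forward algorithm on toy discrete HMMs. The two hypothetical "styles" below are illustrative stand-ins, not the estimated Secretarial models:

```python
import numpy as np

def hmm_loglik(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm.

    obs: symbol indices; start: (K,) initial state distribution;
    trans: (K, K) transition matrix; emit: (K, M) emission matrix.
    """
    alpha = start * emit[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()                # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Two hypothetical "styles", each a 2-state HMM over 3 event symbols
start = np.array([0.5, 0.5])
trans = np.array([[0.9, 0.1], [0.1, 0.9]])
emit_a = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1]])  # style A
emit_b = np.array([[0.1, 0.1, 0.8], [0.1, 0.8, 0.1]])  # style B
seq = [0, 0, 0, 1, 1, 0, 0, 1]                         # mostly symbols 0 and 1
best = max(["A", "B"], key=lambda s: hmm_loglik(
    seq, start, trans, emit_a if s == "A" else emit_b))
```

Scoring each month's event sequence against every candidate model and keeping the argmax is exactly this comparison, scaled up to WEIS-coded dyadic events.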

Economic Voting: Enlightened Self-Interest and Economic Reference
Nagler, Jonathan
De Boef, Suzanna

Uploaded 04-18-1999
Keywords elections
economic voting
sociotropic voting
Abstract This research tests a new theoretical perspective on economic voting. There is a longstanding debate on whether voters are `sociotropic' voters, i.e., basing their vote on the state of the national economy, or `pocketbook' voters, i.e., basing their vote on the state of their own finances (Kiewiet 1983; Kinder and Kiewiet 1979). We believe that this debate can be reduced to asking what information voters use to form expectations about their own pocketbooks in the future. We argue that voters use information about the economic fortunes of their own economic reference group, rather than the national economy, to form expectations about the impact of government on their own economic fortunes. This allows voters to evaluate both the economic competence and the distributive tendencies of incumbents. Allowing voters to evaluate distributional consequences of alternative parties in power is consistent with research showing that left and right parties pursue different economic policies with different distributional consequences (Hibbs 1977; Alesina, Roubini and Cohen 1997). Thus it allows for a theoretically richer model of voter behavior and allows us to synthesize the distinct literatures on sociotropic voting and political business cycles. This work is motivated in part by the divergence of wages for different groups of workers since the 1970s. As variance in economic performance increases across groups, we would expect to see more reliance on economic reference groups and less on the national economy as an indicator of the incumbent's likelihood of providing favorable voter-specific economic performance in the future. We examine presidential approval over time across different demographic groups of voters, and show that those approval ratings are influenced both by national economic performance and by group economic performance measured by the change in the group's mean hourly wage.

Modeling Heterogeneity in Duration Models
Box-Steffensmeier, Janet M.
Zorn, Christopher

Uploaded 07-11-1999
Keywords heterogeneity
survival models
variance correction
random effects
Abstract As increasing numbers of political scientists have turned to event history models to analyze duration data, there has been growing awareness of the issue of heterogeneity: instances in which subpopulations in the data vary in ways not captured by the systematic components of standard duration models. We discuss the general issue of heterogeneity, and offer techniques for dealing with it under various conditions. One special case of heterogeneity arises when the population under study consists of one or more subpopulations which will never experience the event of interest. Split-population, or "cure" models, account for this heterogeneity by permitting separate analysis of the determinants of whether an event will occur and the timing of that event, using mixture distributions. We use the split-population model to reveal additional insights into the strategies of political action committees' allocation decisions, and compare split-population and standard duration models of Congressional responses to Supreme Court decisions. We then go on to explore the general issue of heterogeneity in survival data by considering two broad classes of models for dealing with the lack of independence among failure times: variance correction models and "frailty" (or random effects) duration models. The former address heterogeneity by adjusting the variance matrix of the estimates to allow for correct inference in the presence of that heterogeneity, while the latter approach treats heterogeneity as an unobservable, random, multiplicative factor acting on the baseline hazard function. Both types of models allow us to deal with heterogeneity that results, for example, from correlation at multiple levels of data, or from repeated events within units of analysis. We illustrate these models using data on international conflicts. 
In sum, we explore the issue of heterogeneity in event history models from a variety of perspectives, using a host of examples from contemporary political science. Our techniques and findings will therefore be of substantial interest to both political methodologists and others engaged in empirical work across a range of subfields.
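A minimal sketch of the split-population ("cure") idea described above: an exponential duration model in which a fraction of units is never at risk, fit by maximum likelihood on simulated data. This is illustrative only, with hypothetical parameter values, not the authors' specifications:

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, event):
    """Negative log-likelihood of a split-population exponential model.

    params: (logit of susceptible fraction p, log hazard rate lam).
    t: observed durations; event: 1 if the event occurred, 0 if censored.
    """
    p = 1 / (1 + np.exp(-params[0]))        # susceptible ("non-cured") fraction
    lam = np.exp(params[1])                 # hazard among the susceptible
    # Observed events: unit was susceptible and failed at t
    ll_event = np.log(p) + np.log(lam) - lam * t
    # Censored: either cured, or susceptible but surviving past t
    ll_cens = np.log((1 - p) + p * np.exp(-lam * t))
    return -np.sum(np.where(event == 1, ll_event, ll_cens))

# Toy data: 60% susceptible with unit hazard, everyone censored at t = 5
rng = np.random.default_rng(2)
n = 2000
susceptible = rng.random(n) < 0.6
t = np.where(susceptible, rng.exponential(1.0, n), 5.0)
event = (susceptible & (t < 5.0)).astype(int)
t = np.minimum(t, 5.0)
fit = minimize(neg_loglik, x0=[0.5, 0.0], args=(t, event), method="Nelder-Mead")
p_hat = 1 / (1 + np.exp(-fit.x[0]))
```

The two parts of the likelihood correspond directly to the paper's separation of whether an event will ever occur from when it occurs; covariates would enter through both p and lam.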

A Panel Probit Analysis of Campaign Contributions and Roll Call Votes
Wawro, Gregory

Uploaded 09-07-1999
Keywords campaign finance
panel data methods
random effects
GMM estimators
Abstract Political scientists have long been concerned with the effects of campaign contributions on roll call voting. However, methodological problems have hampered attempts to assess the degree to which contributions affect voting. One of the key problems is that it is difficult to untangle the effect of contributions from the effect of a member's predisposition to vote one way or another. That is, political action committees (PACs) contribute to members of Congress who are likely to vote the way the PACs favor even in the absence of contributions. A PAC donation to a friendly member might be misconstrued as causing a member to vote a particular way, when in reality the member would have voted that way to begin with. It is therefore crucial to account for a member's propensity to vote in a particular way in order to assess the influence of contributions. One way that studies have done this is to use ideological ratings developed by interest groups. This approach is problematic, however, because the ratings are built from roll call votes and thus will introduce bias if campaign contributions affect the votes used to compute the ratings. In order to circumvent the problem of accounting for voting predispositions, I use panel data methods which, unfortunately, have seen almost no application in political science. These methods enable us to account for individual specific effects which are difficult or impossible to measure, such as the predisposition to vote for or against a particular type of legislation. To employ these methods, I build panels of roll call votes on legislation that business and labor groups have indicated are important for their interests. Using panel data estimators, I determine the effects of contributions from corporate and labor PACs on the probability of voting ``aye'' or ``nay'', while accounting for members' propensities to vote in particular directions. 
I find that contributions have minimal to no effects on roll call votes, while short-term factors including monthly unemployment and support for the president in the district have substantial effects.

Comparing Measures of Issue Salience in a Spatial Voting Model
Glasgow, Garrett

Uploaded 03-12-1998
Keywords none submitted
Abstract Few fields of empirical study of voting behavior have generated as many mixed results as the study of issue salience. That voters weight issues differentially when determining candidate preference has long been postulated in theoretical work on the spatial voting model (Davis, Hinich, and Ordeshook 1970; Enelow and Hinich 1984), yet empirical research has failed to consistently uncover salience effects among voters. Much of this inconsistency is due to the use of different measures of issue salience and different models to test for salience effects in each study. What accounts for these different findings? There appear to be two different factors that affect each of these studies to different degrees. The first is the limitation of respondents in reporting their cognitive processes. Psychological research demonstrates that individuals are often unaware of or unable to accurately report their decision-making rules, a limitation that has severe implications for any attempt to measure the weight individuals assign to particular issues when determining candidate preference. The second is the cost of gathering information about candidates that can be employed in the choice rule for candidate preference. An individual who places a great deal of weight on a particular issue, but has no information on candidate positions on that issue, will be forced to rely on less salient issues to determine candidate preference. The cognitive and informational limitations of respondents, and the implications they have for the measurement of issue salience, are examined and accounted for in a spatial voting model of candidate preference. The tendency for individuals to focus on one or a few issues when determining candidate preference is confirmed, but the hypothesized informational effects fail to emerge.

Persuasion Through the Purse: How Political Contributions Crowd Out Information
Bennedsen, Morton
Feldmann, Sven E.

Uploaded 04-21-1998
Keywords lobbying
campaign contributions
Abstract Interest groups can influence political decisions in two distinct ways: by offering contributions to political actors and by providing them with relevant information that is advantageous for the group. We analyze the conditions under which interest groups are more inclined to use one or the other channel of influence. First, we identify an indirect cost of searching for information in the form of an information externality that increases the cost of offering contributions. We then show that an extreme interest group might find it beneficial to abandon information search altogether and instead seek influence solely via contributions. Thus, our analysis lends support to a rather cynical view of lobbying wherein groups provide little or no useful information.

Panel Effects in the American National Election Studies
Bartels, Larry M.

Uploaded 07-11-1998
Keywords panel attrition
panel conditioning
fractional pooling
two-stage auxiliary instrumental variables
American National Election Studies
Abstract Parallel panel and fresh cross-section samples in recent NES surveys provide valuable leverage for assessing the magnitude of biases in statistical analyses of survey data due to panel attrition and panel conditioning. My analysis employing a variety of typical regression models suggests that, on average, panel biases reduce the inferential value of panel data by about ten percent. Biases in individual coefficients are rarely statistically "significant," even when panel and cross-section respondents have markedly different characteristics. Thus, while I propose adjustments for panel effects in both cross-sectional and dynamic analyses, such adjustments are unlikely to be necessary in typical applications using NES or similar data.

Candidate Positioning in U.S. House Elections
Ansolabehere, Stephen D.
Snyder, Jr., James M.
Stewart, III, Charles

Uploaded 08-17-1998
Keywords spatial
Abstract Relying on two new, unique data sets on the policy stances of candidates for the U.S. Congress, we analyze the ideological positioning of House candidates running for office from 1874 to 1996. We argue that throughout this period congressional candidates have primarily espoused the ideology associated with the national party, moderating very little to accommodate local ideological conditions. District-by-district competition exerts some pressure on candidates to fit with their constituents, and there have been times in American history when this pressure has been more acute than others. From the 1940s to the 1970s, candidates became much more responsive to district interests, but that degree of responsiveness waned in the 1980s and 1990s.

Re-thinking Equilibrium Presidential Approval: Markov-Switching Error Correction
Jackman, Simon

Uploaded 01-01-1995
Keywords (none submitted)
Abstract I present a re-working of an error-correction model of presidential approval. In large measure this is driven by an attempt to make better sense of what appears to be an interesting period in American politics---the phenomenal rise and fall in approval for George Bush. The innovation I present is to allow the weights of the determinants of presidential approval to themselves change over time. I do this by estimating a switching-regime time-series model of aggregate presidential approval, and here I fit two distinct statistical regimes. There are good reasons why we would expect a model of presidential approval to do this. For one thing, the time series here span a long slew of technological changes in the relationship between the public, the economy, and political leaders: it seems reasonable that even in the aggregate, ``different things will matter at different times'' in assessing a president. Furthermore (though this point is not pursued in the paper), if there is a fair amount of heterogeneity both in the content and the dynamics of information-flows in the American political economy, then a one-regime statistical model will not characterize the aggregate-level process as well as a (non-linear) switching-regime model. The two-regime model I present does appear to be a better characterization than a single-state model. I find presidential approval to typically track inflation and, to a lesser extent, movements in the stock market, subject to the large shocks associated with a rally event, which dissipate slowly. Occasionally though, and often hot on the heels of a rally event, presidential approval slips into another regime, in which unemployment and economic expectations are dominant, and in which approval quickly reverts to levels in line with those economic indicators. Almost as quickly, approval returns to its former slowly-correcting state. This characterization is quite different from previous models of presidential approval.
Importantly, my findings suggest that the mixture of perception and reality brought to bear in assessing a president is not at all constant. Different parts of reality are deemed more relevant at some times than at others. Typically, the piece of the macro-economy relevant to presidential approval is inflation. After this relationship has been disturbed by the disequilibrating shock of a rally event, unemployment and a ``perception'' variable---expectations about business conditions---enter the fray. Also novel here is the finding that economic perceptions (i.e., economic expectations, a subjective variable) operate to ratchet presidential approval back in line with economic fundamentals. Though of course, given that economic expectations are almost always overly optimistic, it would seem that if anything, the post-rally slide in presidential approval would be faster if we possessed perfect economic foresight. What is interesting though is that economic ``perceptions'' (as opposed to economic ``reality'') are not part of the typical set of determinants of presidential approval. This will be news to students of public opinion. Error-laden, subjective information about the future course of the macro-economy comes into play when approval has strayed away from levels we would expect on the basis of inflation. Furthermore, I find that the switching between statistical regimes has become more volatile in recent times. In particular, George Bush's presidency is marked by quite abrupt switching between the two regimes, suggesting that the mix of considerations used to assess Bush was rapidly changing.

Electoral Competition with Endogenous Voter Preferences
Jackson, John

Uploaded 00-00-0000
Keywords electoral competition
path dependence
Abstract The spatial model of electoral competition first proposed by Anthony Downs and subsequently extended by many authors is a core part of formal political theory. It has been and is currently used to study a wide variety of electoral processes and political institutions and its properties under many alternative conditions are now well known. All of this work, however, maintains the assumption that voters' preferences are exogenously given and can be treated as fixed while studying the behavior of competing parties and candidates. This makes all the resulting predictions conditional on this assumption. Empirical studies of voter preferences, by contrast, have connected changes in preferences to the platforms and actions of the competing parties and candidates. The paper connects these two literatures by developing a model of electoral competition that makes preferences endogenous, meaning that they co-evolve with party platforms during the election process. The model, which has both an analytical and a simulation form, is explored to ascertain its implications for the existence of stable outcomes, for the ability to predict these outcomes based on initial conditions and assumptions about party behavior, and for its dynamic properties. The assumption of fixed preferences is treated as a special case of this general model, enabling comparisons of these implications for the two situations. The version with endogenous preferences exhibits path dependent properties, which makes it quite different from the more traditional model.

Senate Voting on NAFTA: The Power and Limitations of MCMC Methods for Studying Voting across Bills and across States
Smith, Alastair
McGillivray, Fiona

Uploaded 07-09-1996
Keywords NAFTA
Gibbs sampling
bivariate probit
Abstract We examine similarities in senate voting within states and across two senate bills: the 1991 fast track authorization bill and the 1993 NAFTA implementation bill. A series of bivariate probit models are estimated by Markov Chain Monte Carlo simulation. We discuss the power of MCMC techniques and how the output of these sampling procedures can be used for Bayesian model comparisons. Having separately explored the similarities in votes across bills and within states, we develop a 4-variate probit model to explain voting on NAFTA. The power of MCMC techniques to estimate this complicated model is demonstrated with two different MCMC procedures. We conclude by discussing the data requirements for these techniques.

On estimates of split-ticket voting
Johnston, Ron
Gschwend, Thomas
Pattie, Charles

Uploaded 09-20-2004
Keywords split-ticket
Ecological Inference
Abstract Recent work has challenged Burden and Kimball on split-ticket voting in the USA, suggesting that their estimates of the volume of such voting (derived using King's EI method) across Congressional Districts and States are unreliable. Using part of the Burden-Kimball data set, we report on a parallel set of estimates generated by a different procedure (EMax), which employs three rather than two sets of bounds. The results are extremely similar to Burden and Kimball's, providing strong circumstantial evidence for their conclusions regarding the impact of campaign spending and other influences on the volume of split-ticket voting.

Overtime Inference from Cross-Sectional Surveys
Schuessler, Alexander
Penubarti, Mohan

Uploaded 08-26-1997
Keywords opinion polls
ecological inference
Abstract To derive panel inferences at the micro level from cross-sectional surveys invites an ecological inference problem. Drawing on King's EI method we derive an application that allows us to estimate overtime change from cross-sectional opinion surveys. We validate our application in panel data where magnitudes of micro-level change are known. We subsequently apply the method to independent surveys on presidential approval. In doing so, we detect micro-level volatility that has been unavailable to previous researchers, leading them to derive imprecise conclusions.

The Structure of Signaling: A Combinatorial Optimization Model with Network-Dependent Estimation
Esterling, Kevin M.
Lazer, David
Carpenter, Daniel

Uploaded 08-18-1997
Keywords lobbying models
combinatorial optimization
count models
network-dependent estimation
structural autocorrelation
Abstract This paper examines the relationship between lobbyists' contact-making behavior and their long-term access to the government. Specifically: 1) Do lobbyists establish social contacts in an individually-rational manner to best receive information from each other? And, 2) does the resulting network position condition their access to the government? We begin by wedding rational choice models to network analysis with a formal model of lobbyists' choice of contacts in a network, adopting the classic combinatorial optimization approach of Boorman (1975). The model predicts that when the demand for political information is low, a cocktail equilibrium prevails: lobbyists will invest their time in gaining "weak tie" acquaintances rather than in gaining "strong tie" trusted partners. When the demand for information in a policy domain is high, then both cocktail equilibria and "chum" equilibria (all strong-tie networks) prevail. We then turn to an empirical analysis of lobbyist contact-making and access, using the data of Laumann and Knoke in The Organizational State. We analyze the communication structure of the policy domains in health policy, using count data models that are adjusted for "structural autocorrelation" by the networks we study. The results support the cocktail equilibrium hypothesis, and offer a result that portends rich questions for future research: Washington lobbyists appear to overinvest in strong ties, in general reducing their credibility with the government in the long-term, as well as reducing the informational efficiency of the overall communication network.

Recent Developments in Econometric Modelling: A Personal Viewpoint
Maddala, G.S.

Uploaded 07-17-1997
Keywords dynamic panel data models
dynamic models with limited dependent variables
unit roots
Abstract The quotation above (more than three thousand years ago) essentially summarizes my perception of what is going on in econometrics. Dynamic economic modelling is a comprehensive term. It covers everything except pure cross-section analysis. Hence, I have to narrow down the scope of my paper. I shall not cover duration models, event studies, count data and Markovian models. The areas covered are: dynamic panel data models, dynamic models with limited dependent variables, unit roots, cointegration, VAR’s and Bayesian approaches to all these problems. These are areas I am most familiar with. Also, the paper is not a survey of recent developments. Rather, it presents what I feel are important issues in these areas. Also, as far as possible, I shall relate the issues with those considered in the work on Political Methodology. I have a rather different attitude towards econometric methods which my own colleagues in the profession may not share. In my opinion, there is too much technique and not enough discussion of why we are doing what we are doing. I am often reminded of the admonition of the queen to Polonius in Shakespeare’s Hamlet, “More matter, with less art.”

Markov Chain Models for Rolling Cross-section Data: How Campaign Events and Political Awareness Affect Vote Intentions and Partisanship in the United States and Canada
Mebane, Walter R.
Wand, Jonathan

Uploaded 04-07-1997
Keywords Markov chains
rolling cross-section data
macro data
measurement error
categorical data
ordinal data
panel data
survey data
party identification
American politics
Canadian politics
Abstract We use a new approach we have developed for estimating discrete, finite-state Markov chain models from ``macro'' data to analyze the dynamics of individual choice probabilities in two collections of rolling cross-sectional survey data that were designed to support investigations of what happens to voters' information and preferences during campaigns. Using data from the 1984 American National Election Studies Continuous Monitoring Study, we show that not only did individual party identification vary substantially during the year, but the dynamics of party identification changed significantly in response to the conclusion of the Democratic party's nomination contest. Party identification appears to have measurement error only when the model misspecifies the dynamics. There are rapid oscillations among some categories of partisanship that may reflect individual stances regarding not only competition between the parties but also competition among party factions. Using data from the 1993 Canadian Election Study, we show that the critical events that shaped voting intentions in the election varied tremendously depending on an individual's level of political awareness, and that the effects of awareness varied across regions of the country.

Analyzing the US Senate in 2003: Similarities, Networks, Clusters and Blocs
Jakulin, Aleks

Uploaded 10-27-2004
Keywords roll call analysis
latent variable models
information theory
Abstract To analyze the roll calls in the US Senate in year 2003, we have employed the methods already used throughout the science community for analysis of genes, surveys and text. With information-theoretic measures we assess the association between pairs of senators based on the votes they cast. Furthermore, we can evaluate the influence of a voter by postulating a Shannon information channel between the outcome and a voter. The matrix of associations can be summarized using hierarchical clustering, multi-dimensional scaling and link analysis. With a discrete latent variable model we identify blocs of cohesive voters within the Senate, and contrast it with continuous ideal point methods. Under the bloc-voting model, the Senate can be interpreted as a weighted vote system, and we were able to estimate the empirical voting power of individual blocs through what-if analysis.
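The pairwise, information-theoretic association the abstract describes can be illustrated with a toy mutual-information computation between two vote sequences. This is a hypothetical sketch, not the author's implementation, and the two voting records are made up for illustration:

```python
from collections import Counter
from math import log2

def mutual_information(a, b):
    """Mutual information in bits between two equal-length discrete sequences."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    return sum((c / n) * log2((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in pab.items())

# Two hypothetical senators' votes over ten roll calls (1 = yea, 0 = nay).
s1 = (1, 1, 0, 0, 1, 1, 0, 0, 1, 0)
s2 = (1, 1, 0, 0, 1, 1, 0, 0, 1, 0)
print(mutual_information(s1, s2))  # identical records: MI equals the entropy of s1
```

Computing this quantity for every pair of senators yields the matrix of associations that the paper then summarizes via clustering, scaling and link analysis.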

A Markov Switching Model of Congressional Partisan Regimes
Jones, Bryan
Kim, Chang-Jin
Startz, Richard

Uploaded 07-18-2005
Keywords Markov switching
electoral realignment
critical elections
partisan regimes
Congressional elections
Abstract Studies of development and change in partisan fortunes in the US emphasize epochs of partisan stability, separated by critical events or turning points. Yet to date we have no estimates of legislative regimes as they relate to electoral realignments. In this paper we study partisan balances in the US Congress using the method of Markov switching. Our estimates for the House of Representatives are based on election changes from 1854, roughly the date of the establishment of the modern incarnation of the two-party system, to the present. For the Senate, we estimate partisan balance from 1914, the date of popular election of Senators. We use this method to estimate an underlying unobserved state parameter, ‘partisan regime’. Basically a partisan regime denotes a built-in congressional electoral advantage that persists through time, and that changes in a disjoint and episodic fashion. The method allows the direct estimation of critical transition points between Republican and Democratic partisan coalitions. Republican regimes characterized House elections during three periods: 1860 through 1872, 1894 through 1906, and 1918 through 1928. A three-state estimate for the House suggested the emergence of a third state in 1994. For the Senate, the two-state model does not fit adequately. We estimate a three-state model in which a Republican regime dominated from 1914 through 1928; a Democratic regime characterized the period 1930-1934, and a Democratic-leaning regime characterized the period 1938 to the present (1936 is a transition year). Combined with existing historical evidence, our analysis isolates four critical congressional elections: 1874; 1894; 1930; and 1994.

Identifying Intra-Party Voting Blocs in the UK House of Commons
Quinn, Kevin
Spirling, Arthur

Uploaded 07-19-2005
Keywords roll-call analysis
UK House of Commons
Bayesian nonparametrics
Dirichlet process mixtures
Abstract Legislative voting records are an important source of information about legislator preferences, intra-party cohesiveness, and the divisiveness of various policy issues. Standard methods of analyzing a legislative voting record tend to have serious drawbacks when applied to legislatures, such as the UK House of Commons, that feature highly disciplined parties, strategic voting, and large amounts of missing data. We present a method (based on a Dirichlet process mixture model) for analyzing such voting records that does not suffer from these same problems. We apply the method to the voting records of Labour and Conservative Party MPs in the 1997-2001 session of the UK House of Commons. Our method has a number of advantages over existing approaches. It is model-based and thus allows one to make probability statements about quantities of interest. It allows one to estimate the number of voting blocs within a party or any other group of MPs. It handles missing data in a principled fashion and does not rely on an ad hoc distance metric between voting profiles. Finally, it can be used as both a predictive model and an exploratory model. We illustrate these points in our analysis of the UK data.

A Robust Transformation Procedure for Interpreting Political Texts
Martin, Lanny
Vanberg, Georg

Uploaded 04-25-2006
Keywords content analysis
Abstract In a recent article in the American Political Science Review, Laver, Benoit, and Garry propose a new method for conducting content analysis. Their Wordscores approach, by automating text coding procedures, represents a fundamental advance in content analysis and will potentially have a large long-term impact on research across the discipline. In this research note, we contend that the usefulness of this procedure is unfortunately limited by the fact that the transformation procedure used by the authors (which is meant to allow for the substantive interpretation of results) has two significant shortcomings. Specifically, it distorts the metric on which content scores are placed—hindering the ability of scholars to make meaningful comparisons across texts—and it is very sensitive to the texts that are scored—opening up the possibility that researchers may generate, inadvertently or not, results that depend on the texts they choose to include in their analyses. We propose (and have written program code to implement) a transformation procedure that solves these problems.

Presidential Approval: the case of George W. Bush
Beck, Nathaniel
Jackman, Simon
Rosenthal, Howard

Uploaded 07-19-2006
Keywords presidential approval
public opinion
house effects
dynamic linear model
Bayesian statistics
Markov chain Monte Carlo
state space
pages of killer graphs
Abstract We use a Bayesian dynamic linear model to track approval for George W. Bush over time. Our analysis deals with several issues that have been usually addressed separately in the extant literature. First, our analysis uses polling data collected at a higher frequency than is typical, using over 1,100 published national polls, and data on macro-economic conditions collected at the weekly level. By combining this much poll information, we are much better poised to examine the public's reactions to events over shorter time scales than can the typical analysis of approval that utilizes monthly or quarterly approval. Second, our statistical modeling explicitly deals with the sampling error of these polls, as well as the possibility of bias in the polls due to house effects. Indeed, quite aside from the question of ``what drives approval?'', there is considerable interest in the extent to which polling organizations systematically diverge from one another in assessing approval for the president. These bias parameters are not only necessary parts of any realistic model of approval that utilizes data from multiple polling organizations, but are also easily estimated via the Bayesian dynamic linear model.

Social Preferences and Political Participation
Dawes, Christopher
Fowler, James

Uploaded 10-23-2006
Abstract This paper examines the link between social preferences and political activity using experimental methods. We conduct a laboratory experiment in which subjects are asked a series of questions about their past political participation and then are instructed to play five rounds of a modified dictator game (Andreoni and Miller 2002). The results of the dictator game are used to classify each subject’s preferences. We find that subjects who are most interested in increasing total welfare are more likely to participate in politics than subjects with selfish preferences, whereas subjects most interested in reducing the difference between their own well-being and the well-being of others are no more likely to participate in politics than subjects with selfish preferences.

Splitting a predictor at the upper quarter or third and the lower quarter or third
Gelman, Andrew
Park, David

Uploaded 07-06-2007
Keywords discretization
linear regression
statistical communication
Abstract A linear regression of $y$ on $x$ can be approximated by a simple difference: the average values of $y$ corresponding to the highest quarter or third of $x$, minus the average values of $y$ corresponding to the lowest quarter or third of $x$. A simple theoretical analysis shows this comparison performs reasonably well, with 80%--90% efficiency compared to the linear regression if the predictor is uniformly or normally distributed. Discretizing $x$ into three categories claws back about half the efficiency lost by the commonly-used strategy of dichotomizing the predictor. We illustrate with the example that motivated this research: an analysis of income and voting which we had originally performed for a scholarly journal but then wanted to communicate to a general audience.
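The approximation described in this abstract is easy to check numerically. Below is a minimal sketch on simulated data (not the authors' code; the sample size, slope and noise level are made up for illustration). It computes the OLS slope alongside the upper-third/lower-third comparison, and shows that the comparison recovers the slope once rescaled by the gap in mean $x$ between the two groups:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated predictor and outcome with a known linear relation: y = 2x + noise.
n = 100_000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)

# OLS slope of y on x (np.cov uses ddof=1, so match it in the variance).
slope = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# The simple comparison: mean y in the upper third of x minus the lower third.
lo, hi = np.quantile(x, [1 / 3, 2 / 3])
upper, lower = x >= hi, x <= lo
diff = y[upper].mean() - y[lower].mean()

# Rescaling the difference by the gap in mean x recovers the regression slope.
implied_slope = diff / (x[upper].mean() - x[lower].mean())
print(slope, implied_slope)
```

With a large sample both quantities land very close to the true slope of 2; the paper's efficiency figures concern how much sampling precision the discretized comparison gives up relative to running the regression.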

The Spatial Probit Model of Interdependent Binary Outcomes: Estimation, Interpretation, and Presentation
Franzese, Robert
Hays, Jude

Uploaded 07-20-2007
Keywords Spatial Probit
Bayesian Gibbs-Sampler Estimator
Recursive Importance-Sampling Estimator
Abstract We have argued and shown elsewhere the ubiquity and prominence of spatial interdependence in political science research and noted that much previous practice has neglected this interdependence or treated it solely as nuisance to the serious detriment of sound inference. Previously, we considered only linear-regression models of spatial and/or spatio-temporal interdependence. In this paper, we turn to binary-outcome models. We start by stressing the ubiquity and centrality of interdependence in binary outcomes of interest to political and social scientists and note that, again, this interdependence has been ignored in most contexts where it likely arises and that, in the few contexts where it has been acknowledged, the endogeneity of the spatial lag has not been recognized. Next, we explain some of the severe challenges for empirical analysis posed by spatial interdependence in binary-outcome models, and then we follow recent advances in the spatial-econometric literature to suggest Bayesian or recursive-importance-sampling (RIS) approaches for tackling estimation. In brief and in general, the estimation complications arise because among the RHS variables is an endogenous weighted spatial-lag of the unobserved latent outcome, y*, in the other units; Bayesian or RIS techniques facilitate the complicated nested optimization exercise that follows from that fact. We also advance that literature by showing how to calculate estimated spatial effects (as opposed to parameter estimates) in such models, how to construct confidence regions for those (adopting a simulation strategy for the purpose), and how to present such estimates effectively.

Endogeneity in Probit Response Models
Freedman, David
Sekhon, Jasjeet

Uploaded 05-28-2008
Keywords Bivariate probit
sample selection
indefinite Hessian
Abstract In this paper, we look at conventional methods for removing endogeneity bias in regression models, including the linear model and the probit model. The usual Heckman two-step procedure should not be used in the probit model: from a theoretical perspective, this procedure is unsatisfactory, and likelihood methods are superior. However, serious numerical problems occur when standard software packages try to maximize the biprobit likelihood function, even if the number of covariates is small. The log likelihood surface may be nearly flat, or may have saddle points with one small positive eigenvalue and several large negative eigenvalues. We draw conclusions for statistical practice. Finally, we describe the conditions under which parameters in the model are identifiable; these results appear to be new.

How Similar Are They? Rethinking Electoral Congruence
Wittenberg, Jason

Uploaded 07-05-2008
Keywords voting
Abstract Electoral continuity and discontinuity have been a staple of voting research for decades. Most researchers have employed Pearson's r as a measure of congruence between two electoral outcomes across a set of geographic units. This paper argues that that practice should be abandoned. The correlation coefficient is almost always the wrong measure. The paper recommends other quantities that better accord with what researchers usually mean by electoral persistence. Replications of prior studies in American and comparative politics demonstrate that the consequences of using r when it is inappropriate can be stark. In some cases what we think are continuities are actually discontinuities.

Buying Votes with Public Funds in the US Presidential Election: Are Swing or Core Voters Easier to Buy Off?
Chen, Jowei

Uploaded 07-09-2008
Keywords distributive politics
Abstract In the aftermath of the summer 2004 Florida hurricane season, the Federal Emergency Management Agency (FEMA) distributed $1.2 billion in disaster aid among 2.6 million individual applications for assistance. This research measures the relative costs and benefits of using FEMA aid to buy votes from swing voters and core voters. First, I compare precinct-level vote counts and individual voter turnout records from the post-hurricane (November 2004) and pre-hurricane (2000 and 2002) elections to measure the effect of FEMA aid on Bush's vote share. Using a two-stage least squares estimator, with hurricane severity measures as instruments for FEMA aid, this analysis reveals that core Republican voters are most electorally responsive to FEMA aid -- $7,000 buys one additional vote for Bush. By contrast, in moderate precincts, each additional Bush vote costs $21,000, while voters in Democratic neighborhoods are unresponsive to receiving FEMA aid. Additionally, by tracking the geographic location of each aid recipient, the data reveal that FEMA favored applicants from Republican neighborhoods over those from Democratic or moderate neighborhoods, even conditioning on hurricane severity, average home values, and demographics. Collectively, these results demonstrate the Bush administration's disproportionate distribution of FEMA disaster aid toward core Republican areas was the optimal strategy for maximizing votes in the Presidential election.

What Can Be Learned from a Simple Table? Bayesian Inference and Sensitivity Analysis for Causal Effects from 2x2 and 2x2xK Tables in the Presence of Unmeasured Confounding
Quinn, Kevin

Uploaded 09-07-2008
Keywords causal inference
bayesian inference
sensitivity analysis
unmeasured confounding
Abstract What, if anything, should one infer about the causal effect of a binary treatment on a binary outcome from a $2 \times 2$ cross-tabulation of non-experimental data? Many researchers would answer ``nothing'' because of the likelihood of severe bias due to the lack of adjustment for key confounding variables. This paper shows that such a conclusion is unduly pessimistic. Because the complete data likelihood under arbitrary patterns of confounding factorizes in a particularly convenient way, it is possible to parameterize this general situation with four easily interpretable parameters. Subjective beliefs regarding these parameters are easily elicited and subjective statements of uncertainty become possible. This paper also develops a novel graphical display called the confounding plot that quickly and efficiently communicates all patterns of confounding that would leave a particular causal inference relatively unchanged.
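As a point of reference for the setup the abstract describes, the naive (confounding-ignoring) estimate from a 2x2 table is just a difference of proportions; the paper's contribution is characterizing how far the causal effect can stray from this quantity under unmeasured confounding. A minimal sketch with hypothetical counts:

```python
# Hypothetical 2x2 table: treatment status (rows) by binary outcome (columns).
# The crude risk difference computed below equals the causal effect only in
# the absence of confounding; the paper's four sensitivity parameters govern
# how large the gap between the two can be.
table = {
    ("treated", "event"): 30, ("treated", "no event"): 70,
    ("control", "event"): 10, ("control", "no event"): 90,
}

def risk(arm):
    events = table[(arm, "event")]
    return events / (events + table[(arm, "no event")])

risk_difference = risk("treated") - risk("control")
print(risk_difference)
```

Here the crude risk difference is 0.30 - 0.10 = 0.20; the sensitivity analysis asks what range of causal effects is compatible with this same table under plausible confounding.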

Prior distributions for Bayesian data analysis in political science
Gelman, Andrew

Uploaded 02-25-2009
Keywords Bayesian inference
hierarchical models
mixture models
prior information
Abstract Prior information is often what makes Bayesian inference work. In the political science examples of which we are aware, information needs to come in, whether as regression predictors or regularization (that is, prior distributions) on parameters. We illustrate with a few examples from our own research.

Causal Mediation Analysis in R
Imai, Kosuke
Keele, Luke
Tingley, Dustin
Yamamoto, Teppei

Uploaded 07-20-2009
Abstract Causal mediation analysis is widely used across many disciplines to investigate possible causal mechanisms. Such an analysis allows researchers to explore causal pathways, going beyond the estimation of simple causal effects. Recently, Imai, Keele and Yamamoto (2008) and Imai, Keele, and Tingley (2009) developed general algorithms to estimate causal mediation effects with the variety of data types that are often encountered in practice. The new algorithms can estimate causal mediation effects for linear and nonlinear relationships, with parametric and nonparametric models, with continuous and discrete mediators, and various types of outcome variables. In this paper, we show how to implement these algorithms in the statistical computing language R. Our easy-to-use software, mediation, takes advantage of the object-oriented programming nature of the R language and allows researchers to estimate causal mediation effects in a straightforward manner. Finally, mediation also implements sensitivity analyses which can be used to formally assess the robustness of findings to the potential violations of the key identifying assumption. After describing the basic structure of the software, we illustrate its use with several empirical examples.

Answer Key (Odd Numbers): Essential Mathematics for Political and Social Research
Gill, Jeff

Uploaded 05-02-2009
Keywords answer key
mathematics for political science
matrix algebra
probability and set theory
Markov chains
Abstract Hopefully this post is not too self-promotional. I get lots of requests for the answer key to ESSENTIAL MATHEMATICS FOR POLITICAL AND SOCIAL RESEARCH (Cambridge University Press 2006), which the press has restricted to teaching faculty. Recently, however, I have received permission to make generally available the answers worked out in detail for odd-numbered problems. This file provides these.

The “Unfriending” Problem: The Consequences of Homophily in Friendship Retention for Causal Estimates of Social Influence
Noel, Hans
Nyhan, Brendan

Uploaded 07-08-2010
Keywords peer effects
social networks
monte carlo
Abstract Christakis, Fowler, and their colleagues have recently published numerous articles estimating “contagion” effects in social networks. In response to concerns that their results are driven by homophily, Christakis and Fowler describe Monte Carlo results showing no evidence of homophily-induced bias in their statistical model’s estimates of peer effects. However, their simulations do not address the role of homophily in friendship retention, which may cause significant problems in longitudinal social network data. We investigate the effects of this mechanism using Monte Carlo simulations and demonstrate that homophily in friendship retention induces significant upward bias and decreased coverage levels in the Christakis and Fowler model if there is non-negligible attrition over time.

Statistical Inference for the Item Count Technique
Imai, Kosuke

Uploaded 07-19-2010
Keywords list experiments
sensitive questions
survey experiments
unmatched count technique
Abstract The item count technique is a survey methodology that is designed to elicit respondents' truthful answers to sensitive questions such as racial prejudice and drug use. The method is also known as the list experiment or the unmatched count technique and is an alternative to the commonly used randomized response method. In this paper, I propose new nonlinear least squares (NLS) and maximum likelihood (ML) estimators for a multivariate analysis with the item count technique. The two-step estimation procedure and the Expectation Maximization algorithm are developed to facilitate the computation. Enabling a multivariate statistical analysis is essential because the item count technique provides respondents with privacy at the expense of statistical efficiency. As an empirical illustration, the proposed methodology is applied to the 1991 National Race and Politics survey where the investigators used the item count technique to measure the degree of racial hatred in the United States. A small-scale simulation study suggests that the ML estimator can be substantially more efficient than the NLS estimator. The software package is made available to implement the proposed methodology.
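The NLS and ML estimators the paper proposes support multivariate analysis, but the basic design behind the item count technique reduces to a simple difference in means. A sketch with hypothetical counts (purely illustrative data, not from the 1991 survey):

```python
import numpy as np

# Item count (list) design: control respondents see J innocuous items,
# treatment respondents see the same J items plus the sensitive item, and
# everyone reports only how many items apply to them. The difference in mean
# counts estimates the population proportion holding the sensitive trait.
control = np.array([2, 1, 3, 2, 2, 1, 3, 2])    # counts from the J-item list
treatment = np.array([3, 2, 3, 2, 4, 2, 3, 3])  # counts from the (J+1)-item list

prevalence = treatment.mean() - control.mean()
print(prevalence)
```

Because respondents never reveal which items apply, privacy is preserved, but each answer carries the noise of the innocuous items as well; this is the efficiency cost that the paper's multivariate estimators are designed to mitigate.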

Geometric construction of voting methods that protect voters' first choices
Small, Alex

Uploaded 08-23-2010
Keywords Geometry
Gibbard-Satterthwaite Theorem
Election Methods
Ranked Voting
Abstract We consider the possibility of designing an election method that eliminates the incentives for a voter to rank any other candidate equal to or ahead of his or her sincere favorite. We refer to these methods as satisfying the ``Strong Favorite Betrayal Criterion" (SFBC). Methods satisfying our strategic criteria can be classified into four categories, according to their geometrical properties. We prove that two categories of methods are highly restricted and closely related to positional methods (point systems) that give equal points to a voter's first and second choices. The third category is tightly restricted, but if criteria are relaxed slightly a variety of interesting methods can be identified. Finally, we show that methods in the fourth category are largely irrelevant to public elections. Interestingly, most of these methods for satisfying the SFBC do so only ``weakly," in that these methods make no meaningful distinction between the first and second place on the ballot. However, when we relax our conditions and allow (but do not require) equal rankings for first place, a wider range of voting methods are possible, and these methods do indeed make meaningful distinctions between first and second place.

Copula Functions for Approval Ratings and Endogenous Political Events
Quiroz Flores, Alejandro

Uploaded 07-27-2011
Keywords copula functions
approval ratings
bivariate distribution
Abstract Empirical investigations of US presidential approval ratings often control for the exogenous materialization and duration of particular events such as wars or political scandals. Although the materialization of some of these events might be exogenous, their duration is not. Once an event takes place, a President has sufficient power to reduce its duration if the event reduces approval ratings, or increase its duration if the event is politically beneficial. In other words, the duration of these events is endogenous. In order to address this potential problem, this paper uses a copula function to jointly estimate presidential approval ratings (with a Normal distribution) and the duration of significant political events (with a Weibull distribution). The estimation of this bivariate Normal-Weibull distribution will help us test whether and to what extent American presidents manipulate political events in order to increase their approval ratings. Estimation results from the bivariate Normal-Weibull distribution suggest that there is a positive, two-way association between presidential approval ratings and the duration of political events.

Partisanship, Political Knowledge, and Changing Economic Conditions
Lawrence, Christopher

Uploaded 05-18-2012
Keywords political knowledge
party identification
hierarchical modeling
economic voting
public opinion
political sophistication
ANES 2008-09 Panel
Abstract Existing research is replete with evidence that individuals’ perceptions of the state of the economy are seemingly only loosely connected to more objective evaluations of its state and are contaminated by partisan influences. This paper provides further evidence of why these partisan influences come about, by advancing the hypothesis that citizen political knowledge moderates the effect of partisanship on economic evaluations, grounded in Zaller’s Receive-Accept-Sample model of opinion formation and articulation. The paper also advances the hypothesis that more knowledgeable partisans will respond to changes in elite messaging regarding the economy fairly rapidly after a change in control of the government. I examine these propositions using data from the ANES panel study of public opinion between January 2008 and June 2010, and find evidence affirming the essential interactive role of knowledge and partisanship in the formation and articulation of evaluations of the national economy.

Using Regression Discontinuity to Uncover the Personal Incumbency Advantage
Erikson, Robert S.
Titiunik, Rocio

Uploaded 07-17-2012
Keywords regression discontinuity
incumbency advantage
Abstract We study the conditions under which estimating the incumbency advantage using a regression discontinuity (RD) design recovers the personal incumbency advantage in a two-party system. Lee (2008) introduced RD as a method for estimating the party incumbency advantage. We develop a simple model that expands the interpretation of the RD design and leads to unbiased estimates of the personal incumbency advantage. Our model yields the surprising result that the RD design double counts the personal incumbency advantage. We estimate the incumbency advantage using our model with data from U.S. House elections between 1968 and 2008. We also explore the estimation of the incumbency advantage beyond the limited RD conditions where knife-edge electoral shifts create the leverage for causal inference.
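The basic RD logic behind this abstract can be sketched with a local-linear fit on either side of the 50% cutoff. This is only an illustration on simulated data, not the authors' expanded model: the bandwidth, sample size, and true jump of 8 points are all assumed:

```python
# Illustrative RD sketch on simulated two-party election data with a
# known discontinuity of 8 points at the 50% vote-share cutoff.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
margin = rng.uniform(-0.5, 0.5, n)       # vote margin at election t
treated = (margin > 0).astype(float)     # party wins the seat
# Next-election vote share with a true incumbency jump of 8 points (assumed)
y = 50 + 20 * margin + 8 * treated + rng.normal(0, 3, n)

h = 0.1                                  # bandwidth (assumed)
w = np.abs(margin) < h
# Local-linear fit with separate slopes on each side of the cutoff
X = np.column_stack([np.ones(w.sum()), treated[w],
                     margin[w], margin[w] * treated[w]])
beta, *_ = np.linalg.lstsq(X, y[w], rcond=None)
rd_estimate = beta[1]
print(round(rd_estimate, 1))             # should be close to the true jump of 8
```

The coefficient on the treatment indicator recovers the jump at the cutoff; the paper's contribution concerns how such a jump decomposes into party versus personal incumbency advantages.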

A Step in the Wrong Direction: An Appraisal of the Zero-Intelligence Model of Government Formation
Martin, Lanny
Vanberg, Georg

Uploaded 10-15-2013
Keywords government formation
zero-intelligence model
Abstract In a recent article in the Journal of Politics, Golder, Golder, and Siegel argue that models of government formation should be rebuilt "from the ground up." They propose to do so with a "zero-intelligence" model of government formation, which they claim makes no theoretical assumptions beyond the requirement that a potential government, to be chosen, must be preferred by all its members and a legislative majority to the incumbent administration. They also claim that, empirically, their model does significantly better than existing models in predicting formation outcomes. We disagree with both claims. Theoretically, their model is unrestrictive in terms of its institutional assumptions, but it imposes a highly implausible behavioral assumption that drives the key results. Empirically, their assessment of the performance of the zero-intelligence model turns on a misunderstanding of the relevant data for testing coalition theories. We demonstrate that the predictions of the zero-intelligence model are no more accurate than random guesses, in stark contrast to the predictions of well-established approaches in traditional coalition research. We conclude that scholars would be ill-advised to dismiss traditional approaches in favor of the approach advanced by Golder, Golder, and Siegel.

Has Democracy Reduced Inequalities in Child Mortality? An Analysis of 5 Million Births from 50 Developing Countries since 1970
Ramos, Antonio Pedro

Uploaded 07-28-2014
Keywords Child Mortality
Longitudinal Models
Abstract This paper offers the first large-scale analysis of the effects of democratization on the within-country, rich-poor gap in child mortality across the developing world. Using a unique data set with more than 5 million birth records from 50 middle- and low-income countries, this study is the first to test whether those at the bottom of the income distribution benefit more from democratic transitions than those at the top. Contrary to the widespread belief that democratic transitions helped the poor, most evidently by reducing child mortality, the evidence shows that this is not the case. Although the mortality gap between rich and poor is decreasing over time, this change is not driven by regime type. However, there is remarkable heterogeneity in the effects of democratization on health that deserves further investigation.

Cuing and Coordination in American Elections
Mebane, Walter R.

Uploaded 07-16-2004
Keywords evolutionary game
political behavior
strategic coordination
policy moderation
Abstract I use evolutionary game models based on pure imitation to reexamine recent findings that strategic coordination characterizes the American electorate. Imitation means that voters who are dissatisfied with their strategy adopt the strategy of the first voter they encounter who is similar to them. In the replicator dynamics that such imitation implies, everyone ultimately uses the coordinating strategy, but I study what happens over time spans that are relevant for voters. I consider three evolutionary models, including two that involve partisan cuing. Simulations using National Election Studies data from presidential years 1976-96 suggest that many voters use an unconditional strategy, usually a strategy of voting a straight ticket matching their party identification. I then estimate a choice model that incorporates an approximation to the evolutionary dynamics. The results support partisan cuing and confirm that most voters vote unconditionally. The estimates also support previous findings regarding policy moderation and institutional balancing.
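The abstract's tension between the long-run limit of replicator dynamics and politically relevant horizons can be sketched with assumed payoffs (not the paper's estimated model): coordinating has a small payoff edge over an unconditional straight ticket, so it eventually takes over, but spreads slowly when measured in election cycles:

```python
# Discrete-time replicator dynamics for two voter strategies:
# "coordinate" vs. unconditional straight-ticket voting.
# Payoff values and the initial share of coordinators are assumed.
import numpy as np

A = np.array([[1.0, 0.7],    # coordinate: payoff vs. (coord, straight)
              [0.6, 0.6]])   # straight ticket: pays 0.6 regardless

x = np.array([0.05, 0.95])   # initial shares: 5% coordinators
shares = [x[0]]
for t in range(50):          # each step ~ one election cycle
    fitness = A @ x
    x = x * fitness / (x @ fitness)   # replicator update
    shares.append(x[0])

print(f"after 10 cycles: {shares[10]:.2f}, after 50: {shares[50]:.2f}")
```

In this toy dynamic, coordination dominates in the limit, yet after ten cycles most simulated voters still play the unconditional strategy, which is the finite-horizon behavior the abstract emphasizes.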
