
Search Results


Results below are based on the search criterion 'Benford'
Total number of records returned: 911

1
Paper
Parameterization and Bayesian Modeling
Gelman, Andrew

Uploaded 06-15-2004
Keywords censored data
data augmentation
Gibbs sampler
hierarchical model
missing data imputation
parameter expansion
prior distribution
truncated data
Abstract Progress in statistical computation often leads to advances in statistical modeling. For example, it is surprisingly common that an existing model is reparameterized, solely for computational purposes, but then this new configuration motivates a new family of models that is useful in applied statistics. One reason this phenomenon may not have been noticed in statistics is that reparameterizations do not change the likelihood. In a Bayesian framework, however, a transformation of parameters typically suggests a new family of prior distributions. We discuss examples in censored and truncated data, mixture modeling, multivariate imputation, stochastic processes, and multilevel models.
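
A standard example of the pattern the abstract describes, sketched here for concreteness (our notation, not necessarily the paper's): in a hierarchical model with group effects $\alpha_j \sim N(0, \sigma_\alpha^2)$, parameter expansion rewrites

\[
\alpha_j = \xi \eta_j, \qquad \eta_j \sim N(0, \sigma_\eta^2),
\]

purely to speed the Gibbs sampler. But placing separate priors on $\xi$ and $\sigma_\eta$ implicitly induces a new, richer family of priors on the original scale $|\xi|\sigma_\eta$ -- a computational trick that suggests a new class of models.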

2
Paper
Identification of Multidimensional Spatial Voting
Rivers, Doug

Uploaded 07-15-2003
Keywords Identification
Multidimensional Spatial Models
Abstract No abstract provided.

3
Paper
The Statistical Analysis of Roll Call Data
Clinton, Joshua
Jackman, Simon
Rivers, Doug

Uploaded 05-07-2003
Keywords spatial voting model
item response theory
roll call voting
Bayesian simulation
Abstract We develop a Bayesian procedure for estimation and inference for spatial models of roll call voting. Our approach is extremely flexible, applicable to any legislative setting, irrespective of size, the extremism of legislative voting histories, or the number of roll calls available for analysis. Our model is easily extended to let other sources of information inform the analysis of roll call data, such as the number and nature of the underlying dimensions, the presence of party whipping, the determinants of legislator preferences, or the evolution of the legislative agenda; this is especially helpful since it is generally inappropriate to use estimates of extant methods (usually generated under assumptions of sincere voting) to test models embodying alternative assumptions (e.g., log-rolling). A Bayesian approach also provides a coherent framework for estimation and inference with roll call data that eludes extant methods; moreover, via Bayesian simulation methods, it is straightforward to generate uncertainty assessments or hypothesis tests concerning any auxiliary quantity of interest or to formally compare models. In a series of examples we show how our method is easily extended to accommodate theoretically interesting models of legislative behavior. Our goal is to move roll call analysis away from pure measurement or description towards a tool for testing substantive theories of legislative behavior.
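
For reference, the statistical core of this kind of framework is a two-parameter item response model: under quadratic utility and normal errors, the probability that legislator $i$ votes yea on roll call $j$ takes the form

\[
\Pr(y_{ij} = 1) = \Phi(\beta_j' x_i - \alpha_j),
\]

where $x_i$ is the legislator's ideal point, $\beta_j$ the roll call's discrimination parameter, and $\alpha_j$ its difficulty; Bayesian simulation then delivers posterior draws for all of these quantities at once. (A schematic rendering of the setup the abstract describes; the paper's own notation may differ.)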

4
Paper
Estimation of Evolutionary Processes
Honaker, James

Uploaded 07-18-2002
Keywords evolution
replicator
dynamics
compositional
ECM
Abstract Evolutionary game theory has accumulated an enormous body of theoretical work and even some proposed substantive applications. However, current empirical work has shown little evidence of evolutionary models matching field or experimental data. We argue that this is in part because estimation has been of overly restrictive models that make unwarranted assumptions either on the matrix of fitnesses in the evolutionary game or, more often, on the rule mapping the selection process. These heavy assumptions facilitate easy estimation procedures, but cripple the ability of evolutionary models to describe the data and of the researcher to reveal from the data the true quantities of interest in the evolutionary model. We demonstrate an EM-based algorithm capable of estimating both the matrix of fitnesses and the selection mechanism, and apply this to experimental data. We show that the evolutionary model fits the experimental data progressively better as the assumptions of the evolutionary model are incorporated into the experiment. We also show that the model we propose can be used as a flexible estimator for deducing flows over compositional variables across time, and compare it to the more typical compositional model of Aitchison (1986).

5
Paper
Turnout Effects on the Composition of the Electorate: A Multinomial Logit Simulation of the 2000 Presidential Election
Martinez, Michael

Uploaded 03-18-2002
Keywords turnout
multinomial logit
simulation
Abstract Conventional wisdom among pundits and some scholars posits that higher turnout should benefit liberal parties, since lower socioeconomic classes comprise a disproportionate share of the nonvoting population. Empirical tests of this prediction across elections have produced a wide variety of results, ranging from support for the conventional wisdom to suggestions that Republicans benefit from higher turnout to null findings. In this paper, we provide a simulation of the possible impact of increasing or decreasing turnout in a single election. Using data from the 2000 American National Election Study, we find that Gore would have benefitted slightly from higher turnout and would have been harmed slightly by lower turnout, but the overall magnitude of the effects of turnout on Gore's share of the two-party vote is small. At higher levels of turnout, Democrats comprise a larger share of the electorate, but they also have a higher defection rate.

6
Paper
Random Coefficient Models for Time-Series--Cross-Section Data: The 2001 Version
Beck, Nathaniel
Katz, Jonathan

Uploaded 07-17-2001
Keywords random coefficients
generalized least squares
empirical Bayesian
Stein-rule
TSCS
Abstract This paper considers random coefficient models (RCMs) for time-series--cross-section data. These models allow for unit-to-unit variation in the model parameters. After laying out the various models, we assess several issues in specifying RCMs. We then consider the finite sample properties of some standard RCM estimators, and show that the most common one, associated with Hsiao, has very poor properties. These analyses also show that a somewhat awkward combination of estimators based on Swamy's work performs reasonably well; this awkward estimator and a Bayes estimator with an uninformative prior (due to Smith) seem to perform best. But we also see that estimators which assume full pooling perform well unless there is a large degree of unit-to-unit parameter heterogeneity. We also argue that the various data-driven methods (whether classical, empirical Bayes, or Bayes with gentle priors) tend to lead to much more heterogeneity than most political scientists would like. We speculate that fully Bayesian models, with a variety of informative priors, may be the best way to approach RCMs.

7
Paper
Automated Coding of International Event Data Using Sparse Parsing Techniques
Schrodt, Philip A.

Uploaded 06-28-2001
Keywords event data
natural language processing
conflict
content analysis
open source
Abstract "Event data" record the interactions of political actors reported in sources such as newspapers and news services; this type of data is widely used in research in international relations. Over the past ten years, there has been a shift from coding event data by humans -- typically university students -- to using computerized coding. The automated methods are dramatically faster, enabling data sets to be coded in real time, and provide far greater transparency and consistency than human coding. This paper reviews the experience of the Kansas Event Data System (KEDS) project in developing automated coding using "sparse parsing" machine coding methods, discusses a number of design decisions that were made in creating the program, and assesses features that would improve the effectiveness of these programs.

8
Paper
Aggregation and Dynamics of Survey Responses: The Case of Presidential Approval
Alvarez, R. Michael
Katz, Jonathan

Uploaded 10-01-2001
Keywords presidential approval
integration
time-series
fractional integration
surveys
Abstract In this paper we critique much of the empirical literature on the important political science concept of presidential approval. We first argue that dynamics attributed to the aggregate presidential approval series are often logically inconsistent and always substantively implausible. In particular, we show that there is no way for a bounded series, such as the approval series, to be integrated. However, even non-integrated models often lead to implausible substantive findings due to aggregation both across Presidential administrations and from models of individual-level behavior to aggregate survey marginals. We argue that using individual-level survey responses is superior for methodological and theoretical reasons, and we provide an example of such an analysis using Gallup Organization survey data.
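
The impossibility claim about bounded integrated series follows from a one-line variance argument (our illustration of the point): an integrated series such as the random walk

\[
y_t = y_{t-1} + \epsilon_t, \qquad \mathrm{Var}(y_t \mid y_0) = t\,\sigma_\epsilon^2,
\]

has variance that grows without bound in $t$, which no series confined to the interval [0, 100], like an approval percentage, can exhibit.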

9
Paper
A Note Relating Ideal Point Estimates to the Spatial Model
Clinton, Joshua
Meirowitz, Adam

Uploaded 07-18-2000
Keywords ideal points
preference estimation
NOMINATE
spatial model
Abstract Existing preference estimators do not incorporate the full structure of the spatial model. Specifically, they fail to use the sequential nature of the agenda by not constraining the nay location of a bill to be the yea location of the last successful policy. The consequences of this omission may be far-reaching. Not only is information useful for the identification of the model neglected, but more seriously, the dimensionality of the policy space may be incorrectly estimated, and preference and bill location estimates are uninterpretable in terms of the spatial model. We show that under very general assumptions, ML estimates of ideal points that do not constrain the nay locations will differ from ML estimates that constrain the nay locations -- a difference that does not vanish as the number of votes goes to infinity. Additionally, unconstrained models underestimate the true dimensionality of the policy space. We derive a Maximum Likelihood estimator of legislative preferences and bill locations that shares basic assumptions with the spatial model of voting.

10
Paper
Flexible Prior Specifications for Factor Analytic Models with an Application to the Measurement of American Political Ideology
Quinn, Kevin M.

Uploaded 04-20-2000
Keywords factor analysis
intrinsic autoregression
hierarchical modeling
Bayesian inference
political ideology
Abstract Factor analytic measurement models are widely used in the social sciences to measure latent variables and functions thereof. Examples include the measurement of: political preferences, liberal democracy, latent determinants of exchange rates, and latent factors in arbitrage pricing theory models and the corresponding pricing deviations. Oftentimes, the results of these measurement models are sensitive to distributional assumptions that are made regarding the latent factors. In this paper I demonstrate how prior distributions commonly used in image processing and spatial statistics provide a flexible means to model dependencies among the latent factor scores that cannot be easily captured with standard prior distributions that treat the factor scores as (conditionally) exchangeable. Markov chain Monte Carlo techniques are used to fit the resulting models. These modeling techniques are illustrated with a simulated data example and an analysis of American political attitudes drawn from the 1996 American National Election Study.

11
Paper
Two-Stage Estimation of Non-Recursive Choice Models
Alvarez, R. Michael
Glasgow, Garrett

Uploaded 03-02-1999
Keywords two-stage estimation
probit
endogeneity
two-stage probit least squares
two-stage conditional maximum likelihood
Abstract Questions of causation are important issues in empirical research on political behavior. Most of the discussion of the econometric problems associated with multi-equation models with reciprocal causation has focused on models with continuous dependent variables (e.g. Markus and Converse 1979; Page and Jones 1979). Yet many models of political behavior involve discrete or dichotomous dependent variables; this paper describes two techniques which can consistently estimate reciprocal relationships between dichotomous and continuous dependent variables. The first, two-stage probit least squares (2SPLS), is very similar to two-stage instrumental variable techniques. The second, two-stage conditional maximum likelihood (2SCML), may overcome problems associated with 2SPLS, but has not been used in the political science literature. First, we demonstrate the potential pitfalls of ignoring the problems of reciprocal causation in non-recursive choice models. Then, we show the properties of both techniques using Monte Carlo simulations: both the two-stage models perform well in large samples, but in small samples the 2SPLS model has superior statistical properties. However, the 2SCML model offers an explicit statistical test for endogeneity. Last, we apply these techniques to an empirical example which focuses on the relationship between voter preferences in a presidential election and the voter's uncertainty about the policy positions taken by the candidates. This example demonstrates the importance of these techniques for political science research.
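
A minimal simulated sketch of the 2SPLS logic described above (the variable names and data-generating process are our own illustration, not the paper's):

```python
# Two-stage probit least squares (2SPLS), minimal simulated sketch.
# Stage 1: regress the endogenous continuous variable on the instrument (OLS).
# Stage 2: probit of the binary outcome on the stage-1 fitted values.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                      # instrument
u = rng.normal(size=n)                      # shared disturbance -> endogeneity
x = 0.8 * z + u + rng.normal(size=n)        # endogenous continuous regressor
y_star = 1.0 * x - 0.5 * u + rng.normal(size=n)
y = (y_star > 0).astype(int)                # binary outcome

# Stage 1: OLS of x on the instrument
stage1 = sm.OLS(x, sm.add_constant(z)).fit()
x_hat = stage1.fittedvalues

# Stage 2: probit of y on fitted values, purged of the endogenous component
stage2 = sm.Probit(y, sm.add_constant(x_hat)).fit(disp=0)
print(stage2.params)   # slope recovers the structural effect up to probit scale
```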

12
Paper
Heterogeneity in the Impact of Issues on Vote Choice
Glasgow, Garrett

Uploaded 04-18-1999
Keywords random parameters logit
heterogeneity
issue salience
Abstract There is a great deal of diversity in the issues that members of the American electorate are concerned with. It seems logical that these different concerns will lead voters to evaluate political candidates in different ways when voting. Unfortunately, the models currently employed by political scientists ignore the possibility of heterogeneity in the weights that individuals place on issues when voting. In order to create a tractable model of vote choice, most researchers assume that the weights placed on issues are homogeneous across voters. Estimating such a model tells us if an issue was salient to the electorate on average, but gives us no information about heterogeneity in the use of the issue. Allowing for heterogeneity in issue weights allows for a much more complete picture of the impact of issues on vote choice. I assume that issue weights are distributed among voters by some known probability distribution, and estimate the parameters of that distribution. This assumption leads to random parameters logit. I present the results of a random parameters logit model for the 1996 presidential election, and compare these results to those from a conditional logit model with the homogeneity assumption. I show that random parameters logit contains all of the information that models that assume homogeneity do, and I uncover evidence of heterogeneity in the weights placed on issues by voters.

13
Paper
The MAP-B Program with Macro and Micro Applications
Hinich, Melvin J.
Ghobarah, Hazen

Uploaded 07-11-1999
Keywords spatial maps
maximum likelihood
resampling
Abstract We present and apply a general methodology for obtaining a multi-dimensional map of the 'political space' in a given country based on ratings by members of the public of a set of candidates or parties. This methodology produces spatial ideology maps of the stimuli (candidates or parties) and the distribution of ideal points of the public. Earlier versions of this methodology have been applied to American presidential elections (e.g. Enelow and Hinich, 1984:169-216), and to elections in Taiwan (Lin, Chu and Hinich, in World Politics, 1996), Germany (Pappi and Eckstein, in Public Choice, 1998), Ukraine (Hinich, Khmelko and Ordeshook, in Post-Soviet Affairs, 1999), and Russia (Myagkov and Ordeshook, in Public Choice, 1998), among others. This methodology allows for the specification of a valence dimension to capture non-policy characteristics of parties and candidates. The new version (MAP-B) is capable of recovering the distribution of the estimates of the candidate/party positions in the ideological space through a modified resampling technique. Moreover, it is available as a standalone user-friendly program for the PC/Windows platform.

14
Paper
The `Turnout Twist' in Japanese Elections
Horiuchi, Yusaku

Uploaded 09-07-1999
Keywords voter turnout
Japanese elections
local elections
multiple imputation
random-effect model
simulation
Abstract In the United States, as well as in most other democracies, national elections usually attract more votes than local elections. In Japan, they attract more votes in large municipalities but fewer votes in small municipalities. This paper attempts to explain this puzzling turnout pattern, which is defined as the ``turnout twist''. The random-effect model estimation and the post-estimation simulation find that the most important variable explaining the turnout twist is the voting-age population per seat. The simulation analysis shows that if this variable did not have any significant effect, national elections would attract more votes than local elections in all municipalities. Since this variable itself and its effect on turnout are largely determined by the disproportional apportionment of seats in both national and local elections, the restrictive regulations on mobilizational activities, and the minimal roles played by political parties in mobilizing votes under the multimember constituency system, the paper concludes that the puzzling turnout twist observed in Japanese elections is a product of Japan's unique institutional arrangements.

15
Paper
A Comparison of Methods for the Analysis of Time Allocation: Donut Shops, Speed Traps, and Paperwork
Brehm, John
Gates, Scott
Gomez, Brad

Uploaded 03-05-1998
Keywords additive logistic
allocation of time
compliance
Dirichlet
Liouville
Abstract Supervisors in public bureaucracies serve a variety of roles, only a few of which have been subject to systematic academic scrutiny. In prior work (Brehm and Gates 1997), we dispense with the coercive capacities of supervisors, but this leaves other supervisory functions as potential levers on subordinate compliance. In this paper, we consider the capacity of supervisors as coordinators of subordinate tasks. We first develop propositions from our previously published formal models, and then test these propositions using data from the 1977 Police Services Study (Ostrom, Parks, and Whittaker 1988). We explore the allocation of time by bureaucratic subordinates using a method for compositional data analysis, obtaining maximum likelihood estimates of Dirichlet distributions. We evaluate the propositions using computer simulations of the predicted distributions and conclude that, although supervisors have some capacity to influence subordinate allocation of time across tasks, the major influences stem from inter-subordinate contact and subordinate preferences.

16
Paper
A Theory of Nonseparable Preferences in Survey Responses (Revised, with New Evidence)
Lacy, Dean

Uploaded 04-20-1998
Keywords surveys
nonseparable preferences
question-order effects
temporal instability
Abstract This paper presents two models of individual-level responses to issue questions in public opinion surveys when respondents have nonseparable preferences. Both models imply that even when survey respondents have fixed preferences, their responses will change depending on the order of questions, and responses may vary over time. Results from two survey experiments reveal that question-order effects occur on issues for which people have nonseparable preferences, and order effects do not occur on issues for which most people have separable preferences.

17
Paper
Time to Give: PAC Motivations and Electoral Timing
McCarty, Nolan
Rothenberg, Lawrence S.

Uploaded 07-09-1998
Keywords Interest Groups
Campaign Finance
Tobit
GHK Simulation
Abstract There has been much discussion about how members of Congress desire money early in the campaign season. However, to date, models of how contributions are allocated during the electoral cycle have been lacking. Our analysis attempts to remedy this gap by providing and testing a model of the process by which bargaining between members of Congress and organized interests produces the pattern of donations observed over the course of the electoral cycle. The results suggest that strategic incumbents can receive money early in the campaign if they desire but that they are generally unwilling to pay the price of lower aggregate fundraising and greater provision of access. These findings, in turn, buttress earlier empirical findings that question the instrumental value of early money; in particular, they imply that challengers have reasonably rational and informed expectations about how much money members of Congress are capable of raising over the electoral cycle and that the value of stockpiling money early is not sufficiently high to induce reelection-seeking incumbents to lower their access price significantly.

18
Paper
Airline Deregulation: A Financial Markets Perspective on Who Mattered When
Hayes, Jeffery W.

Uploaded 08-17-1998
Keywords deregulation
Abstract Deregulation has been one of the most important developments in the American administrative state during the post-war era. According to the existing literature, a host of factors played a role in the airline reform process specifically. Almost four decades after the deregulation movement began, the task is to re-examine the important developments and parse the more robust explanations. To what extent did different political, bureaucratic, judicial, and industry actors contribute to airline deregulation during the 1970s? A thorough analysis of this phenomenon entails tests of numerous related theories, including the influence of intellectual ideas, the contagion effects of other deregulation initiatives, and the intriguing but untested ``deregulatory snowball'' theory which posits a specific structural form for the reform process over time. Empirical obstacles, however, have prevented the resolution of many of the provocative research issues surrounding deregulation. Financial market data are an ideal resource to fill this lacuna since reform had significant and negative economic ramifications for the regulated air carriers. A financial markets perspective on regulatory reform yields illuminating results. To begin with, a broad range of institutions contributed to the likelihood of airline deregulation. Congressional influence was especially important. On the other hand, the impact of the CAB was generally more limited than many analysts have believed. Interestingly, the influence of Congress and the CAB were inversely related over time, suggesting the agency's limited role solely as an initial policy entrepreneur while Congress' importance grew as statutory deregulation became the strategic choice of reformers. Additionally, we find evidence that both the politics of ideas and legislative contagion were relevant factors in the reform debate. On the other hand, the deregulatory snowball hypothesis is clearly rejected for the case of the airlines. These findings add a new layer of understanding to the literature on political control of regulatory policy.

19
Paper
The Robustness of Normal-theory LISREL Models: Tests Using a New Optimizer, the Bootstrap, and Sampling Experiments, with Applications
Mebane, Walter R.
Sekhon, Jasjeet
Wells, Martin T.

Uploaded 01-01-1995
Keywords statistics
estimation
covariance structures
linear structural relations
LISREL
bootstrap
confidence intervals
BCa
specification tests
goodness-of-fit
hypothesis tests
optimization
evolutionary programming
genetic algorithms
monte carlo
sampling experiment
Abstract Asymptotic results from theoretical statistics show that the linear structural relations (LISREL) covariance structure model is robust to many kinds of departures from multivariate normality in the observed data. But close examination of the statistical theory suggests that the kinds of hypotheses about alternative models that are most often of interest in political science research are not covered by the nice robustness results. The typical size of political science data samples also raises questions about the applicability of the asymptotic normal theory. We present results from a Monte Carlo sampling experiment and from analysis of two real data sets both to illustrate the robustness results and to demonstrate how it is unwise to rely on them in substantive political science research. We propose new methods using the bootstrap to assess more accurately the distributions of parameter estimates and test statistics for the LISREL model. To implement the bootstrap we use optimization software two of us have developed, incorporating the quasi-Newton BFGS method in an evolutionary programming algorithm. We describe methods for drawing inferences about LISREL models that are much more reliable than the asymptotic normal-theory techniques. The methods we propose are implemented using the new software we have developed. Our bootstrap and optimization methods allow model assessment and model selection to use well understood statistical principles such as classical hypothesis testing.

20
Paper
Campaign Advertising and Candidate Strategy
Alvarez, R. Michael
Roberts, Reginald

Uploaded 00-00-0000
Keywords Campaigns
television advertising
campaign strategy
negative advertising
voter learning
senate elections
gubernatorial elections
Abstract Here we present a series of preliminary analyses of data collected during the final eight weeks of two state-wide campaigns in California during 1994: the races for governor and Senate. These campaigns were hard-fought that year, and provide an interesting laboratory in which to study intense campaigns over time, and to compare advertising strategies between races. We begin by presenting data from the television advertisements of these two races. Our database of television advertisements from the last eight weeks of these races gives us a unique opportunity to understand and to examine the strategies used by each campaign as they tried to get their messages through to the electorate. Then we turn to the politically relevant question --- did these advertisements matter? Did the messages the candidates sent through their television advertisements influence the electorate? To answer these questions we use two sets of polling data from this election to see if these television advertisements effectively got the messages of each candidate across to the intended audience. We conclude with a discussion of our results, and with an agenda for future research.

21
Paper
Methodology for Estimating the Impact of Partisan Competition on the Economy
Herron, Michael C.

Uploaded 07-08-1996
Keywords political economy
United Kingdom
derivative security
Abstract This paper develops and applies a methodology designed to pinpoint the relation between government partisanship and national-level economic variables. In particular, we consider the British polity since the early 1980s and estimate the economic impact of the Conservative Party's dominance of government since 1983. Furthermore, we consider the counterfactual problem of assessing the economic consequences of a Labour win in any of the three most recent British elections. The analysis conducted in the paper creates snapshots of government partisanship effects at each national election. Thus, the paper is able not only to determine if changes in the partisan nature of British government were reflected in economic variables but also to consider how the relation between the partisanship of the British government and the British economy has varied over time. The ability to allow for time-based fluctuations in this relation represents an advance within the existing empirical literature on the subject of government partisanship effects. Indeed, contemporary empirical research on government partisanship and its economic consequences is entirely silent on the possibility of temporal variation in the relation between the two. The methodology employed in this paper is based on election campaign-period movements of prices of publicly-traded financial and equity derivative securities. The dataset of such securities is exceedingly rich; compared to the extant literature on government partisanship and economics, the dataset in conjunction with the models developed here allows for greater precision in estimating the impact of partisan changes in government. Overall, the tools presented augment the scholarly understanding of the implications of government partisanship.

22
Paper
The Selection Effect of International Dispute Settlement Institutions
Reinhardt, Eric

Uploaded 11-11-1996
Keywords compliance
enforcement
dispute settlement
institution
bargaining
game theory
Abstract This paper examines the impact of dispute settlement institutions on the outcome of international conflicts. Realists contend that such institutions are epiphenomenal to underlying power relationships. Neoliberals argue in contrast that institutions make cooperation more likely by clarifying obligations and reducing transaction costs. The paper introduces some puzzling evidence about the role of the dispute process under the General Agreement on Tariffs and Trade (GATT). The evidence highlights a selection effect, in which cooperation is more likely at earlier stages of institutional escalation than after the adjudication is complete. Yet why would defendants plea bargain if they know they can spurn contrary rulings? To address this question, the paper introduces an incomplete information model of international bargaining and escalation within the context of a dispute settlement institution. The model generates a number of surprising and powerful results. First, even defendants who do not fear unfavorable rulings will be more likely to plea bargain in equilibrium because of the dispute settlement institution. Second, those disputes that reach the highest levels of escalation---in which rulings are issued---are much less likely to end cooperatively than those that end before the ruling stage. The model thus explains the puzzling GATT selection effect. It also suggests that dispute settlement institutions can have a positive effect on cooperation (contra realist theory), but not through the mechanisms posited by neoliberals. In order to see the influence of such institutions, we must examine not those cases in which they issue injunctions, but rather those in which their involvement is peripheral or merely threatened.

23
Paper
The Coalition-oriented Evolution of Vote Intentions across Regions and Levels of Political Awareness during the 1993 Canadian Election Campaign: Quotidian Markov Chain Models using Rolling Cross-section Data
Wand, Jonathan
Mebane, Walter R.

Uploaded 08-28-1997
Keywords Markov chains
rolling cross-section data
macro data
categorical data
survey data
Canadian politics
strategic voting
coalitions
estimation
Abstract We use survey data collected in Ontario and Quebec during the 1993 Canadian federal election to assess the extent to which voters were sensitive to the distribution of positions in special institutions that would possibly be created to handle negotiations between Quebec and the rest of Canada following a referendum on Quebec sovereignty expected after the election. We draw on a theory of coalition-oriented voting developed by Austen-Smith and Banks (1988) to argue that voters' anticipations regarding those institutions contributed to the catastrophic losses suffered by the Progressive Conservative party. We use a method we have developed for estimating discrete, finite-state Markov chain models from ``macro'' data to analyze the dynamics of individual choice probabilities in daily rolling cross-sectional survey data from the 1993 Canadian Election Study. We allow each transition matrix to be updated as a function of daily vote support for either the Bloc or Reform to test for reactive coalition-oriented voting. We find significant reactive voting among Quebecois non-sovereigntists. The timing of these reactions depended on the individual's level of political awareness. In contrast, we find no evidence of reactive voting among either Quebecois sovereigntists or Ontario voters.
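
In schematic form, the estimation problem is to recover daily transition matrices from aggregate shares (our rendering, not the authors' exact specification): with $p_t$ the vector of choice-category proportions on day $t$ and $P_t$ a row-stochastic transition matrix,

\[
p_t' = p_{t-1}' P_t,
\]

where the entries of $P_t$ are parameterized as functions of covariates -- here, daily Bloc or Reform support -- so that reactive coalition-oriented voting appears as covariate effects on the transition probabilities.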

24
Paper
Conflict, Information, and Lobbying Coalitions
Esterling, Kevin M.

Uploaded 08-18-1997
Keywords Policy Alliances
Organizational Deliberation
Nested Logit
Abstract This paper explains lobbying organizations' choice to join alliances on policy matters with respect to 1) the degree of the organization's access to external information sources, and 2) the amount of internal organizational conflict and deliberation. An informational view of lobbying suggests that the more informed an organizational actor is, the more likely it will gain access to governmental decision makers; and greater access to the government will decrease the utility of joining a cooperative lobbying effort. In addition, internal conflict in the definition of a policy position will limit an organization's ability to take any position on a policy issue, while successful internal deliberation will augment a lobbying organization's ability to find cooperation partners. Outcome and explanatory data are taken from an existing dataset housed at ICPSR. Nested logit maximum likelihood estimates for the trichotomous-choice cooperation model are presented and interpreted. Support is lent to both the internal conflict and the informational theories of cooperation in policy lobbying. In particular, the model results suggest that organizations predisposed to internal conflict find both non-policy lobbying and cooperative lobbying appealing, suggesting that these organizations only sometimes successfully deliberate over policy. And consistent with the information view of lobbying, greater access to information sharply decreases the utility of lobbying cooperatively.

25
Paper
Too Many Variables? A Comment on Bartels' Model-Averaging Proposal
Erikson, Robert S.
Wright, Gerald C.
McIver, John P.

Uploaded 07-18-1997
Keywords Bayes Factor
Bayesian Information Criterion
Bayesian statistics
model averaging
model specification
specification uncertainty
Bartels
Abstract Bartels (1997) popularizes the procedure of model-averaging (Raftery, 1995, 1997), making some important innovations of his own along the way. He offers his methodology as a technology for exposing excessive specification searches in other people's research. As a demonstration project, Bartels applied his version of model-averaging to a portion of our work on state policy and purports to detect evidence of considerable model uncertainty. In response, we argue that Bartels' extensions of model-averaging methodology are ill-advised, and show that our challenged findings hold up under the scrutiny of the original Raftery-type model averaging.
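
For readers new to this exchange, Raftery-type model averaging reduces to weighting each candidate model $M_k$ (with $d_k$ parameters and maximized likelihood $\hat{L}_k$) by an approximate posterior probability via the Bayesian Information Criterion -- standard formulas, stated here for reference:

\[
\mathrm{BIC}_k = d_k \ln n - 2 \ln \hat{L}_k, \qquad \Pr(M_k \mid D) \approx \frac{\exp(-\mathrm{BIC}_k / 2)}{\sum_m \exp(-\mathrm{BIC}_m / 2)},
\]

with model-averaged coefficients formed as the weighted average of the within-model estimates.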

26
Paper
A Statistical Model for Multiparty Electoral Data
Katz, Jonathan
King, Gary

Uploaded 04-08-1997
Keywords multiparty elections
compositional data
multivariate-t
Abstract This paper proposes an internally consistent and comprehensive statistical model for analyzing multiparty, district-level aggregate election data. This model can be used to explain or predict how the geographic distribution of electoral results depends upon economic conditions, neighborhood ethnic compositions, campaign spending, and other features of the election campaign or characteristics of the aggregate areas. We also provide several new graphical representations for help in data exploration, model evaluation, and substantive interpretation. The model is more general, but we apply it to resolve a controversy over the size of and trend in the electoral advantage of incumbency in Great Britain.
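
Concretely, models of this kind typically work on log-ratio-transformed vote shares (a sketch of the standard construction; the paper's exact parameterization may differ): with $v_{ij}$ the vote share of party $j$ in district $i$ among $J$ parties,

\[
y_{ij} = \ln\left(v_{ij} / v_{iJ}\right), \qquad j = 1, \dots, J-1,
\]

and the $(J-1)$-vector $y_i$ is modeled with a multivariate-t distribution whose location depends on district-level covariates, which guarantees predicted shares that are positive and sum to one.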

27
Paper
Social Capital, Government Performance, and the Dynamics of
Keele, Luke

Uploaded 10-14-2004
Keywords trust in government
social capital
time series
error correction models
Abstract In extant research on trust in government, a tension has developed between whether the movement of trust over time is a function of political performance or of political alienation. In performance-based explanations, trust responds to the economy, Congress, and the President, and should move frequently over time. Under theories of political alienation, government performance matters little, as hostility toward both political leaders and the political process causes distrust of government and is a direct threat to government legitimacy. Using aggregate data in a time series analysis of trust in government, I find that both political alienation, as measured by social capital, and performance have important but differing effects on trust. Government performance has an immediate effect on trust, while movement in social capital sets the long-term level of trust. The qualitative outcome is that trust embodies both performance and political alienation and is an important indicator of citizen satisfaction with government.

28
Paper
Attributing Effects to A Cluster Randomized Get-Out-The-Vote Campaign: An Application of Randomization Inference Using Full Matching
Bowers, Jake
Hansen, Ben

Uploaded 07-18-2005
Keywords causal inference
randomization inference
attributable effects
full matching
instrumental variables
missing data
field experiments
clustering
Abstract Statistical analysis requires a probability model: commonly, a model for the dependence of outcomes $Y$ on confounders $X$ and a potentially causal variable $Z$. When the goal of the analysis is to infer $Z$'s effects on $Y$, this requirement introduces an element of circularity: in order to decide how $Z$ affects $Y$, the analyst first determines, speculatively, the manner of $Y$'s dependence on $Z$ and other variables. This paper takes a statistical perspective that avoids such circles, permitting analysis of $Z$'s effects on $Y$ even as the statistician remains entirely agnostic about the conditional distribution of $Y$ given $X$ and $Z$, or perhaps even denies that such a distribution exists. Our assumptions instead pertain to the conditional distribution $Z \vert X$, and the role of speculation in settling them is reduced by the existence of random assignment of $Z$ in a field experiment as well as by poststratification, testing for overt bias before accepting a poststratification, and optimal full matching. Such beginnings pave the way for ``randomization inference'', an approach which, despite a long history in the analysis of designed experiments, is relatively new to political science and to other fields in which experimental data are rarely available. The approach applies to both experiments and observational studies. We illustrate this by applying it to analyze A. Gerber and D. Green's New Haven Vote 98 campaign. Conceived as both a get-out-the-vote campaign and a field experiment in political participation, the study assigned households to treatment and desired to estimate the effect of treatment on the individuals nested within the households. We estimate the number of voters who would not have voted had the campaign not prompted them to --- that is, the total number of votes attributable to the interventions of the campaigners --- while taking into account the non-independence of observations within households, non-random compliance, and missing responses. Both our statistical inferences about these attributable effects and the stratification and matching that precede them rely on quite recent developments from statistics; our matching, in particular, has novel features of potentially wide applicability. Our broad findings resemble those of the original analysis by \citet{gerbergreen00}.

29
Paper
Making Inferences from 2x2 Tables: The Inadequacy of the Fisher Exact Test for Observational Data and a Principled Bayesian Alternative
Sekhon, Jasjeet

Uploaded 08-17-2005
Keywords Fisher exact test
randomization inference
permutation tests
Bayesian tests
difference of proportions
observational data
Abstract The Fisher exact test is the dominant method of making inferences from 2x2 tables where the number of observations is small. Although the Fisher test and approximations to it are used in a large number of studies, these tests rest on a data generating process which is inappropriate for most applications for which they are used. The canonical Fisher test assumes that both of the margins in a 2x2 table are fixed by construction---i.e., both the treatment and outcome margins are fixed a priori. If the data were generated by an alternative process, such as binomial, negative binomial or Poisson binomial sampling, the Fisher exact test and approximations to it do not have correct coverage. A Bayesian method is offered which has correct coverage, is powerful, is consistent with a binomial process and can be extended easily to other distributions. A prominent 2x2 table which has been used in the literature by Geddes (1990) and Sekhon (2004) to explore the relationship between foreign threat and social revolution (Skocpol, 1979) is reanalyzed. The Bayesian method finds a significant relationship even though the Fisher and related tests do not. A Monte Carlo sampling experiment is provided which shows that the Bayesian method dominates the usual alternatives in terms of both test coverage and power when the data are generated by a binomial process.
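
A minimal sketch of the contrast the abstract draws, under a binomial sampling model with flat Beta(1, 1) priors (illustrative counts and priors of our choosing, not the paper's):

```python
# Compare Fisher's exact test (which conditions on both margins of the 2x2
# table) with a simple Bayesian posterior for a difference of proportions.
import numpy as np
from scipy.stats import fisher_exact

table = np.array([[7, 3],    # group 1: successes, failures (illustrative)
                  [2, 8]])   # group 2: successes, failures

_, p_fisher = fisher_exact(table)

# Independent binomial likelihoods with flat Beta(1, 1) priors:
# the posterior for each proportion is Beta(successes + 1, failures + 1).
rng = np.random.default_rng(0)
p1 = rng.beta(table[0, 0] + 1, table[0, 1] + 1, size=100_000)
p2 = rng.beta(table[1, 0] + 1, table[1, 1] + 1, size=100_000)

print(f"Fisher exact p-value:         {p_fisher:.3f}")
print(f"Posterior Pr(p1 > p2 | data): {np.mean(p1 > p2):.3f}")
```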

30
Paper
Taking Time Seriously: Dynamic Regression
Keele, Luke
De Boef, Suzanna

Uploaded 10-14-2004
Keywords Time series
error correction models
lagged dependent variables
ADL models
Abstract Dramatic change in the world around us has stimulated a wealth of interest in research questions about the dynamics of political processes. At the same time, we have seen increases in the number of time series data sets and the length of typical time series. While advances in time series methods have helped us to think about political change in important ways, too often published time series analysis displays shortcomings in three areas. First, analysts often estimate models without testing the restrictions implied by their specification. Second, applied researchers link the theoretical concept of equilibrium with the existence of cointegration and use of error correction models. Third, those estimating time series models have often done a poor job of interpreting their statistical results. The consequences, at best, are poor connections between theory and tests and thus a limited cumulation of knowledge. Often, the costs include biased results and incorrect inferences as well. Here, we outline techniques for the estimation of linear models with dynamic specification. In general, we recommend that analysts start with a combination of general dynamic models and test for restrictions before adopting a particular specification. Finally, we recommend that analysts make use of the wide array of information that can be gleaned from dynamic specifications. We illustrate this strategy with data on Congressional approval and on tax rates across OECD countries.
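
To fix ideas, the kind of general dynamic specification the authors recommend starting from is the autoregressive distributed lag model, here ADL(1,1), which is an exact reparameterization of the error correction model (standard algebra, reproduced for reference):

\[
y_t = \alpha_0 + \alpha_1 y_{t-1} + \beta_0 x_t + \beta_1 x_{t-1} + \epsilon_t
\]

is algebraically identical to

\[
\Delta y_t = \alpha_0 + \beta_0 \Delta x_t - (1 - \alpha_1)\left[ y_{t-1} - \frac{\beta_0 + \beta_1}{1 - \alpha_1}\, x_{t-1} \right] + \epsilon_t,
\]

so common restrictions (dropping $x_{t-1}$, imposing a particular long-run multiplier) can be tested against the general model rather than assumed.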

31
Paper
Spatio-Temporal Models for Political-Science Panel and Time-Series-Cross-Section Data
Franzese, Robert
Hays, Jude

Uploaded 07-18-2006
Keywords Spatial Econometrics
Spatial-Lag Model
Spatio-Temporal Model
Panel Data
Time-Series-Cross-Section Data
Spatio-Temporal Multiplier
Spatio-Temporal Dynamics
Spatio-Temporal Steady-State Effects
Abstract Building from our broader project exploring spatial-econometric models for political science, this paper discusses estimation, interpretation, and presentation of spatio-temporal models. We first present a generic spatio-temporal-lag model and two methods, OLS and ML, for estimating parameters in such models. We briefly consider those estimators’ properties analytically before showing next how to calculate and to present the spatio-temporal dynamic and long-run steady-state equilibrium effects—i.e., the spatio-temporal substance of the model—implied by the coefficient estimates. Then, we conduct Monte Carlo experiments to explore the properties of the OLS and ML estimators, and, finally, we conclude with a reanalysis of Beck, Gleditsch, and Beardsley’s (2006) state-of-the-art study of directed export flows among major powers.
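
In generic notation, the workhorse model here is the spatio-temporal autoregressive lag specification (a schematic version; the paper's notation may differ):

\[
\mathbf{y}_t = \phi\, \mathbf{y}_{t-1} + \rho\, \mathbf{W} \mathbf{y}_t + \mathbf{X}_t \boldsymbol{\beta} + \boldsymbol{\epsilon}_t,
\]

where $\mathbf{W}$ is the spatial weights matrix; setting $\mathbf{y}_t = \mathbf{y}_{t-1} = \mathbf{y}^*$ gives the long-run steady state $\mathbf{y}^* = \left[(1 - \phi)\mathbf{I} - \rho \mathbf{W}\right]^{-1} \mathbf{X} \boldsymbol{\beta}$, the spatio-temporal multiplier whose calculation and presentation the paper discusses.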

32
Paper
A Tournament of Party Decision Rules
Fowler, James
Laver, Michael

Uploaded 10-20-2006
Abstract In the spirit of Axelrod’s famous series of tournaments for strategies in the repeat-play prisoner’s dilemma, we conducted a “tournament of party decision rules” in a dynamic agent-based spatial model of party competition. A call was issued for researchers to submit rules for selecting party positions in a two-dimensional policy space. Each submitted rule was pitted against all others in a suite of very long-running simulations in which all parties falling below a declared support threshold for two consecutive elections “died” and one new party was “born” each election at a random spatial location, using a rule randomly drawn from the set submitted. The policy-selection rule most successful at winning votes over the very long run was declared the “winner”. The most successful rule was identified unambiguously and combined a number of striking features. It satisficed rather than maximized in the short run; it was “parasitic” on choices made by other successful rules; and it was hard-wired not to attack other agents using the same rule, which it identified using a “secret handshake”. We followed up the tournament with a second suite of simulations in a more evolutionary setting in which the selection probability of a rule was a function of its “fitness”, measured in terms of the previous success of agents using the same rule. In this setting, the rule that won the original tournament pulled even further ahead of the competition. Treated as a discovery tool, tournament results raise a series of intriguing issues for those involved in the modeling of party competition.

33
Paper
Power-law distributions in empirical data
Clauset, Aaron
Shalizi, Cosma
Newman, Mark

Uploaded 06-11-2007
Keywords Power-law distributions
Pareto
Zipf
maximum likelihood
heavy-tailed distributions
likelihood ratio test
model selection
Abstract Power-law distributions occur in many situations of scientific interest and have significant consequences for our understanding of natural and man-made phenomena. Unfortunately, the empirical detection and characterization of power laws is made difficult by the large fluctuations that occur in the tail of the distribution. In particular, standard methods such as least-squares fitting are known to produce systematically biased estimates of parameters for power-law distributions and should not be used in most circumstances. Here we describe statistical techniques for making accurate parameter estimates for power-law data, based on maximum likelihood methods and the Kolmogorov-Smirnov statistic. We also show how to tell whether the data follow a power-law distribution at all, defining quantitative measures that indicate when the power law is a reasonable fit to the data and when it is not. We demonstrate these methods by applying them to twenty-four real-world data sets from a range of different disciplines. Each of the data sets has been conjectured previously to follow a power-law distribution. In some cases we find these conjectures to be consistent with the data while in others the power law is ruled out.
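
The continuous-case estimator at the center of this method fits in a few lines (a sketch with x_min fixed in advance for brevity; the full procedure also selects x_min by minimizing the KS statistic):

```python
# MLE for a continuous power law p(x) ~ x^(-alpha), x >= xmin,
# plus the Kolmogorov-Smirnov distance used for goodness of fit.
import numpy as np

def fit_power_law(x, xmin):
    """Return (alpha_hat, ks_distance) for the tail x >= xmin."""
    tail = np.sort(x[x >= xmin])
    n = len(tail)
    alpha = 1.0 + n / np.sum(np.log(tail / xmin))      # continuous MLE
    # empirical CDF vs fitted power-law CDF on the tail
    cdf_emp = np.arange(1, n + 1) / n
    cdf_fit = 1.0 - (tail / xmin) ** (1.0 - alpha)
    return alpha, np.max(np.abs(cdf_emp - cdf_fit))

# usage on synthetic power-law data (inverse-CDF sampling, alpha = 2.5)
rng = np.random.default_rng(0)
x = (1.0 - rng.uniform(size=10_000)) ** (-1.0 / 1.5)   # xmin = 1, alpha = 2.5
print(fit_power_law(x, xmin=1.0))
```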

34
Paper
Path, Phat, and State Dependence in Observation-driven Markov
Walker, Robert

Uploaded 07-17-2007
Keywords Markov models
qualitative time series
ergodic theorem
Abstract Many social science theories posit dynamics that depend in important ways on the present state and focus on a reasonably small number of states. Despite the importance of theoretical notions of path dependence, empirical models, with a few exceptions (Alvarez, Cheibub, Limongi, and Przeworski 2000; Epstein, Bates, Goldstein, O'Halloran, and Kristensen 2006; Beck, Jackman, Epstein, and O'Halloran 2001), have paid little attention to the implications of state dependence for empirical studies. This is despite the fact that there are many possible ways in which history might matter -- we focus on the categorization given by Page (2006) -- and these different ways manifest themselves in sets of models that can be tested and compared. This paper considers the basic properties of observation-driven Markov chains [stationarity/time homogeneity, communication, transience, periodicity, irreducibility, and ergodicity] and the issues that arise in their implementation as likelihood estimators to provide a window into methods for the study of path dependence. Application of these concepts to longitudinal data on human rights abuses and exchange rate regime transitions provides evidence that history may not exert uniform effects. The empirical examples highlight the subtle substantive assumptions that manifest in different modeling choices. The human rights example calls for an important qualification to the widely studied relationship between democracy and human rights abuses. The exchange rate regime example highlights the usefulness of Markov models for multinomial processes.

35
Paper
A New Non-Parametric Matching Method for Bias Adjustment with Applications to Economic Evaluations
Sekhon, Jasjeet

Uploaded 05-11-2008
Keywords semiparametric and nonparametric matching methods
observational studies
randomized controlled trials
health economic evaluation
Abstract In health economic studies that use observational data, a key concern is how to adjust for imbalances in baseline covariates due to the non-random assignment of the programs under evaluation. Traditional methods of covariate adjustment such as regression and propensity score matching are model dependent and often fail to replicate the results of randomized controlled trials. We demonstrate a new non-parametric matching method, Genetic Matching, which is a generalization of propensity score and Mahalanobis distance matching, using two contrasting case studies. In the first, an economic evaluation of a clinical intervention (Pulmonary Artery Catheterization), applying Genetic Matching to observational data replicates the substantive results of a corresponding randomized controlled trial, unlike the extant literature. In the second case study, evaluating capitation versus fee-for-service, Genetic Matching radically improves balance on baseline covariates and overturns previous conclusions based on traditional methods.

36
Paper
Scaling the Critics: Uncovering the Latent Dimensions of Movie Criticism with An Item Response Approach
Peress, Michael
Spirling, Arthur

Uploaded 07-04-2008
Keywords threshold utility model
film
ideal points
Abstract We study the critical opinions of expert movie reviewers as an item response problem. We develop a framework that models an individual's decision to approve or disapprove of an item. Using this framework, we are able to recover the locations of movies and ideal points of critics in the same multi-dimensional space. We demonstrate that a three-dimensional model captures much of the variation in critical opinions. The first dimension signifies movie 'quality' while the other two connote the nature and subject matter of the films. We then demonstrate that the dimensions uncovered from our 'threshold utility model' are statistically significant predictors of a movie's success, and are particularly useful in predicting the success of 'independent' films.

37
Paper
Is Matching Really Essential?
Middleton, Joel

Uploaded 07-11-2008
Abstract Conference poster

38
Paper
Noughts and Crosses. Challenges in Generating Political Positions from CMP-Data.
Hans, Silke
Hoennige, Christoph

Uploaded 08-29-2008
Keywords Comparative Politics
Manifesto Data
Party Positions
Abstract The Comparative Manifesto Project (CMP) dataset is the only dataset providing information about the positions of parties for comparative researchers across time and countries. This article evaluates its structure and finds a peculiarity: a high number of zeros, unequally distributed across items, countries, and time. These zeros influence the results of any procedure used to build a scale, but especially those using factor analysis. The article shows that the zeros have different meanings. First, there are substantive zeros, in line with saliency theory. Second, zeros exist for non-substantive reasons: the length of a manifesto and the percentage of uncoded sentences, both of which vary strongly across time and country. We quantify the problem and propose a procedure to identify data points containing non-substantive zeros. For the future comparative use of the dataset we advocate a theoretically grounded selection of items, combined with information about the likelihood that zeros are substantively meaningful.

39
Paper
Sampling Schemes for Generalized Linear Dirichlet Random Effects Models
Kyung, Minjung
Gill, Jeff
Casella, George

Uploaded 02-18-2009
Keywords generalized linear mixed Dirchlet model
Markov chain Monte Carlo
Dirichlet process priors for random effects
precision parameters
Scottish Social Attitudes Survey
terrorism targeting
Abstract We evaluate MCMC sampling schemes for a variety of link functions in generalized linear models with Dirichlet random effects. We find that models using Dirichlet process priors for the random effects tend to capture information in the data in a nonparametric fashion. In fitting the Dirichlet process, dealing with the precision parameter has troubled model specifications in the past. Here we find that incorporating this parameter into the MCMC sampling scheme is not only computationally feasible, but also results in a more robust set of estimates, in that they are marginalized-over rather than conditioned-upon. Applications are provided with social science problems in areas where the data can be difficult to model. In all, we find that these models provide superior Bayesian posterior results in theory, simulation, and application.

40
Paper
A General Approach to Causal Mediation Analysis
Imai, Kosuke
Keele, Luke
Tingley, Dustin

Uploaded 07-20-2009
Keywords causal inference
causal mechanisms
sensitivity analysis
sequential ignorability
structural equation modeling
unobserved confounder
Abstract In a highly influential paper, Baron and Kenny (1986) proposed a statistical procedure to conduct a causal mediation analysis and identify possible causal mechanisms. This procedure has been widely used across many branches of the social and medical sciences and especially in psychology and epidemiology. However, one major limitation of this approach is that it is based on a set of linear regressions and cannot be easily extended to more complex situations that are frequently encountered in applied research. In this paper, we propose an approach that generalizes the Baron-Kenny procedure. Our method can accommodate linear and nonlinear relationships, parametric and nonparametric models, continuous and discrete mediators, and various types of outcome variables. We also provide a formal statistical justification for the proposed generalization of the Baron-Kenny procedure by placing causal mediation analysis within the widely-accepted counterfactual framework of causal inference. Finally, we develop a set of sensitivity analyses that allow applied researchers to quantify the robustness of their empirical conclusions. Such sensitivity analysis is important because as we show the Baron-Kenny procedure and our generalization of it rest on a strong and untestable assumption even in randomized experiments. We illustrate the proposed methods by applying them to a randomized field experiment, the Job Search Intervention Study (JOBS II). We also offer easy-to-use software that implements all of our proposed methods.
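
In the counterfactual notation this literature uses, the central estimand is the average causal mediation effect: with treatment $t \in \{0, 1\}$, mediator $M$, and outcome $Y$,

\[
\bar{\delta}(t) \equiv E\left[ Y(t, M(1)) - Y(t, M(0)) \right],
\]

the change in the outcome when the mediator shifts as if treatment had changed, while treatment itself is held at $t$. Sequential ignorability is the strong, untestable assumption under which this quantity is identified.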

41
Paper
Tobler's Law, Urbanization, and Electoral Bias: Why Compact, Contiguous Districts are Bad for the Democrats
Chen, Jowei
Rodden, Jonathan

Uploaded 11-11-2009
Keywords elections
voting
party competition
legislative districting
simulations
electoral geography
spatial autocorrelation
Abstract When one of the major parties in the United States wins a substantially larger share of the seats than its vote share would seem to warrant, the conventional explanation lies in manipulation of maps by the party that controls the redistricting process. Yet this paper uses a unique data set from Florida to demonstrate a common mechanism through which substantial partisan bias can emerge purely from residential patterns. When partisan preferences are spatially dependent and partisanship is highly correlated with population density, any districting scheme that generates relatively compact, contiguous districts will tend to produce bias against the urban party. In order to demonstrate this empirically, we apply automated districting algorithms driven solely by compactness and contiguity parameters, building winner-take-all districts out of the precinct-level results of the tied Florida presidential election of 2000. The simulation results demonstrate that with 50 percent of the votes statewide, the Republicans can expect to win around 59 percent of the seats without any "intentional" gerrymandering. This occurs because urban districts tend to be homogeneous and Democratic while suburban and rural districts tend to be moderately Republican. Thus in Florida and other states where Democrats are highly concentrated in cities, the seemingly apolitical practice of requiring compact, contiguous districts will produce systematic pro-Republican electoral bias.

42
Paper
Improving Inferences in the Study of Crisis Bargaining
Arena, Phil
Joyce, Kyle

Uploaded 07-19-2010
Keywords crisis bargaining
matching
instrumental variables
structural estimation
empirical implications of theoretical models
Abstract We present a simple crisis bargaining model that indicates that targets can generally prevent war by arming. We then create a simulated data set where the bargaining model is assumed to perfectly describe the data-generating process for those states engaged in crisis bargaining, which we assume most pairs of states are not. We further assume researchers cannot observe which states are engaged in crisis bargaining, though observable variables might serve as proxies. We demonstrate that a naive design would indicate a positive relationship between arming and war. We then evaluate the ability of matching, instrumental variables, and statistical backwards induction to uncover the true negative relationship. While each method is capable of doing so under certain conditions, each also faces important limitations. In most cases, statistical backwards induction will be the most practical of the three, but we caution that even this method is no perfect fix.
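
The heart of the simulation design fits in a few lines. In this sketch (ours, with invented probabilities), arming deters war within crises, but a naive pooled comparison that cannot observe which dyads are bargaining in a crisis recovers a positive arming-war relationship.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000
    crisis = rng.random(n) < 0.1            # unobserved: dyad is in a crisis
    # Arming is far more likely in crises; within crises it deters war.
    arm = rng.random(n) < np.where(crisis, 0.7, 0.2)
    war = rng.random(n) < np.where(crisis, np.where(arm, 0.1, 0.4), 0.01)

    naive = war[arm].mean() - war[~arm].mean()                     # pooled: > 0
    within = war[arm & crisis].mean() - war[~arm & crisis].mean()  # truth: < 0
    print("naive:", round(naive, 3), "within-crisis:", round(within, 3))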

43
Paper
Seven Deadly Sins of Contemporary Quantitative Political Analysis
Schrodt, Philip

Uploaded 08-23-2010
Keywords collinearity
prediction
explanation
Bayesian
frequentist
control variables
pedagogy
philosophy of science
logical positivists
significance test
Hempel
Thor
Abstract A combination of technological change, methodological drift and a certain degree of intellectual sloth and sloppiness, particularly with respect to philosophy of science, has allowed contemporary quantitative political analysis to accumulate a series of dysfunctional habits that have rendered a great deal of contemporary research more or less scientifically useless. The cure for this is not to reject quantitative methods -- and the cure is most certainly not a postmodernist nihilistic rejection of all systematic method -- but rather to return to some fundamentals, and take on some hard problems rather than expecting to advance knowledge solely through the ever-increasing application of fast-twitch muscle fibers to computer mice. In this paper, these "seven deadly sins" are identified as 1. Kitchen sink models that ignore the effects of collinearity; 2. Pre-scientific explanation in the absence of prediction; 3. Reanalyzing the same data sets until they scream; 4. Using complex methods without understanding the underlying assumptions; 5. Interpreting frequentist statistics as if they were Bayesian; 6. Linear statistical monoculture at the expense of alternative structures; 7. Confusing statistical controls and experimental controls. The answer to these problems is solid, thoughtful, original work driven by an appreciation of both theory and data. Not postmodernism. The paper closes with a review of how we got to this point from the perspective of 17th through 20th century philosophy of science, and provides suggestions for changes in philosophical and pedagogical approaches that might serve to correct some of these problems.
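
Sin 1 is the easiest to demonstrate numerically. In this sketch (ours), adding one nearly collinear control to a kitchen-sink model leaves the fit essentially unchanged while the standard error on the variable of interest explodes.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 200
    x1 = rng.normal(size=n)
    x2 = x1 + rng.normal(0, 0.05, n)   # correlation with x1 near 0.999
    y = x1 + rng.normal(size=n)

    for cols in ([x1], [x1, x2]):      # lean model, then kitchen sink
        X = np.column_stack([np.ones(n)] + cols)
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ beta
        s2 = resid @ resid / (n - X.shape[1])
        se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))
        r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
        print(f"k={X.shape[1]}  R2={r2:.3f}  se(x1)={se[1]:.3f}")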

44
Paper
We Have to Be Discrete About This: A Non-Parametric Imputation Technique for Missing Categorical Data
Cranmer, Skyler
Gill, Jeff

Uploaded 04-30-2012
Keywords missing data
categorical
hot-decking
MCAR
multiple imputation
MAR
GLM
regression
missingness
Abstract Missing values are a frequent problem in empirical political science research. Surprisingly, there has been little attention to the match between the measurement of the missing values and the correcting algorithms used. While multiple imputation is a vast improvement over the deletion of cases with missing values, it is often ill-suited for imputing highly non-granular discrete data. We develop a simple technique for imputing missing values in such situations, which is a variant of hot deck imputation, drawing from the conditional distribution of the variable with missing values to preserve the discrete measure of the variable. This method is tested against existing techniques using Monte Carlo analysis and then applied to real data on democratisation and modernisation theory. We provide software for our imputation technique in a free and easy-to-use package for the R statistical environment.
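
The basic move, sampling imputations from the conditional distribution of observed donors so that the variable's discrete support is preserved, fits in a short function. This is a minimal sketch of the idea on synthetic data, not the authors' package.

    import numpy as np

    def conditional_hot_deck(y, cell, rng):
        """Fill NaNs in categorical y by sampling observed donor values
        from the same covariate cell (full observed pool as fallback)."""
        y = y.copy()
        obs = ~np.isnan(y)
        for i in np.where(~obs)[0]:
            donors = y[(cell == cell[i]) & obs]
            y[i] = rng.choice(donors if donors.size else y[obs])
        return y

    rng = np.random.default_rng(4)
    n = 1000
    cell = rng.integers(0, 5, n)              # discretized conditioning covariate
    y = rng.integers(1, 8, n).astype(float)   # a 7-point categorical scale
    y[rng.random(n) < 0.2] = np.nan           # 20 percent missing, MCAR here
    imputed = conditional_hot_deck(y, cell, rng)
    print(np.unique(imputed))                 # support stays exactly 1..7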

45
Paper
Estimating Average Causal Effects Under General Interference
Aronow, Peter
Samii, Cyrus

Uploaded 07-16-2012
Keywords interference
SUTVA
randomized experiments
causal inference
Abstract This paper presents randomization-based methods for estimating average causal effects under arbitrary interference of known form. We present conservative estimators of the randomization variance of the average treatment effect estimators and a justification for confidence intervals based on a normal approximation. Examples relevant to research in environmental protection, network experiments, "viral marketing," two-stage disease prophylaxis trials, and stepped-wedge designs are presented.
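
The estimators take a Horvitz-Thompson form: each unit's outcome is weighted by the design-known probability of its realized exposure condition. The sketch below is ours and uses independent exposures purely for brevity; in an actual interference design these probabilities come from the joint randomization and the assumed exposure mapping.

    import numpy as np

    rng = np.random.default_rng(5)
    n = 5000
    # pi[i]: each unit's design-known probability of the "exposed" condition.
    pi = rng.uniform(0.2, 0.8, n)
    d = rng.random(n) < pi                           # realized exposure
    y = np.where(d, 2.0, 1.0) + rng.normal(size=n)   # true contrast = 1

    mu1 = np.sum(d * y / pi) / n                # HT mean under exposure
    mu0 = np.sum(~d * y / (1 - pi)) / n         # HT mean under non-exposure
    print("HT estimate of the exposure contrast:", round(mu1 - mu0, 3))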

46
Paper
krls: A Stata Package for Kernel-Based Regularized Least Squares
Ferwerda, Jeremy
Hainmueller, Jens
Hazlett, Chad

Uploaded 09-13-2013
Keywords machine learning
regression
classification
prediction
Stata
Abstract The Stata package krls implements Kernel-Based Regularized Least Squares (KRLS), a machine learning method described in Hainmueller and Hazlett (2013) that allows users to solve regression and classification problems without manual specification search and strong functional form assumptions. The flexible KRLS estimator learns the functional form from the data and thereby protects inferences against misspecification bias. Nevertheless, it allows for interpretability and inference in ways similar to ordinary regression models. In particular, KRLS provides closed-form estimates for the predicted values, variances, and the pointwise partial derivatives that characterize the marginal effects of each independent variable at each data point in the covariate space. The method is thus a convenient and powerful alternative to OLS and other GLMs for regression-based analyses.
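
The computational core of KRLS is kernel ridge regression with a Gaussian kernel. The Python sketch below is ours, not the krls package (bandwidth and penalty are fixed by hand rather than set by the package's defaults); it shows the closed-form coefficients, fitted values, and pointwise derivatives the abstract mentions.

    import numpy as np

    rng = np.random.default_rng(6)
    n = 300
    x = rng.uniform(-2, 2, size=(n, 1))
    y = np.sin(2 * x[:, 0]) + rng.normal(0, 0.2, n)

    sigma2, lam = 1.0, 0.5                          # bandwidth and ridge penalty
    sq = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / sigma2)                        # Gaussian kernel matrix
    c = np.linalg.solve(K + lam * np.eye(n), y)     # closed-form coefficients
    yhat = K @ c

    # Pointwise marginal effects: derivative of sum_j c_j k(x_i, x_j) wrt x_i.
    deriv = (-2 / sigma2) * ((x[:, None, 0] - x[None, :, 0]) * K) @ c
    print("R^2:", round(1 - np.var(y - yhat) / np.var(y), 3))
    print("mean marginal effect:", round(deriv.mean(), 3))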

47
Paper
Does Democracy Reduce Infant Mortality? Evidence from New Data, for 181 Countries between 1970 and 2009
Ramos, Antonio Pedro

Uploaded 07-28-2014
Keywords Regime Type
Democratization
Child Mortality
Panel Data
Longitudinal Models
Random Effects Models
Hierarchical Models
Abstract Which form of government is most responsive to its citizens’ needs? This paper focuses on child mortality to investigate the causal link between political regimes and welfare. I use a new data set that includes 181 countries between 1970 and 2008, with no missing observations and less measurement error than has been previously available. While new data suggests that democracies are associated with better health outcomes, it remains unclear whether this is due to a causal effect of regime type on health. I argue that the best way to detect the effects of democracy on child mortality is to investigate whether democratization episodes were followed by significant reductions in child mortality. Child mortality in most countries has declined in the last forty years. My results indicate that democratic transitions accelerate the downward trend in child mortality, especially in low income countries where mortality rates are typically high. Surprisingly, however, I also find that democratic transitions lead to short-term increases in child mortality for middle-income countries. This heterogeneity in the effects of democratic transitions has not been previously documented and calls for further research.
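
The identification idea, a break in a country's downward mortality trend at the year of democratization, can be mimicked on synthetic panel data. The sketch below is our illustration, not the paper's longitudinal model; the pooled regression omits the country and income-group structure the paper exploits.

    import numpy as np

    rng = np.random.default_rng(7)
    countries, years = 120, 40
    t = np.arange(years)
    trans = rng.integers(10, 30, countries)          # year of democratization
    post = t[None, :] >= trans[:, None]
    # Mortality declines everywhere; transitions add an extra downward slope.
    y = (120 - 1.5 * t[None, :] - 0.8 * post * (t[None, :] - trans[:, None])
         + rng.normal(0, 3, (countries, years)))

    # Pooled regression of mortality on a common trend plus a trend break.
    X = np.column_stack([np.ones(y.size),
                         np.tile(t, countries),
                         (post * (t[None, :] - trans[:, None])).ravel()])
    beta = np.linalg.lstsq(X, y.ravel(), rcond=None)[0]
    print("common trend:", round(beta[1], 2),
          "extra post-transition slope:", round(beta[2], 2))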

48
Paper
Macro vs. Micro-Level Perspectives on Economic Voting: Is the Micro-Level Evidence Endogenously Induced?
Erikson, Robert S.

Uploaded 07-10-2004
Keywords economic voting
vote choice
Abstract Many of the findings regarding economic voting derive from the micro-level analyses of survey data, in which respondents' survey evaluations of the economy are shown to predict the vote. This paper investigates the causal nature of this relationship and argues that cross-sectional consistency between economic evaluations and vote choice is mainly, if not entirely, due to vote choice influencing the survey response. Moreover, the evidence suggests that apart from this endogenously induced partisan bias, almost all of the cross-sectional variation in survey evaluations of the economy is random noise rather than actual beliefs about economic conditions. In surveys, the mean evaluations reflect the economic signal that predicts the aggregate vote. Following Kramer (1983), economic voting is best studied at the macro-level rather than the micro-level.
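
The endogeneity argument is straightforward to reproduce in simulation. In this sketch (ours), individual evaluations carry no private economic information, only a common signal plus partisan coloring from the vote itself, yet the cross-section still shows a strong evaluation-vote correlation while the survey mean tracks the true signal.

    import numpy as np

    rng = np.random.default_rng(8)
    n = 20_000
    economy = 0.0                                   # common signal this year
    vote = rng.random(n) < 0.5                      # vote choice, set elsewhere
    # Evaluation = signal + partisan coloring from the vote + pure noise.
    evaluation = economy + 0.8 * (2 * vote - 1) + rng.normal(size=n)

    r = np.corrcoef(evaluation, vote)[0, 1]
    print("cross-sectional evaluation-vote correlation:", round(r, 2))
    print("mean evaluation (tracks the true signal):", round(evaluation.mean(), 2))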

49
Paper
Discriminating Methods: Tests for Nonnested Discrete Choice Models
Clarke, Kevin A.
Signorino, Curtis S.

Uploaded 07-15-2003
Keywords discrete choice
nonnested testing
strategic choice
Vuong test
nonparametric test
Abstract We consider the problem of choosing between rival models that are nonnested in terms of their functional forms. We discuss both a parametric and a distribution-free procedure for making this choice, and demonstrate through a Monte Carlo simulation that discrimination is possible. The results of the simulation also allow us to compare the relative power of the two tests.
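
The parametric procedure is the Vuong test, whose statistic is the standardized mean of the pointwise log-likelihood differences. A minimal sketch (ours; the coefficient is held at its true value rather than estimated, purely to shorten the example):

    import numpy as np
    from scipy.stats import norm

    def vuong(ll_a, ll_b):
        """z > 1.96 favors model A, z < -1.96 favors B; otherwise the
        test cannot discriminate between the nonnested models."""
        d = ll_a - ll_b                    # pointwise log-likelihood gaps
        return np.sqrt(len(d)) * d.mean() / d.std(ddof=1)

    rng = np.random.default_rng(9)
    n = 5000
    x = rng.normal(size=n)
    ybin = rng.random(n) < 1 / (1 + np.exp(-1.5 * x))   # data from a logit
    p_logit = 1 / (1 + np.exp(-1.5 * x))
    p_probit = norm.cdf(1.5 * x)           # rival nonnested functional form
    ll = lambda p: np.where(ybin, np.log(p), np.log(1 - p))
    print("Vuong z:", round(vuong(ll(p_logit), ll(p_probit)), 2))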

50
Paper
Empirical Social Inquiry and Models of Causal Inference
Yang, David

Uploaded 03-05-2003
Keywords causal inference
method nesting
small-N research
Abstract This essay examines several alternative theories of causality from the philosophy of science literature and considers their implications for methods of empirical social inquiry. In particular, I argue that the epistemology of counterfactual causality is not the only logic of causal inference in social inquiry, and that different methods of research appeal to different models of causal inference. As these models are often philosophically inter-dependent, a more eclectic understanding of causation in empirical research may afford greater methodological versatility and provide a more complete understanding of causality. Some common statistical critiques of small-N research are then considered from the perspective of mechanistic causal theories, and alternative strategies of strengthening causal arguments in small-N research are discussed.

