
Search Results

The results below are based on the search criterion 'Benford'
Total number of records returned: 874

Cuing and Coordination in American Elections
Mebane, Walter R.

Uploaded 07-16-2004
Keywords evolutionary game
political behavior
strategic coordination
policy moderation
Abstract I use evolutionary game models based on pure imitation to reexamine recent findings that strategic coordination characterizes the American electorate. Imitation means that voters who are dissatisfied with their strategy adopt the strategy of the first voter they encounter who is similar to them. In the replicator dynamics such imitation implies, everyone ultimately uses the coordinating strategy, but I study what happens over time spans that are relevant for voters. I consider three evolutionary models, including two that involve partisan cuing. Simulations using National Election Studies data from presidential years 1976-96 suggest that many voters use an unconditional strategy, usually a strategy of voting a straight ticket matching their party identification. I then estimate a choice model that incorporates an approximation to the evolutionary dynamics. The results support partisan cuing and confirm that most voters vote unconditionally. The estimates also support previous findings regarding policy moderation and institutional balancing.
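The imitation dynamic described above can be sketched in a few lines of Python. This is a loose illustration, not the paper's model: the payoffs, the population mix, and the use of payoff comparison as a proxy for dissatisfaction are all invented for the example.

```python
import random
random.seed(4)

# Imitation dynamic in the spirit of the abstract: a dissatisfied voter
# adopts the strategy of another voter they encounter. Dissatisfaction is
# proxied here by a lower payoff; both payoffs are hypothetical.
payoff = {"straight": 1.0, "coordinate": 1.2}
pop = ["straight"] * 80 + ["coordinate"] * 20

for step in range(200):
    i = random.randrange(len(pop))   # a voter reconsidering their strategy
    j = random.randrange(len(pop))   # the first voter they encounter
    # Imitate when the encountered voter's strategy pays more
    if payoff[pop[j]] > payoff[pop[i]]:
        pop[i] = pop[j]

print(pop.count("coordinate"))
```

Because the "coordinate" strategy pays more in this toy setup, imitation only ever flows toward it, which mirrors the replicator-dynamics claim that everyone ultimately uses the coordinating strategy; the interesting question in the paper is how slowly that happens.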

Selection Bias and Continuous-Time Duration Models: Consequences and a Proposed Solution
Boehmke, Frederick
Morey, Daniel
Shannon, Megan

Uploaded 07-15-2003
Keywords duration
selection bias
monte carlo
Abstract In this paper we explore the consequences of non-random sample selection for continuous time duration analysis. While the consequences of selectivity are reasonably well-understood in linear regression and common discrete choice models, we have little or no understanding of how it affects duration models. In this paper we study this issue by conducting a series of Monte Carlo analyses that estimate common duration models on data that suffer from selectivity. Our findings indicate that the consequences are severe: both coefficients and standard errors may be biased in an unknown direction. In addition, we find that selection bias may create the appearance of (non-existent) duration dependence. Given these difficulties, we develop a solution for self-selectivity bias in duration models and present evidence that demonstrates its superiority to models that ignore the problem.

Bayesian exploratory data analysis
Gelman, Andrew

Uploaded 02-11-2003
Keywords bootstrap
Fisher's exact test
mixture model
model checking
multiple imputation
prior predictive check
posterior predictive check
Abstract Exploratory data analysis (EDA) and Bayesian inference (or, more generally, complex statistical modeling)---which are generally considered as unrelated statistical paradigms---can be particularly effective in combination. In this paper, we present a Bayesian framework for EDA based on posterior predictive checks. We explain how posterior predictive simulations can be used to create reference distributions for EDA graphs, and how this approach resolves some theoretical problems in Bayesian data analysis. We show how the generalization of Bayesian inference to include replicated data $y^{\rm rep}$ and replicated parameters $\theta^{\rm rep}$ follows a long tradition of generalizations in Bayesian theory. On the theoretical level, we present a predictive Bayesian formulation of goodness-of-fit testing, distinguishing between $p$-values (posterior probabilities that specified antisymmetric discrepancy measures will exceed 0) and $u$-values (data summaries with uniform sampling distributions). We explain that $p$-values, unlike $u$-values, are Bayesian probability statements in that they condition on observed data. Having reviewed the general theoretical framework, we discuss the implications for statistical graphics and exploratory data analysis, with the goal being to unify exploratory data analysis with more formal statistical methods based on probability models. We interpret various graphical displays as posterior predictive checks and discuss how Bayesian inference can be used to determine reference distributions. The goal of this work is not to downgrade descriptive statistics, or to suggest they be replaced by Bayesian modeling, but rather to suggest how exploratory data analysis fits into the probability-modeling paradigm. We conclude with a discussion of the implications for practical Bayesian inference.
In particular, we anticipate that Bayesian software can be generalized to draw simulations of replicated data and parameters from their posterior predictive distribution, and these can in turn be used to calibrate EDA graphs.
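As a concrete illustration of the posterior predictive check machinery described above, the sketch below assumes a normal model with known unit variance and a flat prior on the mean; the data and the choice of the sample maximum as the discrepancy measure are invented for the example, not taken from the paper.

```python
import random
random.seed(0)

# Observed data, assumed to follow Normal(mu, 1) with mu unknown
y = [1.2, 0.8, 1.5, 0.9, 1.1, 1.4, 0.7, 1.3]
n = len(y)
ybar = sum(y) / n

# Flat prior on mu implies posterior mu | y ~ Normal(ybar, 1/sqrt(n))
def draw_posterior_mu():
    return random.gauss(ybar, 1 / n ** 0.5)

# Discrepancy measure: the sample maximum (any test statistic works)
def T(data):
    return max(data)

# Posterior predictive p-value: P(T(y_rep) >= T(y) | y), estimated by
# simulating replicated data sets from the posterior predictive distribution
S = 2000
exceed = 0
for _ in range(S):
    mu = draw_posterior_mu()
    y_rep = [random.gauss(mu, 1.0) for _ in range(n)]
    if T(y_rep) >= T(y):
        exceed += 1
p_value = exceed / S
print(round(p_value, 2))
```

The simulated $p$-value is the posterior probability that the replicated discrepancy exceeds the observed one; values near 0 or 1 flag misfit, and the same replicated draws can serve as reference distributions behind an EDA graph.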

Connecting Interest Groups and Congress: A New Approach to Understanding Interest Group Success
Victor, Jennifer Nicoll

Uploaded 07-16-2002
Keywords Interest Groups
Multiple Imputation
Bayesian Information Criterion
Ordinal Probit
Non-nested Models
Legislative Context
Abstract The primary challenge in explaining interest group legislative success in Congress has been methodological. The discipline requires at least two critical elements to make progress on this important question. First, we need a theory that accounts for the highly interactive spatial game between interest groups and legislators. Second, the discipline needs an empirical model that associates interest groups and their activities with specific congressional bills. In this project I begin to contribute to our understanding of the complex relationship between interest groups and Congress. I develop a theory of group success that is based upon the strategies in which groups engage, the groups' organizational capacity, and the strategic context of legislation. I predict that groups will tailor their activities (and strategically spend their resources) in Congress based upon two critical factors: whether the group supports or opposes the legislation, and the legislative environment for the bill. To test this model I develop a unique sampling procedure and survey design. I use legislative hearings to generate a sample of groups that are associated with specific issues and survey them about their activities on those issues. Then, I associate each group's issue with a specific bill in Congress. I then track the bill to discern its final status. I create a dependent variable of interest group success that is based on the group's position (favor or oppose) and the final status of the bill. This sampling procedure and dependent variable allow me to make inferences about group behavior over specific legislative proposals. I develop independent variables of group activity, group organizational capacity, and legislative context from the survey instrument and objective information about the bills. To fill in gaps in the survey data set, I use a multiple imputation method that generates plausible values based on given distributions of data. 
I estimate two models: one for groups in favor of legislation and one for opposition groups. The ordinal probit models generally support the theoretical expectations. In sum, I find that groups can best expend their resources in pursuit of rules that advantage their position rather than in fighting over bill content.

Armed Conflict as a Public Health Problem
Murray, Christopher J. L.
King, Gary
Lopez, Alan D.
Tomijima, Niels
Krug, Etienne

Uploaded 02-25-2002
Keywords International Conflict Data
public health
Abstract Armed conflict is a major cause of injury and death worldwide, but we need much better methods of quantification before we can accurately assess its effect. Armed conflict between warring states, and among groups within states, has been a major cause of ill health and mortality for most of human history. Conflict obviously causes deaths and injuries on the battlefield, but it also has health consequences arising from the displacement of populations, the breakdown of health and social services, and the heightened risk of disease transmission. Despite the size of the health consequences, military conflict has not received the same attention from public health research and policy as many other causes of illness and death. In contrast, political scientists have long studied the causes of war but have primarily been interested in the decision of elite groups to go to war, not in human death and misery. We review the limited knowledge on the health consequences of conflict, suggest ways to improve measurement, and discuss the potential for risk assessment and for preventing and ameliorating the consequences of conflict.

Pre-Election Polls in Nation and State: A Dynamic Bayesian Hierarchical Model
Franklin, Charles

Uploaded 07-17-2001
Keywords campaigns
hierarchical models
Abstract A vast number of national trial heat polls are conducted in the months preceding a presidential election. But as was dramatically demonstrated in 2000, candidates must win states to win the presidency, not just win popular votes. The density of state level polling is much less than that for the nation as a whole. This makes efforts to track candidate support at the state level, and to estimate campaign effects in the states, very difficult. This paper develops a Bayesian hierarchical model of trial heat polls which uses state and national polling data, plus measures of campaign effort in each state, to estimate candidate support between observed state polls. At a technical level, the Bayesian approach provides not only estimates of support but also easily understood estimates of the uncertainty of those estimates. At an applied level, this method can allow campaigns to target polling in states that are most likely to be changing while being alerted to potential shifts in states that are not as frequently polled.

Analyzing the Dynamics of International Mediation Processes in the Middle East and the former Yugoslavia
Gerner, Deborah J.
Schrodt, Philip A.

Uploaded 06-28-2001
Keywords mediation
event data
Middle East
Abstract This paper discusses a new National Science Foundation-funded project that will examine the dynamics of third-party international mediation using statistical time-series analyses of political event data. Third-party mediation was attempted in over half of the conflicts in the post-WWII period and it is likely that the use of mediation has increased following the end of the Cold War. Surprisingly, there have been few systematic studies on mediation. Those that do exist have generally focused on relatively static contextual factors such as the conflict's attributes and the prior relationship between the mediator and protagonists, rather than on dynamic factors, both contextual and process-related, that may contribute to the success or failure of mediation activities. In contrast, the extensive qualitative literature provides numerous hypotheses about dynamic aspects of mediation. This, however, primarily consists of case studies, often by mediation practitioners, that exhibit little cumulation and, when taken as a whole, are rife with contradictory assertions. The project will formally test a number of the hypotheses embedded in the theoretical and qualitative literatures on mediation, using automated coding of event data from news-wire sources and employing time-series and event-history methods. A system of specialized event codes that are sensitive to mediation activities will be developed, then events will be coded from news reports using the TABARI machine coding program. The research will look at the factors that influence (1) whether mediation is accepted by the parties in a conflict, (2) whether formal agreements are reached, and (3) whether the agreements actually reduce the level of conflict. The project will initially focus on conflicts in the Middle East, a region where the principal investigators have substantial field experience.
After refining the statistical tests on the Middle East case, the analysis will be extended to event data on conflicts in the former Yugoslavia and West Africa. The paper presents the results of an empirical "plausibility probe" based on existing WEIS-coded event data for the Levant and the former Yugoslavia. It employs a simple measure of third-party mediation efforts as the independent variable and Goldstein-scaled cooperation as the dependent variable. In the Levant, we find a weak but consistent pattern of mediation correlating with past conflictual activity, and resulting in later increases in cooperation. In the former Yugoslavia, the analysis shows strikingly different results for the mediation efforts of the UN, European states, and the US. All three respond to increased conflict, but the UN efforts correlate with greater conflict, the US efforts with greater cooperation, and the European efforts have no effect. These results are consistent with many of the qualitative assessments of these efforts, and suggest that the event data approach will produce credible results.

A New Look at Cold War Presidents' Use of Force: Aggregation Bias, Truncation, and Temporal Dynamic Issues
Mitchell, Sara McLaughlin
Moore, Will H.

Uploaded 09-07-2000
Keywords aggregation bias
truncation bias
use of force
Abstract This study re-examines the findings reported in a seminal study of US presidents' use of force during the Cold War (Ostrom and Job 1986). We identify three methodological issues that affect inferences drawn in studies of presidential decisions to use force: aggregation, truncation, and dynamics. We suggest that a dichotomous measure of uses of force introduces aggregation bias, while the decision to examine only major uses of force introduces truncation bias. To address these issues, we compare two types of use of force measures (dichotomous and event count), in addition to comparing results for major, minor, and all uses of force. In addition, we argue that Ostrom and Job's focus on rivalry leads one to anticipate the presence of temporal dependence or dynamics in the use of force series. We estimate a Poisson Autoregressive (PAR) model proposed by Brandt and Williams (2000), which is able to account for temporal dynamics in an event count model. Our findings demonstrate the importance of these three methodological issues. Results of the PAR model show dynamics in the use of force series. We also find that international variables have a larger substantive effect on the president's decision to use force than do the domestic variables. Our study thus overturns the most dramatic finding reported in the Ostrom and Job study, a finding that we contend was driven by bias and model specification problems.

What to Do (and Not Do) With Dynamic Panel Data in Political Science (with apologies to Beck and Katz)
Wawro, Gregory

Uploaded 07-16-2000
Keywords dynamic panel data models
lagged endogenous variables
GMM estimators
party identification
campaign finance
Abstract Panel data is a very valuable resource for finding empirical solutions to political science puzzles. Yet numerous published studies in political science that use panel data to estimate models with dynamics have failed to take into account important estimation issues which call into question the inferences we can make from these analyses. Simply put, the failure to account explicitly for unobserved individual effects in panel data leads to inconsistent estimates of parameters of interest. The fundamental requirement for consistency of parameter estimates---that the explanatory variables in a regression equation must be uncorrelated with the disturbance term---is not met unless individual specific effects are adequately accounted for. Dynamic panel data estimators that eliminate this problem have become fairly standard in the economics literature. The purpose of this paper is to introduce these methods to political scientists. First, I show how the problem of inconsistency arises in dynamic panel data. I then show how to correct for this problem using generalized method of moments (GMM) estimators. I then demonstrate the usefulness of these methods with replications of published analyses.
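The inconsistency described above, and the instrumental-variables idea behind the GMM estimators, can be seen in a small simulation. This sketch uses the Anderson-Hsiao first-difference estimator, a simple precursor of the GMM estimators the paper discusses, with invented parameter values:

```python
import random
random.seed(1)

rho = 0.5
N, T = 2000, 8

# Simulate a dynamic panel: y_it = rho * y_{i,t-1} + alpha_i + eps_it,
# where alpha_i is an unobserved individual effect
panels = []
for i in range(N):
    alpha = random.gauss(0, 1)
    y = [alpha / (1 - rho)]            # start each unit near its own mean
    for t in range(1, T):
        y.append(rho * y[-1] + alpha + random.gauss(0, 1))
    panels.append(y)

# Pooled OLS of y_t on y_{t-1} is inconsistent: alpha_i sits in the error
# term and is correlated with the lagged dependent variable
num = den = 0.0
for y in panels:
    for t in range(1, T):
        num += y[t - 1] * y[t]
        den += y[t - 1] ** 2
rho_ols = num / den

# Anderson-Hsiao: first-difference away alpha_i, then instrument the
# endogenous lagged difference dy_{t-1} with the lagged level y_{t-2}
num = den = 0.0
for y in panels:
    for t in range(2, T):
        dy = y[t] - y[t - 1]
        dy_lag = y[t - 1] - y[t - 2]
        num += y[t - 2] * dy
        den += y[t - 2] * dy_lag
rho_iv = num / den

print(round(rho_ols, 3), round(rho_iv, 3))
```

Pooled OLS on the levels overstates the autoregressive parameter, while the first-difference IV estimate recovers something close to the true value of 0.5.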

Is Ticket Splitting Strategic? Evidence from the 1998 Election in Germany
Gschwend, Thomas

Uploaded 04-20-2000
Keywords ticket splitting
strategic voting
Abstract Germany provides an especially interesting case for the study of strategic voting because it uses a two-ballot system on Election Day. Voters are encouraged to split their votes using different strategies. The paper is an example of how much more can be learned if we reconsider and refine our theories. I provide a first step towards a theory of strategic voting and add it to the typical ticket splitting discussion. In order to test more refined hypotheses about ticket splitting and strategic voting I use cross-sectional data from the German National Post Election Study of 1998. Empirically, the results indicate that strategic voters are different from ordinary ticket splitters. Evidence from separate MNP estimation for East and West Germany shows that identifiers with the FDP or the Greens are more likely to be strategic voters than non-strategic ticket splitters. Non-strategic ticket splitters in East Germany do not feel close to any political party. In West Germany non-strategic ticket splitters have conflicting party preferences. Thus, it proves useful to separate out strategic voters from ordinary ticket splitters in future work.

The Initiative as a Catalyst for Policy Change
Boehmke, Frederick

Uploaded 03-08-1999
Keywords Initiative
political theory
event history analysis
Abstract In this paper I develop and test a theoretical model of the role that the initiative process plays in shaping policy outcomes. My model builds on Gerber (1996) by introducing uncertainty over the median voter's ideal point and by allowing the interest group to lobby the legislature before a potential initiative is proposed. Successful lobbying may occur due to the uncertainty over the outcome of an initiative. Besides the possibility of lobbying, the results differ from Gerber's since proposal of an initiative is an equilibrium outcome for certain parameter values. I then turn to an event history analysis of state lottery adoptions to test the model's prediction that the initiative process should make it more likely that a state adopts a lottery. This is related to work by Berry and Berry (1990). The empirical hypothesis is found to be supported in the post-1980 period, which I believe is a result of the well-documented resurgence in the initiative's use after California's Proposition 13 in 1978. An indirect effect of the initiative in non-initiative states is also found through the importance of neighbors' adoptions. This confirms the view that initiative states are often policy leaders, which I argue may lead to less effective policy choices since they have less information about how to implement them.

Time Series Models for Compositional Data
Brandt, Patrick T.
Monroe, Burt L.
Williams, John T.

Uploaded 07-09-1999
Keywords compositional data
time series analysis
Monte Carlo simulation
Abstract Who gets what? When? How? Data that tell us who got what are compositional data - they are proportions that sum to one. Political science is, unsurprisingly, replete with examples: vote shares, seat shares, budget shares, survey marginals, and so on. Data that also tell us when and how are compositional time series data. Standard time series models are often used, to detrimental consequence, to model compositional time series. We examine methods for modeling compositional data generating processes using vector autoregression (VAR). We then use such a method to reanalyze aggregate partisanship in the United States.
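One standard device for modeling compositional series, in the spirit of the approach above, is the additive log-ratio transform, which maps proportions onto unconstrained series that ordinary VAR machinery can handle and then maps fitted values back onto the simplex. The vote-share numbers below are invented:

```python
import math

# Three-party vote shares over time (each row sums to one): invented data
shares = [
    [0.48, 0.42, 0.10],
    [0.46, 0.43, 0.11],
    [0.45, 0.45, 0.10],
]

# Additive log-ratio (alr) transform: log(p_j / p_K) for j < K.
# The transformed series are unconstrained, so standard time series
# models (e.g., a VAR) can be fit to them without violating the
# sum-to-one constraint.
def alr(p):
    return [math.log(pj / p[-1]) for pj in p[:-1]]

# Inverse transform: recover proportions that sum to one by construction
def alr_inv(z):
    expz = [math.exp(v) for v in z] + [1.0]
    s = sum(expz)
    return [v / s for v in expz]

transformed = [alr(p) for p in shares]
recovered = [alr_inv(z) for z in transformed]

print(all(abs(sum(p) - 1.0) < 1e-12 for p in recovered))
```

The round trip is exact, so any model fit in log-ratio space yields forecasts that are guaranteed to be valid compositions.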

Poisson-Normal Dynamic Generalized Linear Mixed Models of U.S. House Campaign Contributions
Mebane, Walter R.
Wand, Jonathan

Uploaded 07-11-1999
Keywords GLMM
count model
campaign contributions
campaign finance
U.S. House of Representatives
1984 election
Abstract We develop generalized linear mixed models to analyze itemized contributions to U.S. House campaigns. Our basic model is a system of Poisson processes that have means that are log-linear functions of normally distributed random effects. Our model permits multiple random effects, including serially correlated effects. The mixed model specification involves an integration over the random effects that is analytically intractable. When there is only one, serially independent random effect, the model may be estimated using quadrature to evaluate the integral. With multiple random effects, quadrature is infeasible but the model may be estimated using the Monte Carlo EM (MCEM) algorithm proposed by McCulloch (1997). We illustrate these various estimation methods. The system we analyze includes contributions to Democratic and Republican candidates from different sources, including individuals and PACs. We estimate dynamic effects both within and across contributions series. The cross-series dynamics measure how contributions to one candidate react to contributions to the other. The cross-series dynamics also measure how contributions to a candidate from one source can mobilize contributions from other sources. We use a combination of observed variables and random effects to test the hypothesis of dynamic mobilization against several hypotheses that imply constant differences between candidates and between districts. One such hypothesis is that some candidates received persistently higher contributions from all sources because of PAC endorsements. Another is that some candidates are simply better at raising money than others. We also test how national expectations about presidential election outcomes affect contributions. We apply our model to itemized contributions data for open seat races in the 1984 election.

Campaign Timing and Vote Determinants
Peterson, David A.M.

Uploaded 09-07-1999
Keywords Campaign effects
hierarchical models
random coefficient
Markov chain Monte Carlo
Abstract Questions about the role of campaigns in making different considerations more important for voters have been central to the study of political behavior for fifty years (Lazarsfeld et al 1948). The basic concern is whether the information presented during the campaign alters how voters evaluate and choose between candidates. This paper develops a random coefficient or hierarchical logit model to analyze the 1984 NES Continuous Monitoring Survey. The specification treats the effect of partisanship, policy distance and candidate character traits as a function of the campaign timing. Of the theories tested in this paper, the attitude strength model best predicts the changes in vote determinants across the campaign.

Strategic Allocation of Party Money
Glasgow, Garrett

Uploaded 03-17-1998
Keywords none submitted
Abstract This paper models and empirically examines the strategic behavior of American political parties in House elections. In each election cycle, political parties must decide which candidates competing for seats will receive aid from party coffers. Parties pursue two goals when distributing funds; they wish to maximize their representation in the House by winning seats, and they wish to fill those seats with candidates who are likely to vote in a way that furthers the party's goals. I assume that in each district both the Republicans and the Democrats pursue these goals by sending a signal about the strength of their support for their respective candidates. This signal takes the form of monetary contributions to a candidate's campaign -- the greater the contribution, the stronger the signal of support for the candidate. These signals attract other contributions to the candidate receiving party support, as other investors in candidates regard the party's signal as credible evidence that the candidate is of high quality. These added contributions increase the likelihood of the candidate winning the seat. Private investors wish to contribute to candidates who already have a good chance of winning the election, as contributions to winning candidates give them access to a legislator, while contributions to a losing candidate are lost. Parties know that they can improve the chances of victory for a candidate by offering aid and thus sending a signal of support, but incentives to win a maximal number of seats and reward party loyalists are balanced with the desire to maintain a credible signal. As party resources are not infinite, each party would like to distribute its limited resources in such a way as to maximize the chances of achieving its goals. A game-theoretic model that captures the decision problem faced by each party is developed and empirically tested. 
I conclude that parties do not always distribute funds in a rational manner (in particular, too many resources are allocated to incumbents in secure districts), but as electoral outcomes in a particular district become uncertain, parties begin to adhere to the predictions of the game-theoretic model much more closely.

A Random Effects Approach to Legislative Ideal Point Estimation
Bailey, Michael

Uploaded 04-21-1998
Keywords ideal points
random effects models
Bayesian estimation
em algorithm
Abstract Conventionally, scholars use either standard probit/logit techniques or fixed-effect methods to estimate legislative ideal points. However, these methods are unsatisfactory when a limited number of votes are available: standard probit/logit methods are poorly equipped to handle multiple votes and fixed-effect models disregard serious "incidental parameter" problems. In this paper I present an alternative approach that moves beyond single-vote probit/logit analysis without requiring the large number of votes needed for fixed-effects models. The method is based on a random effects, panel logit framework that models ideal points as stochastic functions of legislator characteristics. Monte Carlo results and an application to trade politics demonstrate the practical usefulness of the method.

A Moment-Matching Approach to Maximum Likelihood Estimation of the Beta Distribution
Paolino, Philip

Uploaded 07-11-1998
Keywords maximum-likelihood
beta distribution
Monte Carlo
Abstract The beta distribution is a flexible distribution that can produce a uniform, unimodal, or bimodal distribution of points that can be either symmetric or skewed, but because the two shape parameters in a standard beta distribution do not correspond to the mean and the variance of the distribution, it is not obvious how one tests for the statistical significance of independent variables upon the mean or variance. In this paper, I will first discuss a "standard" approach to this problem as well as develop a "moment-matching" approach. Second, I will use Monte Carlo simulations to examine how well these approaches reproduce the true values of the function given different sample sizes and conditions. Third, I will present some empirical results using the moment-matching approach and compare these results with those obtained from the "standard" approach. From this work, I conclude that while the "moment-matching" approach produces reasonable estimates under the most common situations, the "standard" approach, using a Wald test to evaluate statistical significance, generally outperforms the "moment-matching" approach. As such, while the "moment-matching" approach has the attractive feature of allowing the researcher to estimate a variable's effect upon the mean or variance directly, its use is probably limited to instances where the researcher has a very good reason for wanting to constrain certain parameters to having zero effect upon either the mean or variance.
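A minimal sketch of the moment-matching idea: reparameterize the beta density by its mean and precision, so that effects on the mean can be estimated and tested directly. The simulated data and the crude grid search below are stand-ins for a real data set and a real optimizer:

```python
import math
import random
random.seed(2)

# Simulate Beta(a=3, b=7) draws, i.e. mean mu = 0.3 and precision phi = 10
data = [random.betavariate(3, 7) for _ in range(400)]
n = len(data)
S1 = sum(math.log(x) for x in data)       # sufficient statistics
S2 = sum(math.log(1 - x) for x in data)

# Moment-matching reparameterization: a = mu*phi, b = (1-mu)*phi, so the
# likelihood is indexed by the mean and precision rather than the two
# shape parameters
def loglik(mu, phi):
    a, b = mu * phi, (1 - mu) * phi
    return (n * (math.lgamma(phi) - math.lgamma(a) - math.lgamma(b))
            + (a - 1) * S1 + (b - 1) * S2)

# Crude grid-search MLE over (mu, phi); a real application would use a
# numerical optimizer and let mu depend on covariates
mu_hat, phi_hat = max(
    ((m / 100, p) for m in range(5, 96) for p in range(2, 31)),
    key=lambda pair: loglik(*pair),
)
print(mu_hat, phi_hat)
```

With covariates, one would model mu (and optionally phi) as functions of the regressors, making a coefficient's effect on the mean directly testable, which is the attraction of the approach the abstract describes.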

Nonnested Model Testing for World Politics: Assessing Binary Choice Models
Clarke, Kevin A.

Uploaded 08-17-1998
Keywords nonnested hypothesis testing
Cox test
model selection
Abstract The major goal of this project is to introduce and develop a methodology of nonnested hypothesis testing that researchers in world politics will find useful. I make use of both the Cox test for nonnested hypotheses and the Vuong test for nonnested model selection. I argue for a sequential approach where the Vuong test will be used depending upon the outcome of the Cox test. In keeping with the goal of making this methodology useful for world politics research, I discuss both tests in the context of binary choice models, specifically probits. I apply the methodology developed to the problem of testing alternative models of the escalation of great power militarized disputes.
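The Vuong statistic for two non-nested models reduces to a standardized mean of per-observation log-likelihood differences. The sketch below uses simulated data and one-parameter logits rather than the probits of the paper, purely to keep it short and dependency-free:

```python
import math
import random
random.seed(3)

def logistic(v):
    return 1 / (1 + math.exp(-v))

# Simulated binary data: y is driven by x1, while x2 is a noisy copy of
# x1, so a logit in x1 and a logit in x2 are rival non-nested models
n = 500
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [v + random.gauss(0, 1.5) for v in x1]
y = [1 if random.random() < logistic(2.0 * v) else 0 for v in x1]

def pointwise(x, b):
    """Per-observation log-likelihood of a one-parameter logit, slope b."""
    return [math.log(logistic(b * xi)) if yi else math.log(1 - logistic(b * xi))
            for xi, yi in zip(x, y)]

def fit(x):
    """Grid-search MLE for the slope (stands in for a real optimizer)."""
    return max((b / 10 for b in range(-40, 41)),
               key=lambda b: sum(pointwise(x, b)))

b1, b2 = fit(x1), fit(x2)

# Vuong statistic: standardized mean of the pointwise log-likelihood
# differences between the two fitted models
m = [u - v for u, v in zip(pointwise(x1, b1), pointwise(x2, b2))]
mbar = sum(m) / n
s = (sum((v - mbar) ** 2 for v in m) / n) ** 0.5
vuong_z = n ** 0.5 * mbar / s
print(round(vuong_z, 2))
```

A statistic above +1.96 favors the first model at the 5% level and one below -1.96 favors the second; here the model using the true covariate should come out ahead.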

Pooling Disparate Observations
Bartels, Larry M.

Uploaded 01-01-1995
Keywords induction
statistical inference
Bayesian statistics
fractional pooling
Abstract Data analysts frequently face difficult choices about whether to pool disparate observations in their statistical analyses. I explore the inferential ramifications of such choices, and propose a new technique, dubbed "fractional pooling," which provides a simple way to incorporate prior beliefs about the theoretical relevance of disparate observations. The technique is easy to implement and has a plausible rationale in Bayesian statistical theory. I illustrate the potential utility of fractional pooling by applying the technique to political data originally analyzed by Ashenfelter (1994), Powell (1982), and Alesina et al. (1993). These examples demonstrate that conventional approaches to analyzing disparate observations can be seriously misleading, and that the approach proposed here can enrich our understanding of the inferential implications of unavoidably subjective judgments about the theoretical relevance of available data.
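For a normal linear regression, raising a disparate observation's likelihood to a fractional power lam is equivalent to giving it weight lam in weighted least squares, which makes the idea easy to sketch. The data below are invented, and the single-regressor setup is a simplification:

```python
# Fractional pooling sketch: weight each disparate observation's
# contribution by lam in [0, 1]. For normal regression this amounts to
# weighted least squares with weight lam on the disparate cases.
def wls_slope(x, y, w):
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    num = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    return num / den

# A core sample plus two disparate observations the analyst half-trusts
x = [1, 2, 3, 4, 5, 10, 11]
y = [1.1, 2.0, 2.9, 4.2, 5.0, 4.0, 4.5]
for lam in (0.0, 0.5, 1.0):
    w = [1, 1, 1, 1, 1, lam, lam]
    print(lam, round(wls_slope(x, y, w), 3))
```

Setting the weight to 0 discards the disparate cases, 1 pools them fully, and intermediate values interpolate between the two fits, which is the inferential continuum the abstract describes.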

Steps Towards a Political Science of Compliance: Common Insights and Recurring Problems
Brehm, John

Uploaded 00-00-0000
Keywords compliance
formal models
review essay
Abstract Compliance problems permeate politics at many levels: between individuals, between individuals and states, and between states. Although empirical work on compliance problems blossoms at all levels, less effort has been made to unify the empirical work on compliance than the effort devoted towards formal theoretic approaches. This essay begins by identifying examples of the different levels of compliance, and begins speculation about the common findings across levels, and the regular research obstacles at each level.

Macropartisanship: A Replication and Critique
Palmquist, Bradley
Green, Donald P.
Schickler, Eric

Uploaded 07-11-1996
Keywords partisanship
presidential approval
time series
Abstract This paper reevaluates the thesis of MacKuen, Erikson, and Stimson (1989, 1992) that aggregate party identification balance (macropartisanship) shifts significantly over short periods of time in response to changes in presidential popularity and consumer sentiment. The data originally used by MacKuen, et al. were a sample of the complete set of Gallup polls available from 1953 to 1988. Because their data are no longer extant, and to make use of more information, we analyze party id data from 677 personal and 305 telephone Gallup polls, aggregated quarterly from 1953 to 1995. Comparisons are also made with analyses from CBS/New York Times data. As well as attempting to replicate the MacKuen, et al. results (an attempt which is not entirely successful, perhaps because of the data differences), we develop a more flexible and parsimonious time series model linking approval, consumer sentiment, and macropartisanship. The estimates obtained lead to the conclusion that macropartisanship adjusts to short-term shocks in a limited and gradual fashion. These shifts are not large enough to call into question the traditional views of realignment and the stabilizing role that party identification plays in a party system.

Conditional Party Government And Member Turnout On Senate Recorded Votes, 1873-1935
Sala, Brian R.
Forgette, Richard

Uploaded 12-29-1997
Keywords rational voter model
time series
roll-call voting
Abstract According to the conditional party government thesis, party members bond or precommit themselves to supporting "party" positions under certain circumstances. A test of this thesis asks whether party members are more likely to participate in a roll call vote when the question has been identified by party leaders as important to the party. We show that party leadership signals systematically affected member turnout levels in the U.S. Senate during 1873-1935. On average, two-party turnout on party-salient votes rose by more than five members during 1873-1923 and more than three members during 1923-35 relative to "non-salient" votes. These results also provide evidence of cohesive partisan behavior in the Senate well before the parties began the regular practice of designating floor leaders and whips.

Non-Compulsory Voting in Australia?: what surveys can (and can't) tell us
Jackman, Simon

Uploaded 08-25-1997
Keywords turnout
Australian politics
compulsory voting
political participation
measurement error
social-desirability heuristic
question-order effects
parametric bootstrap
Abstract Compulsory voting has come under close scrutiny in recent Australian political debate, and influential voices within the (conservative) Coalition government have called for its repeal. Conventional wisdom holds that a repeal of compulsory voting would result in a sizeable electoral boost for the Coalition; the proportion of Coalition voters who would not vote is thought to be smaller than the corresponding proportion of Labor voters. But estimates of Coalition gains under a return to voluntary turnout are quite rough-and-ready, relying on methods hampered by critical shortcomings. In this paper I focus on assessing the counter-factual of non-compulsory turnout via surveys: while turnout is compulsory in Australia, responding to surveys isn't, and the problems raised by high rates of non-response are especially pernicious in attempting to assess the counter-factual of voluntary turnout. Among survey respondents, social-desirability and question-order effects also encourage over-reports of the likelihood of voluntarily turning out. Taking non-response and measurement error into consideration, I conclude that survey-based estimates (a) significantly under-estimate the extent to which turnout would decline under a voluntary turnout regime; but (b) over-estimate the extent to which a fall in turnout would work to the advantage of the Coalition parties. Nonetheless, the larger of the Coalition parties --- the Liberal Party --- unambiguously increases its vote share under a wide range of assumptions about who does and doesn't voluntarily turn out.

The "Miracle" Revisited: An Examination of The Micro-Foundations of Aggregate Public Opinion
Berinsky, Adam

Uploaded 08-18-1997
Keywords public opinion
heteroskedastic probit
ordered probit
selection bias
item non-response
Abstract One of the best-known findings in the public opinion literature is that individual responses to survey questions, by and large, both exhibit little constraint and are highly unstable over time. One response to this bleak finding has been to search for coherence and stability at the aggregate level. Scholars who adopt this approach -- most notably Page and Shapiro (1992) -- argue that though most individuals are poorly informed about politics and may have unstable attitudes, the "miracle [abstract truncated]

Modeling Multilevel Data Structures
Jones, Bradford S.
Steenbergen, Marco R.

Uploaded 07-17-1997
Keywords multilevel models
random coefficients
contextual analysis
Abstract Although integrating multiple levels of data into an analysis can often yield better inferences about the phenomenon under study, traditional methodologies used to combine multiple levels of data are problematic. In this paper, we discuss several methodologies under the rubric of multilevel analysis. Multilevel methods, we argue, provide researchers, particularly researchers using comparative data, substantial leverage in overcoming the typical problems associated with either ignoring multiple levels of data, or problems associated with combining lower-level and higher-level data (including overcoming implicit assumptions of fixed and constant effects). The paper discusses several variants of the multilevel model and provides an application to individual-level support for European integration using comparative political data from Western Europe.

A New Approach for Modeling Strategic Voting in Multiparty Systems
Alvarez, R. Michael
Nagler, Jonathan

Uploaded 04-04-1997
Keywords multinomial probit
strategic voting
Abstract Whether voters vote strategically, using their vote to best further their interests, or vote sincerely, voting for their first choice among the alternatives, is a question of longstanding interest. We offer two innovations in searching for the answer to this question. First, we begin with a more consistent model of sincere voting in multiparty democratic systems than has been presented in the literature to date. Second, we incorporate new operationalizations of the objective potential for strategic behavior beyond those used in the past. We offer a test of strategic voting in the 1987 British General Election based on the variance in strategic setting across constituencies in Britain. Prepared for presentation at the Annual Meeting of the Midwest Political Science Association. This is one of many papers by the authors; the ordering of names reflects alphabetic convention. We thank Jonathan Katz and Guy Whitten for supplying helpful data for this project. We also thank Gary Cox and Jonathan Katz for discussions of this subject. Last, we thank Shaun Bowler for his work with us on a related project.

Voting cycles and institutional paradoxes: a model of partisan control and change in state politics
Brierly, Allen

Uploaded 11-05-2004
Keywords EITM
election and voting cycles
measurement of political party competition
state elections
Abstract This study applies a formal model of political competition to analyze partisan control and changes in partisan control of state government. The analysis is a straightforward application of both traditional theories of political parties and a social choice understanding of the role agenda setting plays in electoral competition. The models incorporate the traditional classification and estimation of party competition, while extending the more formal analysis of agenda setting to duopoly competition in a long-run electoral context. The findings synthesize a variety of recent and traditional hypotheses concerning state politics, governance, and elections. The results describe the extent and scope of divided government and compare the stability of unified versus divided partisan control. Theories of party change are also incorporated in the model to test the stability of partisan control and to classify different types of political competition. This study presents both a description and a discussion of the arguments for competition, linking the merits of increasing competition to the consequences of unstable party changes and divided partisan control.

Heterogeneity in Supreme Court Decision-Making: How Case-Level Factors Alter Preference-Based Behavior
Bartels, Brandon

Uploaded 07-19-2005
Keywords Supreme Court decision-making
multilevel modeling
Abstract Many theoretical perspectives of Supreme Court decision-making, most notably the attitudinal model, assume that justices’ policy preferences exhibit a uniform impact on their decisions across a wide variety of situations. I argue that there exists meaningful heterogeneity in the impact of policy preferences that can be explained theoretically and tested empirically. Adopting social psychological insights from theories of the attitude-behavior relationship, I develop a theoretical framework specifying the mechanisms--attitude strength and accountability--that explain variation in the preference-behavior relationship for justices. Case-level factors associated with each mechanism are hypothesized to moderate the impact of preferences. To test the hypotheses, I use a multilevel (hierarchical) modeling framework and conceive of Supreme Court voting data from the 1994-2002 terms as a two-level hierarchy: justices’ choices nested within cases. Estimates from a random coefficient model indicate that case-level variables associated with both attitude strength and accountability systematically explain variation in the preference-behavior relationship. Using an average partial effects post-estimation procedure, I present in-depth substantive interpretations of the results that highlight the compelling ways in which these case-level factors alter the nature of preference-based behavior. In addition to providing important substantive conclusions about Supreme Court decision-making, the paper also illustrates how a multilevel modeling framework is well-qualified to test heterogeneity-related hypotheses in social and behavioral processes.

Higher-Dimension Markov Models
Epstein, David
O'Halloran, Sharyn

Uploaded 07-18-2005
Keywords Markov models
Abstract Markov transition models are becoming a popular tool for exploring the dynamics of systems that can take on a finite number of states. However, their application in political science has thus far been limited to the two-state case. This paper explains the techniques necessary to estimate and interpret higher-dimension Markov models. We then apply them to the study of democratic transitions, where we find that a three-state model including an intermediary "partial democracy" category outperforms the previous two-state model of Przeworski, et al. (2000).
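The maximum-likelihood backbone of a finite-state Markov transition model can be sketched in a few lines: the MLE of each row of the transition matrix is just the row-normalized transition counts. The three-state regime sequences below are invented for illustration; the paper's models are richer than this count-based sketch.

```python
# Hypothetical regime codes: 0 = autocracy, 1 = partial democracy, 2 = democracy.
sequences = [
    [0, 0, 1, 1, 2, 2, 2],
    [0, 1, 0, 0, 1, 2],
    [2, 2, 1, 1, 2],
]

K = 3
counts = [[0] * K for _ in range(K)]
for seq in sequences:
    for a, b in zip(seq, seq[1:]):   # consecutive (from, to) transitions
        counts[a][b] += 1

# MLE of the transition matrix: normalize each row of counts.
P = []
for row in counts:
    total = sum(row)
    P.append([c / total for c in row])

print(P)   # each row sums to 1
```

With these sequences, for example, state 0 is followed by state 0 in 2 of 5 observed transitions, so P[0][0] = 0.4.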

Fitting Multilevel Models When Predictors and Group Effects Correlate
Bafumi, Joseph

Uploaded 04-27-2006
Keywords Multilevel models
random effects
fixed effects
unit effects
group effects
Abstract Random effects models (that is, regressions with varying intercepts that are modeled with error) are avoided by some social scientists because of potential issues with bias and uncertainty estimates. In particular, when one or more predictors correlate with the group or unit effects, a key Gauss-Markov assumption is violated and estimates are compromised. However, this problem can easily be solved by including the average of each individual-level predictor in the group-level regression. We explain the solution, demonstrate its effectiveness using simulations, show how it can be applied in some commonly-used statistical software, and discuss its potential for substantive modeling.
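The proposed fix — adding the group mean of each individual-level predictor as a regressor — can be illustrated with a small simulation. All data-generating numbers here are invented; the point is only that the naive pooled slope is biased when group effects correlate with x, while controlling for the group mean recovers the true slope.

```python
import random

def ols(X, y):
    # Least squares via the normal equations (Gaussian elimination).
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            for c in range(i, k):
                A[j][c] -= f * A[i][c]
            b[j] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

random.seed(1)
rows, ys = [], []
for _ in range(50):                          # 50 groups of 20 units
    xbar = random.gauss(0, 1)                # group-level mean of x
    u = 2.0 * xbar + random.gauss(0, 0.3)    # group effect correlated with x
    for _ in range(20):
        x = xbar + random.gauss(0, 1)
        rows.append((x, xbar))
        ys.append(1.0 * x + u + random.gauss(0, 0.5))   # true slope = 1

naive = ols([[1.0, x] for x, _ in rows], ys)[1]        # biased upward
fixed = ols([[1.0, x, xb] for x, xb in rows], ys)[1]   # near the true 1
print(round(naive, 2), round(fixed, 2))
```

In practice one would fit the same specification with standard multilevel software rather than hand-rolled OLS; the sketch only isolates the logic of the correction.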

Is It Worth Going the Extra Mile to Improve Causal Inference? Understanding Voting in Los Angeles County
Brady, Henry E.
Hui, Iris

Uploaded 07-19-2006
Keywords Counterfactual
Abstract Two seemingly unrelated approaches to quantitative analysis have recently become more popular in social science applications. The first approach is the explicit consideration of counterfactuals in causal inference and the development of various matching techniques to choose control cases comparable to treated cases in terms of some predefined characteristics. To be useful, these methods require the identification of important characteristics that are likely to ensure that a statistical condition called “conditional independence” is met. The second trend is the increased attention given to geography and the use of spatial statistics. Although these two approaches have found their way into social science research separately, we think that they can be fruitfully combined. Geography and Geographic Information Systems (GIS) can improve matching and causal inference. Geography can be conceptualized in terms of “distance” and “place” which can provide guidance about potentially important characteristics that can be used to improve matching. After developing a conceptual framework that shows how this can be done, we present two empirical examples which combine counterfactual thinking with geographical ideas. The first example looks at the cost of voting and demonstrates the utility of matching using zip codes and distance to polling place. The second example looks at the performance of the InkaVote voting system in Los Angeles by matching precincts in LA with geographically adjacent precincts in surrounding counties. This example demonstrates the strengths and weaknesses of geographic proximity as a matching variable. In pursuing these examples, we also show how recent progress in GIS techniques provides tools that can deepen researchers’ understanding of their ideas.

Forecasting House Seats from Generic Congressional Polls
Bafumi, Joseph
Erikson, Robert S.
Wlezien, Christopher

Uploaded 10-25-2006
Keywords generic ballot polls
2006 midterm election
congressional seats
Abstract We provide some guidance for translating the results of generic congressional polls into the election outcome for 2006. Via computer simulation based on statistical analysis of historical data, we show how generic vote polls can be used to forecast the election outcome. We convert the results of generic vote polls into a projection of the actual national vote for Congress and ultimately into the partisan division of seats in the House of Representatives. Our model allows both a point forecast—our expectation of the seat division between Republicans and Democrats—and an estimate of the probability of partisan control. Based on current generic ballot polls, we forecast an expected Democratic gain of 32 seats with Democratic control (a gain of 15 seats or more) a near certainty.
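The general simulation logic — apply a poll-implied national swing plus random shocks to district-level vote shares, then count seats across many draws — can be sketched as below. The district shares, swing, and error variances are invented for illustration and are not the authors' estimates.

```python
import random

random.seed(42)
# Invented district-level Democratic vote shares from a baseline election.
districts = [0.42, 0.48, 0.51, 0.55, 0.61, 0.38, 0.53, 0.47]
poll_swing = 0.03    # assumed national swing implied by generic-ballot polls

sims = 5000
seat_draws = []
for _ in range(sims):
    shock = random.gauss(poll_swing, 0.02)              # national uncertainty
    seats = sum(v + shock + random.gauss(0, 0.03) > 0.5  # district-level noise
                for v in districts)
    seat_draws.append(seats)

expected_seats = sum(seat_draws) / sims                  # point forecast
prob_majority = sum(s > len(districts) / 2 for s in seat_draws) / sims
print(round(expected_seats, 1), round(prob_majority, 2))
```

The two summaries mirror the paper's outputs: an expected seat division and a probability of majority control.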

Partisans without constraint: Political polarization and trends in American public opinion
Gelman, Andrew
Baldassarri, Delia

Uploaded 07-06-2007
Keywords issue alignment
Abstract Political polarization is commonly measured using the variation of responses on an individual issue. By this measure, research has shown that---despite many commentators' concerns about increased polarization---Americans' attitudes have become no more variable in recent decades. What has changed in the electorate is its level of partisanship. We define a new measure of political polarization as increased correlations in political attitudes and we distinguish between issue partisanship---the correlation of issue attitudes with party ID or ideology---and issue alignment---the correlation between pairs of issues. Using the National Election Studies (1972-2004), we find issue alignment to have increased by only 2 percentage points in correlation per decade. Issue partisanship has increased more than twice as fast, thus suggesting that changes in people's attitudes correspond more to a re-sorting of party labels among voters than to greater constraint on issue attitudes. Since parties are more polarized, they are now better at sorting individuals along ideological lines. Increased issue partisanship, in a context of persistently low issue constraint, might give greater voice to political extremists and single-issue advocates, and amplify dynamics of unequal representation.

Bargaining and Society: A Statistical Model of the Ultimatum Game
Signorino, Curtis

Uploaded 07-18-2007
Keywords bargaining
random utility models
Abstract In this paper we derive a statistical estimator for the popular Ultimatum bargaining game. Using Monte Carlo data generated by a strategic bargaining process, we show that the estimator correctly recovers the relationship between dependent variables, such as the proposed division and bargaining failure, relative to substantive variables that comprise players' utilities. We then use the model to analyze bargaining data in a number of contexts. The current example examines the effects of demographics on bargaining behavior in experiments conducted on U.S. and Russian participants.

The playing field shifts: predicting the seats-votes curve in the 2008 U.S. House election
Kastellec, Jonathan
Gelman, Andrew
Chandler, Jamie

Uploaded 06-01-2008
Keywords Congress
partisan bias
seats-votes curve
Abstract This paper predicts the seats-votes curve for the 2008 U.S. House elections. We document how the electoral playing field has shifted from a Republican advantage between 1996 and 2004 to a Democratic tilt today. Due to the shift in incumbency advantage from the Republicans to the Democrats, compounded by a greater number of retirements among Republican members, we show that the Democrats now enjoy a partisan bias, and can expect to win more seats than votes for the first time since 1992. While this bias is not as large as the advantage the Republicans held in 2006, it is likely to help the Democrats win more seats than votes and thus expand their majority.

Adjusting Experimental Data
Keele, Luke
McConnaughy, Corrine
White, Ismail

Uploaded 07-06-2008
Keywords Experiments
Abstract Randomization in experiments allows researchers to assume that the treatment and control groups are balanced with respect to all characteristics except the treatment. Randomization, however, only makes balance probable, and accidental covariate imbalance can occur for any specific randomization. As such, statistical adjustments for accidental imbalance are common with experimental data. The most common method of adjustment for accidental imbalance is to use least squares to estimate the analysis of covariance (ANCOVA) model. ANCOVA, however, is a poor choice for the adjustment of experimental data. It has a strong functional form assumption, and the least squares estimator is notably biased in sample sizes of less than 500 when applied to the analysis of treatment effects. We evaluate alternative methods of adjusting experimental data. We compare ANCOVA to two different techniques. The first technique is a modified version of ANCOVA that relaxes the strong functional form assumption of this model. The second technique is matching, and we test the differences between two matching methods. For the first, we match subjects and then randomize treatment across pairs. For the second, we randomize the treatment and match prior to the estimation of treatment effects. We use all three techniques with data from a series of experiments on racial priming. We find that matching substantially increases the efficiency of experimental designs.

Learning from the Campaign Context: Multivariate Matching with Exposure
Christenson, Dino

Uploaded 07-14-2008
Keywords multivariate matching
non-bipartite matching
signed rank test
sensitivity analysis
political information
presidential campaigns
Abstract PolMeth XXV poster.

Estimating and Bounding Mechanism Specific Causal Effects
Glynn, Adam

Uploaded 07-03-2008
Keywords counterfactuals
Abstract Political scientists often cite the importance of mechanism specific causal knowledge, both for its intrinsic scientific value and as a necessity for informed policy. However, outside the framework of additive linear regression models with homogenous causal effects, mechanism specific effects are, in general, not estimated explicitly. Counterfactual causal models allow the formal definition of such concepts as direct, indirect, and mechanism specific effects, and the derivation of conditions for their identification (point or interval). In this paper, I demonstrate the use of counterfactuals to decompose causal effects into mechanism specific effects, showing that estimation and bounding can be accomplished with minor adjustments to standard techniques. I illustrate this methodology with examples from American and Comparative Politics.

Statistical Inference After Model Selection
Berk, Richard
Brown, Lawrence
Zhao, Linda

Uploaded 04-29-2009
Keywords Statistical Inference
Model Selection
Abstract Conventional statistical inference requires that a model of how the data were generated be known before the data are analyzed. Yet in criminology, and in the social sciences more broadly, a variety of model selection procedures are routinely undertaken followed by statistical tests and confidence intervals computed for a "final" model. In this paper, we examine such practices and show how they are typically misguided. The parameters being estimated are no longer well defined, and post-model-selection sampling distributions are mixtures with properties that are very different from what is conventionally assumed. Confidence intervals and statistical tests do not perform as they should. We examine in some detail the specific mechanisms responsible. We also offer some suggestions for better practice.

Joint Modeling of Dynamic and Cross-Sectional Heterogeneity: Introducing Hidden Markov Panel Models
Park, Jong Hee

Uploaded 07-14-2009
Keywords Bayesian statistics
Hidden Markov models
Markov chain Monte Carlo methods
Reversible jump Markov chain Monte Carlo
Abstract Researchers working with panel data sets often face situations where changes in unobserved factors have produced changes in the cross-sectional heterogeneity across time periods. Unfortunately, conventional statistical methods for panel data are based on the assumption that the unobserved cross-sectional heterogeneity is time constant. In this paper, I introduce statistical methods to diagnose and model changes in the unobserved heterogeneity. First, I develop three combinations of a hidden Markov model with panel data models using the Bayesian framework: (1) a baseline hidden Markov panel model with varying fixed effects and varying random effects; (2) a hidden Markov panel model with varying fixed effects; and (3) a hidden Markov panel model with varying intercepts. Second, I present model selection methods to diagnose the dynamic heterogeneity using the marginal likelihood method and the reversible jump Markov chain Monte Carlo method. I illustrate the utility of these methods using two important ongoing political economy debates: the relationship between income inequality and economic growth, and the effect of institutions on income inequality.

How Much is Minnesota Like Wisconsin? States as Counterfactuals
Keele, Luke
Minozzi, William

Uploaded 07-10-2010
Keywords causal inference
voter turnout
placebo tests
research design
Abstract Political scientists are often interested in understanding whether state laws alter individual-level behavior. For example, states often alter their election procedures, which can increase or decrease the cost of voting. In this example, it is important to understand whether these changes alter turnout since changes in costs may disproportionately affect those at the margin of voting. Analysts have typically used one of two different regression based research designs to estimate whether changes in state laws increase or decrease turnout. In both instances, voters from states without a change in laws are used as counterfactuals for the voters who experience a change in election law. Here, we carefully examine the assumptions behind both research designs and study their plausibility. Next, we outline a series of research design elements that can be used in addition to the usual designs. These research design elements allow the analyst to better understand the role of unobserved confounders, which is obscured in standard research designs. Using these design elements, we demonstrate that what appears to be clear cut evidence from the usual research designs is often a function of confounding. We argue that to truly understand how changes in voting costs alter turnout, a different research design is required. Future work must rely on a research design that makes comparisons among voters who live within the same state. Our work has implications beyond turnout to any investigation of how state level treatments alter individual behavior.

A Statistical Method for Empirical Testing of Competing Theories
Imai, Kosuke
Tingley, Dustin

Uploaded 08-24-2010
Keywords EITM
finite mixture model
Bayesian statistics
multiple testing
false discovery rate
EM algorithm
Abstract Empirical testing of competing theories lies at the heart of social science research. We demonstrate that a very general and well-known class of statistical models, called finite mixture models, provides an effective way of testing rival theories. In the proposed framework, each observation is assumed to be generated from a statistical model implied by one of the theories under consideration. Researchers can then estimate the probability that a specific observation is consistent with either of the competing theories. By directly modeling this probability with the characteristics of observations, one can also determine the conditions under which a particular theory applies. We discuss a principled way to identify a list of observations that are statistically significantly consistent with each theory. Finally, we propose several measures of the overall performance of a particular theory. We illustrate the advantages of our method by applying it to an influential study on trade policy preferences.
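A minimal version of this idea — two rival "theories" implying normal outcome models with different known means, with EM estimating the mixing weight and each observation's posterior probability of following theory A — might look like the following. All numbers are invented, and the paper's framework is far more general (covariate-dependent probabilities, multiple testing corrections, and so on).

```python
import math
import random

random.seed(7)
# Invented outcomes: 70% of observations generated under "theory A"
# (normal, mean 0) and 30% under "theory B" (mean 3); both have sd 1.
data = [random.gauss(0, 1) if random.random() < 0.7 else random.gauss(3, 1)
        for _ in range(2000)]

def normpdf(x, mu):
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

pi_a = 0.5                     # starting mixing weight for theory A
for _ in range(100):           # EM iterations
    # E-step: posterior probability that each observation follows theory A.
    post = [pi_a * normpdf(x, 0) /
            (pi_a * normpdf(x, 0) + (1 - pi_a) * normpdf(x, 3))
            for x in data]
    pi_a = sum(post) / len(post)   # M-step: update the mixing weight

print(round(pi_a, 2))          # recovers a weight near the true 0.7
```

The per-observation `post` values are the quantities of substantive interest here: each one is the estimated probability that that observation is consistent with theory A.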

Comparative Effectiveness of Matching Methods for Causal Inference
King, Gary
Nielsen, Richard

Uploaded 07-27-2011
Keywords Causal Inference
Propensity Scores
Abstract Matching is an increasingly popular method of causal inference in observational data, but applications of it are often poorly executed. We address this problem by providing a graphical approach for choosing among the numerous possible matching solutions generated by three methods: the venerable "Mahalanobis Distance Matching" (MDM), the commonly used "Propensity Score Matching" (PSM), and a newer approach called "Coarsened Exact Matching" (CEM). In the process of using our approach, we also discover that PSM often approximates random matching, both in real applications and in data simulated by the processes for which PSM theory was designed. Moreover, contrary to conventional wisdom, random matching is not benign: it (and thus PSM) can degrade inferences relative to not matching at all. We find that MDM and CEM do not have this problem, and in practice CEM usually outperforms the other two approaches. However, with our comparative graphical approach, focus is on choosing a matching solution for a particular application, which is what may improve inferences, rather than the particular method used to generate it. The easy-to-follow procedures we describe thus enable researchers to improve the application of any one of these methods, to choose among them and from the various matching solutions generated by any one method, and ultimately to increase the validity and extent of causal information extracted from their data. Link to paper: http://gking.harvard.edu/files/psparadox.pdf

Computerized Adaptive Testing for Public Opinion Surveys
Montgomery, Jacob
Cutler, Josh

Uploaded 06-19-2012
Keywords surveys
item response
dynamic surveys
Abstract Survey researchers avoid using large multi-item scales to measure latent traits due to both the financial costs and the risk of driving up non-response rates. Typically, investigators select a subset of available scale items rather than asking the full battery. Reduced batteries, however, can sharply reduce measurement precision and introduce bias. In this paper, we present computerized adaptive testing (CAT) as a method for minimizing the number of questions each respondent must answer while preserving measurement accuracy and precision. CAT algorithms respond to individuals' previous answers to select subsequent questions that most efficiently reveal respondents' position on a latent dimension. We introduce the basic stages of a CAT algorithm and present the details for one approach to item-selection appropriate for public opinion research. We then demonstrate the advantages of CAT via simulation and by empirically comparing dynamic and static measures of political knowledge.
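One common item-selection rule in CAT — pick the unasked item with maximum Fisher information at the respondent's provisional trait estimate, here under a two-parameter logistic IRT model — can be sketched as below. The item bank parameters are invented, and the paper's own selection criterion may differ in detail.

```python
import math

# Hypothetical 2PL item bank: (discrimination a, difficulty b) per question.
bank = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 2.0)]

def fisher_info(theta, a, b):
    # Information of a 2PL item at trait value theta: a^2 * p * (1 - p).
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1 - p)

theta_hat = 0.4    # provisional trait estimate from the answers so far
next_item = max(range(len(bank)),
                key=lambda i: fisher_info(theta_hat, *bank[i]))
print(next_item)   # the high-discrimination item with difficulty near theta_hat
```

In a full CAT loop, the respondent answers the selected item, `theta_hat` is re-estimated, and the rule is applied again until a stopping criterion is met.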

An Alternative Solution to the Heckman Selection Problem: Selection Bias as Functional Form Misspecification
Kenkel, Brenton
Signorino, Curtis

Uploaded 07-18-2012
Keywords selection models
functional form misspecification
nonparametric models
polynomial regression
Abstract The "selection problem" is typically seen as a form of omitted variable bias. We recast the problem as one of functional form misspecification and examine two situations in which flexible or nonparametric estimation techniques may be used as a complement or alternative to traditional selection models. First, we show that such techniques can allow a researcher to recover the conditional relationship between covariates and the expected outcome, even if data on the probability of selection into the subsample is unavailable. We demonstrate the validity of this approach analytically and using Monte Carlo simulations. Second, we show that flexible methods can be used to validate or improve a linear selection model specification when a researcher does possess the prior-stage data. We illustrate this process with an application to data from Mroz (1987) on women's wages.

Improving Experiments by Optimal Blocking: Minimizing the Maximum Within-block Distance
Higgins, Michael
Sekhon, Jasjeet

Uploaded 07-16-2013
Keywords Blocking
Experimental Design
Neyman Model
Abstract We develop a new method for blocking in randomized experiments that works for an arbitrary number of treatments. We analyze the following problem: given a threshold for the minimum number of units to be contained in a block, and given a distance measure between any two units in the finite population, block the units so that the maximum distance between any two units within a block is minimized. This blocking criterion can minimize covariate imbalance, which is a common goal in experimental design. Finding an optimal blocking is an NP-hard problem. However, using ideas from graph theory, we provide the first polynomial time approximately optimal blocking algorithm for when there are more than two treatment categories. In the case of just two such categories, our approach is more efficient than existing methods. We derive the variances of estimators for sample average treatment effects under the Neyman-Rubin potential outcomes model for arbitrary blocking assignments and an arbitrary number of treatments.
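The objective itself — block units so the maximum within-block distance is small — is easy to illustrate. The sketch below pairs units on a single invented covariate by sorting and grouping neighbors; this is only an illustration of the criterion, not the authors' graph-theoretic approximation algorithm, which handles general distance measures and block sizes.

```python
# Invented 1-D covariate; sorting and pairing adjacent units keeps the
# maximum within-pair distance small.
units = list(enumerate([0.1, 0.9, 0.15, 2.0, 1.0, 2.1]))   # (id, covariate)
units.sort(key=lambda u: u[1])

# Blocks of two: adjacent units after sorting.
blocks = [units[i:i + 2] for i in range(0, len(units), 2)]
max_dist = max(abs(a[1] - b[1]) for a, b in blocks)
print(blocks)
print(round(max_dist, 2))
```

Treatment would then be randomized within each block (one treated, one control per pair), so covariate imbalance across arms is bounded by `max_dist`.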

A Step in the Wrong Direction: An Appraisal of the Zero-Intelligence Model of Government Formation
Martin, Lanny
Vanberg, Georg

Uploaded 10-15-2013
Keywords government formation
zero-intelligence model
Abstract In a recent article in the Journal of Politics, Golder, Golder, and Siegel argue that models of government formation should be rebuilt "from the ground up." They propose to do so with a "zero-intelligence" model of government formation, which they claim makes no theoretical assumptions beyond the requirement that a potential government, to be chosen, must be preferred by all its members and a legislative majority to the incumbent administration. They also claim that, empirically, their model does significantly better than existing models in predicting formation outcomes. We disagree with both claims. Theoretically, their model is unrestrictive in terms of its institutional assumptions, but it imposes a highly implausible behavioral assumption that drives the key results. Empirically, their assessment of the performance of the zero-intelligence model turns on a misunderstanding of the relevant data for testing coalition theories. We demonstrate that the predictions of the zero-intelligence model are no more accurate than random guesses, in stark contrast to the predictions of well-established approaches in traditional coalition research. We conclude that scholars would be ill advised to dismiss traditional approaches in favor of the approach advanced by Golder, Golder, and Siegel.

Unifying Political Metrology: A Probabilistic Model of Measurement
Grant, J. Tobin

Uploaded 07-21-2004
Keywords Measurement
Abstract Political science needs an improved metrology, which includes both measurement theory and applied assessments of measurement procedures.  I discuss central metrological concepts and their application to political science.  I present a probabilistic model of measurement that is grounded in well-established measurement theory.  The model incorporates recent work in metrology that emphasizes the uncertainty of all measurements.  This model has implications for political science measures, including the criteria used to evaluate measurements, the role of qualitative measurements, and the tasks needed to improve measurements.  I conclude with a discussion of how political science can improve its metrology.

Lagging the Dog?: The Robustness of Panel Corrected Standard Errors in the Presence of Serial Correlation and Observation Specific Effects
Kristensen, Ida
Wawro, Gregory

Uploaded 07-13-2003
Keywords time-series cross-section data
serial correlation
fixed effects
panel data
lag models
Monte Carlo experiments
Abstract This paper examines the performance of the method of panel corrected standard errors (PCSEs) for time-series cross-section data when a lag of the dependent variable is included as a regressor. The lag specification can be problematic if observation-specific effects are not properly accounted for, leading to biased and inconsistent estimates of coefficients and standard errors. We conduct Monte Carlo studies to assess how problematic the lag specification is, and find that, although the method of PCSEs is robust when there is little to no correlation between unit effects and explanatory variables, the method's performance declines as that correlation increases. A fixed effects estimator with robust standard errors appears to do better in these situations.

Estimating incumbency advantage and its variation, as an example of a before/after study
Gelman, Andrew
Huang, Zaiying

Uploaded 02-07-2003
Keywords Bayesian inference
before-after study
Congressional elections
Abstract Incumbency advantage is one of the most studied features in American legislative elections. In this paper, we construct and implement an estimate that allows incumbency advantage to vary between individual incumbents. This model predicts that open-seat elections will be less variable than those with incumbents running, an observed empirical pattern that is not explained by previous models. We apply our method to the U.S. House of Representatives in the twentieth century: our estimate of the overall pattern of incumbency advantage over time is similar to previous estimates (although slightly lower), and we also find a pattern of increasing variation. In addition to the application to incumbency advantage, our approach represents a new method, using multilevel modeling, for estimating effects in before/after studies.
