
Search Results


The results below are based on the search criterion 'Prisoner'
Total number of records returned: 909

1
Paper
The Estimation of Time-Invariant Variables in Panel Analyses with Unit Fixed Effects
Pluemper, Thomas
Troeger, Vera E.

Uploaded 07-23-2004
Keywords Time Invariant Variables
Unit effects
Monte Carlo
Hausman-Taylor
Abstract This paper analyzes the estimation of time-invariant variables in panel data models with unit effects. We compare three procedures that have frequently been employed in comparative politics, namely pooled OLS, random effects, and the Hausman-Taylor model, to a vector decomposition procedure that allows estimating time-invariant variables in an augmented fixed effects approach. The procedure we suggest consists of three stages: the first stage runs a fixed-effects model without time-invariant variables, the second stage decomposes the unit-effects vector into a part explained by the time-invariant variables and an error term, and the third stage re-estimates the first stage by pooled OLS including the time-invariant variables plus the error term of stage 2. We use Monte Carlo simulations to demonstrate that this method works better than its alternatives in estimating typical models in comparative politics. Specifically, the unit fixed effects vector decomposition technique performs better than both pooled OLS and random effects in the estimation of time-invariant variables correlated with the unit effects, and better than Hausman-Taylor in estimating the time-invariant variables correlated with the unit effects. Finally, we re-analyze recent work by Huber and Stephens (2001) as well as by Beramendi and Cusack (2004). These analyses seek to cope with the problem of time-invariant variables in panel data.
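The three stages map directly onto ordinary regression steps. Below is a minimal numpy sketch of the procedure as the abstract describes it; the data-generating process, coefficient values, and sample sizes are invented purely for illustration.

    import numpy as np

    # Hypothetical data: N units, T periods, one time-varying x,
    # one time-invariant z correlated with the unit effects u
    rng = np.random.default_rng(0)
    N, T = 50, 20
    unit = np.repeat(np.arange(N), T)
    x = rng.normal(size=N * T)
    z = rng.normal(size=N)                  # time-invariant regressor
    u = 0.8 * z + rng.normal(size=N)        # unit effects correlated with z
    y = 1.0 * x + 2.0 * z[unit] + u[unit] + rng.normal(size=N * T)

    # Stage 1: fixed-effects (within) regression omitting z
    xd = x - np.bincount(unit, x)[unit] / T     # demean x within units
    yd = y - np.bincount(unit, y)[unit] / T
    beta = (xd @ yd) / (xd @ xd)
    uhat = np.bincount(unit, y - x * beta) / T  # estimated unit effects

    # Stage 2: decompose unit effects into a part explained by z
    # and an unexplained residual h
    Z = np.column_stack([np.ones(N), z])
    gamma = np.linalg.lstsq(Z, uhat, rcond=None)[0]
    h = uhat - Z @ gamma

    # Stage 3: pooled OLS of y on x, z, and the stage-2 residual
    X3 = np.column_stack([np.ones(N * T), x, z[unit], h[unit]])
    coef = np.linalg.lstsq(X3, y, rcond=None)[0]
    print("stage-3 estimates (const, x, z, h):", coef)

Note that naive stage-3 standard errors ignore the fact that h is itself an estimated quantity.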

2
Paper
A Reassessment of Presidential Campaign Strategy Formation and Candidate Resource Allocation
Reeves, Andrew
Chen, Lanhee
Nagano, Tiffany

Uploaded 07-11-2003
Keywords comments greatly appreciated
presidential campaign strategy
Abstract Daron Shaw (1999) argues in "The Methods behind the Madness: Presidential Electoral College Strategies, 1988-1996" that candidates formulate state-level general election campaign strategies based on a number of predictable and exogenous factors, such as the cost of television advertisements and electoral vote share. Shaw (1999) further asserts that these strategies are strong independent predictors of candidate resource allocation. His article supports these conclusions with what are claimed to be results from ordered probit and two-stage least squares (2SLS) regressions, but we demonstrate that both are in fact ordinary least squares (OLS) regressions. When we implement the methods that Shaw (1999) claims to use, we find that all key substantive conclusions in the article vanish. We show that the factors attributed to the formation of electoral college strategy are insignificant and that whether these strategies have any independent effect on the allocation of campaign resources cannot be ascertained from his (claimed or actual) methods and data.

3
Paper
Standard Voting Power Indexes Don't Work: An Empirical Analysis
Gelman, Andrew
Katz, Jonathan
Bafumi, Joseph

Uploaded 11-02-2002
Keywords Banzhaf index
decisive vote
elections
electoral college
Shapley value
voting power
Abstract Voting power indexes such as that of Banzhaf (1965) are derived, explicitly or implicitly, from the assumption that all votes are equally likely (i.e., random voting). That assumption can be generalized to hold that the probability of a vote being decisive in a jurisdiction with $n$ voters is proportional to $1/\sqrt{n}$. We test---and reject---this hypothesis empirically, using data from several different U.S. and European elections. We find that the probability of a decisive vote is approximately proportional to $1/n$. The random voting model (or its generalization, the square-root rule) overestimates the probability of close elections in larger jurisdictions. As a result, classical voting power indexes make voters in large jurisdictions appear more powerful than they really are. The most important political implication of our result is that proportionally weighted voting systems (that is, each jurisdiction gets a number of votes proportional to $n$) are basically fair. This contradicts the claim in the voting power literature that weights should be approximately proportional to $\sqrt{n}$.
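For reference, the $1/\sqrt{n}$ benchmark follows from a standard binomial calculation that is not specific to this paper: under random voting, a vote is decisive when the other $n-1$ voters (take $n$ odd) split evenly, so

    \Pr(\text{decisive}) = \binom{n-1}{(n-1)/2}\, 2^{-(n-1)} \approx \sqrt{\frac{2}{\pi n}} \propto \frac{1}{\sqrt{n}},

where the approximation is Stirling's.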

4
Paper
The Fruit of Jefferson's Dinner Party: Roll Call Analysis of the Compromise of 1790 with Substantive and Relational Constraints
Clinton, Joshua
Meirowitz, Adam

Uploaded 07-12-2002
Keywords ideal point estimation
log roll
First Congress
agenda estimation
Abstract The "Compromise of 1790" -- in which legislative gridlock in the First House (1789-1791) was supposedly resolved by a deal in which Southern states conceded to the assumption of states' Revolutionary War debt by the federal government in exchange for locating the permanent Capitol along the Potomac -- is one of the earliest and most colorful examples of log rolls in American politics. However, historians disagree on the validity or completeness of this story and this account is only directly supported by an account from Jefferson. We assess the extent to which the voting record actually supports the hypothesis that a compromise was reached sometime in mid June. Using substantive information about the roll call votes and relational information about the agenda to specify a model in which bill locations are identified we implement a Bayesian analysis (using MCMC methods). Our results do not support the traditional account of the compromise. In resolving the capital question legislators did not anticipate that assumption would carry. We also find that the final outcome was quite centrist and legislator ideal points are better explained by sectional, as opposed to ideological, theories.

5
Paper
State and American Indian Negotiation of Gaming Compacts: An Event Count Analysis
Boehmke, Frederick
Witmer, Richard

Uploaded 01-23-2002
Keywords american indian
gaming
event count
policy adoption
Abstract There has been a proliferation of casino-style Indian gaming in the years since the passage of the Indian Gaming Regulatory Act in 1988. Yet little is known about the factors that influence state and Indian nations' decisions to enter into gaming compacts. In this paper we seek to achieve two objectives. First, we seek to understand the expansion of Indian-state gaming compacts by studying how characteristics of states and Indian nations, along with spatial and temporal diffusion, affect the number of compacts negotiated. Most importantly, we focus on Indian nations' relationships with the states: their political influence with respect to the state and the contact they have with state government. Second, we introduce an empirical model new to the study of state politics by modeling the compacting process between Indian nations and states as an event count process. The event count model allows us to explain why some states have more Indian gaming than others and how the compacting process has evolved over time.

6
Paper
Alternative Models of Dynamics in Binary Time-Series--Cross-Section Models: The Example of State Failure
Beck, Nathaniel
Jackman, Simon
Epstein, David
O'Halloran, Sharyn

Uploaded 07-14-2001
Keywords dynamic probit
btscs
state failure
Gibbs sampling
MCMC
transitional models
discrete data
ROC
correlated binary data
generalized residuals
Abstract This paper investigates a variety of dynamic probit models for time-series--cross-section data in the context of explaining state failure. It shows that ordinary probit, which ignores dynamics, is misleading. Alternatives that seem to produce sensible results are the transition model and a model which includes a lagged latent dependent variable. It is argued that the use of a lagged latent variable is often superior to the use of a lagged realized dependent variable. It is also shown that the latter is a special case of the transition model. The relationship between the transition model and event history methods is also considered: the transition model estimates an event history model for both values of the dependent variable, yielding estimates that are identical to those produced by the two event history models. Furthermore, one can incorporate the insights gleaned from the event history models into the transition analysis, so that researchers do not have to assume duration independence. The conclusion notes that investigations of the various models have been limited to data sets which contain long sequences of zeros; models may perform differently in data sets with shorter bursts of zeros and ones.
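Concretely, a transition model of the kind discussed here fits a separate response function for each lagged state, which is equivalent to interacting every covariate with the lagged dependent variable. A generic probit sketch (not necessarily the paper's exact specification) is

    \Pr(y_{i,t}=1 \mid y_{i,t-1}=0) = \Phi(x_{i,t}\beta_0), \qquad \Pr(y_{i,t}=1 \mid y_{i,t-1}=1) = \Phi(x_{i,t}\beta_1),

so the two coefficient vectors correspond to models of onset and of continuation, which is why the transition model lines up with the two event history models.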

7
Paper
How much does a vote count? Voting power, coalitions, and the Electoral College
Gelman, Andrew
Katz, Jonathan

Uploaded 05-08-2001
Keywords coalition
decisive vote
electoral college
popular vote
voting power
Abstract In an election the probability that a single voter is decisive is affected by the electoral system -- that is, the rule for aggregating votes into a single outcome. Under the assumption that all votes are equally likely (i.e., random voting), we prove that the average probability of a vote being decisive is maximized under a popular-vote (or simple majority) rule and is lower under any coalition system, such as the U.S. Electoral College system, no matter how complicated. Forming a coalition increases the decisive vote probability for the voters within a coalition, but the aggregate effect of coalitions is to decrease the average decisiveness of the population of voters. We then review results on voting power in an electoral college system. Under the random voting assumption, it is well known that the voters with the highest probability of decisiveness are those in large states. However, we show using empirical estimates of the closeness of historical U.S. Presidential elections that voters in small states have been advantaged because the random voting model overestimates the frequencies of close elections in the larger states. Finally, we estimate the average probability of decisiveness for all U.S. Presidential elections from 1960 to 2000 under three possible electoral systems: popular vote, electoral vote, and winner-take-all within Congressional districts. We find that the average probability of decisiveness is about the same under all three systems.
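A toy Monte Carlo makes the direction of the coalition result concrete under random voting. The 9-voter electorate grouped into three 3-voter "states" below is invented purely for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    sims, n = 200_000, 9                  # three "states" of 3 voters each
    votes = rng.choice([-1, 1], size=(sims, n))

    # Popular vote: voter 0 is decisive when the other 8 votes tie
    pop = (votes[:, 1:].sum(axis=1) == 0).mean()

    # Winner-take-all by state: voter 0 is decisive when the other two
    # votes in her state tie AND the other two states split 1-1
    states = votes.reshape(sims, 3, 3)
    own_others = states[:, 0, 1:].sum(axis=1)
    other_states = np.sign(states[:, 1:, :].sum(axis=2)).sum(axis=1)
    coal = ((own_others == 0) & (other_states == 0)).mean()

    print(pop, coal)   # roughly 0.273 vs 0.250: popular vote is higher

The exact values are C(8,4)/2^8 = 0.2734... under the popular vote and 1/2 * 1/2 = 0.25 under the coalition system, matching the theorem's direction.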

8
Paper
Strategic Misspecification in Discrete Choice Models
Signorino, Curtis S.
Yilmaz, Kuzey

Uploaded 08-28-2000
Abstract [not transcribed]

9
Paper
Using Ecological Inference Point Estimates in Second Stage Linear Regressions
Herron, Michael C.
Shotts, Kenneth W.

Uploaded 07-14-2000
Keywords ecological inference
second stage regressions
ordinary least squares
consistency
Abstract The practice of using point estimates produced by the King (1997) ecological inference technique in second stage linear regressions leads to second stage results that, in general, are inconsistent. This conclusion holds, notably, even when all the assumptions behind King's ecological technique are satisfied. Second stage inconsistency is a consequence of the fact that King--based point estimates of disaggregated quantities are themselves inconsistent, and, moreover, these point estimates are contaminated by errors correlated with the true quantities the estimates measure. Our findings on second stage inconsistency follow from econometric theory in conjunction with an analysis of simulated and real ecological datasets, and based on the findings we propose a bootstrap that researchers can use to produce consistent second stage estimates and valid confidence intervals.
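The mechanism is a version of the errors-in-variables problem, aggravated because the estimation error is correlated with the true quantities. A stylized simulation of that mechanism (this is not King's model; all numbers are invented):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 5_000
    b = rng.normal(size=n)              # true disaggregated quantities
    e = -0.5 * b + rng.normal(size=n)   # estimation error correlated with b
    bhat = b + e                        # point estimates used in stage two
    y = 2.0 * b + rng.normal(size=n)    # second-stage outcome, true slope 2

    C = np.cov(bhat, y)
    print(C[0, 1] / C[0, 0])            # around 0.8, far from the true 2.0

Regressing on the contaminated estimates recovers neither the true slope nor anything that converges to it as n grows, which is the inconsistency at issue.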

10
Paper
Forecasting State Failure
King, Gary
Zeng, Langche

Uploaded 04-19-2000
Keywords forecast
neural network
committee methods
case-control
Abstract We offer the first independent scholarly evaluation of the claims, forecasts, and causal inferences of the State Failure Task Force and their efforts to forecast when states will fail. This Task Force, set up at the behest of Vice President Gore in 1994, has been led by a group of distinguished academics working as consultants to the U.S. Government. State failure is a grave condition that includes civil wars, revolutionary wars, genocides, politicides, and adverse or disruptive regime transitions. State Failure Task Force reports and publications have received widespread attention in the media, in academia, and from public policy decision-makers. In this paper, we identify several methodological errors in the Task Force work that cause their reported forecast probabilities of conflict to be much too large, their causal inferences to be biased in unpredictable directions, and their claims of forecasting performance to be exaggerated. However, we also find that the Task Force has amassed the best and most carefully collected data on state failure in existence, and the required corrections, although very large in effect, are easy to implement. We also reanalyze their data with better statistical and other procedures and demonstrate how to improve forecasting performance to levels significantly greater than even corrected versions of their models. We hope that this work leads to better use of political science and statistical analyses in public policy, but most of the claims analyzed are also of direct relevance to ongoing scholarly debates in political science, public health, and other disciplines.
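The "case-control" keyword points to a standard issue in rare-events data: when failures are oversampled relative to non-failures, a logit fit to the sample overstates event probabilities unless the intercept is corrected. A sketch of the standard prior correction (the paper's own corrections may differ) leaves the slopes alone and adjusts the intercept by

    \beta_0 = \hat\beta_0 - \ln\!\Big[ \Big(\frac{1-\tau}{\tau}\Big) \Big(\frac{\bar y}{1-\bar y}\Big) \Big],

where $\tau$ is the population fraction of events and $\bar y$ the sample fraction.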

11
Paper
Statistical Analysis of Finite Choice Models in Extensive Form
Signorino, Curtis S.

Uploaded 03-23-1999
Keywords random utility
discrete choice
strategic
equilibrium
finite choice
game theory
Abstract (not transcribed)

12
Paper
Economic Perceptions and Information in a Heterogeneous Electorate
Willette, Jennifer R.

Uploaded 04-18-1999
Keywords economic voting
ordered probit
economic perceptions
Abstract The relationship between vote choice and voter evaluations of national economic conditions is well established. There is little attention paid to the formation of those economic evaluations, however. This oversight is important since we know that economic perceptions are not direct reflections of objective economic conditions. To address this issue, I develop a model of economic perceptions which considers that the impact of media information on economic evaluations will differ based upon the 'information capability' of the individual. I use 1992 American National Election Survey data to estimate an ordered probit model of economic perceptions allowing the impact of personal economic information and media information to vary based upon the respondents' information capability. I test the hypothesis that individuals with higher information capability will give greater weight to media information when evaluating the economy. As information capability decreases, respondents will weight personal economic conditions more heavily.

13
Paper
Selection Bias in a Model of Candidate Entry Decisions
Kanthak, Kristen
Morton, Becky
Gerber, Elisabeth R.

Uploaded 07-13-1999
Keywords selection bias
Poisson estimation
population uncertainty
Abstract In recent years, several states have changed or considered changing their laws regulating how political parties nominate candidates for office. We focus on one potentially important consequence of these changes: How do primary election laws affect candidate entry decisions? We have constructed and solved a formal model of individual candidate behavior in which potential candidates can choose to: 1) enter the electoral competition as major party candidates; 2) enter as minor party candidates; 3) enter as independents; or 4) not enter. Based on our analysis of the model, we hypothesize that the expected utility of each choice is a function, in part, of a state's primary election laws. We test our hypotheses with data on candidate choice from recent US Congressional elections. Estimation of our model is complicated, however, by the fact that we do not observe the choices of potential candidates who choose not to enter (i.e., the sample is truncated) and the observed dependent variable (i.e., candidate choices to run as major party, minor party, or independent candidates) is measured as a discrete, unordered polychotomous choice. We employ a two-stage Heckman (1979)-type estimation procedure that utilizes a Poisson framework for estimating candidate entry rates. We find that our estimates of the effects of electoral institutions on the partisan affiliation decisions of independent candidates are unaffected by sample selection. Our estimates of the partisan affiliation decisions of minor party candidates, however, change when we account for non-random sample selection.

14
Paper
Populists in the Pluralist Heaven: How Direct Democracy Reduces Bias in Interest Representation
Boehmke, Frederick

Uploaded 10-15-1999
Keywords Initiative
direct democracy
interest groups
lobbying
fixed effects
representation
Abstract This paper explores the effect of direct democracy on state interest group populations, providing an empirical test of a formal model of how access to the initiative process affects group formation and activities (Boehmke 1999), which predicts that more groups will mobilize and become active in initiative states. This prediction is supported by the findings in this paper, which also suggest that the effect of the initiative on group mobilizations has increased from the late 1970s to 1990. The prediction that groups that face a greater collective action problem are influenced more by the initiative is also confirmed since government and social groups are among those most affected. Counterfactual analysis indicates that the initiative process makes a state's interest group population more diverse, though the gains are decreasing from 1975 to 1990.

15
Paper
Voting, Abstention, and Individual Expectations in the 1992 Presidential Election
Herron, Michael C.

Uploaded 04-07-1998
Keywords voting
abstention
selection bias
1992 election
Abstract This paper develops and applies to the 1992 presidential election a statistical model of voting and abstention in three-candidate elections. The model allows us to estimate key preference-related covariates in 1992, the extent to which abstention rates were correlated with political preferences, and the impact on abstention rates of expectations regarding the election winner. Throughout this paper, we contrast our results with those in Alvarez and Nagler (1995), a study of the 1992 election that does not incorporate abstention, and in so doing we illustrate the selection bias risked by presidential election voting research that ignores abstention. Our results highlight the importance of retrospective voting in 1992, and we identify numerous policy issues, for example, the death penalty, environmental spending, and social security, that individuals used to distinguish the three candidates in the 1992 election. Abortion, we find, played only a minor role in candidate choice. We find support for the angry voting hypothesis, namely, that angry individuals often supported the independent candidate, Ross Perot. Concerning abstention, we find that supporters of the Democratic challenger Bill Clinton abstained at higher rates than supporters of Perot and the incumbent president George Bush. And, we find that expectations concerning the likelihood that Clinton was going to be victorious in 1992 influenced abstention rates. Namely, Clinton supporters who believed that Clinton was likely to win voted at higher rates than individuals who believed otherwise. The opposite relation holds for Bush supporters: such individuals, when they predicted a Clinton victory, frequently abstained from voting. The results in this paper suggest that empirical voting studies should explicitly model the impact of expectations on voting and abstention and, more generally, should model abstention as a viable, individual-level choice.

16
Paper
Cosponsorship Coalitions in the U.S. House of Representatives
Grant, J. Tobin
Pellegrini, Pasquale (Pat) A.

Uploaded 04-22-1998
Keywords clustering
coalitions
cosponsorship
duration models
hazard models
heterogeneity
spatial models
Abstract Current theories and methods for studying cosponsorship assume that the decision to cosponsor is identical to the decision to vote. In this paper we develop a new theory of cosponsorship that identifies where along the ideological spectrum cosponsors of a bill are more likely to be. Moreover, we predict that members with organizational ties to the sponsor are more likely to cosponsor than other members. To test this theory, we employ a spatial duration model. This method has recently been used by geographers to estimate areas that are more likely to experience an "event." Using this technique permits a statistical test that supports our substantive hypotheses that cosponsorship coalitions are shaped by the characteristics of the location of the bill, the shared ties to the sponsor, and the policy area. In addition, more active sponsors are associated with wider and less clustered coalitions. These findings demonstrate that theories of the voting decision are not applicable to cosponsorship.

17
Paper
The Problem with Quantitative Studies of International Conflict
Beck, Nathaniel
King, Gary
Zeng, Langche

Uploaded 07-15-1998
Keywords Conflict
logit
neural networks
forecasting
Bayesian analysis
Abstract Despite immense data collections, prestigious journals, and sophisticated analyses, empirical findings in the literature on international conflict are frequently unsatisfying. Statistical results appear to change from article to article and specification to specification. Very few relationships hold up to replication with even minor respecification. Accurate forecasts are nonexistent. We provide a simple conjecture about what accounts for this problem, and offer a statistical framework that better matches the substantive issues and types of data in this field. Our model, a version of a "neural network" model, forecasts substantially better than any previous effort, and appears to uncover some structural features of international conflict.

18
Paper
If the Assumption Fits...: A Comment on the King Ecological Inference Solution
Cho, Wendy K. T.

Uploaded 08-20-1998
Keywords ecological inference
Abstract I examine a recently proposed solution to the ecological inference problem (King 1997). It is asserted that the proposed model is able to reconstruct individual-level behavior from aggregate data. I discuss in detail both the benefits and limitations of this model. The assumptions of the basic model are often inappropriate for instances of aggregate data. The extended version of the model is able to correct for some of these limitations. However, it is difficult in most cases to apply the extended model properly.

19
Paper
Liberalism, Public Opinion, and their Critics: Some Lessons for Defending Science
Jackman, Simon

Uploaded 00-00-0000
Keywords liberalism
science
Enlightenment
statistics
public opinion
post-modernism
Abstract Science and liberalism were both born out of the Enlightenment; liberalism's more-or-less successful defense against its critics may hold some insights for defenders of science against recent attacks. Liberalism, like science, is normatively thin, but procedurally rich. As such, liberalism and science have been able to accommodate shifting opinions about "the good" or "the truth" while pursuing them. For both science and liberalism, truth and "the good" are socially constructed, just as they themselves are socially constructed. This is sometimes overlooked. A brief history of the study of public opinion shows that liberalism's science -- political science, and the study of public opinion in particular -- is full of abstractions, metaphors, and approximations of reality that serve social ends. This can be used to disarm post-modern critics of science. The admission of a contextualized basis for knowledge is not an abandonment of science, but rather an acknowledgement of the richness of the world that is, if anything, an invitation to inquiry. This admission was the mutual origin of both science and liberalism, is the source of their resilience, and will ensure their safe passage through the post-modern "storm".

20
Paper
Advancement in the House of Representatives
Wawro, Gregory

Uploaded 00-00-0000
Keywords legislative entrepreneurship
career concerns
legislative institutions
mobility
Markov models
endogeneity
maximum likelihood
Abstract (None submitted)

21
Paper
Costly Information and the Stability of Equilibria in the Intergenerational Dilemma
Signorino, Curtis S.

Uploaded 07-16-1996
Keywords evolutionary game theory
overlapping generations model
Abstract Past analyses of the intergenerational dilemma have identified a number of subgame-perfect equilibrium strategies. However, nothing has been said about the stability of these equilibria: how robust they are to perturbation or how difficult it is to move to a Pareto-improving equilibrium. Moreover, it is generally assumed that information is costless. In this paper, I incorporate costly information and analyze the stability of the equilibria, identifying (1) the conditions under which CONFORMIST versus DEFECTOR equilibria will be stable and (2) the degree of difficulty in moving from the Pareto-suboptimal DEFECTOR equilibrium to the Pareto-optimal CONFORMIST equilibrium. In general, the maintenance of a CONFORMIST equilibrium becomes more difficult the more the second period is discounted and the higher the information costs. Additionally, when altruists are included in the model and information is only slightly costly, cycling among the homogeneous equilibria can occur. I show that to counter this instability, conformists should always punish altruists --- that to protect one's own future payoffs, one may need to police the interactions of others.

22
Paper
The Reciprocal Relationship Between State Defense Interest and Committee Representation in Congress
Carsey, Thomas
Rundquist, Barry

Uploaded 11-04-1997
Keywords Distributive Politics
LISREL
Pooled Time Series
Abstract Does prior representation of a state on a Congressional defense committee lead to higher levels of per capita defense contract awards, or do higher levels of prior per capita contract awards to a state increase its probability of being represented on a defense committee? To solve this puzzle, we estimate a cross-lagged three-equation model on data from all 50 states from 1963 to 1989 using maximum likelihood within LISREL. We find a substantial reciprocal but non-confounding relationship between representation and the allocation of benefits for the House, but not for the Senate. Thus, for the House, this more appropriate model of distributive politics in Congress supports both the committee-induced benefits hypothesis and the recruitment hypothesis. Further, the paper elaborates on how this reciprocal relationship plays out over time.

23
Paper
Evaluating Measures of Ideology
Bishin, Benjamin G.
King, Gary
Zeng, Langche

Uploaded 08-24-1997
Keywords FILTER
ADA
NOMINATE
ideology
Abstract A vigorous debate has arisen over the metric used to measure ideology (Jackson and Kingdon 1992, Poole and Rosenthal 1985, Snyder 1991, Krehbiel 1993). Ideology is difficult to measure because legislators' statements may be politically motivated and insincere. This paper evaluates the accuracy of NOMINATE and ADA scores by comparing them to an independent measure, based on background characteristics, developed herein. By Forecasting the Ideology of Legislators Through Elite Response (FILTER), this measure avoids the problems inherent in the use of the roll call vote metric. In addition, the FILTER methodology is generalizable to studies of other deliberative bodies. The results show that FILTER scores are highly correlated with NOMINATE and ADA scores.

24
Paper
Heterogeneity, Salience, and Voter Decision Rules for Candidate Preference
Glasgow, Garrett

Uploaded 08-10-1997
Keywords voter behavior
decision rules
rank ordered logit
salience
issue voting
Abstract Voters in American Presidential elections display a wide variety of decision rules when choosing a candidate. One form of this heterogeneity is differential weighting of issues used to make a vote choice. The structure of this heterogeneity and differential salience of issues has important implications for the American political process. Determining the nature of these heterogeneous preferences is vital to understanding electoral politics in the United States. An empirical technique for modeling and exploring heterogeneity is developed and applied to the 1980 NES Panel Study. I show that heterogeneity in voter decision rules is widespread, and that while many voters rely on non-issue considerations when determining candidate preference, issue voting does play a role in the decision rules of many voters.

25
Paper
Forecasting Time Series
Hinich, Melvin J.

Uploaded 07-08-1997
Keywords forecast
autoregressive
vector AR
state space
linear
Abstract The limits of forecasting a linear time series system are discussed. A stable autoregressive linear system can only be accurately predicted for a few steps ahead of the last observation. If the time series is a deterministic trend plus random fluctuations, then the trend can be predicted as long as it is stable.
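The few-steps-ahead limit is easy to state exactly in the simplest case. For a stable AR(1), a standard textbook result (not specific to this paper) gives

    y_t = \rho y_{t-1} + \epsilon_t,\ |\rho|<1: \qquad \hat y_{t+k} = \rho^k y_t, \qquad \operatorname{Var}(y_{t+k} - \hat y_{t+k}) = \sigma^2 \frac{1-\rho^{2k}}{1-\rho^2},

so the forecast error variance converges geometrically to the unconditional variance $\sigma^2/(1-\rho^2)$: beyond a few steps the forecast is barely better than the series mean.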

26
Paper
Breaking Up Isn't So Hard to Do: Ecological Inference and Split-Ticket Voting in the 1988 Presidential Election
Burden, Barry C.
Kimball, David

Uploaded 04-01-1997
Keywords ecological inference
split-ticket voting
Abstract This paper uses Gary King's (1997) solution to the ecological inference problem to examine split-ticket voting patterns in the 1988 elections. Earlier studies of split-ticket voting used either aggregate data, which suffer from the ecological fallacy, or survey data, which suffer from misreporting and small, unrepresentative sample sizes within states and districts. This paper produces accurate estimates of the proportions of voters splitting their ballots in each state and district for the first time. With these results we test several competing theories of split-ticket voting and divided government. We find, contrary to Fiorina's (1996) balancing argument, that voters are not intentionally splitting their tickets to produce moderate policies. In most cases split outcomes are the result of lopsided congressional campaigns that feature well-funded, high quality candidates versus unknown competitors.

27
Paper
Strange Bedfellows or the Usual Suspects? Spatial Models of Ideology and Interest Group Coalitions
Almeida, Richard

Uploaded 04-01-2005
Keywords Interest groups
coalitions
spatial theory
poisson regression
ideology
Abstract Entering into coalitions has become a standard tactic for interest groups trying to maximize success while minimizing cost. The strategic conditions underlying decisions to form or join coalitions are beginning to be explored in the political science literature, yet very little is known about the process and criteria through which interest groups select coalition partners. In this paper, I explore the partner selection process by applying spatial theories of ideology and coalition formation to interest group participation on amicus curiae briefs. Previous work demonstrates that the lobbying efforts of groups can be used to generate a general measure of ideology for any group. These captured ideology scores are used in statistical models of interest group coalition partner selection on amicus curiae briefs from 1954-1985. This research demonstrates that the ideology scores captured for each group are powerful predictors of interest group coalition partner selection, even when controls for resources, group type, and other potential predictors are included.

28
Paper
A Method for Weighting Survey Samples of Low-Incidence Voters
Nagler, Jonathan
Alvarez, R. Michael

Uploaded 07-19-2005
Abstract In this paper we describe a method for weighting surveys of a sub-sample of voters, focusing on the case of Latino voters. We analyze data from three surveys: two opinion polls leading up to the 2004 presidential election, and the national exit poll from the 2004 election. We take advantage of the large amount of available data describing the demographics of Hispanic citizens, and we combine this with a model of turnout of those citizens to improve our estimates of the demographic characteristics of Hispanic voters. We show that alternate weighting schemes can substantively alter inferences about population parameters. [This is an incomplete version of the paper; it omits calculations of uncertainty, which are some of the fundamental quantities of interest of the paper.]

29
Paper
Parametric and Nonparametric Bayesian Models for Ecological Inference in 2 x 2 Tables
Imai, Kosuke
Lu, Ying

Uploaded 07-21-2004
Keywords Aggregate data
Data augmentation
Density estimation
Dirichlet process prior
Normal mixtures
Racial voting
Abstract The ecological inference problem arises when making inferences about individual behavior from aggregate data. Such a situation is frequently encountered in the social sciences and epidemiology. In this article, we propose a Bayesian approach based on data augmentation. We formulate ecological inference in $2 \times 2$ tables as a missing data problem where only the weighted average of two unknown variables is observed. This framework directly incorporates the deterministic bounds, which contain all information available from the data, and allows researchers to incorporate individual-level data whenever available. Within this general framework, we first develop a parametric model. We show that through the use of an EM algorithm, the model can formally quantify the effect of missing information on parameter estimation. This is an important diagnostic for evaluating the degree of aggregation effects. Next, we introduce a nonparametric Bayesian model using a Dirichlet process prior to relax the distributional assumption of the parametric model. Through simulations and an empirical application, we evaluate the relative performance of our models and other existing methods. We show that in many realistic scenarios, aggregation effects are so severe that more than half of the information is lost, yielding estimates with little precision. We also find that our nonparametric model generally outperforms parametric models. C-code, along with an R interface, is publicly available for implementing our Markov chain Monte Carlo algorithms to fit the proposed models.
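The deterministic bounds referred to here follow from the accounting identity for each unit $i$; in standard ecological-inference notation (not necessarily the paper's), with $Y_i$ the observed outcome share and $X_i$ the observed group share,

    Y_i = X_i \beta_i^b + (1 - X_i) \beta_i^w,

and since both unknown proportions lie in $[0,1]$,

    \beta_i^b \in \Big[ \max\Big(0, \frac{Y_i - (1-X_i)}{X_i}\Big),\ \min\Big(1, \frac{Y_i}{X_i}\Big) \Big],

with the analogous interval for $\beta_i^w$.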

30
Paper
Incentives, Complexity, and Motivations in Experiments
Bassi, Anna
Morton, Rebecca
Williams, Kenneth

Uploaded 06-24-2006
Abstract We compare three motivation procedures in a voting experiment: 1) subjects paid a flat fee for participating, 2) subjects paid according to choices as is typical in a political economy experiment, and 3) subjects paid double the typical amount. We also vary the complexity of the voting game. Financial incentives significantly increase the probability that subjects choose Bayesian-Nash predicted strategies. In the simpler game the typical financial incentive is sufficient; higher payments have no effect. But in the complex game, increasing financial incentives beyond the typical level is consequential. Further, repetition interacts with typical financial incentives in the complex game to increase the likelihood of Bayesian-Nash strategies. The evidence suggests that financial incentives increase subjects' cognitive attention to experimental tasks, so that subjects behave more as individuals would in comparable observational settings, which enhances theory evaluation in experiments and the external validity of the results.

31
Paper
The Balance Test Fallacy in Matching Methods for Causal Inference
Imai, Kosuke
King, Gary
Stuart, Elizabeth

Uploaded 06-29-2006
Keywords causal inference
covariate balance
matching
treatment effect
Abstract Matching methods are widely used to adjust for possibly confounded treatment assignment when making causal inferences. The success of the matching adjustment depends on generating as much equivalence as possible between the distribution of pre-treatment covariates in the treated and control groups. In numerous articles across a diverse variety of academic fields that use matching, researchers evaluate the degree of equivalence by conducting hypothesis tests, most commonly the $t$-test for the mean difference of each of the covariates in the two matched groups. We demonstrate that these hypothesis tests are fallacious and discuss better alternatives.
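A stylized illustration of why the hypothesis tests mislead: matching prunes observations, and the t-test's p-value responds to the shrinking sample even when the imbalance itself is unchanged. The standardized mean difference used below is one common sample-size-free alternative, not necessarily the authors' recommendation; all numbers are invented.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    # Fixed true imbalance: treated covariate mean shifted by 0.1 sd
    for n in (5_000, 500, 50):          # as if matching pruned the data
        t = rng.normal(0.1, 1, n)
        c = rng.normal(0.0, 1, n)
        p = stats.ttest_ind(t, c).pvalue
        smd = (t.mean() - c.mean()) / np.sqrt((t.var() + c.var()) / 2)
        print(n, round(p, 3), round(smd, 3))

    # The p-value grows as n shrinks while the standardized difference
    # stays near 0.1: the test conflates balance with sample size.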

32
Paper
Using Graphs Instead of Tables to Improve the Presentation of Empirical Results in Political Science
Kastellec, Jonathan
Leoni, Eduardo

Uploaded 11-15-2006
Keywords statistical graphics
tables
presentation
descriptive statistics
regression results
Abstract When political scientists present empirical results, they are much more likely to use tables than graphs, despite the fact that the latter greatly increase the clarity of presentation and make it easier for a reader or listener to draw clear and correct inferences. Using a sample of leading journals, we document this tendency and suggest reasons why researchers prefer tables. We argue the extra work required in producing graphs is rewarded by greatly enhanced presentation and communication of empirical results. We illustrate their benefits by turning several published tables into graphs, including tables that present descriptive data and regression results. We show that regression graphs properly emphasize point estimates and confidence intervals rather than null hypothesis significance testing, and that they can successfully present the results of multiple regression models. A move away from tables and towards graphs would increase the quality of the discipline's communicative output and make empirical findings more accessible to every type of audience.
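As a flavor of the kind of display the authors advocate, here is a minimal coefficient plot (point estimates with 95% intervals); the regression numbers are invented:

    import matplotlib.pyplot as plt

    # Hypothetical regression output: term, estimate, standard error
    terms = ["Intercept", "Age", "Education", "Income"]
    est = [0.20, 0.05, 0.30, -0.12]
    se = [0.10, 0.02, 0.08, 0.15]

    fig, ax = plt.subplots()
    ys = range(len(terms))
    ax.errorbar(est, ys, xerr=[1.96 * s for s in se], fmt="o", capsize=3)
    ax.axvline(0, linestyle="--", linewidth=1)  # reference line at zero
    ax.set_yticks(list(ys), labels=terms)
    ax.set_xlabel("Coefficient estimate (95% CI)")
    plt.show()

A reader sees sign, magnitude, and uncertainty at a glance, which is the point of the argument against significance-starred tables.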

33
Paper
Extracting Systematic Social Science Meaning from Text
Hopkins, Daniel
King, Gary

Uploaded 07-12-2007
Keywords automated content analysis
machine learning
simulated extrapolation
non-parametric estimation
internet
2008 U.S. Presidential election
Abstract We develop two methods of automated content analysis that give approximately unbiased estimates of quantities of theoretical interest to social scientists. With a small sample of documents hand coded into investigator-chosen categories, our methods can give accurate estimates of the proportion of text documents in each category in a larger population. Existing methods successful at maximizing the percent of documents correctly classified allow for the possibility of substantial estimation bias in the category proportions of interest. Our first approach corrects this bias for any existing classifier, with no additional assumptions. Our second method estimates the proportions without the intermediate step of individual document classification, and thereby greatly reduces the required assumptions. For both methods, we also correct statistically, apparently for the first time, for the far less-than-perfect levels of inter-coder reliability that typically characterize human attempts to classify documents, an approach that will normally outperform even population hand coding when that is feasible. We illustrate these methods by tracking the daily opinions of millions of people about candidates for the 2008 presidential nominations in online blogs, data we introduce and make available with this article, and through evaluations in available corpora from other areas, including movie reviews, university web sites, and Enron emails. We also offer easy-to-use software that implements all methods described.
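The bias-correction idea behind the first approach can be illustrated with the standard misclassification-matrix identity; the schematic below uses invented numbers and is not the authors' estimator in full.

    import numpy as np

    # M[j, k] = P(classifier outputs category j | true category is k),
    # estimated from the hand-coded sample; columns sum to one.
    M = np.array([[0.8, 0.1, 0.1],
                  [0.1, 0.7, 0.2],
                  [0.1, 0.2, 0.7]])
    p_obs = np.array([0.5, 0.3, 0.2])   # classifier's shares in the corpus

    # Since p_obs = M @ p_true, invert to correct the proportions
    p_true = np.linalg.solve(M, p_obs)
    print(p_true, p_true.sum())         # sums to 1 by construction

In practice M must itself be estimated, and naive inversion can return values outside [0, 1], which is part of what a full method has to handle.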

34
Paper
Sharp Bounds on the Causal Effects in Randomized Experiments with "Truncation-by-Death"
Imai, Kosuke

Uploaded 08-23-2007
Keywords Average treatment effect
Causal inference
Direct and indirect effect
Identification
Principal stratification
Quantile treatment effect
Abstract Many randomized experiments suffer from the "truncation-by-death" problem where potential outcomes are not defined for some subpopulations. For example, in medical trials, quality-of-life measures are only defined for surviving patients, and in social science survey experiments the same issue arises with skip-pattern questions. In this paper, I derive the sharp bounds on causal effects under various assumptions. My identification analysis is based on the idea that the "truncation-by-death" problem can be formulated as the contaminated data problem. The proposed analytical techniques can be applied to other settings in causal inference including the estimation of direct and indirect effects and the analysis of three-arm randomized experiments with noncompliance.

35
Paper
Going beyond the book: Toward critical reading in statistics teaching
Gelman, Andrew

Uploaded 06-01-2008
Keywords categorical and continuous variables
handedness
menstruation
primary sources
secondary sources
sex ratio
teaching
textbooks
traffic accidents
Abstract We can improve our teaching of statistical examples from books by collecting further data, reading cited articles, and performing further data analysis. This should not come as a surprise, but what might be new is the realization of how close to the surface these research opportunities are: even influential and celebrated books can have examples where more can be learned with a small amount of additional effort. We discuss three examples that have arisen in our own teaching: an introductory textbook that motivated us to think more carefully about categorical and continuous variables; a book for the lay reader that misreported a study of menstruation and accidents; and a monograph on the foundations of probability that overinterpreted statistically insignificant fluctuations in sex ratios.

36
Paper
Voter transition estimation in multiparty systems
Andreadis, Ioannis

Uploaded 07-07-2008
Keywords Elections
Voter transition rates
Ecological inference
Multiparty systems
Abstract Recent advances in the field of ecological inference have provided researchers with new tools to estimate voter transition in two-party systems. Although some researchers have dealt with the R x C ecological inference problem, voter transition estimation remains a difficult and tedious goal. As a result, scholars of multi-party systems still struggle with their electoral data. In this paper we present a new approach and propose a method that deals with this issue.

37
Paper
Exploiting a Rare Shift in Communication Flows to Document News Media Persuasion: The 1997 United Kingdom General Election
Ladd, Jonathan
Lenz, Gabriel

Uploaded 07-30-2008
Keywords Media persuasion
endorsements
campaigns
elections
matching
causal inference
Abstract Using panel data and matching techniques, we exploit a rare change in communication flows -- the endorsement switch to the Labour Party by several prominent British newspapers before the 1997 United Kingdom general election -- to study the persuasive power of the news media. These unusual events provide an opportunity to test for news media persuasion while avoiding methodological pitfalls that have plagued previous studies. By comparing readers of newspapers that switched endorsements to similar individuals who did not read these newspapers, we estimate that these papers persuaded a considerable share of their readers to vote for Labour. Depending on the statistical approach, the point estimates vary from about 10 percent to as high as 25 percent of readers. These findings provide rare, compelling evidence that the news media exert a powerful influence on mass political behavior.

38
Paper
Causal Inference with Differential Measurement Error: Nonparametric Identification and Sensitivity Analyses of a Field Experiment on Democratic Deliberations
Imai, Kosuke
Yamamoto, Teppei

Uploaded 06-30-2008
Keywords differential misclassification
nonparametric bounds
retrospective studies
sensitivity analysis
survey measurements
Abstract Political scientists have long been concerned about the validity of survey measurements. Although many have studied classical measurement error in linear regression models where the error is assumed to arise completely at random, in a number of situations the error may be correlated with the outcome. We analyze the impact of differential measurement error on causal estimation. The proposed nonparametric identification analysis avoids arbitrary modeling decisions and formally characterizes the roles of additional assumptions. We show the serious consequences of differential misclassification and offer a new sensitivity analysis that allows researchers to evaluate the robustness of their conclusions. Our methods are motivated by a field experiment on democratic deliberations, in which one set of estimates potentially suffers from differential misclassification. We show that an analysis ignoring differential measurement error may considerably overestimate the causal effects. This finding contrasts with the case of classical measurement error which always yields attenuation bias.

39
Paper
Modeling Dynamics in Time-Series-Cross-Section Political Economy Data
Beck, Nathaniel
Katz, Jonathan

Uploaded 06-04-2009
Keywords dynamics
TSCS
political economy
lagged dependent variable
non-stationary
Abstract This paper deals with a variety of dynamic issues in the analysis of time-series-cross-section (TSCS) data. While the issues raised are more general, we focus on applications to political economy. We begin with a discussion of specification and lay out the theoretical differences implied by the various types of time series models that can be estimated. It is shown that there is nothing pernicious in using a lagged dependent variable and that all dynamic models either implicitly or explicitly have such a variable; the differences between the models relate to assumptions about the speeds of adjustment of measured and unmeasured variables. When adjustment is quick it is hard to differentiate between the various models; with slower speeds of adjustment the various models make sufficiently different predictions that they can be tested against each other. As the speed of adjustment gets slower and slower, specification (and estimation) gets more and more tricky. We then turn to a discussion of estimation. It is noted that models with both a lagged dependent variable and serially correlated errors can easily be estimated; it is only OLS that is inconsistent in this situation. Monte Carlo analysis then shows that, for typical TSCS data, fixed effects with a lagged dependent variable performs about as well as the much more complicated Kiviet estimator, and better than the Anderson-Hsiao estimator (both designed for panels).
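The claim that every dynamic model implicitly contains a lagged dependent variable can be seen by rewriting the static model with AR(1) errors (a standard manipulation, not a quotation from the paper):

    y_{i,t} = x_{i,t}\beta + u_{i,t}, \quad u_{i,t} = \rho u_{i,t-1} + \epsilon_{i,t} \;\Longrightarrow\; y_{i,t} = \rho y_{i,t-1} + x_{i,t}\beta - \rho x_{i,t-1}\beta + \epsilon_{i,t},

that is, a lagged dependent variable model in which the coefficient on $x_{i,t-1}$ is constrained to equal $-\rho\beta$ (the common-factor restriction).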

40
Paper
Party Polarization in Congress: A Social Networks Approach
Waugh, Andrew
Pei, Liuyi
Fowler, James
Mucha, Peter
Porter, Mason

Uploaded 07-23-2009
Abstract We use the network science concept of modularity to measure polarization in the United States Congress. As a measure of the relationship between intra-community and extra-community ties, modularity provides a conceptually clear measure of polarization that directly reveals both the number of relevant groups and the strength of their divisions. Moreover, unlike measures based on spatial models, modularity does not require predefined assumptions about the number of coalitions or parties, the shape of legislator utilities, or the structure of the party system. Importantly, modularity can be used to measure polarization across all Congresses, including those without a clear party divide, thereby permitting the investigation of partisan polarization across a broader range of historical contexts. Using this novel measure of polarization, we show that party influence on Congressional communities varies widely over time, especially in the Senate. We compare modularity to extant polarization measures, noting that existing methods underestimate polarization in periods in which party structures are weak, leading to artificial exaggerations of the extremeness of the recent rise in polarization. We show that modularity is a significant predictor of future majority party changes in the House and Senate and that turnover is more prevalent at medium levels of modularity. We derive two individual-level variables, which we call "divisiveness" and "solidarity," from modularity and show that they are significant predictors of reelection success for individual House members, helping to explain why partially-polarized Congresses are less stable. Our results suggest that modularity can serve as an early-warning signal of changing group dynamics, which are reflected only later by changes in formal party labels.

41
Paper
Bargaining Power in Practice: US Treaty-making with American Indians, 1784–1911
Spirling, Arthur

Uploaded 07-15-2010
Keywords American Indians
Native Americans
Text as Data
Scaling
Kernel methods
String Kernels
Abstract Native Americans are unique among domestic actors in that their relations with the United States government involve treaty-making, with almost 600 such documents signed between the Revolutionary War and the turn of the twentieth century. We contend that the changing nature of their treaty negotiations can be seen as part of a theoretical, bargaining framework familiar to scholars of international relations. We then construct a comprehensive new data set by digitizing all of the treaties for systematic textual analysis. Employing scaling techniques validated with word use information, we show that a single dimension characterizes the treaties as more or less 'harsh' in land and resource cession terms. With a mind to earlier historical and legal literatures, we also show that the 'broken' treaties are not obviously distinguishable from contemporaneous valid ones, and that the post-1871 'agreements' represent a straightforward continuation of earlier treaty policy in both style and substance. In bargaining terms, we find evidence suggestive of a detrimental 'losing' effect for Indians involved in war with the US.

42
Paper
Automated Production of High-Volume, Near-Real-Time Political Event Data
Schrodt, Philip

Uploaded 08-30-2010
Keywords event data
ICEWS
DARPA
natural language processing
open source
forecasting
prediction
conflict
Abstract This paper summarizes the current state-of-the-art for generating high-volume, near-real-time event data using automated coding methods, based on recent efforts for the DARPA Integrated Crisis Early Warning System (ICEWS) and NSF-funded research. The ICEWS work expanded previous automated coding efforts by more than two orders of magnitude, coding about 26 million sentences generated from 8 million stories condensed from around 30 gigabytes of text. The actual coding took six minutes. The paper is largely a general "how-to" guide to the pragmatic challenges and solutions to various elements of the process of generating event data using automated techniques. It also discusses a number of ways that this could be augmented with existing open-source natural language processing software to generate a third-generation event data coding system.

43
Paper
Enhancing a Geographic Regression Discontinuity Design Through Matching to Estimate the Effect of Ballot Initiatives on Voter Turnout
Keele, Luke
Titiunik, Rocio
Zubizarreta, Jose

Uploaded 07-13-2012
Keywords matching
causal inference
geography
regression discontinuity
Abstract Of late there has been a renewed interest in natural experiments as a method for drawing causal inferences from observational data. One form of natural experiment exploits variation in geography, where units in one geographic area receive a treatment while units in another area do not. In this kind of geographic natural experiment, the hope is that assignment to treatment via geographic location creates as-if random variation in treatment assignment. When this happens, adjustment for baseline covariates is unnecessary. In many applications, however, some adjustment for baseline covariates may be necessary due to strategic sorting around the border between treatment and control areas. As such, analysts may wish to combine identification strategies--using both spatial proximity and covariates--for more plausible inferences. Here we explore how to utilize spatial proximity as well as covariates in the analysis of geographic natural experiments. We contend that standard statistical tools are ill-equipped to exploit covariates as well as variation in treatment assignment that is a function of spatial proximity. We use a mixed integer programming matching algorithm to flexibly incorporate information about both the discontinuity and observed covariates, which allows us to minimize spatial distance while preserving balance on observed covariates. We argue that combining information about both covariates and the discontinuity creates a method of estimation that can be informally thought of as doubly robust. We demonstrate the method with data on ballot initiatives and turnout in Milwaukee, WI.

44
Paper
A Copula Approach to the Problem of Selection Bias in Models of Government Survival
Chiba, Daina
Martin, Lanny
Stevenson, Randy

Uploaded 01-02-2014
Keywords selection bias
copula theory
duration models
government survival
government formation
Abstract Recent theories of coalition politics in parliamentary democracies suggest that government formation and survival are jointly determined outcomes. An important empirical implication of these theories is that the sample of observed governments analyzed in studies of government survival may be nonrandomly selected from the population of potential governments. This can lead to serious inferential problems. Unfortunately, current empirical models of government survival are unable to account for the possible biases arising from nonrandom selection. In this study, we use a copula-based framework to assess, and correct for, the dependence between the processes of government formation and survival. Our results suggest that existing studies of government survival, by ignoring the selection problem, significantly overstate the substantive importance of several covariates commonly included in empirical models.
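The copula idea can be stated in one line: by Sklar's theorem, any joint distribution of the formation outcome $S$ and the survival time $T$ can be written

    H(s, t) = C\big(F(s), G(t)\big),

where $F$ and $G$ are the marginal distributions and the copula $C$ carries all of the dependence; modeling $C$ alongside the margins is what allows the selection dependence to be assessed and corrected (a generic statement of the framework, not the authors' specific parameterization).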

45
Paper
Can Voting Reduce Welfare? Evidence from the US Telecommunications Sector
Falaschetti, Dino

Uploaded 06-15-2004
Keywords Electoral Institutions
Voter Turnout
Capture Theory
Regulatory Commitment
Telecommunications Policy
Economic Welfare
Abstract Voter turnout is popularly cited as reflecting a polity's health. The ease with which electoral members influence policy can, however, constrain an economy's productive capacity. For example, while influential electorates might carefully monitor political agents, they might also "capture" them. In the latter case, electorates transfer producer surplus to consumers at the expense of social welfare - i.e., a "healthy" polity's economy rests at an inferior equilibrium. I develop evidence that the US telecommunications sector may have realized such an outcome. This evidence is remarkably difficult to dismiss as an artifact of endogeneity bias, and appears important for several audiences. For example, the normative regulation literature calls for constraints on producers' market power, while the institutions and commitment literature calls for checks on political agents' opportunism. Evidence that I develop here suggests that, unbound by similar constraints, electoral principals might effectively control their political agents while significantly retarding their economic agents' productive incentives.

46
Paper
Time-Series--Cross-Section Issues: Dynamics, 2004
Beck, Nathaniel
Katz, Jonathan

Uploaded 07-24-2004
Keywords Time-series--cross-section data
lagged dependent variables
Nickell bias
specification
integration
Abstract This paper deals with a variety of dynamic issues in the analysis of time-series--cross-section (TSCS) data raised by recent papers; it also more briefly treats some cross-sectional issues. Monte Carlo analysis shows that, for typical TSCS data, fixed effects with a lagged dependent variable performs about as well as the much more complicated Kiviet estimator, and better than the Anderson-Hsiao estimator (both designed for panels). It is also shown that there is nothing pernicious in using a lagged dependent variable, and all dynamic models either implicitly or explicitly have such a variable; the differences between the models relate to assumptions about the speeds of adjustment of measured and unmeasured variables. When adjustment is quick it is hard to differentiate between the models, and analysts may choose on grounds of convenience (assuming that the model passes standard econometric tests). When adjustment is slow it may be the case that the data are integrated, which means that no method developed for the stationary case is appropriate. At the cross-sectional level, it is argued that the critical issue is assessing heterogeneity; a variety of strategies for this assessment are discussed.

47
Paper
Imitative and Evolutionary Processes that Produce Coordination Among American Voters
Mebane, Walter R.

Uploaded 07-11-2003
Keywords imitation
evolutionary game
strategic coordination
voting
Abstract I examine the extent to which evolutionary game models based on the idea of pure imitation may help to explain recent empirical findings that the American electorate is involved in a situation of large-scale strategic coordination. Pure imitation in this context is the idea that some voters who are dissatisfied with their current strategy look around and adopt the strategy of the first voter they encounter who has attributes similar to theirs. The current analysis is part of a plan to use evolutionary models to motivate simulations based on National Election Studies data. The model implies that all voters ultimately use strategic coordination, although competing strategies disappear at different rates, depending on the voter's partisanship.

48
Paper
The Binomial-Beta Hierarchical Model for Ecological Inference: Methodological Issues and Fast Implementation via the ECM Algorithm
de Mattos, Rogerio S.
Veiga, Alvaro

Uploaded 10-17-2002
Keywords ecological inference
hierarchical models
binomial-beta distribution
ECM Algorithm
Abstract The binomial-beta hierarchical model from King, Rosen, and Tanner (1999) is a recent contribution to ecological inference. Developed for the 2x2 tables case and from a Bayesian perspective, the model compounds binomial and beta distributions into a hierarchical structure. From a sample of aggregate observations, inference with this model can be made regarding values of unobservable disaggregate variables. The paper reviews this EI model with two purposes. First, a faster approach to using it in practice, based on explicit modeling of the disaggregate data generation process along with posterior maximization implemented via the ECM algorithm, is proposed and illustrated with an application to a real dataset. Second, limitations concerning the use of marginal posteriors for binomial probabilities as the vehicle of inference (basically, the failure to respect the accounting identity), instead of the predictive distributions for the disaggregate proportions, are pointed out. In the concluding section, principles for EI model building in general and directions for further research are suggested.

49
Paper
State-Level Opinions from National Surveys: Poststratification using Hierarchical Logistic Regression
Park, David K.
Gelman, Andrew
Bafumi, Joseph

Uploaded 07-12-2002
Keywords Bayesian Inference
Hierarchical
Logit
Poststratification
Public Opinion
States
Elections
Abstract Previous researchers have pooled national surveys in order to construct state-level opinions. However, in order to overcome the small n problem for less populous states, they have aggregated a decade or more of national surveys to construct their measures. For example, Erikson, Wright and McIver (1993) pooled 122 national surveys conducted over 13 years to produce state-level partisan and ideology estimates. Brace, Sims-Butler, Arceneaux, and Johnson (2002) pooled 22 surveys over a 25-year period to produce state-level opinions on a number of specific issues. We construct a hierarchical logistic regression model for the mean of a binary response variable conditional on poststratification cells. This approach combines the modeling approach often used in small-area estimation with the population information used in poststratification (see Gelman and Little 1997). We produce state-level estimates pooling just seven national surveys conducted over a nine-day period. We first apply the method to a set of U.S. pre-election polls, poststratified by state and region as well as the usual demographic variables, and evaluate the model by comparing it to state-level election outcomes. We then produce state-level partisan and ideology estimates and evaluate them by comparison with Erikson, Wright and McIver's estimates.
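The poststratification step itself is a population-weighted average of cell-level model predictions. A minimal sketch with invented cells and counts (the hierarchical logit that produces the cell predictions is omitted):

    import numpy as np

    # One cell per state x demographic group (two states, two groups here)
    cell_state = np.array([0, 0, 1, 1])              # state of each cell
    cell_pred = np.array([0.62, 0.45, 0.55, 0.38])   # model P(support)
    cell_pop = np.array([300., 700., 500., 500.])    # census count per cell

    for s in (0, 1):
        m = cell_state == s
        est = np.average(cell_pred[m], weights=cell_pop[m])
        print(f"state {s}: {est:.3f}")

Weighting by census counts rather than sample counts is what corrects for the unrepresentativeness of any single national survey's state samples.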

50
Paper
Monotone Comparative Statics in Models of Politics: A Method for Simplifying Analysis and Enhancing Empirical Content
Bueno de Mesquita, Ethan
Ashworth, Scott

Uploaded 08-18-2004
Keywords game theory
formal theory
empirical implications of theoretical models
comparative statics

Abstract We elucidate a powerful yet simple method for deriving comparative statics conclusions for a wide variety of models: Monotone Comparative Statics (Milgrom and Shannon, 1994). Monotone comparative static methods allow researchers to extract robust, substantive empirical implications from formal models that can be tested using ordinal data and simple non-parametric tests. They also replace a diverse range of more technically difficult mathematics (facilitating richer, more realistic models), a large set of assumptions that are hard to understand or justify substantively (highlighting the political intuitions underlying a model's results), and a complicated set of methods for extracting implications from models. We present an accessible introduction to the central monotone comparative statics results and a series of practical tools for using these techniques in applied models (with reference to original sources, when relevant). Throughout we demonstrate the techniques with examples drawn from political science.
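In its simplest one-dimensional form the workhorse result reads as follows (this is the Topkis increasing-differences version; Milgrom and Shannon weaken increasing differences to single crossing):

    f(x',\theta') - f(x,\theta') \ \ge\ f(x',\theta) - f(x,\theta) \quad \text{for all } x' > x,\ \theta' > \theta \;\Longrightarrow\; x^*(\theta) = \arg\max_x f(x,\theta) \ \text{is nondecreasing in } \theta,

so a comparative static follows from an ordinal property of the payoff function alone, with no smoothness or concavity assumptions.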

