### Search Results

Results below are based on the search criterion 'Kullback'
Total number of records returned: 915

1
Paper
Bargaining Power in Practice: US Treaty-making with American Indians, 1784–1911
 Spirling, Arthur Uploaded 07-15-2010 Keywords American Indians, Native Americans, Text as Data, Scaling, Kernel methods, String Kernels Abstract Native Americans are unique among domestic actors in that their relations with the United States government involve treaty-making, with almost 600 such documents signed between the Revolutionary War and the turn of the twentieth century. We contend that the changing nature of their treaty negotiations can be seen as part of a theoretical, bargaining framework familiar to scholars of international relations. We then construct a comprehensive new data set by digitizing all of the treaties for systematic textual analysis. Employing scaling techniques validated with word use information, we show that a single dimension characterizes the treaties as more or less 'harsh' in land and resource cession terms. With a mind to earlier historical and legal literatures, we also show that the 'broken' treaties are not obviously distinguishable from contemporaneous valid ones, and that the post-1871 'agreements' represent a straightforward continuation of earlier treaty policy in both style and substance. In bargaining terms, we find evidence suggestive of a detrimental 'losing' effect for Indians involved in war with the US.

2
Paper
Automated Production of High-Volume, Near-Real-Time Political Event Data
 Schrodt, Philip Uploaded 08-30-2010 Keywords event data, ICEWS, DARPA, natural language processing, open source, forecasting, prediction, conflict Abstract This paper summarizes the current state-of-the-art for generating high-volume, near-real-time event data using automated coding methods, based on recent efforts for the DARPA Integrated Crisis Early Warning System (ICEWS) and NSF-funded research. The ICEWS work expanded by more than two orders of magnitude previous automated coding efforts, coding about 26 million sentences generated from 8 million stories condensed from around 30 gigabytes of text. The actual coding took six minutes. The paper is largely a general "how-to" guide to the pragmatic challenges and solutions to various elements of the process of generating event data using automated techniques. It also discusses a number of ways that this could be augmented with existing open-source natural language processing software to generate a third-generation event data coding system.

3
Paper
Enhancing a Geographic Regression Discontinuity Design Through Matching to Estimate the Effect of Ballot Initiatives on Voter Turnout
 Keele, Luke Titiunik, Rocio Zubizarreta, Jose Uploaded 07-13-2012 Keywords matching, causal inference, geography, regression discontinuity Abstract Of late there has been a renewed interest in natural experiments as a method for drawing causal inferences from observational data. One form of natural experiment exploits variation in geography, where units in one geographic area receive a treatment while units in another area do not. In this kind of geographic natural experiment, the hope is that assignment to treatment via geographic location creates as-if random variation in treatment assignment. When this happens, adjustment for baseline covariates is unnecessary. In many applications, however, some adjustment for baseline covariates may be necessary due to strategic sorting around the border between treatment and control areas. As such, analysts may wish to combine identification strategies--using both spatial proximity and covariates--for more plausible inferences. Here we explore how to utilize spatial proximity as well as covariates in the analysis of geographic natural experiments. We contend that standard statistical tools are ill-equipped to exploit covariates as well as variation in treatment assignment that is a function of spatial proximity. We use a mixed integer programming matching algorithm to flexibly incorporate information about both the discontinuity and observed covariates, which allows us to minimize spatial distance while preserving balance on observed covariates. We argue that combining information about both the covariates and the discontinuity creates a method of estimation that can be informally thought of as doubly robust. We demonstrate the method with data on ballot initiatives and turnout in Milwaukee, WI.

4
Paper
A Copula Approach to the Problem of Selection Bias in Models of Government Survival
 Chiba, Daina Martin, Lanny Stevenson, Randy Uploaded 01-02-2014 Keywords selection bias, copula theory, duration models, government survival, government formation Abstract Recent theories of coalition politics in parliamentary democracies suggest that government formation and survival are jointly determined outcomes. An important empirical implication of these theories is that the sample of observed governments analyzed in studies of government survival may be nonrandomly selected from the population of potential governments. This can lead to serious inferential problems. Unfortunately, current empirical models of government survival are unable to account for the possible biases arising from nonrandom selection. In this study, we use a copula-based framework to assess, and correct for, the dependence between the processes of government formation and survival. Our results suggest that existing studies of government survival, by ignoring the selection problem, significantly overstate the substantive importance of several covariates commonly included in empirical models.

5
Paper
TOGARY: An Automated Tool for Data Collection
 Filho, Dalson Silva, Lucas Rocha, Enivaldo Paranhos, Ranulfo Uploaded 11-08-2014 Keywords automated data collection, institutional transparency, corruption Abstract TOGARY is a tool for automated data collection. The program extracts information on corruption judicial sentences from the Brazilian National Council of Justice (CNJ) website, stores it in a structured dataset, and then exports it in spreadsheet format. Our motivation to create this software was the website's lack of systematic disclosure, since data is only available through case-by-case searches. The latest TOGARY update has detailed information on 17,505 corruption cases judged by subnational, national, electoral, and superior courts between 1992 and 2014. With this paper, we hope to diffuse the application of automated data collection procedures in the social sciences.

6
Paper
The Estimation of Time-Invariant Variables in Panel Analyses with Unit Fixed Effects
 Pluemper, Thomas Troeger, Vera E. Uploaded 07-23-2004 Keywords Time Invariant Variables, Unit effects, Monte Carlo, Hausman-Taylor Abstract This paper analyzes the estimation of time-invariant variables in panel data models with unit effects. We compare three procedures that have frequently been employed in comparative politics, namely pooled OLS, random effects and the Hausman-Taylor model, to a vector decomposition procedure that allows estimating time-invariant variables in an augmented fixed effects approach. The procedure we suggest consists of three stages: the first stage runs a fixed-effects model without time-invariant variables, the second stage decomposes the unit-effects vector into a part explained by the time-invariant variables and an error term, and the third stage re-estimates the first stage by pooled OLS including the time-invariant variables plus the error term of stage 2. We use Monte Carlo simulations to demonstrate that this method works better than its alternatives in estimating typical models in comparative politics. Specifically, the unit fixed effects vector decomposition technique performs better than both pooled OLS and random effects in the estimation of time-invariant variables correlated with the unit effects and better than Hausman-Taylor in estimating the time-invariant variables correlated with the unit effects. Finally, we re-analyze recent work by Huber and Stephens (2001) as well as by Beramendi and Cusack (2004). These analyses seek to cope with the problem of time-invariant variables in panel data.
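The three-stage procedure described in this abstract can be sketched in a few lines of NumPy. This is an illustrative simulation, not the authors' implementation: the data-generating process, sample sizes, and variable names are all assumptions made for the example, and the unit effects are deliberately drawn uncorrelated with the time-invariant regressor so each stage is easy to verify.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical panel: N units observed for T periods, one time-varying
# regressor x and one time-invariant regressor z.
N, T = 50, 10
unit = np.repeat(np.arange(N), T)
z = rng.normal(size=N)                     # time-invariant variable
alpha = rng.normal(size=N)                 # unit effects
x = rng.normal(size=N * T)
y = 1.0 * x + 2.0 * z[unit] + alpha[unit] + rng.normal(size=N * T)

# Stage 1: fixed-effects (within) regression, omitting z.
y_bar = np.bincount(unit, y) / T           # unit means
x_bar = np.bincount(unit, x) / T
y_w, x_w = y - y_bar[unit], x - x_bar[unit]
beta_fe = (x_w @ y_w) / (x_w @ x_w)
uhat = y_bar - beta_fe * x_bar             # estimated unit-effects vector

# Stage 2: decompose the unit-effects vector into a part explained
# by the time-invariant variable and an unexplained residual.
Z = np.column_stack([np.ones(N), z])
gamma = np.linalg.lstsq(Z, uhat, rcond=None)[0]
eta = uhat - Z @ gamma

# Stage 3: pooled OLS of y on x, z, and the stage-2 residual.
X3 = np.column_stack([np.ones(N * T), x, z[unit], eta[unit]])
coef, *_ = np.linalg.lstsq(X3, y, rcond=None)
print(coef[1], coef[2])   # estimates of the x and z coefficients (simulated truth: 1.0 and 2.0)
```

Including the stage-2 residual in stage 3 is what lets the pooled regression absorb the unit heterogeneity while still reporting a coefficient on the time-invariant variable.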

7
Paper
A Reassessment of Presidential Campaign Strategy Formation and Candidate Resource Allocation
 Reeves, Andrew Chen, Lanhee Nagano, Tiffany Uploaded 07-11-2003 Keywords comments greatly appreciated, presidential campaign strategy Abstract Daron Shaw (1999) argues in "The Methods behind the Madness: Presidential Electoral College Strategies, 1988-1996" that candidates formulate state-level general election campaign strategies based on a number of predictable and exogenous factors, such as the cost of television advertisements and electoral vote share. Shaw (1999) further asserts that these strategies are strong independent predictors of candidate resource allocation. His article supports these conclusions with what are claimed to be results from ordered probit and two-stage least squares (2SLS) regressions, but we demonstrate that both are in fact ordinary least squares (OLS) regressions. When we implement the methods that Shaw (1999) claims to use, we find that all key substantive conclusions in the article vanish. We show that the factors attributed to the formation of electoral college strategy are insignificant and that whether these strategies have any independent effect on the allocation of campaign resources cannot be ascertained from his (claimed or actual) methods and data.

8
Paper
Standard Voting Power Indexes Don't Work: An Empirical Analysis
 Gelman, Andrew Katz, Jonathan Bafumi, Joseph Uploaded 11-02-2002 Keywords Banzhaf index, decisive vote, elections, electoral college, Shapley value, voting power Abstract Voting power indexes such as that of Banzhaf (1965) are derived, explicitly or implicitly, from the assumption that all votes are equally likely (i.e., random voting). That assumption can be generalized to hold that the probability of a vote being decisive in a jurisdiction with $n$ voters is proportional to $1/\sqrt{n}$. We test---and reject---this hypothesis empirically, using data from several different U.S. and European elections. We find that the probability of a decisive vote is approximately proportional to $1/n$. The random voting model (or its generalization, the square-root rule) overestimates the probability of close elections in larger jurisdictions. As a result, classical voting power indexes make voters in large jurisdictions appear more powerful than they really are. The most important political implication of our result is that proportionally weighted voting systems (that is, each jurisdiction gets a number of votes proportional to $n$) are basically fair. This contradicts the claim in the voting power literature that weights should be approximately proportional to $\sqrt{n}$.
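The random-voting benchmark this abstract tests is easy to compute directly: a vote in a jurisdiction of $n$ voters is decisive exactly when the other $n-1$ voters split evenly. The sketch below (an illustration, not the authors' code) compares that exact binomial probability with its square-root approximation.

```python
import math

def p_decisive_random(n):
    """Probability of casting the tie-breaking vote when the other
    n - 1 voters each vote 50/50 independently (n odd)."""
    m = (n - 1) // 2
    return math.comb(n - 1, m) / 2 ** (n - 1)

for n in (101, 1001, 10001):
    exact = p_decisive_random(n)
    approx = math.sqrt(2 / (math.pi * n))   # the square-root rule
    print(f"n={n:6d}  exact={exact:.5f}  sqrt-rule={approx:.5f}")
```

Both columns shrink like $1/\sqrt{n}$; the paper's empirical point is that observed decisiveness in real elections falls off faster, roughly like $1/n$, because close elections are rarer in large jurisdictions than random voting predicts.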

9
Paper
The Fruit of Jefferson's Dinner Party: Roll Call Analysis of the Compromise of 1790 with Substantive and Relational Constraints
 Clinton, Joshua Meirowitz, Adam Uploaded 07-12-2002 Keywords ideal point estimation, log roll, First Congress, agenda estimation Abstract The "Compromise of 1790" -- in which legislative gridlock in the First House (1789-1791) was supposedly resolved by a deal in which Southern states conceded to the assumption of states' Revolutionary War debt by the federal government in exchange for locating the permanent Capitol along the Potomac -- is one of the earliest and most colorful examples of log rolls in American politics. However, historians disagree on the validity or completeness of this story, and this account is only directly supported by Jefferson's own telling. We assess the extent to which the voting record actually supports the hypothesis that a compromise was reached sometime in mid-June. Using substantive information about the roll call votes and relational information about the agenda to specify a model in which bill locations are identified, we implement a Bayesian analysis (using MCMC methods). Our results do not support the traditional account of the compromise. In resolving the capital question, legislators did not anticipate that assumption would carry. We also find that the final outcome was quite centrist and that legislator ideal points are better explained by sectional, as opposed to ideological, theories.

10
Paper
State and American Indian Negotiation of Gaming Compacts: An Event Count Analysis
 Boehmke, Frederick Witner, Richard Uploaded 01-23-2002 Keywords American Indian, gaming, event count, policy adoption Abstract There has been a proliferation of casino-style Indian gaming in the years since the passage of the Indian Gaming Regulatory Act in 1988. Yet little is known about the factors that influence states' and Indian nations' decisions to enter into gaming compacts. In this paper we seek to achieve two objectives. First, we seek to understand the expansion of Indian-state gaming compacts by studying how characteristics of states and Indian nations, along with spatial and temporal diffusion, affect the number of compacts negotiated. Most importantly, we focus on Indian nations' relationships with the states: their political influence with respect to the state and the contact they have with state government. Second, we introduce an empirical model new to the study of state politics by modeling the compacting process between Indian nations and states as an event count process. The event count model allows us to explain why some states have more Indian gaming than others and how the compacting process has evolved over time.

11
Paper
Alternative Models of Dynamics in Binary Time-Series--Cross-Section Models: The Example of State Failure
 Beck, Nathaniel Jackman, Simon Epstein, David O'Halloran, Sharyn Uploaded 07-14-2001 Keywords dynamic probit, BTSCS, state failure, Gibbs sampling, MCMC, transitional models, discrete data, ROC, correlated binary data, generalized residuals Abstract This paper investigates a variety of dynamic probit models for time-series--cross-section data in the context of explaining state failure. It shows that ordinary probit, which ignores dynamics, is misleading. Alternatives that seem to produce sensible results are the transition model and a model which includes a lagged latent dependent variable. It is argued that the use of a lagged latent variable is often superior to the use of a lagged realized dependent variable. It is also shown that the latter is a special case of the transition model. The relationship between the transition model and event history methods is also considered: the transition model estimates an event history model for both values of the dependent variable, yielding estimates that are identical to those produced by the two event history models. Furthermore, one can incorporate the insights gleaned from the event history models into the transition analysis, so that researchers do not have to assume duration independence. The conclusion notes that investigations of the various models have been limited to data sets which contain long sequences of zeros; models may perform differently in data sets with shorter bursts of zeros and ones.

12
Paper
How much does a vote count? Voting power, coalitions, and the Electoral College
 Gelman, Andrew Katz, Jonathan Uploaded 05-08-2001 Keywords coalition, decisive vote, electoral college, popular vote, voting power Abstract In an election the probability that a single voter is decisive is affected by the electoral system -- that is, the rule for aggregating votes into a single outcome. Under the assumption that all votes are equally likely (i.e., random voting), we prove that the average probability of a vote being decisive is maximized under a popular-vote (or simple majority) rule and is lower under any coalition system, such as the U.S. Electoral College system, no matter how complicated. Forming a coalition increases the decisive vote probability for the voters within a coalition, but the aggregate effect of coalitions is to decrease the average decisiveness of the population of voters. We then review results on voting power in an electoral college system. Under the random voting assumption, it is well known that the voters with the highest probability of decisiveness are those in large states. However, we show using empirical estimates of the closeness of historical U.S. Presidential elections that voters in small states have been advantaged because the random voting model overestimates the frequencies of close elections in the larger states. Finally, we estimate the average probability of decisiveness for all U.S. Presidential elections from 1960 to 2000 under three possible electoral systems: popular vote, electoral vote, and winner-take-all within Congressional districts. We find that the average probability of decisiveness is about the same under all three systems.

13
Paper
Strategic Misspecification in Discrete Choice Models
 Signorino, Curtis S. Yilmaz, Kuzey Uploaded 08-28-2000 Abstract [not transcribed]

14
Paper
Using Ecological Inference Point Estimates in Second Stage Linear Regressions
 Herron, Michael C. Shotts, Kenneth W. Uploaded 07-14-2000 Keywords ecological inference, second stage regressions, ordinary least squares, consistency Abstract The practice of using point estimates produced by the King (1997) ecological inference technique in second stage linear regressions leads to second stage results that, in general, are inconsistent. This conclusion holds, notably, even when all the assumptions behind King's ecological technique are satisfied. Second stage inconsistency is a consequence of the fact that King-based point estimates of disaggregated quantities are themselves inconsistent, and, moreover, these point estimates are contaminated by errors correlated with the true quantities the estimates measure. Our findings on second stage inconsistency follow from econometric theory in conjunction with an analysis of simulated and real ecological datasets, and based on the findings we propose a bootstrap that researchers can use to produce consistent second stage estimates and valid confidence intervals.
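The mechanism behind the second-stage inconsistency described above is the classic errors-in-variables problem: regressing on noisy first-stage point estimates attenuates the second-stage slope. The simulation below is a generic illustration of that mechanism under assumed parameter values (independent noise of equal variance), not a reproduction of King's EI model or the authors' bootstrap.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

x = rng.normal(size=n)                       # true disaggregated quantities
y = 2.0 * x + rng.normal(size=n)             # second-stage outcome, true slope 2.0
x_hat = x + rng.normal(size=n)               # first-stage point estimates with error

slope_true = (x @ y) / (x @ x)               # infeasible regression on the truth
slope_naive = (x_hat @ y) / (x_hat @ x_hat)  # feasible second-stage regression

# With equal signal and noise variance the naive slope is attenuated by a
# factor var(x) / (var(x) + var(noise)) = 0.5, i.e. pulled toward 1.0 here.
print(slope_true, slope_naive)
```

When the estimation errors are additionally correlated with the true quantities, as the abstract notes for King-based estimates, the bias need not be simple attenuation, which is why a tailored correction such as the proposed bootstrap is needed.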

15
Paper
Forecasting State Failure

16
Paper
Statistical Analysis of Finite Choice Models in Extensive Form
 Signorino, Curtis S. Uploaded 03-23-1999 Keywords random utility, discrete choice, strategic, equilibrium, finite choice, game theory Abstract (not transcribed)

17
Paper
Economic Perceptions and Information in a Heterogeneous Electorate
 Willette, Jennifer R. Uploaded 04-18-1999 Keywords economic voting, ordered probit, economic perceptions Abstract The relationship between vote choice and voter evaluations of national economic conditions is well established. There is little attention paid to the formation of those economic evaluations, however. This oversight is important since we know that economic perceptions are not direct reflections of objective economic conditions. To address this issue, I develop a model of economic perceptions which considers that the impact of media information on economic evaluations will differ based upon the 'information capability' of the individual. I use 1992 American National Election Survey data to estimate an ordered probit model of economic perceptions allowing the impact of personal economic information and media information to vary based upon the respondents' information capability. I test the hypothesis that individuals with higher information capability will give greater weight to media information when evaluating the economy. As information capability decreases, respondents will weight personal economic conditions more heavily.

18
Paper
Selection Bias in a Model of Candidate Entry Decisions
 Kanthak, Kristen Morton, Becky Gerber, Elisabeth R. Uploaded 07-13-1999 Keywords selection bias, Poisson estimation, population uncertainty Abstract In recent years, several states have changed or considered changing their laws regulating how political parties nominate candidates for office. We focus on one potentially important consequence of these changes: How do primary election laws affect candidate entry decisions? We have constructed and solved a formal model of individual candidate behavior in which potential candidates can choose to: 1) enter the electoral competition as major party candidates; 2) enter as minor party candidates; 3) enter as independents; or 4) not enter. Based on our analysis of the model, we hypothesize that the expected utility of each choice is a function, in part, of a state's primary election laws. We test our hypotheses with data on candidate choice from recent US Congressional elections. Estimation of our model is complicated, however, by the fact that we do not observe the choices of potential candidates who choose not to enter (i.e., the sample is truncated) and the observed dependent variable (i.e., candidate choices to run as major party, minor party, or independent candidates) is measured as a discrete, unordered polychotomous choice. We employ a two-stage Heckman (1979)-type estimation procedure that utilizes a Poisson framework for estimating candidate entry rates. We find that our estimates of the effects of electoral institutions on the partisan affiliation decisions of independent candidates are unaffected by sample selection. Our estimates of the partisan affiliation decisions of minor party candidates, however, change when we account for non-random sample selection.

19
Paper
Populists in the Pluralist Heaven: How Direct Democracy Reduces Bias in Interest Representation
 Boehmke, Frederick Uploaded 10-15-1999 Keywords initiative, direct democracy, interest groups, lobbying, fixed effects, representation Abstract This paper explores the effect of direct democracy on state interest group populations, providing an empirical test of a formal model of how access to the initiative process affects group formation and activities (Boehmke 1999), which predicts that more groups will mobilize and become active in initiative states. This prediction is supported by the findings in this paper, which also suggest that the effect of the initiative on group mobilizations has increased from the late 1970s to 1990. The prediction that groups that face a greater collective action problem are influenced more by the initiative is also confirmed since government and social groups are among those most affected. Counterfactual analysis indicates that the initiative process makes a state's interest group population more diverse, though the gains are decreasing from 1975 to 1990.

20
Paper
Voting, Abstention, and Individual Expectations in the 1992 Presidential Election
 Herron, Michael C. Uploaded 04-07-1998 Keywords voting, abstention, selection bias, 1992 election Abstract This paper develops and applies to the 1992 presidential election a statistical model of voting and abstention in three-candidate elections. The model allows us to estimate key preference-related covariates in 1992, the extent to which abstention rates were correlated with political preferences, and the impact on abstention rates of expectations regarding the election winner. Throughout this paper, we contrast our results with those in Alvarez and Nagler (1995), a study of the 1992 election that does not incorporate abstention, and in so doing we illustrate the selection bias risked by presidential election voting research that ignores abstention. Our results highlight the importance of retrospective voting in 1992, and we identify numerous policy issues, for example, the death penalty, environmental spending, and social security, that individuals used to distinguish the three candidates in the 1992 election. Abortion, we find, played only a minor role in candidate choice. We find support for the angry voting hypothesis, namely, that angry individuals often supported the independent candidate, Ross Perot. Concerning abstention, we find that supporters of the Democratic challenger Bill Clinton abstained at higher rates than supporters of Perot and the incumbent president George Bush. And, we find that expectations concerning the likelihood that Clinton was going to be victorious in 1992 influenced abstention rates. Namely, Clinton supporters who believed that Clinton was likely to win voted at higher rates than individuals who believed otherwise. The opposite relation holds for Bush supporters: such individuals, when they predicted a Clinton victory, frequently abstained from voting. The results in this paper suggest that empirical voting studies should explicitly model the impact of expectations on voting and abstention and, more generally, should model abstention as a viable, individual-level

21
Paper
Cosponsorship Coalitions in the U.S. House of Representatives

22
Paper
The Problem with Quantitative Studies of International Conflict
 Beck, Nathaniel King, Gary Zeng, Langche Uploaded 07-15-1998 Keywords conflict, logit, neural networks, forecasting, Bayesian analysis Abstract Despite immense data collections, prestigious journals, and sophisticated analyses, empirical findings in the literature on international conflict are frequently unsatisfying. Statistical results appear to change from article to article and specification to specification. Very few relationships hold up to replication with even minor respecification. Accurate forecasts are nonexistent. We provide a simple conjecture about what accounts for this problem, and offer a statistical framework that better matches the substantive issues and types of data in this field. Our model, a version of a "neural network" model, forecasts substantially better than any previous effort, and appears to uncover some structural features of international conflict.

23
Paper
If the Assumption Fits...: A Comment on the King Ecological Inference Solution
 Cho, Wendy K. T. Uploaded 08-20-1998 Keywords ecological inference Abstract I examine a recently proposed solution to the ecological inference problem (King 1997). It is asserted that the proposed model is able to reconstruct individual-level behavior from aggregate data. I discuss in detail both the benefits and limitations of this model. The assumptions of the basic model are often inappropriate for instances of aggregate data. The extended version of the model is able to correct for some of these limitations. However, it is difficult in most cases to apply the extended model properly.

24
Paper
Liberalism, Public Opinion, and their Critics: Some Lessons for Defending Science
 Jackman, Simon Uploaded 00-00-0000 Keywords liberalism, science, Enlightenment, statistics, public opinion, post-modernism Abstract Science and liberalism were both born out of the Enlightenment; liberalism's more-or-less successful defense against its critics may hold some insights for defenders of science against recent attacks. Liberalism, like science, is normatively thin, but procedurally rich. As such, liberalism and science have been able to accommodate shifting opinions about "the good" or "the truth" while pursuing them. For both science and liberalism, truth and "the good" are socially constructed, just as they themselves are socially constructed. This is sometimes overlooked. A brief history of the study of public opinion shows that liberalism's science -- political science, and the study of public opinion in particular -- is full of abstractions, metaphors, and approximations of reality that serve social ends. This can be used to disarm post-modern critics of science. The admission of a contextualized basis for knowledge is not an abandonment of science, but rather an acknowledgement of the richness of the world that is, if anything, an invitation to inquiry. This admission was the mutual origin of both science and liberalism, is the source of their resilience, and will ensure their safe passage through the post-modern "storm".

25
Paper
Advancement in the House of Representatives
 Wawro, Gregory Uploaded 00-00-0000 Keywords legislative entrepreneurship, career concerns, legislative institutions, mobility, Markov models, endogeneity, maximum likelihood Abstract (None submitted)

26
Paper
Costly Information and the Stability of Equilibria in the Intergenerational Dilemma
 Signorino, Curtis S. Uploaded 07-16-1996 Keywords evolutionary game theory, overlapping generations model Abstract Past analyses of the intergenerational dilemma have identified a number of subgame-perfect equilibrium strategies. However, nothing has been said about the stability of these equilibria: how robust they are to perturbation or how difficult it is to move to a Pareto-improving equilibrium. Moreover, it is generally assumed that information is costless. In this paper, I incorporate costly information and analyze the stability of the equilibria, identifying (1) the conditions under which CONFORMIST versus DEFECTOR equilibria will be stable and (2) the degree of difficulty in moving from the Pareto-suboptimal DEFECTOR equilibrium to the Pareto-optimal CONFORMIST equilibrium. In general, the maintenance of a CONFORMIST equilibrium becomes more difficult the more the second period is discounted and the higher the information costs. Additionally, when altruists are included in the model and information is only slightly costly, cycling among the homogeneous equilibria can occur. I show that to counter this instability, conformists should always punish altruists --- that to protect one's own future payoffs, one may need to police the interactions of others.

27
Paper
The Reciprocal Relationship Between State Defense Interest and Committee Representation in Congress
 Carsey, Thomas Rundquist, Barry Uploaded 11-04-1997 Keywords Distributive Politics, LISREL, Pooled Time Series Abstract Does prior representation of a state on a Congressional defense committee lead to higher levels of per capita defense contract awards, or do higher levels of prior per capita contract awards to a state increase its probability of being represented on a defense committee? To solve this puzzle, we estimate a cross-lagged three-equation model on data from all 50 states from 1963 to 1989 using maximum likelihood within LISREL. We find a substantial reciprocal but non-confounding relationship between representation and the allocation of benefits for the House, but not for the Senate. Thus, for the House, this more appropriate model of distributive politics in Congress supports both the committee-induced benefits hypothesis and the recruitment hypothesis. Further, the paper elaborates on how this reciprocal relationship plays out over time.

28
Paper
Evaluating Measures of Ideology
 Bishin, Benjamin G. King, Gary Zeng, Langche Uploaded 08-24-1997 Keywords FILTER, ADA, NOMINATE, ideology Abstract A vigorous debate has arisen over the metric used to measure ideology (Jackson and Kingdon 1992, Poole and Rosenthal 1985, Snyder 1991, Krehbiel 1993). Ideology is difficult to measure because legislators' statements may be politically motivated and insincere. This paper evaluates the accuracy of NOMINATE and ADA scores by comparing them to an independent measure, based on background characteristics, developed herein. By Forecasting the Ideology of Legislators Through Elite Response (FILTER), this measure avoids the problems inherent in use of the roll call vote metric. In addition, the FILTER methodology is generalizable to studies of other deliberative bodies. The results show that FILTER scores are highly correlated with NOMINATE and ADA scores.

29
Paper
Heterogeneity, Salience, and Voter Decision Rules for Candidate Preference
 Glasgow, Garrett Uploaded 08-10-1997 Keywords voter behavior, decision rules, rank-ordered logit, salience, issue voting Abstract Voters in American Presidential elections display a wide variety of decision rules when choosing a candidate. One form of this heterogeneity is differential weighting of issues used to make a vote choice. The structure of this heterogeneity and differential salience of issues has important implications for the American political process. Determining the nature of these heterogeneous preferences is vital to understanding electoral politics in the United States. An empirical technique for modeling and exploring heterogeneity is developed and applied to the 1980 NES Panel Study. I show that heterogeneity in voter decision rules is widespread, and that while many voters rely on non-issue considerations when determining candidate preference, issue voting does play a role in the decision rules of many voters.

30
Paper
Forecasting Time Series
 Hinich, Melvin J. Uploaded 07-08-1997 Keywords forecastautoregressivevector ARstate spacelinear Abstract The limits of forecasting a linear time series system are discussed. A stable autoregressive linear system can only be accurately predicted for a few steps ahead of the last observation. If the time series is a deterministic trend plus random fluctuations then the trend can be predicted as long as it is stable.
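The rapid decay of forecast information in a stable autoregressive series can be sketched with a toy AR(1) recursion (an illustrative helper, not code from the paper; the function name and parameters are my own):

```python
def ar1_forecast(last, phi, mu, steps):
    """h-step-ahead forecasts of a stable AR(1): y_t = mu + phi*(y_{t-1} - mu) + e_t.

    Each forecast reverts geometrically toward the mean mu (for |phi| < 1),
    so only the first few steps carry information beyond the unconditional
    mean -- the limit on forecastability the abstract describes."""
    preds = []
    y = last
    for _ in range(steps):
        y = mu + phi * (y - mu)
        preds.append(y)
    return preds

print(ar1_forecast(10.0, 0.5, 0.0, 3))  # -> [5.0, 2.5, 1.25]
```

With phi = 0.5 the forecast halves its distance to the mean at every step, so by a handful of steps ahead it is indistinguishable from the unconditional mean.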

31
Paper
Breaking Up Isn't So Hard to Do: Ecological Inference and Split-Ticket Voting in the 1988 Presidential Election
 Burden, Barry C. Kimball, David Uploaded 04-01-1997 Keywords ecological inferencesplit-ticket voting Abstract This paper applies Gary King's (1997) solution to the ecological inference problem to examine split-ticket voting patterns in the 1988 elections. Earlier studies of split-ticket voting used either aggregate data, which suffer from the ecological fallacy, or survey data, which suffer from misreporting and small, unrepresentative sample sizes within states and districts. This paper produces accurate estimates of the proportions of voters splitting their ballots in each state and district for the first time. With these results we test several competing theories of split-ticket voting and divided government. We find, contrary to Fiorina's (1996) balancing argument, that voters are not intentionally splitting their tickets to produce moderate policies. In most cases split outcomes are the result of lopsided congressional campaigns that feature well-funded, high quality candidates versus unknown competitors.

32
Paper
Strange Bedfellows or the Usual Suspects? Spatial Models of Ideology and Interest Group Coalitions
 Almeida, Richard Uploaded 04-01-2005 Keywords Interest groupscoalitionsspatial theorypoisson regressionideology Abstract Entering into coalitions has become a standard tactic for interest groups trying to maximize success while minimizing cost. The strategic conditions underlying decisions to form or join coalitions are beginning to be explored in the political science literature, yet very little is known about the process and criteria through which interest groups select coalition partners. In this paper, I explore the partner selection process by applying spatial theories of ideology and coalition formation to interest group participation on amicus curiae briefs. Previous work demonstrates that the lobbying efforts of groups can be used to generate a general measure of ideology for any group. These captured ideology scores are used in statistical models of interest group coalition partner selection on amicus curiae briefs from 1954-1985. This research demonstrates that the ideology scores captured for each group are powerful predictors of interest group coalition partner selection, even when controls for resources, group type, and other potential predictors are included.

33
Paper
A Method for Weighting Survey Samples of Low-Incidence Voters
 Nagler, Jonathan Alvarez, R. Michael Uploaded 07-19-2005 Abstract In this paper we describe a method for weighting surveys of a sub-sample of voters. We focus on the case of Latino voters, analyzing data from three surveys: two opinion polls leading up to the 2004 presidential election, and the national exit poll from the 2004 election. We take advantage of the large amount of available data describing the demographics of Hispanic citizens, and combine it with a model of turnout among those citizens to improve our estimate of the demographic characteristics of Hispanic voters. We show that alternate weighting schemes can substantively alter inferences about population parameters. [This is an incomplete version of the paper; it omits calculations of uncertainty, which are some of the fundamental quantities of interest of the paper.]

34
Paper
Parametric and Nonparametric Bayesian Models for Ecological Inference in 2 x 2 Tables
 Imai, Kosuke Lu, Ying Uploaded 07-21-2004 Keywords Aggregate dataData augmentationDensity estimationDirichlet process priorNormal mixturesRacial voting Abstract The ecological inference problem arises when making inferences about individual behavior from aggregate data. Such a situation is frequently encountered in the social sciences and epidemiology. In this article, we propose a Bayesian approach based on data augmentation. We formulate ecological inference in $2 \times 2$ tables as a missing data problem where only the weighted average of two unknown variables is observed. This framework directly incorporates the deterministic bounds, which contain all information available from the data, and allows researchers to incorporate the individual-level data whenever available. Within this general framework, we first develop a parametric model. We show that through the use of an EM algorithm, the model can formally quantify the effect of missing information on parameter estimation. This is an important diagnostic for evaluating the degree of aggregation effects. Next, we introduce a nonparametric Bayesian model using a Dirichlet process prior to relax the distributional assumption of the parametric model. Through simulations and an empirical application, we evaluate the relative performance of our models and other existing methods. We show that in many realistic scenarios, aggregation effects are so severe that more than half of the information is lost, yielding estimates with little precision. We also find that our nonparametric model generally outperforms parametric models. C-code, along with an R interface, is publicly available for implementing our Markov chain Monte Carlo algorithms to fit the proposed models.
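The "deterministic bounds" the abstract mentions can be sketched for a single areal unit (a generic illustration of the classic Duncan-Davis bounds, not the authors' C/R code; names are mine). When only the aggregate t = x*beta1 + (1 - x)*beta2 is observed, each unknown rate is still boxed into an interval:

```python
def duncan_davis_bounds(x, t):
    """Deterministic bounds on beta1, the unknown rate for group 1 in one
    unit, when only t = x*beta1 + (1 - x)*beta2 is observed and both
    beta1, beta2 must lie in [0, 1]. Assumes 0 < x <= 1, where x is
    group 1's population share."""
    lo = max(0.0, (t - (1.0 - x)) / x)
    hi = min(1.0, t / x)
    return lo, hi
```

For example, if group 1 is half the unit (x = 0.5) and the observed aggregate is t = 0.6, then beta1 must lie between 0.2 and 1.0; the bounds tighten as x grows or t approaches 0 or 1.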

35
Paper
Incentives, Complexity, and Motivations in Experiments
 Bassi, Anna Morton, Rebecca Williams, Kenneth Uploaded 06-24-2006 Abstract We compare three motivation procedures in a voting experiment: 1) subjects paid a flat fee for participating, 2) subjects paid according to choices as is typical in a political economy experiment, and 3) subjects paid double the typical amount. We also vary complexity of the voting game. Financial incentives significantly increase the probability that subjects choose Bayesian-Nash predicted strategies. In the simpler game the typical financial incentive is sufficient; higher payments have no effect. But in the complex game, increasing financial incentives beyond the typical level is consequential. Further, repetition interacts with typical financial incentives in the complex game to increase the likelihood of Bayesian-Nash strategies. The evidence suggests that financial incentives increase subjects' cognitive attention to experimental tasks, making their behavior more comparable to that of individuals in analogous observational settings, which enhances theory evaluation in experiments and the external validity of the results.

36
Paper
The Balance Test Fallacy in Matching Methods for Causal Inference
 Imai, Kosuke King, Gary Stuart, Elizabeth Uploaded 06-29-2006 Keywords causal inferencecovariate balancematchingtreatment effect Abstract Matching methods are widely used to adjust for possibly confounded treatment assignment when making causal inferences. The success of the matching adjustment depends on generating as much equivalence as possible between the distribution of pre-treatment covariates in the treated and control groups. In numerous articles across a diverse variety of academic fields that use matching, researchers evaluate the degree of equivalence by conducting hypothesis tests, most commonly the $t$-test for the mean difference of each of the covariates in the two matched groups. We demonstrate that these hypothesis tests are fallacious and discuss better alternatives.
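The abstract criticizes hypothesis tests such as the $t$-test as balance diagnostics. One widely used scale-free alternative (not necessarily the one these authors propose) is the standardized difference in means; a minimal stdlib-only sketch, with names of my own choosing:

```python
from statistics import mean, variance

def standardized_diff(treated, control):
    """Standardized difference in means between matched treated and
    control groups for one covariate. Unlike the t-statistic, it has no
    sample-size term, so discarding units during matching cannot push it
    toward apparent "balance" merely by shrinking n."""
    pooled_sd = ((variance(treated) + variance(control)) / 2) ** 0.5
    return (mean(treated) - mean(control)) / pooled_sd
```

A $t$-statistic for the same covariate shrinks mechanically as matching drops observations, which is one way a balance "test" can pass even when the covariate distributions remain far apart.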

37
Paper
Using Graphs Instead of Tables to Improve the Presentation of Empirical Results in Political Science
 Kastellec, Jonathan Leoni, Eduardo Uploaded 11-15-2006 Keywords statistical graphicstablespresentationdescriptive statisticsregression results Abstract When political scientists present empirical results, they are much more likely to use tables rather than graphs, despite the fact that the latter greatly increases the clarity of presentation and makes it easier for a reader or listener to draw clear and correct inferences. Using a sample of leading journals, we document this tendency and suggest reasons why researchers prefer tables. We argue the extra work required in producing graphs is rewarded by greatly enhanced presentation and communication of empirical results. We illustrate their benefits by turning several published tables into graphs, including tables that present descriptive data and regression results. We show that regression graphs properly emphasize point estimates and confidence intervals rather than null significance hypothesis testing, and that they can successfully present the results of multiple regression models. A move away from tables and towards graphs would increase the quality of the discipline's communicative output and make empirical findings more accessible to every type of audience.

38
Paper
Extracting Systematic Social Science Meaning from Text
 Hopkins, Daniel King, Gary Uploaded 07-12-2007 Keywords automated content analysismachine learningsimulated extrapolationnon-parametric estimationinternet2008 U.S. Presidential election Abstract We develop two methods of automated content analysis that give approximately unbiased estimates of quantities of theoretical interest to social scientists. With a small sample of documents hand coded into investigator-chosen categories, our methods can give accurate estimates of the proportion of text documents in each category in a larger population. Existing methods successful at maximizing the percent of documents correctly classified allow for the possibility of substantial estimation bias in the category proportions of interest. Our first approach corrects this bias for any existing classifier, with no additional assumptions. Our second method estimates the proportions without the intermediate step of individual document classification, and thereby greatly reduces the required assumptions. For both methods, we also correct statistically, apparently for the first time, for the far less-than-perfect levels of inter-coder reliability that typically characterize human attempts to classify documents, an approach that will normally outperform even population hand coding when that is feasible. We illustrate these methods by tracking the daily opinions of millions of people about candidates for the 2008 presidential nominations in online blogs, data we introduce and make available with this article, and through evaluations in available corpora from other areas, including movie reviews, university web sites, and Enron emails. We also offer easy-to-use software that implements all methods described.

39
Paper
Sharp Bounds on the Causal Effects in Randomized Experiments with "Truncation-by-Death"
 Imai, Kosuke Uploaded 08-23-2007 Keywords Average treatment effectCausal inferenceDirect and indirect effectIdentificationPrincipal stratificationQuantile treatment effect. Abstract Many randomized experiments suffer from the "truncation-by-death" problem where potential outcomes are not defined for some subpopulations. For example, in medical trials, quality-of-life measures are only defined for surviving patients, and various skip-pattern questions are analyzed in social science survey experiments. In this paper, I derive the sharp bounds on causal effects under various assumptions. My identification analysis is based on the idea that the "truncation-by-death" problem can be formulated as the contaminated data problem. The proposed analytical techniques can be applied to other settings in causal inference including the estimation of direct and indirect effects and the analysis of three-arm randomized experiments with noncompliance.

40
Paper
Going beyond the book: Toward critical reading in statistics teaching
 Gelman, Andrew Uploaded 06-01-2008 Keywords categorical and continuous variableshandednessmenstruationprimary sourcessecondary sourcessex ratioteachingtextbookstraffic accidents Abstract We can improve our teaching of statistical examples from books by collecting further data, reading cited articles, and performing further data analysis. This should not come as a surprise, but what might be new is the realization of how close to the surface these research opportunities are: even influential and celebrated books can have examples where more can be learned with a small amount of additional effort. We discuss three examples that have arisen in our own teaching: an introductory textbook that motivated us to think more carefully about categorical and continuous variables; a book for the lay reader that misreported a study of menstruation and accidents; and a monograph on the foundations of probability that overinterpreted statistically insignificant fluctuations in sex ratios.

41
Paper
Voter transition estimation in multiparty systems
 Andreadis, Ioannis Uploaded 07-07-2008 Keywords ElectionsVoter transition ratesEcological inferenceMultiparty systems Abstract Recent advances in the field of ecological inference have provided researchers with new tools to estimate voter transition in two-party systems. Although some researchers have dealt with the R x C ecological inference problem, voter transition estimation remains a difficult and tedious goal. As a result scholars of multi-party systems still struggle with their electoral data. In this paper we present a new approach and we propose a new method that deals with this issue.

42
Paper
Exploiting a Rare Shift in Communication Flows to Document News Media Persuasion: The 1997 United Kingdom General Election
 Ladd, Jonathan Lenz, Gabriel Uploaded 07-30-2008 Keywords Media persuasionendorsementscampaignselectionsmatchingcausal inference Abstract Using panel data and matching techniques, we exploit a rare change in communication flows -- the endorsement switch to the Labour Party by several prominent British newspapers before the 1997 United Kingdom general election -- to study the persuasive power of the news media. These unusual events provide an opportunity to test for news media persuasion while avoiding methodological pitfalls that have plagued previous studies. By comparing readers of newspapers that switched endorsements to similar individuals who did not read these newspapers, we estimate that these papers persuaded a considerable share of their readers to vote for Labour. Depending on the statistical approach, the point estimates vary from about 10 percent to as high as 25 percent of readers. These findings provide rare, compelling evidence that the news media exert a powerful influence on mass political behavior.

43
Paper
Causal Inference with Differential Measurement Error: Nonparametric Identification and Sensitivity Analyses of a Field Experiment on Democratic Deliberations
 Imai, Kosuke Yamamoto, Teppei Uploaded 06-30-2008 Keywords differential misclassificationnonparametric boundsretrospective studiessensitivity analysissurvey measurements Abstract Political scientists have long been concerned about the validity of survey measurements. Although many have studied classical measurement error in linear regression models where the error is assumed to arise completely at random, in a number of situations the error may be correlated with the outcome. We analyze the impact of differential measurement error on causal estimation. The proposed nonparametric identification analysis avoids arbitrary modeling decisions and formally characterizes the roles of additional assumptions. We show the serious consequences of differential misclassification and offer a new sensitivity analysis that allows researchers to evaluate the robustness of their conclusions. Our methods are motivated by a field experiment on democratic deliberations, in which one set of estimates potentially suffers from differential misclassification. We show that an analysis ignoring differential measurement error may considerably overestimate the causal effects. This finding contrasts with the case of classical measurement error which always yields attenuation bias.

44
Paper
Modeling Dynamics in Time-Series-Cross-Section Political Economy Data
 Beck, Nathaniel Katz, Jonathan Uploaded 06-04-2009 Keywords dynamicsTSCSpolitical economylagged dependent variablenon-stationary Abstract This paper deals with a variety of dynamic issues in the analysis of time-series-cross-section (TSCS) data. While the issues raised are more general, we focus on applications to political economy. We begin with a discussion of specification and lay out the theoretical differences implied by the various types of time series models that can be estimated. It is shown that there is nothing pernicious in using a lagged dependent variable and that all dynamic models either implicitly or explicitly have such a variable; the differences between the models relate to assumptions about the speeds of adjustment of measured and unmeasured variables. When adjustment is quick it is hard to differentiate between the various models; with slower speeds of adjustment the various models make sufficiently different predictions that they can be tested against each other. As the speed of adjustment gets slower and slower, specification (and estimation) gets more and more tricky. We then turn to a discussion of estimation. It is noted that models with both a lagged dependent variable and serially correlated errors can easily be estimated; it is only OLS that is inconsistent in this situation. We then show via Monte Carlo analysis that, for typical TSCS data, fixed effects with a lagged dependent variable performs about as well as the much more complicated Kiviet estimator, and better than the Anderson-Hsiao estimator (both designed for panels).

45
Paper
Party Polarization in Congress: A Social Networks Approach
 Waugh, Andrew Pei, Liuyi Fowler, James Mucha, Peter Porter, Mason Uploaded 07-23-2009 Abstract We use the network science concept of modularity to measure polarization in the United States Congress. As a measure of the relationship between intra-community and extra-community ties, modularity provides a conceptually-clear measure of polarization that directly reveals both the number of relevant groups and the strength of their divisions. Moreover, unlike measures based on spatial models, modularity does not require predefined assumptions about the number of coalitions or parties, the shape of legislator utilities, or the structure of the party system. Importantly, modularity can be used to measure polarization across all Congresses, including those without a clear party divide, thereby permitting the investigation of partisan polarization across a broader range of historical contexts. Using this novel measure of polarization, we show that party influence on Congressional communities varies widely over time, especially in the Senate. We compare modularity to extant polarization measures, noting that existing methods underestimate polarization in periods in which party structures are weak, leading to artificial exaggerations of the extremeness of the recent rise in polarization. We show that modularity is a significant predictor of future majority party changes in the House and Senate and that turnover is more prevalent at medium levels of modularity. We derive two individual-level variables from modularity, which we call "divisiveness" and "solidarity," and show that they are significant predictors of reelection success for individual House members, helping to explain why partially-polarized Congresses are less stable. Our results suggest that modularity can serve as an early-warning signal of changing group dynamics, which are reflected only later by changes in formal party labels.
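Newman modularity, the quantity at the heart of this abstract, can be sketched in a few lines for an unweighted network (a generic textbook implementation, not the authors' code; the toy legislature below is invented for illustration):

```python
from collections import defaultdict

def modularity(edges, community):
    """Newman modularity Q of a partition: the share of edges falling
    inside communities minus the share expected under degree-preserving
    random rewiring. Higher Q indicates stronger group divisions."""
    m = len(edges)
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    inside = sum(community[u] == community[v] for u, v in edges) / m
    comm_deg = defaultdict(int)
    for node, label in community.items():
        comm_deg[label] += deg[node]
    expected = sum((d / (2 * m)) ** 2 for d in comm_deg.values())
    return inside - expected

# Two tight cliques joined by a single cross-cutting tie: strongly modular.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
parties = {0: "D", 1: "D", 2: "D", 3: "R", 4: "R", 5: "R"}
```

For this toy network, 6 of 7 edges fall within parties against an expected share of 0.5, giving Q ≈ 0.357; as cross-party ties multiply, Q falls toward zero.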

46
Paper
New Empirical Strategies to Model the Government Formation Process
 Glasgow, Garrett Golder, Matt Golder, Sona Uploaded 07-15-2010 Keywords discrete choicemixed logitIIArandom coefficientsgovernment formation Abstract Over the past decade, a "standard approach" to the quantitative study of government formation has developed. This approach involves the use of a conditional (CL) logit model to examine government choice with the government formation opportunity as the unit of analysis. In this paper, we reconsider this approach and make three methodological contributions. First, we demonstrate that the existing procedure used to test for the independence of irrelevant alternatives (IIA) is flawed and severely biased against finding IIA violations. Our new testing procedure reveals that many government alternatives share unobserved attributes, thereby violating the IIA assumption and making the CL model inappropriate. Second, we employ a mixed logit with random coefficients that allows us to take account of unobserved heterogeneity and IIA violations. Third, we return to a question that originally motivated this literature, namely, what determines the likelihood that a particular party enters government? Although scholars have generally abandoned this question due to perceived methodological limitations in our ability to address it, we demonstrate that calculating probabilities for parties entering office rather than governments is straightforward in a mixed logit framework.

47
Paper
Bayesian Methods: A Social and Behavioral Sciences Approach, ANSWER KEY TO THE SECOND EDITION. Odd Numbers.
 Park, Hong Min Gill, Jeff Uploaded 09-14-2010 Keywords BayesmodelingsimulationBayesian inferenceMCMCpriorposteriorBayes FactorDICGLMMarkov chainMonte Carlohierarchical modelslinearnonlinear Abstract This is the odd-numbered exercise answers to the second edition of Bayesian Methods: A Social and Behavioral Sciences Approach (minus Chapter 13). Course Instructors can get the full set from Chapman & Hall/CRC upon request.

48
Paper
How Robust Standard Errors Expose Methodological Problems They Do Not Fix
 King, Gary Roberts, Margaret Uploaded 07-13-2012 Keywords robust standard errorsclustered standard errorsheteroskedasticity-consistent standard errors Abstract "Robust standard errors" are used in a vast array of scholarship across all fields of empirical political science and most other social science disciplines. The popularity of this procedure stems from the fact that estimators of certain quantities in some models can be consistently estimated even under particular types of misspecification; and although classical standard errors are inconsistent in these situations, robust standard errors can sometimes be consistent. However, in applications where misspecification is bad enough to make classical and robust standard errors diverge, assuming that misspecification is nevertheless not so bad as to bias everything else requires considerable optimism. And even if the optimism is warranted, we show that settling for a misspecified model (even with robust standard errors) can be a big mistake, in that all but a few quantities of interest will be impossible to estimate (or simulate) from the model without bias. We suggest a different practice: Recognize that differences between robust and classical standard errors are like canaries in the coal mine, providing clear indications that your model is misspecified and your inferences are likely biased. At that point, it is often straightforward to use some of the numerous and venerable model checking diagnostics to locate the source of the problem, and then modern approaches to choosing a better model. With a variety of real examples, we demonstrate that following these procedures can drastically reduce biases, improve statistical inferences, and change substantive conclusions.

49
Paper
Conservative Vote Probabilities: An Easier Method for the Analysis of Roll Call Data
 Fowler, Anthony Hall, Andrew B. Uploaded 08-08-2012 Keywords Roll CallIdeologyCongressSupreme CourtState LegislaturesNon-parametric Abstract We propose a new roll-call scaling method based on OLS which is easier to implement and understand than previous methods and also produces directly interpretable estimates. This measure, Conservative Vote Probability (CVP), indicates the probability that an individual legislator votes "conservatively" relative to the median legislator. CVP is a flexible non-parametric statistical technique that requires no complicated assumptions but still produces legislator scalings that correlate with previous roll call methods at extremely high levels. In this paper we introduce the methodology behind CVP and offer several substantive examples to demonstrate its efficacy as an easier, more accessible alternative to previous roll call methods.

50
Paper
The fault in our stars: Measuring and correcting significance bias in Political Science
 Esarey, Justin Wu, Ahra Uploaded 01-16-2014 Keywords significancehypothesis testregression Abstract Prior research finds that statistically significant results are overrepresented in scientific publications. If significant results are consistently favored in the review process, published results will systematically overstate the magnitude of their findings. Worse yet, the typical two-tailed statistical significance test with $\alpha = 0.05$ does little to prevent the proliferation of false positives in the literature. In this paper, we systematically measure the impact of these two forms of significance bias on published research in quantitative political science. We estimate that 35% or more of published results exaggerate their substantive significance to a meaningful degree, with an average upward bias of 9%-20%. Additionally, 15%-35% of published results are at elevated risk of being false positives. Most importantly, we evaluate a variety of new and existing methodological strategies to correct both forms of significance bias. We conclude that a smaller $\alpha$ threshold combined with conservative Bayesian priors is an effective remedy.
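The inflation mechanism the abstract describes can be illustrated with a small simulation (entirely hypothetical numbers of my own choosing, not the paper's estimates): when only significant estimates are "published," their average exceeds the true effect.

```python
import random

def mean_published_effect(true_effect, se, n_studies, z_crit=1.96, seed=1):
    """Simulate n_studies noisy estimates of the same true effect and
    'publish' only those significant at the two-tailed 5% level
    (|estimate/se| > z_crit). Selecting on significance truncates the
    sampling distribution, so the mean published estimate overstates
    the truth -- the significance bias described in the abstract."""
    rng = random.Random(seed)
    published = [est for est in
                 (rng.gauss(true_effect, se) for _ in range(n_studies))
                 if abs(est / se) > z_crit]
    return sum(published) / len(published)
```

With a true effect of 0.1 and a standard error of 0.05, only estimates above roughly 0.098 (or far below zero) clear the bar, so the published average lands well above 0.1 even though every individual study is unbiased.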
