
Search Results

The results below are based on the search criterion 'estimation'
Total number of records returned: 35

The Fruit of Jefferson's Dinner Party: Roll Call Analysis of the Compromise of 1790 with Substantive and Relational Constraints
Clinton, Joshua
Meirowitz, Adam

Uploaded 07-12-2002
Keywords ideal point estimation
log roll
First Congress
agenda estimation
Abstract The "Compromise of 1790" -- in which legislative gridlock in the First House (1789-1791) was supposedly resolved by a deal in which Southern states conceded to the federal assumption of states' Revolutionary War debt in exchange for locating the permanent Capitol along the Potomac -- is one of the earliest and most colorful examples of a log roll in American politics. However, historians disagree on the validity or completeness of this story, which is directly supported only by Jefferson's own account. We assess the extent to which the voting record actually supports the hypothesis that a compromise was reached sometime in mid-June. Using substantive information about the roll call votes and relational information about the agenda to specify a model in which bill locations are identified, we implement a Bayesian analysis (using MCMC methods). Our results do not support the traditional account of the compromise. In resolving the capital question, legislators did not anticipate that assumption would carry. We also find that the final outcome was quite centrist and that legislator ideal points are better explained by sectional, as opposed to ideological, theories.

Bridging Institutions and Time: Creating Comparable Preference Estimates for Presidents, Senators, Representatives and Justices, 1950-2002
Bailey, Michael

Uploaded 07-19-2005
Keywords ideal point estimation
Supreme Court
Abstract Difficulty in comparing preferences across time and institutional contexts hinders the empirical testing of many important theories in political science. In this paper, I characterize these difficulties and provide a measurement approach that relies on inter-temporal and inter-institutional "bridge" observations and Bayesian Markov chain simulation methods. I generate preference estimates for Presidents, Senators, Representatives and Supreme Court Justices that are comparable across time and across institutions. Such preference estimates are indispensable in a variety of important research projects, including research on statutory interpretation, executive influence on the Supreme Court and Senate influence on court appointments.

Robust Estimation and Outlier Detection for Overdispersed Multinomial Models of Count Data, with an Application to the Elian Effect in Florida
Mebane, Walter R.
Sekhon, Jasjeet

Uploaded 07-12-2002
Keywords robust estimation
overdispersed multinomial regression
Abstract We develop a robust estimation method for regression models for vectors of counts (overdispersed multinomial models). The method requires only that the model is good for most---not all---of the observed data, and it identifies outliers. A Monte Carlo sampling experiment shows that the robust method can produce consistent parameter estimates and correct statistical inferences even when ten percent of the data are generated by a significantly different process, where nonrobust maximum likelihood estimation fails. We analyze Florida county vote data from the 2000 presidential election, considering votes for five categories of presidential candidates (Buchanan, Nader, Gore, Bush and "other"), focusing on Cuban-Americans' reactions to the Elian Gonzalez affair. We replicate results regarding Buchanan's vote in Palm Beach County. We use Census tract data within Miami-Dade County to confirm the need to take the Cuban-American population explicitly into account. The analysis illustrates how the robust method can support triangulation to verify whether a regression specification is adequate.

Parametric and Nonparametric Bayesian Models for Ecological Inference in 2 x 2 Tables
Imai, Kosuke
Lu, Ying

Uploaded 07-21-2004
Keywords Aggregate data
Data augmentation
Density estimation
Dirichlet process prior
Normal mixtures
Racial voting
Abstract The ecological inference problem arises when making inferences about individual behavior from aggregate data. Such a situation is frequently encountered in the social sciences and epidemiology. In this article, we propose a Bayesian approach based on data augmentation. We formulate ecological inference in 2 x 2 tables as a missing data problem where only the weighted average of two unknown variables is observed. This framework directly incorporates the deterministic bounds, which contain all information available from the data, and allows researchers to incorporate individual-level data whenever available. Within this general framework, we first develop a parametric model. We show that through the use of an EM algorithm, the model can formally quantify the effect of missing information on parameter estimation. This is an important diagnostic for evaluating the degree of aggregation effects. Next, we introduce a nonparametric Bayesian model using a Dirichlet process prior to relax the distributional assumption of the parametric model. Through simulations and an empirical application, we evaluate the relative performance of our models and other existing methods. We show that in many realistic scenarios, aggregation effects are so severe that more than half of the information is lost, yielding estimates with little precision. We also find that our nonparametric model generally outperforms parametric models. C code, along with an R interface, is publicly available for implementing our Markov chain Monte Carlo algorithms to fit the proposed models.
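The deterministic bounds the abstract refers to are straightforward to compute. A minimal sketch with hypothetical variable names, assuming each aggregate unit reports only the known row share x and the observed weighted average y = x*w1 + (1-x)*w2:

```python
def ei_bounds(y, x):
    """Deterministic bounds for a 2 x 2 ecological table.

    Only y = x*w1 + (1-x)*w2 is observed, where w1 and w2 are the
    unknown row proportions and x is the known row share. The bounds
    follow from the constraint 0 <= w1, w2 <= 1.
    """
    if not (0 < x < 1):
        raise ValueError("x must lie strictly between 0 and 1")
    w1_lo = max(0.0, (y - (1 - x)) / x)       # w2 at its maximum of 1
    w1_hi = min(1.0, y / x)                   # w2 at its minimum of 0
    w2_lo = max(0.0, (y - x) / (1 - x))
    w2_hi = min(1.0, y / (1 - x))
    return (w1_lo, w1_hi), (w2_lo, w2_hi)
```

When y is extreme relative to x, the bounds can be quite informative even before any model is imposed, which is why the authors build them into the likelihood.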

A model-based approach to the analysis of a large table of counts: occupational class patterns among Australians by ancestry, generation, and age group
Jones, Kelvyn
Johnston, Ron
Manley, David
Owen, Dewi
Forrest, James

Uploaded 10-06-2014
Keywords tabular analysis of counts
log-Normal Poisson model
random effects
precision-weighted estimate
Bayesian estimation
Abstract A novel exploratory approach is developed to the analysis of a large table of counts. It uses random-effects models where the cells of the table (representing types of individuals) form the higher level in a multilevel model. The model includes Poisson variation and an offset to model the ratio of observed to expected values, thereby permitting the analysis of relative rates. The model is estimated as a Bayesian model through MCMC procedures and the estimates are precision-weighted so that unreliable rates are down-weighted in the analysis. Once reliable rates have been obtained, graphical and tabular analysis can be deployed. The analysis is illustrated through a study of the occupational class distribution for people of different age, birthplace origin (ancestry) and generation in Australia. The case is also made that even where there is a full census there is a need to move beyond a descriptive analysis to a proper inferential and modelling framework. We also discuss the relative merits of Full and Empirical Bayes approaches to model estimation.

Detection of Multinomial Voting Irregularities
Mebane, Walter R.
Sekhon, Jasjeet
Wand, Jonathan

Uploaded 07-17-2001
Keywords outlier detection
robust estimation
overdispersed multinomial
generalized linear model
2000 presidential election
voting irregularities
Abstract We develop a robust estimator for an overdispersed multinomial regression model that we use to detect vote count outliers in the 2000 presidential election. The count vector we model contains vote totals for five candidate categories: Buchanan, Bush, Gore, Nader and "other." We estimate the multinomial model using county-level data from Florida. In Florida, the model produces results for Buchanan that are essentially the same as in a binomial model: Palm Beach County has the largest positive residual for Buchanan. The multinomial model shows additional large discrepancies that almost always hurt Gore or Nader and help Bush or Buchanan.

Statistical Backwards Induction: A Simple Method for Estimating Statistical Strategic Models
Bas, Muhammet
Signorino, Curtis
Walker, Robert

Uploaded 09-22-2006
Keywords discrete choice
statistical backwards induction
limited information estimation
Abstract We present a simple method for estimating regressions based on extensive-form games. Our procedure, which can be implemented in most standard statistical packages, involves sequentially estimating standard logits (or probits) in a manner analogous to backwards induction. We demonstrate that the technique produces consistent parameter estimates and show how to calculate consistent standard errors using model-dependent analytical and general simulation techniques. To illustrate the method, we replicate Leblang’s (2003) study of speculative attacks by financial markets and government responses to these attacks.
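The backward-induction logic of the estimator can be sketched for a toy two-stage game. This is an illustrative reconstruction, not the authors' code: `u_war` and `u_concede` are hypothetical payoff regressors, and the second-stage logit coefficients are assumed to have been estimated already (the first of the sequential logits the abstract describes).

```python
import math

def logit_prob(xb):
    """Standard logistic response probability."""
    return 1.0 / (1.0 + math.exp(-xb))

def first_stage_regressor(x2, beta2_hat, u_war, u_concede):
    """Sketch of statistical backwards induction, step two.

    Given coefficients beta2_hat estimated from the final node's logit
    (e.g., the target's war/concede choice), compute the predicted
    probability of war and form the first mover's expected utility of
    challenging. That expected utility then enters the first-stage
    logit as a constructed regressor, mirroring backwards induction.
    """
    xb = sum(b * x for b, x in zip(beta2_hat, x2))
    p_war = logit_prob(xb)
    return p_war * u_war + (1.0 - p_war) * u_concede
```

Fitting each logit in sequence, last node first, is what lets the whole procedure run in standard statistical packages.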

Using Auxiliary Data to Estimate Selection Bias Models
Boehmke, Frederick

Uploaded 07-06-2001
Keywords selection bias
two-stage estimation
survey design
interest groups
Abstract Recent work has made progress in estimating models involving selection bias of a particularly strong nature: all nonrespondents are unit nonresponders, meaning that no data are available for them. These models are reasonably successful in circumstances where the dependent variable of interest is continuous, but they are less practical empirically when it is latent and only discrete outcomes or choices are observed. In this paper I develop a method to estimate these models that is much more practical in terms of estimation. The model uses a small amount of auxiliary information to estimate the selection equation parameters, which are then held fixed to estimate the parameters of the equation of interest in a maximum likelihood setting. After presenting Monte Carlo analysis to support the model, I apply the technique to a substantive problem: which interest groups are likely to be involved in support of potential initiatives to achieve their policy goals.

Extracting Systematic Social Science Meaning from Text
Hopkins, Daniel
King, Gary

Uploaded 07-12-2007
Keywords automated content analysis
machine learning
simulated extrapolation
non-parametric estimation
2008 U.S. Presidential election
Abstract We develop two methods of automated content analysis that give approximately unbiased estimates of quantities of theoretical interest to social scientists. With a small sample of documents hand coded into investigator-chosen categories, our methods can give accurate estimates of the proportion of text documents in each category in a larger population. Existing methods successful at maximizing the percent of documents correctly classified allow for the possibility of substantial estimation bias in the category proportions of interest. Our first approach corrects this bias for any existing classifier, with no additional assumptions. Our second method estimates the proportions without the intermediate step of individual document classification, and thereby greatly reduces the required assumptions. For both methods, we also correct statistically, apparently for the first time, for the far less-than-perfect levels of inter-coder reliability that typically characterize human attempts to classify documents, an approach that will normally outperform even population hand coding when that is feasible. We illustrate these methods by tracking the daily opinions of millions of people about candidates for the 2008 presidential nominations in online blogs, data we introduce and make available with this article, and through evaluations in available corpora from other areas, including movie reviews, university web sites, and Enron emails. We also offer easy-to-use software that implements all methods described.
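The bias-correction idea behind the first method can be illustrated in the simplest two-category case, where the observed classifier output is inverted through misclassification rates estimated from the hand-coded sample. This is a sketch under that simplifying assumption; the paper's methods handle many categories and weaker assumptions:

```python
def true_share_2cat(obs1, sens, spec):
    """Correct a classifier's category proportion for misclassification.

    obs1: share of documents the classifier puts in category 1;
    sens: Pr(classified 1 | truly 1), spec: Pr(classified 0 | truly 0),
    both estimated from the hand-coded sample. Since
        obs1 = sens*t + (1 - spec)*(1 - t),
    solving for the true share t gives the correction below.
    """
    fpr = 1.0 - spec                      # false positive rate
    return (obs1 - fpr) / (sens - fpr)
```

Even a fairly accurate classifier can badly bias the raw proportion obs1 when the categories are unbalanced, which is exactly the gap between "percent correctly classified" and unbiased category proportions that the abstract highlights.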

A Note Relating Ideal Point Estimates to the Spatial Model
Clinton, Joshua
Meirowitz, Adam

Uploaded 07-18-2000
Keywords ideal points
preference estimation
spatial model
Abstract Existing preference estimators do not incorporate the full structure of the spatial model. Specifically, they fail to use the sequential nature of the agenda by not constraining the nay location of a bill to be the yea location of the last successful policy. The consequences of this omission may be far-reaching. Not only is information useful for the identification of the model neglected, but more seriously, the dimensionality of the policy space may be incorrectly estimated. Preference and bill location estimates are then uninterpretable in terms of the spatial model. We show that under very general assumptions, ML estimates of ideal points that do not constrain the nay locations will differ from ML estimates that constrain the nay locations -- a difference that does not vanish as the number of votes goes to infinity. Additionally, unconstrained models underestimate the true dimensionality of the policy space. We derive a maximum likelihood estimator of legislative preferences and bill locations that shares basic assumptions with the spatial model of voting.
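The agenda constraint described here -- the nay location at each vote is the policy that prevailed after the last successful vote -- can be sketched directly. This is an illustrative reconstruction with hypothetical names, not the authors' estimator:

```python
def agenda_status_quos(proposals, sq0):
    """Track the nay (status quo) location through a sequential agenda.

    proposals: list of (yea_location, passed) pairs in agenda order;
    sq0: the initial status quo. Each vote pits the proposal's yea
    location against the current status quo; if the proposal passes,
    its yea location becomes the status quo for the next vote.
    Returns the nay location faced at each vote.
    """
    sqs = []
    sq = sq0
    for yea, passed in proposals:
        sqs.append(sq)
        if passed:
            sq = yea
    return sqs
```

Unconstrained estimators treat each vote's nay location as a free parameter; the constraint above removes those parameters and, per the abstract, changes the estimates even asymptotically.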

Estimating Proposal and Status Quo Locations Using Voting and Cosponsorship Data
Peress, Michael

Uploaded 07-04-2008
Keywords Ideal Point Estimation
Status Quo
Theories of Lawmaking
Abstract Theories of lawmaking generate predictions for the policy outcome as a function of the status quo. These theories are difficult to test because existing ideal point estimation techniques do not recover the locations of proposals or status quos. Instead, such techniques only recover cutpoints. This limitation has meant that existing tests of theories of lawmaking have been indirect in nature. I propose a method of directly measuring ideal points, proposal locations, and status quo locations on the same multidimensional scale, by employing a combination of voting data, bill and amendment cosponsorship data, and the congressional record. My approach works as follows. First, we can identify the locations of legislative proposals (bills and amendments) on the same scale as voter ideal points by jointly scaling voting and cosponsorship data. Next, we can identify the location of the final form of the bill using the location of the last successful amendment (which we already know). If the bill was not amended, then the final form is simply the original bill location. Finally, we can identify the status quo point by employing the cutpoint we get from scaling the final passage vote. To implement this procedure, I automatically coded data on the congressional record available from www.thomas.gov. I apply this approach to recent sessions of the U.S. Senate, and use it to test the implications of competing theories of lawmaking.

Two-Stage Estimation of Non-Recursive Choice Models
Alvarez, R. Michael
Glasgow, Garrett

Uploaded 03-02-1999
Keywords two-stage estimation
two-stage probit least squares
two-stage conditional maximum likelihood
Abstract Questions of causation are important issues in empirical research on political behavior. Most of the discussion of the econometric problems associated with multi-equation models with reciprocal causation has focused on models with continuous dependent variables (e.g. Markus and Converse 1979; Page and Jones 1979). Yet many models of political behavior involve discrete or dichotomous dependent variables; this paper describes two techniques which can consistently estimate reciprocal relationships between dichotomous and continuous dependent variables. The first, two-stage probit least squares (2SPLS), is very similar to two-stage instrumental variable techniques. The second, two-stage conditional maximum likelihood (2SCML), may overcome problems associated with 2SPLS, but has not been used in the political science literature. First, we demonstrate the potential pitfalls of ignoring the problems of reciprocal causation in non-recursive choice models. Then, we show the properties of both techniques using Monte Carlo simulations: both the two-stage models perform well in large samples, but in small samples the 2SPLS model has superior statistical properties. However, the 2SCML model offers an explicit statistical test for endogeneity. Last, we apply these techniques to an empirical example which focuses on the relationship between voter preferences in a presidential election and the voter's uncertainty about the policy positions taken by the candidates. This example demonstrates the importance of these techniques for political science research.

Bayesian Combination of State Polls and Election Forecasts
Lock, Kari
Gelman, Andrew

Uploaded 09-21-2008
Keywords election prediction
pre-election polls
Bayesian updating
shrinkage estimation
Abstract In February of 2008, SurveyUSA polled 600 people in each state and asked who they would vote for in each of two head-to-head match-ups: Obama vs. McCain and Clinton vs. McCain. Here we integrate these polls with prior information: how each state voted relative to the national outcome in the 2004 election. We use Bayesian methods to merge prior and poll data, weighting each by its respective information. The variance for our poll data incorporates both sampling variability and variability due to time before the election, estimated using pre-election poll data from the 2000 and 2004 elections. The variance for our prior data is estimated using the results of the past nine presidential elections. The union of prior and poll data results in a posterior distribution predicting how each state will vote, in turn giving us posterior intervals for both the popular and electoral vote outcomes of the 2008 presidential election. Lastly, these posterior distributions are updated with the most recent poll data as of August 2008.
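The information-weighted merging of prior and poll described here is, in its simplest form, a normal-normal conjugate update in which each source is weighted by its precision (inverse variance). A minimal sketch with illustrative names (the paper estimates the two variances from past elections and pre-election polls):

```python
def combine(prior_mean, prior_var, poll_mean, poll_var):
    """Precision-weighted combination of a prior and a poll.

    The posterior mean is a weighted average of the two sources,
    with weights proportional to their precisions, so a noisy poll
    is shrunk toward the historical prior.
    """
    prior_prec = 1.0 / prior_var
    poll_prec = 1.0 / poll_var
    post_var = 1.0 / (prior_prec + poll_prec)
    post_mean = post_var * (prior_prec * prior_mean + poll_prec * poll_mean)
    return post_mean, post_var
```

With equal variances the posterior mean is simply the average of prior and poll; as the election nears and poll variance shrinks, the poll dominates.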

Representation and Salient Issues: Legislator Responsiveness to the Service Constituency
Bennett, Sherry L.
Smith, Renee M.

Uploaded 04-21-1999
Keywords service constituency
opinion activation
U.S. trade policy
ordered probit
IV estimation
Abstract Formal models of the supply of public policy and of information transmission between lobbyists and legislators imply that the preferences of both organized and informed, but unorganized, interests influence legislators' vote choices. Denzau and Munger (1986) refer to these citizens as a legislator's service constituency. In this paper, we provide argument and evidence to show that the concept of a service constituency is crucial to theoretical explanations and empirical investigations of a legislator's responsiveness to constituent demands on salient issues. We also provide theory and evidence to account for the process by which unorganized citizens become part of a service constituency. Our argument emphasizes the effects of interest group competition on information accessibility and opinion activation for diffuse, unorganized citizens. Our empirical evidence provides strong support for our hypotheses about opinion activation and the effects of the service constituency on legislative behavior.

Agents and Outliers: Testing Organization with Committee Preference Expression
Fortunato, David

Uploaded 06-22-2009
Keywords ideal point estimation
legislative organization
theories of law making
Abstract This paper offers a test of the three dominant schools of thought on the organization of the U.S. House of Representatives by revisiting the old question: are committees composed of preference outliers? This study takes a new approach to the outlier question by explicitly assuming that the distribution of preferences among committee members varies from that of their colleagues on the floor. By making this assumption I free myself of the obligation of measuring ideology and focus instead on gauging the degree to which the committee-crafted agenda allows these preference differences to be expressed --- or the degree to which committees are allowed to hijack the policy-making process. Evaluating the latitude that committees take in setting the agenda allows me to assess not only the degree to which committee agents shirk from their principal but also the ability of the three dominant schools of thought on the organization of the U.S. House to predict legislative behavior. By generating jurisdiction-specific estimates of agenda manipulation, I find strong support for party-dominated models of organization, with a hierarchical ordering of agency in committee members, and evidence for more outlier committees than previous research has found.

Selection Bias in a Model of Candidate Entry Decisions
Kanthak, Kristen
Morton, Becky
Gerber, Elisabeth R.

Uploaded 07-13-1999
Keywords selection bias
Poisson estimation
population uncertainty
Abstract In recent years, several states have changed or considered changing their laws regulating how political parties nominate candidates for office. We focus on one potentially important consequence of these changes: How do primary election laws affect candidate entry decisions? We have constructed and solved a formal model of individual candidate behavior in which potential candidates can choose to: 1) enter the electoral competition as major party candidates; 2) enter as minor party candidates; 3) enter as independents; or 4) not enter. Based on our analysis of the model, we hypothesize that the expected utility of each choice is a function, in part, of a state's primary election laws. We test our hypotheses with data on candidate choice from recent US Congressional elections. Estimation of our model is complicated, however, by the fact that we do not observe the choices of potential candidates who choose not to enter (i.e., the sample is truncated) and the observed dependent variable (i.e., candidate choices to run as major party, minor party, or independent candidates) is measured as a discrete, unordered polychotomous choice. We employ a two-stage Heckman (1979)-type estimation procedure that utilizes a Poisson framework for estimating candidate entry rates. We find that our estimates of the effects of electoral institutions on the partisan affiliation decisions of independent candidates are unaffected by sample selection. Our estimates of the partisan affiliation decisions of minor party candidates, however, change when we account for non-random sample selection.

A Comparison of the Small-Sample Properties of Several Estimators for Spatial-Lag Count Models
Hays, Jude
Franzese, Robert

Uploaded 07-22-2009
Keywords Interdependence
Spatial Econometrics
Spatial-Lag Models
Count Data
Nonlinear Least-Squares
GMM Estimation
Abstract Political scientists frequently encounter and analyze spatially interdependent count data. Applications include counts of coups in African countries, of state participation in militarized interstate disputes, and of bills sponsored by members of Congress, to name just a few. The extant empirical models for spatially interdependent counts and their corresponding estimators are, unfortunately, dauntingly complex, computationally costly, or both. They also generally tend 1) to treat spatial dependence as a nuisance, 2) to stress spatial-error or spatial-heterogeneity models over spatial-lag models, and 3) to treat all observed spatial association as arising from one undifferentiated source. Prominent examples include the Winsorized count model of Kaiser and Cressie (1997) and Griffith's spatially-filtered Poisson model (2002, 2003). Given the available options, the default approaches in most applied political-science research are either to ignore spatial interdependence in count variables or to use spatially-lagged observed counts as exogenous regressors, either of which leads to inconsistent estimates of causal relationships. We develop alternative nonlinear least-squares and method-of-moments estimators for the spatial-lag Poisson model that are consistent. We evaluate by Monte Carlo simulation the small-sample performance of these relatively simple estimators against the naive alternatives of current practice. Our results indicate substantial consistency improvements at minimal complexity and computational cost. We illustrate the model and estimators with an analysis of terrorist incidents around the world.

A Random Effects Approach to Legislative Ideal Point Estimation
Bailey, Michael

Uploaded 04-21-1998
Keywords ideal points
random effects models
Bayesian estimation
EM algorithm
Abstract Conventionally, scholars use either standard probit/logit techniques or fixed-effect methods to estimate legislative ideal points. However, these methods are unsatisfactory when a limited number of votes are available: standard probit/logit methods are poorly equipped to handle multiple votes and fixed-effect models disregard serious "incidental parameter" problems. In this paper I present an alternative approach that moves beyond single-vote probit/logit analysis without requiring the large number of votes needed for fixed-effects models. The method is based on a random effects, panel logit framework that models ideal points as stochastic functions of legislator characteristics. Monte Carlo results and an application to trade politics demonstrate the practical usefulness of the method.

An Ecological Item-Response Model for Multiple Subsets of Respondents with Application to the European Court of Justice
Malecki, Michael

Uploaded 07-30-2009
Keywords ideal point estimation
item response
judicial politics
ecological inference
hierarchical model
Abstract The European Court of Justice (ECJ) has fostered the development of a common European legal order, and in doing so, has asserted itself and its supremacy more, and more successfully, than any other international court. It has maintained features of international courts such as its composition of one judge per member state, while employing other tools of national high courts such as en banc decisions and organization into chambers, that together hide internal dissent and shield the ECJ from direct monitoring or curbing by the member states. The same shield has frustrated efforts to quantify the court's responsiveness to member states, with limited evidence that the ECJ yields to some member-state interest some of the time, but nonetheless has advanced integration beyond national governments' wishes. This equivocation arises at least in part from a failure to include relevant information about the court's composition and organization. In fact, the six-year renewable terms of judges, their previous qualifications and affiliations, and the internal organization into chambers all provide prior information that can and should be incorporated into a more complete model of judicial behavior. I develop an extension of the well-studied item-response model to infer judges' preferences, using the structured ecological data from the cases they heard and relevant prior information about judges and the national governments that appoint them as well as information about cases. I offer new, more rigorous tests for existing theoretical hypotheses about the ECJ's deference to certain actors and preference for integration. The model is applicable to other settings of structured ecological data. Many other national and international courts hear cases in subset chambers, and relevant prior information should be included rather than ignored in models of judicial behavior.

Estimating voter preference distributions from individual-level voting data (with application to split-ticket voting)
Lewis, Jeffrey B.

Uploaded 09-15-1998
Keywords split ticket voting
ideal point estimation
spatial voting models
EM algorithm
Abstract In the last decade a great deal of progress has been made in estimating spatial models of legislative roll-call voting. There are now several well-known and effective methods of estimating the ideal points of legislators from their roll-call votes. Similar progress has not been made in the empirical modeling of the distribution of preferences in the electorate. Progress has been slower, not because the question is less important, but because of limitations of data and a lack of tractable methods. In this paper, I present a method for inferring the distribution of voter ideal points on a single dimension from individual-level voting returns on ballot propositions. The statistical model and estimation technique draw heavily on the psychometric literature on test taking and, in particular, on the work of Bock and Aitkin (1981). The method yields semi-parametric estimates of the distribution of voters along an unobserved spatial dimension. The model is applied to data from the 1992 general election in Los Angeles County. I present the distribution of voter ideal points for each of 17 Congressional districts. Finally, I consider the issue of split-ticket voting, estimating, for two Congressional districts, the distribution of voters who split their tickets and of those who did not.
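The Bock-Aitkin style of estimation can be sketched as an EM algorithm over a discretized ideal-point distribution. This is a simplified stand-in for the paper's model: item parameters are taken as known cutpoints and a one-parameter logistic response is assumed, neither of which the paper restricts itself to.

```python
import math

def em_grid_density(votes, grid, cutpoints, n_iter=200):
    """EM estimate of a discretized voter ideal-point density.

    votes: list of voters, each a list of 0/1 ballot choices.
    grid: candidate ideal-point locations t_1..t_K.
    cutpoints: one per proposition; a voter at t is assumed to vote
    yes with probability logistic(t - c).
    Returns estimated probability mass on each grid point.
    """
    K = len(grid)
    w = [1.0 / K] * K                      # start from a uniform density
    for _ in range(n_iter):
        counts = [0.0] * K
        for v in votes:
            # E-step: posterior over grid points for this voter
            post = []
            for k, t in enumerate(grid):
                lik = w[k]
                for vote, c in zip(v, cutpoints):
                    p = 1.0 / (1.0 + math.exp(-(t - c)))
                    lik *= p if vote == 1 else (1.0 - p)
                post.append(lik)
            s = sum(post)
            for k in range(K):
                counts[k] += post[k] / s
        # M-step: reweight the grid by expected membership
        w = [c / len(votes) for c in counts]
    return w
```

Because the density is just a set of grid weights, the estimate is semi-parametric in the same spirit as the paper: no parametric family is imposed on the voter distribution.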

Inferring Strategic Voting
Kawai, Kei
Watanabe, Yasutora

Uploaded 07-16-2010
Keywords Strategic Voting
Set Estimation
Partial Identification
Abstract We estimate a model of strategic voting and quantify the impact it has on election outcomes. Because the model exhibits multiplicity of outcomes, we adopt a set estimator. Using Japanese general-election data, we find a large fraction [75.3%, 80.3%] of strategic voters, only a small fraction [2.4%, 5.5%] of whom voted for a candidate other than the one they most preferred (misaligned voting). Existing empirical literature has not distinguished between the two, estimating misaligned voting instead of strategic voting. Accordingly, while our estimate of strategic voting is high, our estimate of misaligned voting is comparable to previous studies.

Do Majority-Minority Districts Maximize Black Representation in Congress?
Epstein, David
O'Halloran, Sharyn
Cameron, Charles

Uploaded 01-01-1995
Keywords districting
voting rights act
minority representation
electoral systems
semi-parametric estimation
Abstract This paper investigates the question of whether or not concentrated minority districts, which increase the probability that minorities are elected to office but decrease minority influence elsewhere, maximize overall black representation in Congress. We address this question in a three-step process: we first estimate representation equations that link constituency preferences to the actions of their representative; then electoral equations that link constituency characteristics to the type of representative elected; and finally combine these two effects to simulate the districting strategies that maximize substantive minority representation. We find that outside of the South, dividing minority voters equally across districts maximizes representation, while in the South the optimal scheme creates concentrated districts on the order of 47% black voting age population. We also conclude that minority candidates have substantial chances of being elected from districts with less than 50% minority voters, and that in the face of a national Republican tide, optimal districting schemes will concentrate minority voters less, rather than more.

The Robustness of Normal-theory LISREL Models: Tests Using a New Optimizer, the Bootstrap, and Sampling Experiments, with Applications
Mebane, Walter R.
Sekhon, Jasjeet
Wells, Martin T.

Uploaded 01-01-1995
Keywords statistics
covariance structures
linear structural relations
confidence intervals
specification tests
hypothesis tests
evolutionary programming
genetic algorithms
monte carlo
sampling experiment
Abstract Asymptotic results from theoretical statistics show that the linear structural relations (LISREL) covariance structure model is robust to many kinds of departures from multivariate normality in the observed data. But close examination of the statistical theory suggests that the kinds of hypotheses about alternative models that are most often of interest in political science research are not covered by the nice robustness results. The typical size of political science data samples also raises questions about the applicability of the asymptotic normal theory. We present results from a Monte Carlo sampling experiment and from analysis of two real data sets both to illustrate the robustness results and to demonstrate how it is unwise to rely on them in substantive political science research. We propose new methods using the bootstrap to assess more accurately the distributions of parameter estimates and test statistics for the LISREL model. To implement the bootstrap we use optimization software two of us have developed, incorporating the quasi-Newton BFGS method in an evolutionary programming algorithm. We describe methods for drawing inferences about LISREL models that are much more reliable than the asymptotic normal-theory techniques. The methods we propose are implemented using the new software we have developed. Our bootstrap and optimization methods allow model assessment and model selection to use well understood statistical principles such as classical hypothesis testing.

Improving Inferences in the Study of Crisis Bargaining
Arena, Phil
Joyce, Kyle

Uploaded 07-19-2010
Keywords crisis bargaining
instrumental variables
structural estimation
empirical implications of theoretical models
Abstract We present a simple crisis bargaining model that indicates that targets can generally prevent war by arming. We then create a simulated data set where the bargaining model is assumed to perfectly describe the data-generating process for those states engaged in crisis bargaining, which we assume most pairs of states are not. We further assume researchers cannot observe which states are engaged in crisis bargaining, though observable variables might serve as proxies. We demonstrate that a naive design would indicate a positive relationship between arming and war. We then evaluate the ability of matching, instrumental variables, and statistical backwards induction to uncover the true negative relationship. While each method is capable of doing so under certain conditions, each also faces important limitations. In most cases, statistical backwards induction will be the most practical of the three, but we caution that even this method is no perfect fix.

The Coalition-oriented Evolution of Vote Intentions across Regions and Levels of Political Awareness during the 1993 Canadian Election Campaign: Quotidian Markov Chain Models using Rolling Cross-section Data
Wand, Jonathan
Mebane, Walter R.

Uploaded 08-28-1997
Keywords Markov chains
rolling cross-section data
macro data
categorical data
survey data
Canadian politics
strategic voting
Abstract We use survey data collected in Ontario and Quebec during the 1993 Canadian federal election to assess the extent to which voters were sensitive to the distribution of positions in special institutions that would possibly be created to handle negotiations between Quebec and the rest of Canada following a referendum on Quebec sovereignty expected after the election. We draw on a theory of coalition-oriented voting developed by Austen-Smith and Banks (1988) to argue that voters' anticipations regarding those institutions contributed to the catastrophic losses suffered by the Progressive Conservative party. We use a method we have developed for estimating discrete, finite-state Markov chain models from ``macro'' data to analyze the dynamics of individual choice probabilities in daily rolling cross-sectional survey data from the 1993 Canadian Election Study. We allow each transition matrix to be updated as a function of daily vote support for either the Bloc or Reform to test for reactive coalition-oriented voting. We find significant reactive voting among Quebecois non-sovereigntists. The timing of these reactions depended on the individual's level of political awareness. In contrast, we find no evidence of reactive voting among either Quebecois sovereigntists or Ontario voters.

Properties of Ideal-Point Estimators
Tahk, Alexander

Uploaded 07-20-2010
Keywords ideal points
ideal-point estimation
Quinn conjecture
optimal classification
Abstract Although ideal-point estimation has become relatively commonplace in political science, fairly little is known about the properties of these estimators. Two of the most common estimators—NOMINATE and the Bayesian approach of Clinton, Jackman, and Rivers—suffer from the incidental parameters problem, implying that standard results about the consistency of maximum-likelihood and Bayes estimators do not apply. Thus, despite their widespread use, these estimators are not known to be consistent and may lead to erroneous results even in very large samples. This paper provides several theoretical results regarding ideal-point estimation. First, this paper demonstrates a counterexample to the consistency of common ideal-point estimators—even with regard to the rank of the ideal points. It then presents a simple estimator of the rank of unidimensional ideal points that is inefficient but also consistent for a generalization of most common ideal-point models.

Pauline, the Mainstream, and Political Elites: the place of race in Australian political ideology
Jackman, Simon

Uploaded 08-25-1997
Keywords public opinion
political ideology
political elites
Australian politics
factor analysis
ideological locations
density estimation
plotting highest density regions
Abstract An often heard claim in the current ``race debate'' is that Australia's major political parties are out of touch with ``mainstream'' Australia on issues related to race. Parallel surveys of the electorate and candidates in the 1996 Federal election allow this claim to be tested, with items tapping general ideological dispositions, but including questions about Aboriginal Australians, immigration, and links with Asia. I make three critical findings: (1) the electorate holds quite conservative opinions on these issues relative to the candidates, and is quite distant from ALP candidates in particular; (2) attitudes on racial issues are a powerful component of the electorate's otherwise relatively loosely organized political ideology, so much so that any categorisation of Australian political ideology ignoring race must be considered incomplete; (3) racial attitudes cut across other components of the electorate's ideology, placing all the parties under internal ideological strains, but the ALP appears particularly vulnerable on this score. While the data show the Coalition to be the net beneficiary of the ideological tensions posed by race, the formation of Pauline Hanson's One Nation party has exposed the Coalition's vulnerability to race as a cross-cutting political issue. Racial issues thus have many characteristics of a realigning dimension in Australian politics.

The Structure of Signaling: A Combinatorial Optimization Model with Network-Dependent Estimation
Esterling, Kevin M.
Lazer, David
Carpenter, Daniel

Uploaded 08-18-1997
Keywords lobbying models
combinatorial optimization
count models
network-dependent estimation
structural autocorrelation
Abstract This paper examines the relationship between lobbyists' contact-making behavior and their long-term access to the government. Specifically: 1) Do lobbyists establish social contacts in an individually-rational manner to best receive information from each other? And, 2) does the resulting network position condition their access to the government? We begin by wedding rational choice models to network analysis with a formal model of lobbyists' choice of contacts in a network, adopting the classic combinatorial optimization approach of Boorman (1975). The model predicts that when the demand for political information is low, a cocktail equilibrium prevails: lobbyists will invest their time in gaining "weak tie" acquaintances rather than in gaining "strong tie" trusted partners. When the demand for information in a policy domain is high, then both cocktail equilibria and "chum" equilibria (all strong-tie networks) prevail. We then turn to an empirical analysis of lobbyist contact-making and access, using the data of Laumann and Knoke in The Organizational State. We analyze the communication structure of the policy domains in health policy, using count data models that are adjusted for "structural autocorrelation" by the networks we study. The results support the cocktail equilibrium hypothesis, and offer a result that portends rich questions for future research: Washington lobbyists appear to overinvest in strong ties, in general reducing their credibility with the government in the long-term, as well as reducing the informational efficiency of the overall communication network.

Modeling Electoral Coordination: Voters, Parties and Legislative Lists in Uruguay
Levin, Ines
Katz, Gabriel

Uploaded 04-20-2011
Keywords electoral coordination
number of parties
Bayesian estimation
multilevel modeling
strategic voting
Abstract During each electoral period, the strategic interaction between voters and political elites determines the number of viable candidates in a district. In this paper, we implement a hierarchical seemingly unrelated regression model to explain electoral coordination at the district level in Uruguay as a function of district magnitude, previous electoral outcomes and electoral regime. Elections in this country are particularly useful to test for institutional effects on the coordination process due to the large variations in district magnitude, to the simultaneity of presidential and legislative races held under different rules, and to the reforms implemented during the period under consideration. We find that district magnitude and electoral history heuristics have substantial effects on the number of competing and voted-for parties and lists. Our modeling approach uncovers important interaction effects between the demand and supply sides of the political market that have often been overlooked in previous research.

The Varying Role of Voter Information across Democratic Societies
Sekhon, Jasjeet

Uploaded 07-26-2004
Keywords Voter Information
Causal Inference
Propensity Score Matching
Robust Estimation
Survey Data
Abstract Using new robust matching methods for making causal inferences from survey data, I demonstrate that there are profound differences between how voters behave in mature democracies versus how they behave in new ones. The problems of voter ignorance and inattentiveness are not as serious in mature democracies as many analysts have suggested but are of grave concern in new democracies. Citizens in mature democracies are able to accomplish something that citizens in fledgling democracies are not: inattentive and poorly informed citizens are able to vote like their better informed compatriots and hence need to pay little attention to political events such as election campaigns in order to vote as if they were attentive. The results from the U.S. (which rely on various National Election Studies) and Mexico (2000 Panel Study) are reported in detail. Results from other countries are briefly reported.
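The matching step underlying such designs can be sketched simply. The following is a minimal one-to-one nearest-neighbour match on pre-estimated propensity scores, not the robust matching procedure the paper develops; the scores and outcomes are hypothetical toy numbers.

```python
def nn_match_att(treated, control):
    """One-to-one nearest-neighbour matching on the propensity score
    (with replacement), then the average treatment effect on the treated.
    `treated` and `control` are lists of (propensity_score, outcome) pairs."""
    effects = []
    for score_t, y_t in treated:
        # Pair each treated unit with the control unit closest in score.
        _, y_c = min(control, key=lambda c: abs(c[0] - score_t))
        effects.append(y_t - y_c)
    return sum(effects) / len(effects)

# Usage with toy, pre-estimated scores (hypothetical numbers).
treated = [(0.8, 5.0), (0.6, 4.0), (0.7, 4.5)]
control = [(0.75, 3.5), (0.55, 3.0), (0.3, 2.0)]
print(nn_match_att(treated, control))
```

Matching with replacement, as here, lets one control unit serve several treated units; whether to allow that is one of the design choices a robust matching procedure must address.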

Markov Chain Models for Rolling Cross-section Data: How Campaign Events and Political Awareness Affect Vote Intentions and Partisanship in the United States and Canada
Mebane, Walter R.
Wand, Jonathan

Uploaded 04-07-1997
Keywords Markov chains
rolling cross-section data
macro data
measurement error
categorical data
ordinal data
panel data
survey data
party identification
American politics
Canadian politics
Abstract We use a new approach we have developed for estimating discrete, finite-state Markov chain models from ``macro'' data to analyze the dynamics of individual choice probabilities in two collections of rolling cross-sectional survey data that were designed to support investigations of what happens to voters' information and preferences during campaigns. Using data from the 1984 American National Election Studies Continuous Monitoring Study, we show that not only did individual party identification vary substantially during the year, but the dynamics of party identification changed significantly in response to the conclusion of the Democratic party's nomination contest. Party identification appears to have measurement error only when the model misspecifies the dynamics. There are rapid oscillations among some categories of partisanship that may reflect individual stances regarding not only competition between the parties but also competition among party factions. Using data from the 1993 Canadian Election Study, we show that the critical events that shaped voting intentions in the election varied tremendously depending on an individual's level of political awareness, and that the effects of awareness varied across regions of the country.
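The core idea of fitting a Markov chain to aggregate shares can be illustrated in the simplest case. The sketch below recovers a two-state transition matrix from a noise-free series of marginal shares by least squares; the authors' estimator for multi-category survey data with awareness covariates is considerably more elaborate, and the transition probabilities used here (0.9 and 0.2) are invented for the example.

```python
def estimate_two_state_chain(shares):
    """Least-squares estimate of a 2-state transition matrix from a time
    series of aggregate shares, using p(t+1) = p(t)*p00 + (1 - p(t))*p10,
    where `shares` gives the fraction of units in state 0 at each time."""
    # Accumulate the 2x2 normal equations for the unknowns (p00, p10).
    a11 = a12 = a22 = b1 = b2 = 0.0
    for p, p_next in zip(shares, shares[1:]):
        x1, x2 = p, 1.0 - p
        a11 += x1 * x1
        a12 += x1 * x2
        a22 += x2 * x2
        b1 += x1 * p_next
        b2 += x2 * p_next
    det = a11 * a22 - a12 * a12
    p00 = (a22 * b1 - a12 * b2) / det
    p10 = (a11 * b2 - a12 * b1) / det
    return p00, p10

# Usage: shares generated by a chain with p00 = 0.9, p10 = 0.2.
shares = [0.5]
for _ in range(20):
    p = shares[-1]
    shares.append(p * 0.9 + (1 - p) * 0.2)
p00, p10 = estimate_two_state_chain(shares)
print(round(p00, 3), round(p10, 3))
```

With noise-free aggregate data the recovery is exact; with sampled shares the same normal equations give the least-squares fit, which is why macro-level series can identify micro-level transition probabilities at all.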

Using Optimal Classification to Analyze Mass Political Preferences
Hare, Christopher

Uploaded 07-25-2013
Keywords Optimal Classification
ideal point estimation
mass ideology
spatial voting
Abstract I demonstrate the use of Poole's (2000, 2005) Optimal Classification (OC) nonparametric unfolding method to scale mass political preferences. Because it is nonparametric, OC does not impose a particular functional form on respondents' utility functions or the error term. alpha-NOMINATE analysis shows that the assumption of quadratic utility is especially problematic. The assumption that errors are iid is also almost certainly violated, since some survey respondents (e.g., those with low levels of political knowledge) are more likely to commit spatial voting errors. I discuss an approach for extending the OC method to handle ordinal choices and compare the results from OC and ordinal IRT.

Robust Estimation of the Cox Proportional Hazards Model
Harden, Jeff

Uploaded 07-17-2010
Keywords Event History Modeling
Cox Proportional Hazards Model
Partial Likelihood Maximization
Iteratively-Reweighted Robust Estimation
Abstract The Cox proportional hazards model is often used with time-to-event data in political science. However, misspecification issues such as measurement error or omitted covariates can cause substantial coefficient bias when it is estimated via the conventional Partial Likelihood Maximization (PLM). Here we review an iteratively-reweighted robust (IRR) estimator of the Cox model that is proven to reduce this bias under such conditions and propose a cross-validated median fit (CVMF) test to select between PLM and IRR. We then apply the test to replication data in political science. We consider several typologies of replications with respect to (1) the test's selection (PLM or IRR) and (2) the implications of IRR for the original hypotheses (less support, more support, or mixed results). Overall, we demonstrate that PLM and IRR can each be optimal, that substantive conclusions can depend on which one is used, and that the CVMF test is effective in choosing between them.

Inferring Strategic Voting
Watanabe, Yasutora
Kawai, Kei

Uploaded 07-21-2010
Keywords Strategic Voting
Partial Identification
Set Estimation
Abstract We estimate a model of strategic voting and quantify the impact it has on election outcomes. Because the model exhibits multiplicity of outcomes, we adopt a set estimator. Using Japanese general-election data, we find a large fraction [75.3%, 80.3%] of strategic voters, only a small fraction [2.4%, 5.5%] of whom voted for a candidate other than the one they most preferred (misaligned voting). Existing empirical literature has not distinguished between the two, estimating misaligned voting instead of strategic voting. Accordingly, while our estimate of strategic voting is high, our estimate of misaligned voting is comparable to previous studies.

Blossom: An evolutionary strategy optimizer with applications to matching, scaling, networks, and sampling
Beauchamp, Nick

Uploaded 07-24-2013
Keywords maximum likelihood
genetic algorithms
multidimensional scaling
social networks
sampling methods
markov chain monte carlo
evolutionary algorithms
estimation of distribution algorithms
Abstract This paper introduces a new maximization and importance sampling algorithm, "Blossom," along with an associated R script, which is especially well suited to rugged, discontinuous, and multimodal functions where even approximate gradient methods are infeasible, and MCMC approaches work poorly. The Blossom algorithm employs an evolutionary optimization strategy related to the Estimation of Multivariate Normal Algorithm (EMNA) or Covariance Matrix Adaptation (CMA), within the general family of Estimation of Distribution Algorithms (EDA). It works by successive iterations of sampling, selecting the highest-scoring subsample, and using the variance-covariance matrix of that subsample to generate a new sample, with various self-adapting parameters. Compared against a benchmark suite of challenging functions introduced in Yao, Liu, and Lin (1999), it finds maxima equal to or better than those found by the genetic algorithm Genoud introduced in Mebane and Sekhon (2011). The algorithm is then tested in four challenging domains from political science: (1) estimation of nonlinear and multimodal spatial metrics; (2) maximizing balance for matching; (3) ideological scaling of judges with discontinuous objective functions; (4) community detection in social networks. In all of these cases, Blossom outperforms most existing nonlinear optimizers in R. Finally, the samples gathered during the optimization process can be efficiently used for importance sampling using approximate voronoi cells around sample points, equalling the performance of MCMC metropolis samplers in some circumstances, and also of use for generating efficient proposal distributions. Even in an increasingly MCMC world, there remain important roles for effective general-purpose optimizers, and Blossom is especially effective for rough terrains where most other methods fail.
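The sample/select/refit loop at the heart of such EDAs can be sketched in a few lines. The code below is a diagonal-covariance simplification (Blossom itself, per the abstract, adapts a full variance-covariance matrix and adds self-adapting parameters); the objective function, population sizes, and variance floor are illustrative assumptions.

```python
import random
import statistics

def eda_maximize(f, mean, stdev, n_sample=60, n_select=15, iters=40, seed=1):
    """Estimation-of-distribution loop: sample candidates from a normal
    search distribution, keep the best-scoring subsample, and refit the
    distribution's mean and spread to that subsample."""
    rng = random.Random(seed)
    dim = len(mean)
    for _ in range(iters):
        pop = [[rng.gauss(mean[d], stdev[d]) for d in range(dim)]
               for _ in range(n_sample)]
        pop.sort(key=f, reverse=True)  # highest-scoring candidates first
        elite = pop[:n_select]
        for d in range(dim):
            col = [x[d] for x in elite]
            mean[d] = statistics.fmean(col)
            # A small floor keeps the search from collapsing prematurely.
            stdev[d] = max(statistics.pstdev(col), 1e-6)
    return mean

# Usage: maximize a toy objective with its optimum at (3, -2).
objective = lambda x: -((x[0] - 3) ** 2 + (x[1] + 2) ** 2)
best = eda_maximize(objective, mean=[0.0, 0.0], stdev=[5.0, 5.0])
print(best)
```

Because the update uses only the rank ordering of candidate scores, the same loop runs unchanged on discontinuous objectives where gradient methods fail, which is the setting the paper targets.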
