
Search Results


The results below are based on the search criteria 'direct and indirect e'
Total number of records returned: 903

1
Paper
Bayesian Measures of Explained Variance and Pooling in Multilevel (Hierarchical) Models
Gelman, Andrew
Pardoe, Iain

Uploaded 04-16-2004
Keywords adjusted R-squared
Bayesian inference
hierarchical model
multilevel regression
partial pooling
shrinkage
Abstract Explained variance (R2) is a familiar summary of the fit of a linear regression and has been generalized in various ways to multilevel (hierarchical) models. The multilevel models we consider in this paper are characterized by hierarchical data structures in which individuals are grouped into units (which themselves might be further grouped into larger units), and there are variables measured on individuals and each grouping unit. The models are based on regression relationships at different levels, with the first level corresponding to the individual data, and subsequent levels corresponding to between-group regressions of individual predictor effects on grouping unit variables. We present an approach to defining R2 at each level of the multilevel model, rather than attempting to create a single summary measure of fit. Our method is based on comparing variances in a single fitted model rather than comparing to a null model. In simple regression, our measure generalizes the classical adjusted R2. We also discuss a related variance comparison to summarize the degree to which estimates at each level of the model are pooled together based on the level-specific regression relationship, rather than estimated separately. This pooling factor is related to the concept of shrinkage in simple hierarchical models. We illustrate the methods on a dataset of radon in houses within counties using a series of models ranging from a simple linear regression model to a multilevel varying-intercept, varying-slope model.
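The variance comparison the abstract describes can be sketched as follows; this is a minimal illustration, not the authors' implementation, assuming posterior draws of residuals and fitted values are already available (the array shapes and simulated numbers are invented for the example):

```python
import numpy as np

def level_r2(errors, values):
    """R^2 at one level of a multilevel model, in the spirit of the
    variance comparison the abstract describes:
    1 - E[V(errors)] / E[V(values)], where V is the sample variance
    within each posterior draw and E averages over draws.
    Both inputs have shape (n_draws, n_units)."""
    var_err = errors.var(axis=1, ddof=1).mean()
    var_val = values.var(axis=1, ddof=1).mean()
    return 1.0 - var_err / var_val

# Simulated stand-ins for posterior draws from a fitted model.
rng = np.random.default_rng(0)
n_draws, n = 200, 50
y_pred = rng.normal(0.0, 1.0, size=(n_draws, n))   # fitted values per draw
errors = rng.normal(0.0, 0.5, size=(n_draws, n))   # residuals per draw
y = y_pred + errors
r2 = level_r2(errors, y)
```

Because the comparison uses variances inside a single fitted model rather than a null model, the same function applies unchanged at each level of the hierarchy (individual errors, group-level errors, and so on).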

2
Paper
The Macro Mechanics of Social Capital
Keele, Luke

Uploaded 10-15-2003
Keywords social capital
time series
public opinion
Abstract Interest in social capital has grown as it has become apparent that it is an important predictor of collective well-being. Recently, however, attention has shifted to how levels of social capital have changed over time. But focusing on how a society moves from one level of social capital to another requires that we alter current theory. In particular, by moving to the context of temporal change, we must not treat it as a lumpy concept with general causes and effects. Instead, we need a theory that explains the macro mechanics between civic activity and interpersonal trust. In the following analysis, I develop a macro theory of social capital through a careful delineation of the social capital aggregation process which demonstrates that we should expect civic engagement to affect interpersonal trust over time with the reverse not being true. Then, I develop and use new longitudinal measures of civic engagement and interpersonal trust to test the direction of causality between the two components of social capital. Finally, I model civic engagement as a function of resources and demonstrate how the decline in civic engagement has adversely affected levels of interpersonal trust over the last thirty years.

3
Paper
Political Preference Formation: Competition, Deliberation, and the (Ir)relevance of Framing Effects
Druckman, Jamie

Uploaded 07-09-2003
Keywords framing effects
experiments
rational choice theory
political psychology
Abstract A framing effect occurs when different, but logically equivalent, words or phrases (such as 95% employment or 5% unemployment) cause individuals to alter their preferences. Framing effects challenge the foundational assumptions of much of the social sciences (e.g., the existence of coherent preferences or stable attitudes), and raise serious normative questions about democratic responsiveness. Many scholars and pundits assume that framing effects are highly robust in political contexts. Using a new theory and an experiment with more than 550 participants, I show that this is not the case: framing effects do not occur in many political settings. Elite competition and citizens' interpersonal conversations often vitiate and eliminate framing effects. However, I also find that when framing effects persist, they can be even more pernicious than often thought: not only do they suggest incoherent preferences, but they also stimulate increased confidence in those preferences. My results have broad implications for preference formation, rational choice theory, political psychology, and experimental design.

4
Paper
Forecasts and Contingencies: From Methodology to Policy
Schrodt, Philip A.

Uploaded 08-19-2002
Keywords forecast
foreign policy
international relations
future
Abstract A "folk criticism" in political science maintains that the discipline should confine its efforts to explanation and avoid venturing down the dark, dirty, and dangerous path to forecasting and prediction. I argue that not only is this position inconsistent with the experiences of other sciences, but in fact the questions involved in making robust and valid predictions invoke many core methodological issues in political analysis. Those issues include, among others, the question of the level of predictability in political behavior, the problem of case selection in small-N situations, and the various alternative models that could be used to formalize predictions. This essay focuses on the problem of forecasting in international politics, and concludes by noting some of the problems of institutional culture -- bureaucratic and academic -- that have inhibited greater use of systematic forecasting methods in foreign policy.

5
Paper
Estimating Voters' Taste for Risk: Candidate Choice under Uncertainty
Berinsky, Adam
Lewis, Jeffrey B.

Uploaded 07-08-2002
Keywords elections
risk preferences
Abstract Recent work in political science has taken up the question of issue voting under conditions of uncertainty -- situations in which voters have imperfect information about the policy positions of candidates. Models that recognize this principle are realistic portrayals of the campaign environment, but may be limited in important respects. To date, the study of vote choice under uncertainty has made a common assumption of quadratic preferences. Such preferences imply that citizens behave in a very risk-averse manner when casting votes. But this assumption is simply that: an assumption. Many other utility functions consistent with ``proximity'' voting could be chosen, including functions that imply risk-neutral and risk-acceptant behavior. The assumption of risk-aversion is, after all, not simply a technical choice: it has important implications for how we view the process of citizen choice in elections and campaigns. If voters are risk-averse, candidates can benefit by making clear their positions on issues that they know will appeal to the electorate. Risk-averse voters therefore improve the quality of campaign discourse because candidates are punished for taking vague positions. But this scenario is only one among several possibilities. If voters are risk-neutral or risk-acceptant, candidates may have an incentive to muddle the details of their policy plans and send ambiguous signals about their positions. Such a story of the campaign process may be less normatively appealing than one in which voters are risk-averse, but it might also more accurately portray the dynamics of political campaigns. We believe that the nature of risk preferences among the electorate should, therefore, be subject to greater scrutiny. In this paper, we move beyond the assumption of a particular spatial utility function and estimate voters' preferences for risk. We find that, contrary to the literature, voters are less risk-averse than the quadratic model implies. Indeed, by the definition of risk preference developed in the paper, we find voters to be generally (nearly) risk-neutral and, in some cases, risk-acceptant.

6
Paper
An Agenda for New Political Methodology: Microfoundations and ART
Achen, Christopher H.

Uploaded 08-29-2001
Keywords logit
probit
scobit
microfoundations
Abstract The last two decades have brought revolutionary change to the field of political methodology. Steady gains in theoretical sophistication have combined with explosive increases in computing power to produce a profusion of new estimators for applied political researchers. Attendance at the annual Summer Meeting of the Methodology Section has multiplied many times, and section membership is among the largest in APSA. All these are signs of success. Yet there are warning signs, too. This paper, written to appear in the Annual Review of Political Science, attempts to critically summarize current developments in the young field of political methodology. It focuses on recent generalizations of dichotomous dependent variable estimators like logit and probit, arguing that even our best new work stands in need of firmer connection to credible models of human behavior and more sophisticated work habits for discovering reliable empirical generalizations.

7
Paper
Bayesian Learning about Ideal Points of U.S. Supreme Court Justices, 1953-1999
Martin, Andrew D.
Quinn, Kevin M.

Uploaded 07-09-2001
Keywords item response models
dynamic linear models
Markov chain Monte Carlo
Abstract At the heart of attitudinal and strategic explanations of judicial behavior is the assumption that justices have policy preferences. These preferences have been measured in a handful of ways, including using factor analysis and multidimensional scaling techniques (Schubert, 1965, 1974), looking at past votes in a single policy area (Epstein et al., 1989), content-analyzing newspaper editorials at the time of appointment to the Court (Segal and Cover, 1989), and recording the background characteristics of the justices (Tate and Handberg, 1991). In this manuscript we employ Markov chain Monte Carlo (MCMC) methods to fit Bayesian measurement models of judicial preferences for all justices serving on the U.S. Supreme Court from 1953 to 1999. We are particularly interested in considering to what extent ideal points of justices change throughout their tenure on the Court, and how the proposals over which they are voting also change across time. To do so, we fit four longitudinal item response models that include dynamic specifications for the ideal points and the case-specific parameters. Our results suggest that justices do not have constant ideal points, even after controlling for the types of cases that come before the Court.

8
Paper
Zen and the Art of Policy Analysis: A Response to Nielsen and Wolf
Meier, Kenneth J.
Eller, Warren
Wrinkle, Robert D.
Polinard, J. L.

Uploaded 03-12-2001
Keywords education
pooled time series
Abstract Nielsen and Wolf (N.d.) lodge several criticisms of Meier, Wrinkle and Polinard (1999). Although most of the criticisms deal with tangential issues rather than our core argument, their criticisms are flawed by misguided estimation strategies, erroneous results, and an inattention to existing theory and scholarship. Our re-analysis of their work demonstrates these problems and presents even stronger evidence for our initial conclusion: both minority and Anglo students perform better in schools with more minority teachers.

9
Paper
"Sink or Swim": What Happened to California's Bilingual Students after Proposition 227?
Bali, Valentina A.

Uploaded 08-21-2000
Keywords initiative
bilingual education
California
Proposition 227
Heckman selection
Abstract Proposition 227, passed in California in 1998, aimed to dismantle bilingual programs in the state's public schools. Using individual-level data from a southern California school district, I find that in 1998, before Proposition 227, limited-English-proficient (LEP) students enrolled in bilingual classes had lower scores in reading than LEP students not enrolled in bilingual classes: 2.4 points less on a scale from 1 to 99. In math these bilingual students scored 0.5 points higher than non-bilingual LEPs. But in 1999, after Proposition 227, the same set of students had scores no worse than non-bilingual LEP students in reading and were still 0.5 points higher in math. Proposition 227, which interrupted bilingual programs early and emphasized English instruction, then, did not set bilingual students back relative to non-bilingual LEP students and may even have benefitted them.

10
Paper
Time Remembered: A Dynamic Model of Interstate Interaction
Crescenzi, Mark J. C.
Enterline, Andrew J.

Uploaded 05-23-2000
Keywords dynamic model
growth
decay
cooperation
conflict
Abstract Over time, states form relationships. These relationships, mosaics of past interactions, provide political leaders with information about how states are likely to behave in the future. Although simple, this claim holds important implications for the manner in which we construct and test empirically our expectations about interstate behavior. Empirical analyses of interstate relations implicitly assume that the units of analysis, principally dyad-years, are independent. Formal models of interstate interaction are often cast in the absence of historical context. In the following paper, we construct a dynamic model of interstate interaction that we believe will assist scholars employing empirical and formal methods by incorporating history into their models of interstate relations. Our conceptual model includes both conflictual and cooperative components, and exhibits the basic properties of growth and decay indicative of a dyadic behavioral history. In an empirical exposition, we derive a continuous measure of interstate conflict from the conflictual component of the model. Turning to Oneal and Russett's (1997) analysis of dyadic conflict for the period 1950-85 as a benchmark, we examine whether the inclusion of our measure of interstate conflict significantly improves our ability to predict militarized conflict. We find empirical support for this hypothesis, indicating that our continuous measure of interstate conflict significantly augments a fully specified statistical model of dyadic militarized conflict. We conclude that our research underscores the considerable purchase gained by incorporating historical context into models of interstate relations.

11
Paper
Government Formation in Parliamentary Democracies
Martin, Lanny W.
Stevenson, Randolph T.

Uploaded 01-27-2000
Keywords government formation
coalition politics
conditional logit
Abstract The literature on cabinet formation in parliamentary democracies is replete with theoretical explanations of why some cabinets form and others do not. This theoretical richness, however, has not led to the development of a healthy empirical literature designed to choose between competing theories. In this paper, we try to rectify this problem by developing an empirical model that can adequately capture the kind of choice situation that is inherent in cabinet selection and then using it to evaluate the leading theories of cabinet formation that have been advanced in the literature. For example, this analysis allows us to draw conclusions about the relative importance in cabinet formation of traditional variables like size and ideology, as well as to evaluate the impact that recent new-institutionalist theories (such as Laver and Shepsle 1996) have on our ability to predict and explain cabinet formation over and above the more traditional explanations.

12
Paper
An Event Data Set for the Arabian/Persian Gulf Region 1979-1997
Schrodt, Philip A.
Gerner, Deborah J.

Uploaded 04-12-1999
Keywords event data
Middle East
Persian Gulf
automated coding
Abstract This paper discusses a WEIS-coded event data set covering the Arabian/Persian Gulf region (Iran, Iraq, Kuwait, Oman, Saudi Arabia, Yemen, and the smaller Gulf states) for the period 15 April 1979 to 10 June 1997. The coded events cover international interactions among these states, as well as interactions with any other states or major international organizations. The data set is generated from Reuters news reports downloaded from the NEXIS data service and coded using the Kansas Event Data System (KEDS) machine-coding program. The paper begins with a review of the process of generating a machine-coded data set, including a discussion of software we have developed to partially automate the development of dictionaries to code new geographical regions. The Gulf data are coded using a standard set of verb phrases (rather than phrases specifically adapted to the Gulf) and an actors dictionary that has been augmented only with the actors identified by a utility program that examines the source texts for actors not already found in the KEDS dictionary. The Reuters reports generate 264,421 events when full stories are coded and 48,721 events when only lead sentences are coded. An examination of the time series that are generated when the events are aggregated by month using the Goldstein scale shows that they capture the major features of the behavior that we know to have occurred in the region. There is generally a high correlation (r > 0.75) between the series generated from lead-sentences and from full stories when the major actors of the region (Iran, Iraq, Saudi Arabia and USA) are studied. An exception to this pattern is found in interactions involving a relatively minor actor, the United Arab Emirates. Here the full-story coding provides far more events than the lead-sentence coding and shows greater variance even for interactions between major actors. 
We expect this will also be the case for other small Gulf states, suggesting that full-story coding may be necessary for a complete analysis of these actors. The paper was presented at the International Studies Association, Minneapolis, 18-22 March 1998. The file includes the paper in Postscript and PDF formats. The data set has been updated through March 1999 and is available at the KEDS project web site, http://www.ukans.edu/~keds.
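The monthly aggregation step the abstract mentions (collapsing scaled events by dyad and month) can be sketched as follows; the column names and toy event records are invented for illustration and do not reproduce the KEDS data format, and the scale values are placeholders:

```python
import pandas as pd

# Hypothetical event records: date, source actor, target actor, and a
# Goldstein-style cooperation/conflict weight for the coded event type.
events = pd.DataFrame({
    "date": pd.to_datetime(["1997-01-03", "1997-01-20",
                            "1997-02-11", "1997-02-25"]),
    "source": ["IRN", "IRQ", "IRN", "SAU"],
    "target": ["IRQ", "IRN", "IRQ", "IRN"],
    "goldstein": [-4.0, 2.2, -6.9, 1.0],   # illustrative scale values
})

# Aggregate dyadic events to a monthly net-cooperation series, as the
# abstract describes for Goldstein-scaled event data.
monthly = (events
           .set_index("date")
           .groupby([pd.Grouper(freq="MS"), "source", "target"])["goldstein"]
           .sum()
           .rename("net_goldstein"))
```

The resulting series, indexed by (month, source, target), is the kind of object on which the lead-sentence vs. full-story correlations reported in the abstract would be computed.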

13
Paper
Logistic Regression in Rare Events Data
King, Gary
Zeng, Langche

Uploaded 05-20-1999
Keywords rare events
logit
logistic regression
binary dependent variables
bias correction
case-control
choice-based
endogenous selection
selection bias
Abstract Rare events are binary dependent variables with dozens to thousands of times fewer ones (events, such as wars, vetoes, cases of political activism, or epidemiological infections) than zeros (``nonevents''). In many literatures, rare events have proven difficult to explain and predict, a problem that seems to have at least two sources. First, popular statistical procedures, such as logistic regression, can sharply underestimate the probability of rare events. We recommend corrections that outperform existing methods and change the estimates of absolute and relative risks by as much as some estimated effects reported in the literature. Second, commonly used data collection strategies are grossly inefficient for rare events data. The fear of collecting data with too few events has led to data collections with huge numbers of observations but relatively few, and poorly measured, explanatory variables, such as in international conflict data with more than a quarter million dyads, only a few of which are at war. As it turns out, easy procedures exist for making valid inferences when sampling all available events (e.g., wars) and a tiny fraction of non-events (peace). This enables scholars to save as much as 99% of their (non-fixed) data collection costs, or to collect much more meaningful explanatory variables. We provide methods that link these two results, enabling both types of corrections to work simultaneously, and software that implements the methods developed.
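The case-control sampling idea in the abstract (keep all events, sample a fraction of nonevents, then correct the estimates) can be sketched as follows. This is a generic prior-correction formula for the logit intercept in the spirit of this line of work, not a reproduction of the authors' software; the numeric inputs are invented for the example:

```python
import math

def prior_corrected_intercept(b0_sample, tau, ybar):
    """Correct a logit intercept estimated on a case-control
    (choice-based) sample: subtract ln[((1 - tau)/tau) * (ybar/(1 - ybar))],
    where tau is the population fraction of events and ybar is the sample
    fraction. Under pure case-control sampling the slope coefficients
    are consistent without correction; only the intercept shifts."""
    return b0_sample - math.log(((1 - tau) / tau) * (ybar / (1 - ybar)))

# Example: events (e.g., wars) are 1% of the population of dyads but,
# by design, 50% of the collected sample.
b0 = prior_corrected_intercept(b0_sample=-0.2, tau=0.01, ybar=0.5)
```

Without this correction, predicted probabilities from the sampled data would be inflated by the oversampling of events, which is one of the two biases the abstract flags.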

14
Paper
Measuring Voter Turnout in the National Election Studies
Burden, Barry C.

Uploaded 08-26-1999
Keywords voter turnout
NES
overreport
response rates
Abstract Though the overreport of voter turnout in the National Election Study (NES) is widely known, this paper documents that the bias has become increasingly severe. The gap between NES and official estimates of presidential election turnout has more than doubled, from 11 points in 1952 to 28 points in 1996. This occurred because official voter turnout fell steadily from 1960 onward while NES turnout did not. In contrast, the bias in House election turnout is always smaller and has increased only marginally over time, mostly due to inflation in presidential election years. I find that worsening presidential turnout estimates are mostly the result of declining response rates rather than instrumentation, question wording changes, or other factors. Adjusting official turnout estimates to more accurately measure real turnout does not account for the growing gap. Rather, as more peripheral voters elude interviewers in recent years, the NES sample becomes more saturated with self-reported voters, thus inflating reported turnout. The paper concludes by calling for a reevaluation of the NES in the wake of these and other changes that have taken place.

15
Paper
Public Opinion Shocks and Government Termination
Martin, Lanny W.

Uploaded 11-16-1999
Keywords government survival
public opinion
discrete-hazard model
logit
Abstract The ability of a government to remain in power depends partially upon its vulnerability to unexpected changes occurring in the outside political environment. In this paper, I examine the relationship between government termination and changes in the electoral expectations of political parties in the legislature, as reflected by shifts in popular support for the government. I find that the decision to terminate the government is related in complex ways to changes in public opinion. Governments are more likely to collapse as certain members of the incumbent coalition expect to gain more ministerial portfolios, and in cases of minority government, when the opposition expects to gain more legislative seats. Further, I show that these effects increase with the approach of regularly-scheduled elections.

16
Paper
Follow the Leader? Presidential Approval, Perceived Presidential Support, and Representatives' Electoral Fortunes
Gronke, Paul
Koch, Jeffery
Wilson, J. Matthew

Uploaded 04-17-1998
Keywords congress
1994
presidential approval
projection
negative voting
Abstract The relationship between presidential approval and congressional incumbent electoral success has been long established. Surprisingly, the individual level dynamics of this process have been largely unexamined. Drawing on a new set of questions included in the 1993, 1994, and 1996 National Election Studies, we explore the degree to which citizen perceptions of member support for Clinton's legislative program mediate the impact of presidential approval on evaluations and choice. We find that the degree to which individuals thought their members supported the President's legislative program functions just as we hypothesize, enhancing or ameliorating the impact of presidential approval on affective attachments to the member, evaluation of the incumbent's job performance, and congressional vote choice.

17
Paper
No Evidence on Proximity vs. Directional Voting
Lewis, Jeffrey B.
King, Gary

Uploaded 06-05-1998
Keywords spatial models
voting
elections
decision models
Abstract The directional and proximity models offer dramatically different theories for how voters make decisions. We demonstrate here that the empirical tests in the large and growing literature on this subject amount to theoretical debates about which statistical assumption is right. The key statistical assumptions in this literature have not been empirically tested, and indeed turn out to be effectively untestable with existing methods and data. Unfortunately, these assumptions are also crucial since changing them leads to different conclusions about voter decision processes.

18
Paper
Bias and Responsiveness in Multiparty and Multigroup Representation
Monroe, Burt L.

Uploaded 07-21-1998
Keywords partisan bias
responsiveness
seats and votes
electoral systems
compositional data
JudgeIt
Abstract There is an extensive and expanding literature that examines methods for estimating the responsiveness and partisan bias of two-party electoral systems. Attempts to extend these methods into the multiparty domain appropriate for the vast majority of electoral systems, or to the analysis of the representation of other types of groups (e.g., regions, ethnic groups), have been limited. I describe index, multiyear, uniform swing, and variable swing methods -- along with novel graphical displays -- for analyzing seats-votes curves, bias, and responsiveness in multiparty systems. The variable swing method is a multiparty generalization of Gelman and King's "JudgeIt" model. Examples discussed include elections in the UK, Mauritius, and Costa Rica, and geographic representation worldwide. In comparing the various methods it is argued that variable swing is ideal for most applications, that uniform swing and index methods provide useful answers to a limited set of questions despite faulty assumptions, and that multiyear methods are generally not useful.

19
Paper
Negotiating Coalitions
Bottom, William P.
Miller, Gary J.
Holloway, James
McClurg, Scott D.

Uploaded 09-15-1998
Keywords Game theory
Experimental Design
Coalition Formation
Negotiation
Risk
Abstract Game theory's best efforts have done little but verify the undecidability of coalitional problems. The typical solution concept specifies the hypothesized distribution for each of several viable coalition structures--but cannot choose among the coalition structures. For example, the bargaining set presumes that bargaining proceeds by objection and counter-objection until potential coalition members are indifferent between the coalitions that they pivot between. Thus, the bargaining set makes a clear distributional hypothesis, but thereby gives up any leverage on which coalition will occur. In this paper, we explore how risk preferences and the nature of coalitional goods influence the coalition-building process. We test a variety of potential explanations with data collected in an experimental setting. Foremost among our conclusions is that the coalitions which form among inexperienced subjects are affected by their risk preferences. We further find that this effect disappears among experienced subjects. We conclude the paper by discussing some of the explanations for and questions stemming from our results.

20
Paper
Studying Congressional and Gubernatorial Campaigns
Alvarez, R. Michael

Uploaded 00-00-0000
Keywords campaigns
congressional elections
public opinion
polling
National Election Studies
Abstract This paper was presented at the recent NES Congressional Elections Research and Development Conference. I argue that the NES ought to redesign the congressional election survey so that campaigns can be studied in more depth. I provide four empirical examples from my recent research which demonstrate some of the directions the NES can take. I conclude with a series of proposals for changes to the NES survey instrument, new questions which could be included in the NES congressional election studies, and discussion about the integration of contextual data with the NES survey data.

21
Paper
Racial Polarization and Turnout in Louisiana: New Insights from Aggregate Data Analysis
Palmquist, Bradley
Voss, D. Stephen

Uploaded 00-00-0000
Keywords ecological inference
aggregate data
turnout
South
racial polarization
redistricting
Abstract This paper applies recent developments in aggregate data analysis to newly assembled precinct-level datasets for Louisiana. We validate the usefulness of these methods for answering common voting behavior questions, such as estimating racial polarization/cohesion and predicting racial turnout rates, by applying them to known crosstabulations of turnout by race and party in Louisiana. Then we take the analysis a step further to show how the methods can be used to estimate unknown statistics relevant to redistricting litigation (and their uncertainty) using as much information as possible. In addition to the methodological insights, we draw some substantive conclusions about racial voting behavior and racial mobilization in the South.

22
Paper
Using Cluster Analysis to Derive Early Warning Indicators for Political Change in the Middle East, 1979-1996
Schrodt, Philip A.
Gerner, Deborah J.

Uploaded 08-22-1996
Keywords event data
conflict
early warning
Middle East
cluster analysis
genetic algorithms
Abstract This paper uses event data to develop an early warning model of major political changes in the Levant for the period April 1979 to July 1996. Following a general review of statistical early warning research, the analysis focuses on the behavior of eight Middle Eastern actors -- Egypt, Israel, Jordan, Lebanon, the Palestinians, Syria, the United States and USSR/Russia -- using WEIS-coded event data generated from Reuters news service lead sentences with the KEDS machine-coding system. The analysis extends earlier work (Schrodt and Gerner 1995) demonstrating that clusters of behavior identified by conventional statistical methods correspond well with changes in political behavior identified a priori. We employ a new clustering algorithm that uses the correlation between the dyadic behaviors at two points in time as a measure of distance, and identifies cluster breaks as those time points that are closer to later points than to preceding points. We also demonstrate that these data clusters begin to "stretch" prior to breaking apart; this characteristic is used as an early-warning indicator. A Monte Carlo analysis shows that the clustering and early warning measures perform very differently in simulated data sets having the same mean, variance, and autocorrelation as the observed data (but no cross-correlation), which reduces the likelihood that the clustering patterns are due to chance. The initial analysis uses Goldstein's (1992) weighting system to aggregate the WEIS-coded data. In an attempt to improve on the Goldstein scale, we use a genetic algorithm to optimize the weighting of the WEIS event categories for the purpose of clustering. This does not prove very successful and only differentiates clusters in the first half of the data set, a result similar to one we obtained using the cross-sectional K-Means clustering procedure. Correlating the frequency of events in the twenty-two 2-digit WEIS categories, on the other hand, gives clustering and early warning results similar to those produced by the Goldstein scale. The paper concludes with some general remarks on the role of quantitative early warning and directions for further research. This paper was presented at the American Political Science Association, San Francisco, 28 August - 1 September 1996.
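The correlation-distance break rule the abstract describes might be sketched roughly as follows; the window size, the toy data, and the exact comparison rule are assumptions made for illustration and are not taken from the paper:

```python
import numpy as np

def correlation_distance(a, b):
    """1 minus the Pearson correlation between two behavior vectors."""
    return 1.0 - np.corrcoef(a, b)[0, 1]

def break_points(series, window=2):
    """Flag time points whose mean distance to the next `window` points
    is smaller than to the preceding `window` points -- a rough version
    of 'closer to later points than to preceding points'."""
    breaks = []
    for t in range(window, len(series) - window):
        d_prev = np.mean([correlation_distance(series[t], series[t - k])
                          for k in range(1, window + 1)])
        d_next = np.mean([correlation_distance(series[t], series[t + k])
                          for k in range(1, window + 1)])
        if d_next < d_prev:
            breaks.append(t)
    return breaks

# Toy data: dyadic behavior vectors that shift regime halfway through.
rng = np.random.default_rng(1)
regime_a = rng.normal(0, 1, size=8)
regime_b = rng.normal(0, 1, size=8)
series = [regime_a + rng.normal(0, 0.1, 8) for _ in range(5)] + \
         [regime_b + rng.normal(0, 0.1, 8) for _ in range(5)]
bps = break_points(series)
```

On this toy series the first point of the new regime is flagged, since it correlates with the points that follow it far better than with the points that precede it.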

23
Paper
The Economic Sophistication of Public Opinion in the United States
Sekhon, Jasjeet

Uploaded 09-18-1997
Keywords Public Opinion
Economic Sophistication
Survey of Consumer Attitudes and Behavior (SCAB)
Natural Rate of Unemployment
NAIRU
Unemployment
Bootstrap
Bootstrap Confidence Region
Abstract I show that the public does indeed have coherent and sophisticated reactions to macroeconomic variables. These reactions are consistent with economic theory. Individuals form evaluations and expectations in a way which is sensitive to the complex trade-off between unemployment and inflation as determined by the nonaccelerating inflation rate of unemployment (NAIRU). The primary dataset used in this analysis has 69,680 observations and is compiled by merging 113 individual level ``Surveys of Consumer Attitudes and Behavior'' from 1976:01 to 1991:12. The data analysis makes extensive use of bootstrap methods to create confidence regions and to conduct hypothesis tests.

24
Paper
Economics, Entitlements and Social Issues: Voter Choice in the 1996 Presidential Election
Alvarez, R. Michael
Nagler, Jonathan

Uploaded 08-21-1997
Keywords elections
issues
ideology
economic voting
economy
multinomial probit
Abstract In this paper we examine three sets of explanations for the outcome of the 1996 presidential election campaign. First, we look at the effects of voter perceptions of the national economy on voter support for Clinton. Second, we look at the effects of candidate and voter positions on a number of issues and on ideology. Last, we seek to understand whether other issues --- social issues such as abortion as well as issues revolving around entitlements and taxation --- played significant roles in this election. Thus this work extends the work of Alvarez and Nagler (1995), and enriches it with analysis of a more comprehensive set of issues. In the end, we are able to pull together each of these different sets of explanations into a consistent analysis of the 1996 presidential election which shows why Clinton won this race, but which also helps us understand why both Dole and Perot fell so far from electoral victory.

25
Paper
Estimation and Strategic Interaction in Discrete Choice Models of International Conflict
Signorino, Curtis S.

Uploaded 07-23-1997
Keywords discrete choice
strategic
QRE
logit
international relations
Abstract Typical applications of logit and probit to theories of international conflict do not capture the structure of the strategic interdependence implied by those theories. In this paper I demonstrate how to use a game-theoretic solution concept, the quantal response equilibrium (QRE), to derive strategic discrete choice models of international conflict, where the structure of the strategic interaction is incorporated directly in the statistical model. I demonstrate this for a crisis interaction model and use Monte Carlo analysis to show that logit provides estimates with incorrect substantive interpretations and fitted values that are often far from the true values. Finally, I reanalyze a well-known game-theoretic model of war, Bueno de Mesquita and Lalman's (1992) international interaction game, using this method. My results indicate that their model does not explain international interaction as well as they claim.

26
Paper
Heterogeneity and Individual Party Identification
Box-Steffensmeier, Janet M.
Smith, Renee M.

Uploaded 05-01-1997
Keywords heterogeneity
party identification
macropartisanship
panel data
Wiley-Wiley
Monte Carlo
beta-logistic
Markov
Abstract Box-Steffensmeier and Smith (1996) suggest that heterogeneity in individual-level party identification accounts for aggregate dynamics in macropartisanship. Wiley-Wiley estimates suggesting a very high degree of individual-level partisan persistence have been made under the assumption of no heterogeneity. Stratifying panel data by subgroups based on information, interest, and age shows some heterogeneity in persistence even when the Wiley-Wiley estimator is used. Analytical and Monte Carlo results show, however, that the Wiley-Wiley estimator is biased upward when heterogeneity is present. Given these problems, we estimate a beta-logistic model of heterogeneity and persistence in individual-level party identification and show (a) heterogeneity in the probabilities of persistent response does exist and (b) a portion of that heterogeneity is systematically explained by interest in political campaigns in the three-wave 1990-91-92 ANES panel. Our estimates indicate Markov models assuming true state dependence may not be needed. Further, we find that our estimate of one of the parameters of the beta distribution is consistent with the estimate of that parameter that would be derived from our previous aggregate-level analysis.

27
Paper
Estimating the Probability of Events That have Never Occurred: When Does Your Vote Matter?
Gelman, Andrew
King, Gary
Boscardin, John

Uploaded 02-14-1997
Keywords conditional probability
decision analysis
elections
electoral campaigning
forecasting
political science
presidential elections
rare events
rational choice
subjective probability
voting power
Abstract Researchers sometimes argue that statisticians have little to contribute when few realizations of the process being estimated are observed. We show that this argument is incorrect even in the extreme situation of estimating the probabilities of events so rare that they have never occurred. We show how statistical forecasting models allow us to use empirical data to improve inferences about the probabilities of these events. Our application is estimating the probability that your vote will be decisive in a U.S. presidential election, a problem that has been studied by researchers in political science for more than two decades. The exact value of this probability is of only minor interest, but the number has important implications for understanding the optimal allocation of campaign resources, whether states and voter groups receive their fair share of attention from prospective presidents, and how formal ``rational choice'' models of voter behavior might be able to explain why people vote at all. We show how the probability of a decisive vote can be estimated empirically from state-level forecasts of the presidential election and illustrate with the example of 1992. Based on generalizations of standard political science forecasting models, we estimate the (prospective) probability of a single vote being decisive as about 1 in 10 million for close national elections such as 1992, varying by about a factor of 10 among states. Our results support the argument that subjective probabilities of many types are best obtained via empirically-based statistical prediction models rather than solely mathematical reasoning. We discuss the implications of our findings for the types of decision analyses that are used in public choice studies.
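The core quantity in this abstract, the chance that one vote swings a state, can be sketched directly from a probabilistic forecast of the state's vote share. This is an illustrative simplification of the paper's method: the full analysis also multiplies by the probability that the state's electoral votes are pivotal nationally, and the forecast numbers below are hypothetical.

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of a Normal(mu, sigma) distribution at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def prob_decisive(mu, sigma, n_voters):
    """Approximate probability that one vote is decisive within a state:
    the predictive density of the two-party vote share at 0.5, divided
    by the number of voters (the chance of landing on an exact tie)."""
    return normal_pdf(0.5, mu, sigma) / n_voters

# Hypothetical close state: forecast share 51% +/- 3%, 5 million voters.
p = prob_decisive(0.51, 0.03, 5_000_000)
```

A within-state tie probability computed this way is far larger than the paper's overall 1-in-10-million figure, because overall decisiveness also requires the state itself to be pivotal in the Electoral College.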

28
Paper
Who is the Best Connected Legislator? A Study of Cosponsorship Networks
Fowler, James

Uploaded 06-29-2005
Abstract Using large-scale network analysis I map the cosponsorship networks of all 280,000 pieces of legislation proposed in the U.S. House and Senate from 1973 to 2004. In these networks a directional link can be drawn from each cosponsor of a piece of legislation to its sponsor. I use a number of statistics to describe these networks such as the quantity of legislation sponsored and cosponsored by each legislator, the number of legislators cosponsoring each piece of legislation, the total number of legislators who have cosponsored bills written by a given legislator, and network measures of closeness, betweenness, and eigenvector centrality. I then introduce a new measure I call ‘connectedness’ which uses information about the frequency of cosponsorship and the number of cosponsors on each bill to make inferences about the social distance between legislators. Connectedness predicts which members will pass more amendments on the floor, a measure which is commonly used as a proxy for legislative influence. It also predicts roll call vote choice even after controlling for ideology and partisanship.

29
Paper
Another geography of turnout? Respondents and non-respondents to the 2005 British Election Study
Johnston, Ron
Harris, Rich

Uploaded 08-15-2005
Keywords British Election Study
response rates
ecological analyses
Abstract An issue of growing concern in studies of voting patterns using survey data is the falling response rate achieved by face-to-face surveys. The 2005 British Election Study (BES) pre-campaign survey achieved interviews with 55.6 per cent of the 6450 individuals sampled – a drop of nearly 20 percentage points over the average for the surveys undertaken in the 1960s and some 15 points over those in the 1970s. Of the addresses selected, no contact could be made at 5.8 per cent, the individuals selected at a further 26.0 per cent refused to be interviewed, 4.4 per cent were otherwise unproductive and 8.0 per cent of the addresses were ‘out of scope (deadwood)’. To what extent does this failure to reach a very substantial minority of the addresses selected have any impact upon the results of the sample survey and the conclusions drawn therefrom?

30
Paper
Rich state, poor state, red state, blue state: What's the matter with Connecticut?
Gelman, Andrew
Shor, Boris
Bafumi, Joseph
Park, David

Uploaded 11-29-2005
Keywords availability heuristic
ecological fallacy
hierarchical model
income and voting
multilevel model
presidential elections
public opinion
secret weapon
varying-slope model
Abstract We find that income matters more in ``red America'' than in ``blue America.'' In poor states, rich people are much more likely than poor people to vote for the Republican presidential candidate, but in rich states (such as Connecticut), income has a very low correlation with vote preference. In addition to finding this pattern and studying its changes over time, we use the concepts of typicality and availability from cognitive psychology to explain how these patterns can be commonly misunderstood. Our results can be viewed either as a debunking of the journalistic image of rich ``latte'' Democrats and poor ``Nascar'' Republicans, or as support for the journalistic images of political and cultural differences between red and blue states---differences which are not explained by differences in individuals' incomes. For decades, the Democrats have been viewed as the party of the poor, with the Republicans representing the rich. Recent presidential elections, however, have shown a reverse pattern, with Democrats performing well in the richer ``blue'' states in the northeast and west coast, and Republicans dominating in the ``red'' states in the middle of the country. Through multilevel modeling of individual-level survey data and county- and state-level demographic and electoral data, we reconcile these patterns. Key methods used in this research are: (1) plots of repeated cross-sectional analyses, (2) varying-intercept, varying-slope multilevel models, and (3) a graph that simultaneously shows within-group and between-group patterns in a multilevel model. These statistical tools help us understand patterns of variation within and between states in a way that would not be possible from classical regressions or by looking at tables of coefficient estimates.

31
Paper
Estimating Incumbency Advantage and Campaign Spending Effect without the Simultaneity Bias
Fukumoto, Kentaro

Uploaded 07-16-2006
Keywords Incumbency Advantage
Campaign Spending
Simultaneity Bias
Bayesian Nash equilibria
normal vote
Abstract In estimating incumbency advantage and campaign spending effects, the simultaneity problem is composed of stochastic dependence and parametric dependence. Scholars have tried to solve the former, while the present paper tackles the latter. Its core idea is to estimate parameters by maximizing the likelihood of all endogenous variables (vote, both parties' candidate qualities, and campaign spending) simultaneously. To do so, I draw rigorously on theories of electoral politics, model each endogenous variable as a function of the others (or their expectations), derive Bayesian Nash equilibria, and plug them into my estimator. I show the superiority of my model over conventional estimators by Monte Carlo simulation. An empirical application of this model to recent U.S. House election data demonstrates that incumbency advantage is smaller than previously shown and that the entry of incumbents and strong challengers is motivated by electoral prospects.

32
Paper
Can political science literatures be believed? A study of publication bias in the APSR and the AJPS
Gerber, Alan
Malhotra, Neil

Uploaded 09-07-2006
Keywords publication bias
Abstract Despite great attention to the quality of research methods in individual studies, if the publication decisions of journals are a function of the statistical significance of research findings, the published literature as a whole may not produce an accurate measure of true effects. This paper examines the two most prominent political science journals (the APSR and the AJPS) and two major literatures in the discipline (the effect of negative advertisements and economic voting) to see if there is evidence of publication bias. We examine the effect of the .05 significance level on the pattern of published findings using what we term a “caliper” test and can reject the hypothesis of no publication bias at the 1 in 100,000,000 level. Our findings therefore strongly suggest that the results reported in the leading political science journals and in two important literatures are misleading and inaccurate due to publication bias. We also discuss some of the reasons for publication bias and propose reforms to reduce its impact on research.
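The "caliper" test named in this abstract compares how often published test statistics land just above versus just below the significance threshold; absent publication bias the two should be roughly equally likely. A minimal sketch under that reading, with made-up z-statistics and an exact one-sided binomial p-value (not the authors' code or data):

```python
from math import comb

def caliper_test(z_stats, cutoff=1.96, width=0.2):
    """Count z-statistics falling just above vs. just below the
    significance cutoff, and return a one-sided binomial p-value for
    the null that a result is equally likely to land on either side."""
    above = sum(1 for z in z_stats if cutoff < z <= cutoff + width)
    below = sum(1 for z in z_stats if cutoff - width <= z <= cutoff)
    n = above + below
    # P(X >= above) for X ~ Binomial(n, 0.5)
    p_value = sum(comb(n, k) for k in range(above, n + 1)) / 2 ** n
    return above, below, p_value

# Hypothetical published z-statistics clustering just past 1.96:
zs = [1.97, 2.01, 2.05, 2.10, 1.99, 2.12, 2.08, 1.90, 2.03, 2.14]
above, below, p = caliper_test(zs)
```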

33
Paper
Verifying Evidence of "Congressional Enactments of Race-Gender"
Grant, J. Tobin

Uploaded 02-05-2007
Keywords replication
verification
interpretive methodology
qualitative methods
race
gender
Congress
Abstract I report the results of a verification of Hawkesworth's 2003 "Congressional Enactments of Race-Gender" (CERG). This is a landmark analysis of race and gender in the U.S. Congress that is noteworthy for both its theory and its empirical evidence. A deeper look at the evidence and the context raises fundamental questions about the empirical validity of CERG's theory of race-gender in Congress. I conclude that racing-gendering in Congress is more nuanced than originally presented in CERG, and that further research is necessary to demonstrate empirically CERG's theory of Congress as a raced-gendered institution. This verification has important methodology implications, as it demonstrates why verification of empirical research -- including interpretive research -- should be a widely-practiced methodology within political science.

34
Paper
Inductive Event Data Scaling using Item Response Theory
Schrodt, Philip A.

Uploaded 07-17-2007
Keywords event data
IRT
latent trait
scaling
Rasch model
Goldstein scale
WEIS
CAMEO
Abstract Political event data are frequently converted to an interval-level measurement by assigning a numerical scaled value to each event. All of the existing scaling systems rely on non-replicable expert assessments to determine these numerical scores. This paper uses item response theory (IRT) to derive scales inductively, using event data on Israeli interactions with Lebanon and the Palestinians for 1991-2007. Monthly scores on a latent trait are calculated using three IRT models: the single-parameter Rasch model, and two-parameter models that add discrimination and guessing parameters. The three formulations produce generally comparable scores (correlations of 0.90 or higher). The Rasch scales are less successful than the expert-derived Goldstein scale in reconciling the somewhat divergent sets of events derived from the Agence France Presse and Reuters news services. This is in all likelihood due largely to the low weighting the IRT models give to uses of force, because such events are common in these two dyads. A factor analysis of the event counts shows that a single cooperation-conflict dimension generally accounts for about two-thirds of the variance in these dyads, but a second case-specific dimension explains another 20%. Finally, moving averages of the derived scores generally correlate well with the Goldstein values, suggesting that IRT may provide a route towards deriving purely inductive, and hence replicable, scales.

35
Paper
Partisanship, Voting, and the Dopamine D2 Receptor Gene
Dawes, Christopher
Fowler, James

Uploaded 02-01-2008
Keywords partisanship
voting
turnout
genetic association
dopamine
DRD2
Abstract Previous studies have found that both political orientations (Alford, Funk & Hibbing 2005) and voting behavior (Fowler, Baker & Dawes 2007, Fowler & Dawes 2007) are significantly heritable. In this article we study genetic variation in another important political behavior: partisan attachment. Using the National Longitudinal Study of Adolescent Health, we show that individuals with the A1 allele of the D2 dopamine receptor gene are significantly less likely to identify as a partisan than those with the A2 allele. Further, we find that this gene's association with partisanship also mediates an indirect association between the A1 allele and voter abstention. These results are the first to identify a specific gene that may be responsible for the tendency to join political groups, and they may help to explain correlation in parent and child partisanship and the persistence of partisan behavior over time.

36
Paper
Non-ignorable abstentions in roll call data analysis
Rosas, Guillermo
Shomer, Yael

Uploaded 07-02-2008
Keywords ignorability
IRT model
roll call data
legislative voting
Abstract How should we deal with abstentions in roll-call data analysis? Abstentions are very common in decision-making bodies around the world, and very often follow a strategic rationale. Methods to recover ideal points from roll-call datasets -- such as Nominate and MCMC IRT -- are based on assumptions about the ignorability of the abstention-generating mechanism. However, the strategic character of abstentions makes the assumption of ignorability difficult to meet in practice. We discuss different abstention-generating mechanisms to understand the conditions under which they may be deemed ignorable, and extend the MCMC IRT model so as to incorporate information from abstention patterns into inference about legislators' ideal points.

37
Paper
Ecological Inference with Covariates
Park, Won-ho

Uploaded 07-08-2008
Keywords ecological inference
Thomsen
voter transition
South Korean
democratization
Abstract The building block of ecological inference strategies is to construct a two-by-two table that describes the individual-level relationship from aggregate information. Extensions to this baseline model, whichever particular technique is employed, have been developed in the context of constructing bivariate R-by-C tables. However, another important and substantively interesting extension, a model that would let the researcher include additional covariates, has yet to be fully discussed and developed. In this paper, I propose a method-of-moments estimator that incorporates covariates into the ecological inference process by extending Thomsen (1987)'s voter transition model. I apply the model to estimate the impact of demographic variables on turnout among South Korean voters over time, especially around democratization, using precinct-level electoral returns and census records.

38
Paper
Nonparametric Priors For Ordinal Bayesian Social Science Models: Specification and Estimation
Gill, Jeff
Casella, George

Uploaded 08-21-2008
Keywords generalized linear mixed model
ordered probit
Bayesian approaches
nonparametric priors
Dirichlet process mixture models
nonparametric Bayesian inference
Abstract A generalized linear mixed model, ordered probit, is used to estimate levels of stress in presidential political appointees as a means of understanding their surprisingly short tenures. A Bayesian approach is developed, where the random effects are modeled with a Dirichlet process mixture prior, allowing for useful incorporation of prior information, but retaining some vagueness in the form of the prior. Applications of Bayesian models in the social sciences are typically done with ``noninformative'' priors, although some use of informed versions exists. There has been disagreement over this, and our approach may be a step in the direction of satisfying both camps. We give a detailed description of the data, show how to implement the model, and describe some interesting conclusions. The model utilizing a nonparametric prior fits better and reveals more information in the data than standard approaches.

39
Paper
What will we know on Tuesday at 7pm?
Gelman, Andrew
Silver, Nate

Uploaded 11-03-2008
Abstract Using 10,000 simulations from a probabilistic election forecast, we compute the conditional distribution of the Obama and McCain's vote margins and electoral vote totals, given the outcomes of the states whose polls are the first to close. We consider the scenario in which the vote margins are available in each state, and separately consider the possibility that we are only told each state's winner.
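The conditioning step this abstract describes amounts to filtering simulation draws for consistency with the early results and summarizing what remains. A toy sketch of that idea; the simulation structure, state, and numbers below are invented, not the authors' forecast:

```python
import random

def conditional_forecast(simulations, early_results):
    """Keep only simulation draws whose state winners match the results
    already called in early-closing states, then average the electoral
    vote totals of the remaining draws."""
    consistent = [
        sim for sim in simulations
        if all(sim["winners"][s] == w for s, w in early_results.items())
    ]
    ev = [sim["dem_ev"] for sim in consistent]
    return sum(ev) / len(ev) if ev else None

# Toy simulations: each draw records one early state's winner and an EV total.
random.seed(1)
sims = []
for _ in range(1000):
    in_wins = random.random() < 0.5
    sims.append({"winners": {"IN": "D" if in_wins else "R"},
                 "dem_ev": 300 if in_wins else 250})

avg_ev = conditional_forecast(sims, {"IN": "D"})
```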

40
Paper
The Importance of Fully Testing Conditional Theories Positing Interaction
Golder, Matt
Berry, William
Milton, Daniel

Uploaded 07-10-2009
Abstract In recent years, it has become common for political scientists to present marginal effect plots when interpreting results from interactive models. This has led to a dramatic improvement in the quality of research testing conditional theories. The typical practice is to (i) view one of the variables expected to interact, say Z, as the conditioning variable, (ii) offer a hypothesis about how the marginal effect of the other variable, X, is conditional on the value of Z, and (iii) construct a plot of the relationship between Z and the estimated marginal effect of X. All interactions are symmetric, though; when the effect of X is conditional on Z, the effect of Z must be conditional on X. In this paper, we illustrate that the failure of scholars to provide a second hypothesis about how the marginal effect of Z is conditional on the value of X, together with the corresponding marginal effect plot, means that scholars often subject their conditional theories to substantially weaker empirical tests than their data allow. The result is that much of the existing literature either understates or, more worryingly, overstates the empirical support for the conditional theories that political scientists have posited.
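The symmetry point can be made concrete with the standard linear interaction model y = b0 + b1*X + b2*Z + b3*X*Z: the marginal effect of X is b1 + b3*Z, and by the same algebra the marginal effect of Z is b2 + b3*X, so each deserves its own hypothesis and plot. A minimal sketch with illustrative coefficients (not taken from any real model):

```python
def marginal_effects(b1, b2, b3, x_values, z_values):
    """For y = b0 + b1*X + b2*Z + b3*X*Z, return the marginal effect of
    X (b1 + b3*Z) across values of Z and, symmetrically, the marginal
    effect of Z (b2 + b3*X) across values of X."""
    dy_dx = [b1 + b3 * z for z in z_values]
    dy_dz = [b2 + b3 * x for x in x_values]
    return dy_dx, dy_dz

dy_dx, dy_dz = marginal_effects(b1=0.5, b2=-0.2, b3=0.3,
                                x_values=[0, 1, 2], z_values=[0, 1, 2])
```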

41
Paper
Bayesian statistical decision theory and a critical test for substantive significance
Esarey, Justin

Uploaded 09-09-2009
Keywords inference
t-test
substantive significance
Bayesian
Abstract I introduce a new critical test statistic, c*, that uses Bayesian statistical decision theory to help an analyst determine whether quantitative evidence supports the existence of a substantively meaningful relationship. Bayesian statistical decision theory takes a rational choice perspective toward evidence, allowing researchers to ask whether it makes sense to believe in the existence of a statistical relationship given how they value the consequences of correct and incorrect decisions. If a relationship of size c* is not important enough to influence future research and policy advice, then the evidence does not support the existence of a substantively significant effect. A replication of findings from the American Journal of Political Science and Journal of Politics illustrates that statistical significance at conventional levels is neither necessary nor sufficient to accept a hypothesis of substantive significance using c*. I also make software packages available for Stata and R that allow political scientists to easily use c* for inference in their own research.

42
Paper
Polity by Design; an engineering approach
Kwatra, Saurabh

Uploaded 03-15-2010
Keywords Genuine Political Engineering
Good Governance
Interdisciplinary Designing
Abstract Rules by which societies govern themselves are called institutions. Institutions can be political, economic, or social, but generally they are a complex combination of these. Universities and academies of higher education frequently offer courseware on 'Political Engineering'; the title has an interdisciplinary flavor, suggesting some kind of engineering applied to political science. When you proceed from heading to subject, you find tools of economic theory, game theory, social-choice theory, and formal logic used in abundance. There is everything but engineering! This paper is the first bold attempt to apply genuine methodologies of mechanical engineering design to governance; hence I define this created subject as 'Genuine Political Engineering'. The paper revolves around the solution to a complex problem: comparing the size of the road roller required to resurface a road most efficiently with the number of elected representatives required to rule a population (in a country or state) most effectively. The solution emerges in the shape of sophisticated software that I call 'political machinery'. This research aims to convince a sizeable percentage of conventional political pundits and exponents of sustainable living that governance can be bettered by employing machine designers to assist parliamentarians-turned-policy-makers.

43
Paper
What Can We Learn with Statistical Truth Serum? Design and Analysis of the List Experiment
Glynn, Adam

Uploaded 07-23-2010
Keywords social desirability
indirect questions
list experiment
item count technique
privacy protection
survey experiment
Abstract Due to the inherent sensitivity of many survey questions, a number of researchers have adopted an indirect questioning technique known as the list experiment (or the item count technique) in order to minimize bias due to dishonest or evasive responses. However, standard practice with the list experiment requires a large sample size, is not readily adaptable to regression or multivariate modeling, and provides only limited diagnostics. This paper addresses all three of these issues. First, the paper presents design principles for the standard list experiment (and the double list experiment) to minimize bias and reduce variance as well as providing sample size formulas for the planning of studies. Additionally, this paper investigates the properties of a number of estimators and introduces an easy-to-use piecewise estimator that reduces necessary sample sizes in many cases. Second, this paper proves that standard-procedure list experiment data can be used to estimate the probability that an individual holds the socially undesirable opinion/behavior. This allows multivariate modeling. Third, this paper demonstrates that some violations of the behavioral assumptions implicit in the technique can be diagnosed with the list experiment data. The techniques in this paper are illustrated with examples from American politics.
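The baseline estimator for the standard list experiment is a simple difference in mean item counts between the treatment group (whose list includes the sensitive item) and the control group. A sketch with toy data; the paper's piecewise and multivariate estimators refine this baseline, which is not reproduced here:

```python
def list_experiment_estimate(treatment_counts, control_counts):
    """Difference-in-means estimator for a list experiment: the
    treatment group's list adds the sensitive item, so the difference
    in mean item counts estimates the item's prevalence."""
    mean_t = sum(treatment_counts) / len(treatment_counts)
    mean_c = sum(control_counts) / len(control_counts)
    return mean_t - mean_c

# Toy data: number of list items each respondent says apply to them.
treated = [2, 3, 1, 4, 2, 3, 2, 3]
control = [2, 2, 1, 3, 2, 2, 1, 3]
estimate = list_experiment_estimate(treated, control)
```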

44
Paper
The Split Population Logit (SPopLogit): Modeling Measurement Bias in Binary Data
Beger, Andreas
DeMeritt, Jacqueline
Hwang, Wonjae
Moore, Will

Uploaded 02-28-2011
Keywords split populations
binary data
measurement error
bias
zero inflation
substantive inference
Abstract Researchers frequently face applied situations where their measurement of a binary outcome suffers from bias. Social desirability bias in survey work is the most widely appreciated circumstance, but the strategic incentives of human beings similarly induce bias in many measures outside of survey research (e.g., whether the absence of an armed attack indicates a country's satisfaction with the status quo or a calculation that the likely costs of war outweigh the likely benefits). In these circumstances the data we are able to observe do not reflect the distribution we wish to observe. This study introduces a statistical model that permits researchers to model the process that produces the bias, the split population logit (SPopLogit) model. It further presents a Monte Carlo simulation that demonstrates the effectiveness of the SPopLogit model, and then re-analyzes a study of sexual infidelity to illustrate the richness of the quantities of (empirical and theoretical) interest that can be estimated with the model. Stata ado files that can be used to invoke the SPopLogit, as well as batch files that illustrate how to simulate commonly reported quantities of interest, are available for download from the WWW. The authors close by briefly identifying just a few of the many types of research projects that will benefit from abandoning logit and probit models in favor of the SPopLogit. NOTE: Files to implement SPopLogit and generate quantities of interest, as well as replication files for our Monte Carlo simulations and substantive application, are available at http://andybeger.wordpress.com/2011/02/03/split-population-logit/

45
Paper
The Hidden American Immigration Consensus: A Conjoint Analysis of Attitudes Toward Immigrants
Hainmueller, Jens
Hopkins, Daniel

Uploaded 07-14-2012
Keywords immigration
public opinion
conjoint analysis
Abstract With immigration a salient issue, it is critical to understand Americans' attitudes toward immigrants. Past research points to several immigrant characteristics, both cultural and economic, that might influence attitudes. Yet it has not tested the competing hypotheses comprehensively. This paper uses a statistical tool from marketing---choice-based conjoint analysis---to test the relative influence of nine randomized immigrant attributes in generating support for admission. Drawing on a two-wave Knowledge Networks survey, it demonstrates that Americans view educated immigrants in high-status jobs favorably, while they view those who lack plans to work, have previously entered without authorization, or do not speak English unfavorably. Consistent with norms-based and sociotropic explanations, the immigrants most likely to be admitted are those expected to contribute economically and to comply with norms about work and assimilation. Remarkably, these preferences vary little with respondents' education, partisanship, or other attributes. Beneath partisan divisions over immigration lies a consensus about which immigrants to admit.

46
Paper
How Much of the Incumbency Advantage is Financial?
Hall, Andrew B.

Uploaded 01-11-2013
Keywords incumbency advantage
partial identification
structural model
RDD
Abstract Incumbency provides a substantial benefit to candidates. Becoming an incumbent in one year causes a large gain in the candidate's vote share in the subsequent election and, in the intervening time before the election, a substantial gain in campaign funds as well. How much of the electoral incumbency advantage comes through the financial advantage it provides, and how much consists of other benefits? In this paper I bound the parameters of a structural model and find that at least half of the incumbency advantage is the result of the incumbent's financial advantage. Campaign donations and campaign spending are far more important to incumbents than the literature has thought.

47
Paper
A Unified Approach to Generalized Causal Inference
Martel Garcia, Fernando

Uploaded 08-01-2013
Keywords External validity
causal diagrams
dags
generalizability
experiments
learning
Abstract Randomized controlled trials and natural experiments have been criticized for their lack of generalizability (external validity), questioning their usefulness to social science and policy. Here I show how three common approaches to generalizability (the heuristic, statistical, and structural approaches) are each incomplete on their own, and how generalized causal diagrams, or g-dags, can achieve a complete representation of the problem. G-dags combine theory and evidence to (1) make inferences from a study to a population, or subgroup; (2) combine two or more studies that are not generalizable on their own into a generalized inference; (3) encode and test generalizable knowledge; and (4) provide a link to boosting algorithms as generalized additive models. Just as important, g-dags make explicit what is being assumed, or questioned, in discussing the generalizability of experiments. This allows for constructive discourse and informed research agendas.

48
Paper
Evaluating a Stochastic Model of Government Formation
Siegel, David
Golder, Matt
Golder, Sona

Uploaded 03-19-2014
Keywords government formation
zero-intelligence
stochastic model
Abstract In a 2012 JOP article, we presented a zero-intelligence model of government formation. Our intent was to provide a "null" model of government formation, a baseline on which other models could build. We made two claims regarding aggregate government formation outcomes. First, that our model produces aggregate results on the distributions of government types, cabinet portfolios, and bargaining delays in government formation that compare favorably to those in the real world. And second, that these aggregate distributions vary in theoretically intuitive ways as the model parameters change. In their recent note, Martin and Vanberg (MV) criticize our model on theoretical and empirical grounds. Here we not only show how MV's evaluation of our model is flawed, but we also illustrate, using an analogy to common statistical practice, how one might properly attempt to falsify stochastic models such as ours at both the individual and the aggregate level.

49
Paper
Treatment effects in before-after data
Gelman, Andrew

Uploaded 04-27-2004
Keywords correlation
experiments
interactions
hierarchical models
observational studies
variance components
Abstract In experiments and observations with before-after data, the correlation between "before" and "after" measurements is typically higher among the controls than among the treated units, violating the usual assumptions of equal variance and a constant treatment effect. We illustrate with three applied examples and then discuss models that could be used to fit this phenomenon, which we argue is related to the

50
Paper
Forming voting blocs and coalitions as a prisoner's dilemma: a possible theoretical explanation for political instability
Gelman, Andrew

Uploaded 10-27-2003
Keywords coalitions
cooperation
decisive vote
elections
legislatures
prisoner's dilemma
voting power
Abstract Individuals in a committee can increase their voting power by forming coalitions. This behavior is shown here to yield a prisoner's dilemma, in which a subset of voters can increase their power, while reducing average voting power for the electorate as a whole. This is an unusual form of the prisoner's dilemma in that cooperation is the selfish act that hurts the larger group. Under a simple model, the privately optimal coalition size is approximately 1.4 times the square root of the number of voters. When voters' preferences are allowed to differ, coalitions form only if voters are approximately politically balanced. We propose a dynamic view of coalitions, in which groups of voters choose of their own free will to form and disband coalitions, in a continuing struggle to maintain their voting power. This is potentially an endogenous mechanism for political instability, even in a world where individuals' (probabilistic) preferences are fixed and known.
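The abstract's closed-form result, a privately optimal coalition of roughly 1.4 times the square root of the electorate, is direct to compute:

```python
import math

def optimal_coalition_size(n_voters):
    """Approximate privately optimal coalition size under the paper's
    simple model: about 1.4 * sqrt(number of voters)."""
    return 1.4 * math.sqrt(n_voters)

size = optimal_coalition_size(10_000)  # roughly 140 members
```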

