
Search Results


The results below are based on the search term 'bootstrap'
Total number of records returned: 14

1
Paper
Non-Compulsory Voting in Australia?: what surveys can (and can't) tell us
Jackman, Simon

Uploaded 08-25-1997
Keywords turnout
Australian politics
compulsory voting
political participation
counter-factuals
surveys
non-response
measurement error
social-desirability heuristic
question-order effects
simulation
parametric bootstrap
Abstract Compulsory voting has come under close scrutiny in recent Australian political debate, and influential voices within the (conservative) Coalition government have called for its repeal. Conventional wisdom holds that a repeal of compulsory voting would result in a sizeable electoral boost for the Coalition; the proportion of Coalition voters who would not vote is thought to be smaller than the corresponding proportion of Labor voters. But estimates of Coalition gains under a return to voluntary turnout are quite rough-and-ready, relying on methods hampered by critical shortcomings. In this paper I focus on assessing the counter-factual of non-compulsory turnout via surveys: while turnout is compulsory in Australia, responding to surveys isn't, and the problems raised by high rates of non-response are especially pernicious in attempting to assess the counter-factual of voluntary turnout. Among survey respondents, social-desirability and question-order effects also encourage over-reports of the likelihood of voluntarily turning out. Taking non-response and measurement error into consideration, I conclude that survey-based estimates (a) significantly under-estimate the extent to which turnout would decline under a voluntary turnout regime; but (b) over-estimate the extent to which a fall in turnout would work to the advantage of the Coalition parties. Nonetheless, the larger of the Coalition parties --- the Liberal Party --- unambiguously increases its vote share under a wide range of assumptions about who does and doesn't voluntarily turn out.
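The "parametric bootstrap" named in the keywords is a general technique that can be sketched independently of this paper: fit a parametric model, simulate replicate datasets from the fitted model, and re-estimate on each replicate. A minimal sketch with invented survey-style data (the Bernoulli model and all numbers are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 1 = reports they would turn out voluntarily, 0 = would not.
data = rng.binomial(1, 0.85, size=500)

# Step 1: fit the parametric model (here a simple Bernoulli rate).
p_hat = data.mean()

# Step 2: simulate new datasets from the fitted model and re-estimate on each.
B = 2000
boot = np.array([rng.binomial(1, p_hat, size=data.size).mean() for _ in range(B)])

# Step 3: use the distribution of re-estimates to approximate sampling uncertainty.
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"estimate {p_hat:.3f}, 95% parametric-bootstrap CI ({lo:.3f}, {hi:.3f})")
```

Unlike the nonparametric bootstrap, the resamples here come from the fitted model rather than from the observed data, so the interval reflects uncertainty under that model's assumptions.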

2
Paper
The Changing Economic Preferences of the American Public: 1976-1991
Sekhon, Jasjeet

Uploaded 03-28-1997
Keywords Macroeconomic Preferences
Monetary Policy
Hermite Polynomials
Splines
Bootstrap
Abstract I show that the public does indeed have coherent preferences over macroeconomic tradeoffs, and these preferences have changed in ways consistent not only with economic theory but also with the changes that occurred in the American political system during the 1980s. In particular, most people learned something new about the state of the world in the late 1970s, and began to reject classical Keynesian explanations about economic reality. Individuals were becoming more sympathetic to the economic platform of the Republican party---i.e., they began to favour price stability. Moreover, the results support the notion that poor Americans do not hold government policy responsible for their personal economic plight (Hochschild 1981, Lane 1962).

3
Paper
Data Mining for Theorists
Kenkel, Brenton
Signorino, Curtis

Uploaded 07-26-2011
Keywords empirical implications of theoretical models
basis regression
adaptive lasso
bootstrap
functional form misspecification
Abstract Among those interested in statistically testing formal models, two approaches dominate. The structural estimation approach derives a structural probability model based on the formal model and then estimates parameters associated with that model. The reduced-form approach generally applies off-the-shelf techniques---such as OLS, logit, or probit---to test whether the independent variables are related to a decision variable according to the comparative statics predictions. We provide a new statistical method for the comparative statics approach. The decision variable of interest is modeled as a polynomial function of the available covariates, which allows for the nonmonotonic and interactive relationships commonly found in strategic choice data. We use the adaptive lasso to reduce the number of parameters and prevent overfitting, and we obtain measures of uncertainty via the nonparametric bootstrap. The method is "data mining" because the aim is to discover complex relationships in data without imposing a particular structure, but "for theorists" in that it was developed specifically to deal with the peculiar features of data on strategic choice. Using a Monte Carlo simulation, we show that the method handily outperforms other non-structural techniques in estimating a nonmonotonic relationship from strategic choice data.
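The pipeline the abstract describes (polynomial basis, adaptive lasso, nonparametric bootstrap) can be illustrated generically. This is a toy sketch under invented data, not the authors' method or code; the adaptive-lasso weights here come from a preliminary OLS fit, which is one common choice:

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)

# Hypothetical nonmonotonic relationship between a covariate and a decision variable.
x = rng.uniform(-2, 2, size=(300, 1))
y = np.sin(1.5 * x[:, 0]) + rng.normal(0, 0.3, size=300)

# Polynomial basis expansion (x, x^2, ..., x^6).
X = PolynomialFeatures(degree=6, include_bias=False).fit_transform(x)

# Adaptive lasso via rescaling: weight each column by its |OLS coefficient|,
# run an ordinary lasso on the rescaled design, then map coefficients back.
w = np.abs(LinearRegression().fit(X, y).coef_) + 1e-8
fit = Lasso(alpha=0.01, max_iter=20000).fit(X * w, y)
coef = fit.coef_ * w

# Nonparametric bootstrap: resample rows with replacement for uncertainty.
B = 200
boot = np.empty((B, X.shape[1]))
for b in range(B):
    idx = rng.integers(0, len(y), len(y))
    boot[b] = Lasso(alpha=0.01, max_iter=20000).fit(X[idx] * w, y[idx]).coef_ * w
se = boot.std(axis=0)
```

The column rescaling works because penalizing |beta_j|/|beta_hat_j| is equivalent to an ordinary lasso after multiplying column j by |beta_hat_j|; coefficients on terms with weak preliminary support are shrunk harder, letting the selected polynomial capture nonmonotonicity without overfitting.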

4
Paper
Unresponsive, Unpersuaded: The Unintended Consequences of Voter Persuasion Efforts
Bailey, Michael
Hopkins, Daniel
Rogers, Todd

Uploaded 08-09-2013
Keywords causal inference
field experiments
persuasion
attrition
multiple imputation
Approximate Bayesian Bootstrap
Abstract Can randomized experiments at the individual level help assess the persuasive effects of campaign tactics? In the contemporary U.S., vote choice is not observable, so one promising research design to assess persuasion involves randomizing appeals and then using a survey to measure vote intentions. Here, we analyze one such field experiment conducted during the 2008 presidential election in which 56,000 registered voters were assigned to persuasion in person, by phone, and/or by mail. Persuasive appeals by canvassers had two unintended consequences. First, they reduced responsiveness to the follow-up survey, lowering the response rate sharply among infrequent voters. Second, various statistical methods to address the resulting biases converge on a counter-intuitive conclusion: the persuasive canvassing reduced candidate support. Our results allow us to rule out even small effects in the intended direction, and illustrate the backlash that persuasion can engender.

5
Paper
Bayesian exploratory data analysis
Gelman, Andrew

Uploaded 02-11-2003
Keywords bootstrap
Fisher's exact test
graphics
mixture model
model checking
multiple imputation
prior predictive check
posterior predictive check
p-value
u-value
Abstract Exploratory data analysis (EDA) and Bayesian inference (or, more generally, complex statistical modeling)---which are generally considered as unrelated statistical paradigms---can be particularly effective in combination. In this paper, we present a Bayesian framework for EDA based on posterior predictive checks. We explain how posterior predictive simulations can be used to create reference distributions for EDA graphs, and how this approach resolves some theoretical problems in Bayesian data analysis. We show how the generalization of Bayesian inference to include replicated data $y^{\rm rep}$ and replicated parameters $\theta^{\rm rep}$ follows a long tradition of generalizations in Bayesian theory. On the theoretical level, we present a predictive Bayesian formulation of goodness-of-fit testing, distinguishing between $p$-values (posterior probabilities that specified antisymmetric discrepancy measures will exceed 0) and $u$-values (data summaries with uniform sampling distributions). We explain that $p$-values, unlike $u$-values, are Bayesian probability statements in that they condition on observed data. Having reviewed the general theoretical framework, we discuss the implications for statistical graphics and exploratory data analysis, with the goal being to unify exploratory data analysis with more formal statistical methods based on probability models. We interpret various graphical displays as posterior predictive checks and discuss how Bayesian inference can be used to determine reference distributions. The goal of this work is not to downgrade descriptive statistics, or to suggest they be replaced by Bayesian modeling, but rather to suggest how exploratory data analysis fits into the probability-modeling paradigm. We conclude with a discussion of the implications for practical Bayesian inference.
In particular, we anticipate that Bayesian software can be generalized to draw simulations of replicated data and parameters from their posterior predictive distribution, and these can in turn be used to calibrate EDA graphs.
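A posterior predictive check of the kind the abstract describes can be sketched generically: draw parameters from the posterior, simulate replicated data from them, and compare a discrepancy measure on the observed data against its replicated distribution. A toy sketch with an invented normal model (not one of the paper's examples):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical observed data, modeled as Normal(theta, 1) with a flat prior on theta.
y = rng.normal(0.3, 1.0, size=50)
n = y.size

# Under this model the posterior for theta is Normal(ybar, 1/n).
theta_post = rng.normal(y.mean(), 1 / np.sqrt(n), size=4000)

# Replicated data y_rep drawn from the posterior predictive distribution.
y_rep = rng.normal(theta_post[:, None], 1.0, size=(4000, n))

# Discrepancy measure: the sample maximum. The posterior predictive p-value is
# the probability that the replicated discrepancy exceeds the observed one.
p_value = (y_rep.max(axis=1) >= y.max()).mean()
```

In the graphical version of the check, histograms of `y_rep` draws serve as the reference distribution against which the observed-data graph is compared.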

6
Paper
Time Series Models for Compositional Data
Brandt, Patrick T.
Monroe, Burt L.
Williams, John T.

Uploaded 07-09-1999
Keywords compositional data
VAR
time series analysis
bootstrap
Monte Carlo simulation
macropartisanship
Abstract Who gets what? When? How? Data that tell us who got what are compositional data - they are proportions that sum to one. Political science is, unsurprisingly, replete with examples: vote shares, seat shares, budget shares, survey marginals, and so on. Data that also tell us when and how are compositional time series data. Standard time series models are often used, to detrimental consequence, to model compositional time series. We examine methods for modeling compositional data generating processes using vector autoregression (VAR). We then use such a method to reanalyze aggregate partisanship in the United States.

7
Paper
Learning in Campaigns: A Policy Moderating Model of Individual Contributions to House Candidates
Wand, Jonathan
Mebane, Walter R.

Uploaded 04-18-1999
Keywords FEC
campaign contributions
campaign finance
policy moderation
GLM
generalized linear model
negative binomial
time series
bootstrap
U.S. House of Representatives
1984 election
Abstract We propose a policy moderating model of individual campaign contributions to House campaigns. Based on a model that implies moderating behavior by voters, we hypothesize that individuals use expectations about the Presidential election outcome when deciding whether to donate money to a House candidate. Using daily campaign contributions data drawn from the FEC Itemized Contributions files for 1984, we estimate a generalized linear model for count data with serially correlated errors. We expand on previous empirical applications of this type of model by comparing standard errors derived from a sandwich estimator to confidence intervals produced by a nonparametric bootstrap.
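The comparison the abstract describes (analytic standard errors versus a nonparametric bootstrap) can be sketched on toy count data. The data and the plain-mean estimator here are invented for illustration; the actual analysis uses a negative binomial GLM with serially correlated errors:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical daily contribution counts (overdispersed relative to Poisson).
counts = rng.negative_binomial(n=2, p=0.4, size=365)

# Analytic normal-approximation CI for the mean daily rate.
mean = counts.mean()
se = counts.std(ddof=1) / np.sqrt(counts.size)
analytic_ci = (mean - 1.96 * se, mean + 1.96 * se)

# Nonparametric bootstrap: resample days with replacement, recompute the mean,
# and take percentiles of the resampled estimates.
B = 5000
boot_means = np.array([rng.choice(counts, size=counts.size).mean() for _ in range(B)])
boot_ci = tuple(np.percentile(boot_means, [2.5, 97.5]))
```

When the two intervals diverge, the bootstrap interval is usually the safer guide, since it does not lean on the normal approximation that the analytic interval assumes.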

8
Paper
Does Size Matter? Exploring the Small Sample Properties of Maximum Likelihood Estimation
Hart, Jr., Robert A.
Clark, David H.

Uploaded 04-20-1999
Keywords small samples
ML
Type II errors
bootstrap
Abstract The last two decades have witnessed an explosion in the use of computationally intensive methodologies in the social sciences as computer technology has advanced. Among these empirical methods are Maximum Likelihood (ML) procedures. ML estimators possess a variety of desirable qualities, perhaps most prominent of which is the asymptotic efficiency of the standard errors. However, the behavior of the estimators in general, of the estimates of the standard errors in particular, and thus of inferential hypothesis tests are uncertain in small sample analyses. In political science research, small samples are routinely the subject of empirical investigation using ML methods, yet little is known regarding what effect sample size has on a researcher's ability to draw inferences. This paper explores the behavior of ML estimates in probit models across differing sample sizes and with varying numbers of independent variables in Monte Carlo simulations. Our experimental results allow us to conclude that: a) the risk of making Type I errors does not change appreciably as sample size descends; b) the risk of making Type II errors increases dramatically in smaller samples and as the number of regressors increases.

9
Paper
The Robustness of Normal-theory LISREL Models: Tests Using a New Optimizer, the Bootstrap, and Sampling Experiments, with Applications
Mebane, Walter R.
Sekhon, Jasjeet
Wells, Martin T.

Uploaded 01-01-1995
Keywords statistics
estimation
covariance structures
linear structural relations
LISREL
bootstrap
confidence intervals
BCa
specification tests
goodness-of-fit
hypothesis tests
optimization
evolutionary programming
genetic algorithms
monte carlo
sampling experiment
Abstract Asymptotic results from theoretical statistics show that the linear structural relations (LISREL) covariance structure model is robust to many kinds of departures from multivariate normality in the observed data. But close examination of the statistical theory suggests that the kinds of hypotheses about alternative models that are most often of interest in political science research are not covered by the nice robustness results. The typical size of political science data samples also raises questions about the applicability of the asymptotic normal theory. We present results from a Monte Carlo sampling experiment and from analysis of two real data sets both to illustrate the robustness results and to demonstrate how it is unwise to rely on them in substantive political science research. We propose new methods using the bootstrap to assess more accurately the distributions of parameter estimates and test statistics for the LISREL model. To implement the bootstrap we use optimization software two of us have developed, incorporating the quasi-Newton BFGS method in an evolutionary programming algorithm. We describe methods for drawing inferences about LISREL models that are much more reliable than the asymptotic normal-theory techniques. The methods we propose are implemented using the new software we have developed. Our bootstrap and optimization methods allow model assessment and model selection to use well understood statistical principles such as classical hypothesis testing.
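The BCa (bias-corrected and accelerated) intervals listed among the keywords are available generically off the shelf. A minimal sketch with invented skewed data, not the authors' LISREL-specific implementation (which pairs the bootstrap with a BFGS/evolutionary-programming optimizer):

```python
import numpy as np
from scipy.stats import bootstrap

rng = np.random.default_rng(4)

# Hypothetical right-skewed sample, where BCa's bias and acceleration
# corrections matter more than for symmetric sampling distributions.
sample = rng.lognormal(mean=0.0, sigma=0.8, size=200)

# BCa bootstrap interval for the mean.
res = bootstrap((sample,), np.mean, confidence_level=0.95,
                method='BCa', n_resamples=2000, random_state=rng)
ci = (res.confidence_interval.low, res.confidence_interval.high)
```

Relative to a simple percentile interval, BCa shifts the endpoints to correct for median bias and for the dependence of the statistic's variance on the parameter, which is what makes it more reliable when the normal-theory approximations the abstract criticizes break down.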

10
Paper
Bootstrap Methods for Non-nested Hypothesis Tests
Mebane, Walter R.
Sekhon, Jasjeet

Uploaded 07-20-1996
Keywords Cox Test
Bootstrap
LISREL
Endogenous Switching Regression
Tobit-Style Censoring
Abstract Cox (1961; 1962) proposed a fairly general method that can be used to construct powerful tests of alternative hypotheses from separate statistical families. We prove that non-parametric bootstrap methods can produce consistent and second-order correct approximations to the distribution of the Cox statistic for non-nested LISREL-style covariance structure models. We use the method to investigate a question about the specification of a LISREL model used by Kinder, Adams and Gronke (1989). In a second application---a pair of non-nested endogenous switching regression models with tobit-style censoring, applied to real data---we illustrate how bootstrap calibration can be used to correct the size of the test when the test distribution is being estimated by Monte Carlo simulation due to concern about nonregularity.

11
Paper
Generic Tests for a Nonlinear Model of Congressional Campaign Dynamics
Mebane, Walter R.

Uploaded 08-25-1996
Keywords Congress
elections
campaigns
differential equations
Hopf bifurcation
non-nested hypothesis tests
Cox tests
bootstrap
nonlinear models
Abstract I develop a statistical model based on a generic third-order Taylor series approximation for differential equation systems that exhibit Hopf bifurcation in order to use district-level cross-sectional data to test a nonlinear dynamic formal model of campaign contributions, district service and voting during and after a U.S. House election. The statistical model represents the key nonlinearities of the formal model's Cournot-Nash equilibrium in a highly robust fashion. For data from the years 1984--85 and 1986--87, non-nested hypothesis tests (implemented using a calibrated, parametric bootstrap method) show that under assumptions of multivariate normality, the nonlinear model is vastly superior to the generic linear alternative defined by the sample mean vector and covariance matrix.

13
Paper
The Economic Sophistication of Public Opinion in the United States
Sekhon, Jasjeet

Uploaded 09-18-1997
Keywords Public Opinion
Economic Sophistication
Survey of Consumer Attitudes and Behavior (SCAB)
Natural Rate of Unemployment
NAIRU
Unemployment
Bootstrap
Bootstrap Confidence Region
Abstract I show that the public does indeed have coherent and sophisticated reactions to macroeconomic variables. These reactions are consistent with economic theory. Individuals form evaluations and expectations in a way that is sensitive to the complex trade-off between unemployment and inflation as determined by the nonaccelerating inflation rate of unemployment (NAIRU). The primary dataset used in this analysis has 69,680 observations and is compiled by merging 113 individual level "Surveys of Consumer Attitudes and Behavior" from 1976:01 to 1991:12. The data analysis makes extensive use of bootstrap methods to create confidence regions and to conduct hypothesis tests.

14
Paper
Precise, Second-Order Correct Estimates of the Natural Rate of Unemployment via Bootstrap Calibration
Sekhon, Jasjeet

Uploaded 09-11-1997
Keywords Natural Rate of Unemployment
NAIRU
Unemployment
Bootstrap
Calibration
Confidence Interval
Likelihood
Marginal Likelihood
Conditional Likelihood
Profile Likelihood
Abstract The natural rate of unemployment, which is usually considered to be the nonaccelerating inflation rate of unemployment (NAIRU), is an important and often used economic variable (Gordon 1997). In this paper I present second-order correct $O_{p}(n^{-2})$ confidence regions for estimates of the NAIRU, obtained via bootstrap calibration. These confidence regions are three times smaller than those provided in recent econometric work (Staiger, Stock and Watson 1997a, 1997b). The confidence regions are sufficiently precise to support use of the NAIRU for a variety of analytical and policy purposes, including monetary policy, as determined by a criterion suggested by Krueger (1997).
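Bootstrap calibration, in its generic double-bootstrap form, adjusts the nominal level of an interval so that its estimated actual coverage matches the target. The sketch below uses invented normal data and a sample mean as the statistic, not the paper's NAIRU model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical sample of a rate-like quantity (a stand-in for a NAIRU estimate).
data = rng.normal(6.0, 1.5, size=100)
n = data.size
theta_hat = data.mean()

# Double bootstrap: for each outer resample, record where theta_hat falls in
# that resample's inner bootstrap distribution of the statistic.
B1, B2 = 300, 200
u = np.empty(B1)
for b in range(B1):
    outer = rng.choice(data, size=n)
    inner = np.array([rng.choice(outer, size=n).mean() for _ in range(B2)])
    u[b] = (inner <= theta_hat).mean()

# A nominal-level lam percentile interval covers theta_hat in an outer resample
# exactly when u falls in [(1-lam)/2, (1+lam)/2]; calibrate lam so the
# estimated actual coverage hits the 95% target.
grid = np.linspace(0.80, 0.999, 200)
coverage = np.array([((u >= (1 - g) / 2) & (u <= (1 + g) / 2)).mean() for g in grid])
lam_star = grid[np.argmin(np.abs(coverage - 0.95))]

# Final calibrated interval: an ordinary bootstrap at the adjusted level.
boot = np.array([rng.choice(data, size=n).mean() for _ in range(2000)])
lo, hi = np.percentile(boot, [100 * (1 - lam_star) / 2, 100 * (1 + lam_star) / 2])
```

Calibration is what buys the second-order correctness claimed in the title: the first-order error in the percentile interval's coverage is itself estimated by the inner bootstrap and removed by adjusting the nominal level.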

