Results below are based on the search criterion 'optimization'
Total number of records returned: 7
Genetic Matching for Estimating Causal Effects: A General Multivariate Matching Method for Achieving Balance in Observational Studies
Genetic matching is a new method for performing multivariate matching that uses an evolutionary search algorithm to determine the weight each covariate is given. The method utilizes an evolutionary algorithm developed by Mebane and Sekhon (1998; Sekhon and Mebane 1998) that maximizes the balance of observed potential confounders across matched treated and control units. The method is nonparametric and does not depend on knowing or estimating the propensity score, but the method is greatly improved when a known or estimated propensity score is incorporated. Genetic matching reliably reduces both the bias and the mean square error of the estimated causal effect even when the property of equal percent bias reduction (EPBR) does not hold. When this property does not hold, matching methods such as Mahalanobis distance and propensity score matching often perform poorly. Even if the EPBR property does hold and the propensity score is correctly specified, in finite samples, estimates based on genetic matching have lower mean square error than those based on the usual matching methods. We present a reanalysis of the LaLonde (1986) job training dataset, which demonstrates the benefits of genetic matching and which helps to resolve a longstanding debate between Dehejia and Wahba (1999, 2002; Dehejia 2005) and Smith and Todd (2001, 2005a,b) over the ability of matching to overcome LaLonde's critique of nonexperimental estimators. Monte Carlo simulations are also presented to demonstrate the properties of our method.
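The core idea, searching over covariate weights to maximize post-matching balance, can be sketched in a few lines. This is a toy illustration with a naive accept-if-better mutation loop and made-up data, not the authors' algorithm (which is implemented in the R Matching package):

```python
import numpy as np

rng = np.random.default_rng(0)

def match_and_imbalance(Xt, Xc, w):
    """Nearest-neighbor match controls to treated under weights w;
    return the worst absolute standardized mean difference."""
    d = (np.abs(Xt[:, None, :] - Xc[None, :, :]) * w) ** 2
    idx = d.sum(axis=2).argmin(axis=1)  # closest control for each treated unit
    diff = Xt.mean(axis=0) - Xc[idx].mean(axis=0)
    return np.max(np.abs(diff) / (Xt.std(axis=0) + 1e-9))

# toy data: 30 treated and 100 control units, 3 covariates, shifted means
Xt = rng.normal(0.5, 1.0, size=(30, 3))
Xc = rng.normal(0.0, 1.0, size=(100, 3))

# crude evolutionary search: mutate the incumbent weights, keep improvements
best_w = np.ones(3)
best = match_and_imbalance(Xt, Xc, best_w)
for _ in range(200):
    cand = np.abs(best_w * np.exp(rng.normal(0, 0.3, size=3)))
    score = match_and_imbalance(Xt, Xc, cand)
    if score < best:
        best, best_w = score, cand

# prints True: the search starts from equal weights and only accepts improvements
print(best <= match_and_imbalance(Xt, Xc, np.ones(3)))
```

Because the search only ever accepts weight vectors that improve the balance objective, it can do no worse than unweighted (equal-weight) Mahalanobis matching on this criterion.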
Endogeneity in Probit Response Models
In this paper, we look at conventional methods for removing endogeneity bias in regression models, including the linear model and the probit model. The usual Heckman two-step procedure should not be used in the probit model: from a theoretical perspective, this procedure is unsatisfactory, and likelihood methods are superior. However, serious numerical problems occur when standard software packages try to maximize the biprobit likelihood function, even if the number of covariates is small. The log likelihood surface may be nearly flat, or may have saddle points with one small positive eigenvalue and several large negative eigenvalues. We draw conclusions for statistical practice. Finally, we describe the conditions under which parameters in the model are identifiable; these results appear to be new.
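The saddle-point pathology described above can be diagnosed by inspecting the eigenvalues of the Hessian at the point where the optimizer stopped: at a genuine local maximum of a log likelihood all eigenvalues are negative, while a saddle point mixes signs. A generic finite-difference check (illustrative code, not the authors'; the test surface is a textbook saddle, not a biprobit likelihood):

```python
import numpy as np

def hessian(f, x, h=1e-5):
    """Central-difference Hessian of a scalar function f at point x."""
    n = len(x)
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            xpp = x.copy(); xpp[i] += h; xpp[j] += h
            xpm = x.copy(); xpm[i] += h; xpm[j] -= h
            xmp = x.copy(); xmp[i] -= h; xmp[j] += h
            xmm = x.copy(); xmm[i] -= h; xmm[j] -= h
            H[i, j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4 * h * h)
    return H

# a surface with a saddle at the origin: eigenvalues of mixed sign
f = lambda x: x[0] ** 2 - x[1] ** 2
eig = np.linalg.eigvalsh(hessian(f, np.zeros(2)))
is_maximum = bool(np.all(eig < 0))
print(eig, is_maximum)  # mixed signs, so the origin is a saddle, not a maximum
```

Running the same check on the converged biprobit parameters would reveal whether the optimizer stopped at a maximum or merely at a stationary point.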
Entropy optimization: computer implementation of the MaxEnt and MinxEnt principles
The entropy optimization principles MaxEnt of Jaynes (1957a,b) and MinxEnt of Kullback (1959) can be used in a variety of scientific fields. Among many possible applications, the principles are suitable to tackle the ecological inference problem that often shows up in social science research. Formally, both principles involve the constrained optimization of entropy measures that are intrinsically nonlinear functions of probabilities. Since each is a nonlinear programming problem, the solutions to both depend on iterative search algorithms. In addition, the constraints that probabilities are non-negative and sum to one restrict the solution space in a particular way. The paper presents in detail a computationally efficient implementation of those two principles in the linearly constrained case that includes a prior check for the existence of a solution to the optimization problems. A description of the algorithm, given with the aid of two flowcharts, allows interested researchers to develop computer codes in practically any language. The authors also make available their own easy-to-use codes written in MATLAB.
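A minimal linearly constrained MaxEnt sketch, using scipy rather than the authors' MATLAB code: maximize the Shannon entropy of a distribution subject to the probabilities summing to one and matching a known mean. The support and the target mean below (Jaynes's loaded-die example) are illustrative choices, not from the paper:

```python
import numpy as np
from scipy.optimize import minimize

x = np.arange(1, 7)        # support: faces of a die
target_mean = 4.5          # observed moment constraint

# minimize negative entropy, i.e. maximize entropy
neg_entropy = lambda p: np.sum(p * np.log(np.clip(p, 1e-12, None)))
cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},      # probabilities sum to 1
        {"type": "eq", "fun": lambda p: p @ x - target_mean}]  # mean constraint
res = minimize(neg_entropy, np.full(6, 1 / 6), method="SLSQP",
               bounds=[(0, 1)] * 6, constraints=cons)
p = res.x
print(p.round(3))  # probabilities tilt toward higher faces to meet the mean
```

The solution has the familiar exponential-family form p_i proportional to exp(lambda * x_i), which is why the fitted probabilities increase monotonically across the support here.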
Practical Maximum Likelihood
McDonald, Michael P.
Maximum likelihood estimation is now widely used in political science, providing a general statistical framework in which we build and test increasingly complex models of politics. The modern development of maximum likelihood is attributable to Fisher, and the approach dominated mathematical statistics during the twentieth century. More attention has been paid to the development of complex statistical models than to the necessary details of their estimation. In this article we discuss some of the art and practice of MLE:
- Estimation: We discuss how to choose algorithms for MLE, methods for setting algorithm parameters appropriately, and how to formulate likelihood functions for efficient and accurate estimation.
- Tests of Estimation: Methods of statistical inference assume that a global maximum of the likelihood function has been found. There are, however, few general guarantees that likelihood functions are single-peaked. Furthermore, no MLE software currently in use by political scientists verifies that a global maximum of the likelihood function has been reached. We provide tests of global optimality, drawing from current research in statistics, econometrics, and computer science.
- MLE-Based Inference: Standard errors produced by MLEs can be misleading, and lead to unreliable inferences, when the likelihood function is not well behaved around its maximum. We illustrate the consequences of unreliable methods, and discuss more robust methods of calculating standard errors.
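One informal check in the spirit of the global-optimality concerns above: restart the optimizer from dispersed starting values and compare the optima found. This sketch uses a deliberately multimodal objective standing in for a negative log likelihood, and scipy's BFGS standing in for whatever MLE routine is used; none of it is from the article itself:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# stand-in "negative log likelihood" with a local and a global minimum
f = lambda t: (t[0] ** 2 - 4) ** 2 + t[0]  # minima near t = +2 and t = -2

# 20 restarts from dispersed starting values
solutions = [minimize(f, rng.uniform(-4, 4, size=1), method="BFGS").x[0]
             for _ in range(20)]
values = sorted(f([s]) for s in solutions)
print(f"spread of optima found: {values[-1] - values[0]:.3f}")
# a large spread warns that a single run may have stopped at a local optimum
```

If every restart returned the same objective value, that would be (weak) evidence of a single peak; here the spread exposes the second basin immediately.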
Legislator Quality and Campaign Contributions
Mebane, Walter R.
Ratkovic, Marc T.
Tofias, Michael W.
U.S. House of Representatives
constrained nonlinear optimization
political action committees
We introduce a simple theoretical model of the relationship between the campaign contributions a legislator receives from a PAC and the amount of "service" the legislator provides to the PAC, a key assumption being that the marginal cost of service decreases as the quality of the legislator increases. Optimal solution of the constrained optimization problem that each PAC faces in allocating its campaign contributions among legislators implies a conditional two-limit tobit model for the relationship between contributions and aspects of the quality of each legislator. The constraints arise because PAC contributions must be positive but no greater than a legal limit and because each PAC's budget for contributions is finite. We extend the tobit model to support pooling data from several similar PACs. We estimate the empirical model using data from the U.S. House of Representatives. The fact that optimal PAC behavior implies censoring suggests that it is usually inappropriate to aggregate contributions from different PACs; but pooling can work well.
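The two-limit tobit structure can be written down directly: a latent contribution y* = xb + e is observed as 0 below zero, as the legal cap L above it, and as y* in between. A sketch of the log likelihood on simulated data (variable names, the cap value, and the toy parameters are ours, not the authors'):

```python
import numpy as np
from scipy.stats import norm

def two_limit_tobit_loglik(beta, sigma, x, y, L):
    """Log likelihood of a two-limit tobit with censoring at 0 and at L."""
    mu = x @ beta
    ll = np.where(y <= 0, norm.logcdf(-mu / sigma),              # censored at 0
         np.where(y >= L, norm.logsf((L - mu) / sigma),          # censored at the cap
                  norm.logpdf((y - mu) / sigma) - np.log(sigma)))  # interior
    return ll.sum()

# toy check: simulate latent contributions, clip to [0, cap], evaluate at truth
rng = np.random.default_rng(2)
x = np.column_stack([np.ones(500), rng.normal(size=500)])
ystar = x @ np.array([2000.0, 3000.0]) + rng.normal(0, 2500, size=500)
y = np.clip(ystar, 0, 10_000)  # hypothetical $10,000 contribution cap
print(two_limit_tobit_loglik(np.array([2000.0, 3000.0]), 2500.0, x, y, 10_000))
```

Maximizing this function over beta and sigma (e.g. with scipy.optimize.minimize on its negative) would recover the estimates; the censored-probability terms are exactly what makes naive aggregation across PACs with different caps inappropriate.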
The Robustness of Normal-theory LISREL Models: Tests Using a New Optimizer, the Bootstrap, and Sampling Experiments, with Applications
Mebane, Walter R.
Wells, Martin T.
linear structural relations
Asymptotic results from theoretical statistics show that the linear structural relations (LISREL) covariance structure model is robust to many kinds of departures from multivariate normality in the observed data. But close examination of the statistical theory suggests that the kinds of hypotheses about alternative models that are most often of interest in political science research are not covered by the nice robustness results. The typical size of political science data samples also raises questions about the applicability of the asymptotic normal theory. We present results from a Monte Carlo sampling experiment and from analysis of two real data sets both to illustrate the robustness results and to demonstrate why it is unwise to rely on them in substantive political science research. We propose new methods using the bootstrap to assess more accurately the distributions of parameter estimates and test statistics for the LISREL model. To implement the bootstrap we use optimization software two of us have developed, incorporating the quasi-Newton BFGS method in an evolutionary programming algorithm. We describe methods for drawing inferences about LISREL models that are much more reliable than the asymptotic normal-theory techniques. The methods we propose are implemented using the new software we have developed. Our bootstrap and optimization methods allow model assessment and model selection to use well understood statistical principles such as classical hypothesis testing.
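The bootstrap idea in the abstract, stripped to its essentials: resample the data with replacement, re-estimate, and use the empirical distribution of the estimates for inference. This sketch bootstraps a trivial sample mean rather than a LISREL model, purely to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(1.0, 2.0, size=200)  # toy sample standing in for real data

# re-estimate on 2000 resamples drawn with replacement
boot = np.array([rng.choice(data, size=data.size, replace=True).mean()
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])  # percentile confidence interval
print(f"95% bootstrap CI for the mean: ({lo:.2f}, {hi:.2f})")
```

For a LISREL model the "re-estimate" step is a full covariance-structure fit per resample, which is exactly why a robust optimizer matters: a single non-convergent or saddle-point fit among thousands of replications can silently distort the bootstrap distribution.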
The Structure of Signaling: A Combinatorial Optimization Model with Network-Dependent Estimation
Esterling, Kevin M.
This paper examines the relationship between lobbyists' contact-making behavior and their long-term access to the government. Specifically: 1) Do lobbyists establish social contacts in an individually rational manner to best receive information from each other? And, 2) does the resulting network position condition their access to the government? We begin by wedding rational choice models to network analysis with a formal model of lobbyists' choice of contacts in a network, adopting the classic combinatorial optimization approach of Boorman (1975). The model predicts that when the demand for political information is low, a cocktail equilibrium prevails: lobbyists will invest their time in gaining "weak tie" acquaintances rather than in gaining "strong tie" trusted partners. When the demand for information in a policy domain is high, then both cocktail equilibria and "chum" equilibria (all strong-tie networks) prevail. We then turn to an empirical analysis of lobbyist contact-making and access, using the data of Laumann and Knoke in The Organizational State. We analyze the communication structure of the policy domains in health policy, using count data models that are adjusted for "structural autocorrelation" by the networks we study. The results support the cocktail equilibrium hypothesis, and offer a result that portends rich questions for future research: Washington lobbyists appear to overinvest in strong ties, in general reducing their credibility with the government in the long term, as well as reducing the informational efficiency of the overall communication network.