Results below are based on the search criterion 'heteroskedasticity'
Total number of records returned: 5
Policies, Prototypes, and Presidential Approval
American presidents, like all democratic political leaders, rely on popular support to promote their political agendas, gain legislative victories, and succeed at the ballot box. Extant studies of approval, however, focus resolutely on aggregate values, while individual-level determinants, and variation, have been ignored. This paper redresses that imbalance and, in doing so, speaks to outstanding questions in studies of presidential approval. Individual-level presidential approval is a product of three dimensions of evaluation: prospective policy judgements (what are you going to do for me tomorrow?), retrospective assessments (what did you do for me yesterday?), and personality assessments (what kind of leader are you?). In addition, the model draws on new models of uncertainty in the survey response, testing the hypothesis that weaker partisan attachments and lower levels of chronic political information lead to greater uncertainty about presidential performance. The model is tested using NES studies from 1980-1996. The overall performance is superior (explaining 40-70% of the variance in individual scores), and the primary hypotheses are confirmed. Retrospective, but not prospective, judgements drive individual-level presidential approval, challenging the "bankers" model of approval, and personality assessments play a central role in approval. Finally, strong evidence of heteroskedasticity is found in the approval models: political information, interest in the presidential contest, and strength of partisan attachments all lead to lower levels of uncertainty. Variations in the role played by party during the Reagan years, compared to the presidencies of Carter, Bush, and Clinton, suggest a complex interaction between partisan ties, presidential performance, and the personality of the particular individual occupying the Oval Office.
Information and American Attitudes Toward Bureaucracy
Alvarez, R. Michael
Internal Revenue Service
The exploration of American attitudes towards the Internal Revenue Service joins an unusual pair of research domains: public opinion and public administration. Public administration scholars contend that the hostility Americans show towards "bureaucracy" stems from the contradictory expectations Americans have for bureaucratic performance. Drawing upon a survey commissioned by the IRS and conducted in 1987 just after the passage of the Tax Reform Act, we explore attitudes towards the performance of the IRS in eight categories. Using a new heteroskedastic ordinal logit technique, we demonstrate (1) that it is overwhelmingly a single expectation of flexibility that governs attitudes towards the IRS; (2) that these expectations are not in contradiction; and (3) that domain-specific information sharply focuses respondent attitudes towards bureaucracy.
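A heteroskedastic ordinal model of this kind can be sketched by letting one covariate shift the latent mean and another rescale the latent error. The sketch below uses simulated data and illustrative variable names (nothing here comes from the IRS survey or the paper's technique beyond the general model family), fitting a heteroskedastic ordered logit by maximum likelihood:

```python
# Hedged sketch of a heteroskedastic ordered logit on simulated data.
# All parameter values and variable names are assumptions for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 4000
x = rng.normal(size=n)            # covariate in the mean equation
z = rng.normal(size=n)            # covariate in the variance equation
beta, gamma = 1.0, 0.5
tau = np.array([-0.5, 1.0])       # cutpoints separating the 3 categories

# Latent y* = beta*x + sigma*u, with logistic error u and sigma = exp(gamma*z)
ystar = beta * x + np.exp(gamma * z) * rng.logistic(size=n)
y = np.digitize(ystar, tau)       # observed ordinal response: 0, 1, or 2

def logistic_cdf(v):
    return 1.0 / (1.0 + np.exp(-v))

def negll(theta):
    b, g, t1, d = theta
    t2 = t1 + np.exp(d)           # reparametrize to keep cutpoints ordered
    s = np.exp(g * z)
    p0 = logistic_cdf((t1 - b * x) / s)
    p1 = logistic_cdf((t2 - b * x) / s) - p0
    p2 = 1.0 - p0 - p1
    probs = np.where(y == 0, p0, np.where(y == 1, p1, p2))
    return -np.sum(np.log(np.clip(probs, 1e-12, None)))

res = minimize(negll, x0=np.array([0.5, 0.0, 0.0, 0.0]), method="BFGS")
print(res.x)  # estimates of (beta, gamma, tau_1, log(tau_2 - tau_1))
```

The key difference from an ordinary ordered logit is the division of the index by exp(gamma*z): when gamma is positive, respondents with high z give noisier, less focused answers.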
Heterogeneity and Bias in Models of Vote Choice
Voters in the United States do not behave in a homogeneous manner. Voting models typically account for such heterogeneity by seeking to decompose the process of vote choice into a number of distinct components. By examining voting choice data in this way, researchers are able to obtain reasonable estimates of the average effect of various socio-economic and political variables on the candidate selection process. Models of this sort, while plausible, may not properly reflect the true heterogeneity of the American voter. At their core, simple models assume that voters use a common and uniform decision rule when deciding between candidates. But it is possible, if not likely, that different groups and classes of citizens use differently structured processes to determine their choice of candidates. Researchers have attempted to account for this heterogeneity in a variety of ways. Rivers (1988) and Jackson (1992), for example, have accounted for differences in the voting behavior of individuals by allowing the mean effect of theoretically important variables to vary across individuals. While these approaches are extremely promising, in this paper I take a different approach and examine three more subtle forms of heterogeneity in the vote choice process: (1) heterogeneity induced by non-random selection from the full population of citizens into the vote choice model sample; (2) heterogeneity due to the interaction of selection bias and non-constant variance; and (3) heterogeneity in the patterns of missing data across groups of respondents. While much of the discussion in the paper focuses on the first two forms of heterogeneity, it is the third form, one not typically addressed in the political science literature, that is the most important determinant of the degree of bias in vote choice models. Thus, heterogeneity within the sample of respondents does affect the vote choice model estimates, just not in the way I originally envisioned.
It is not just heterogeneity in the variance term, or in selection into the vote choice process, that poses a threat to accurate estimates of the power of the predictors in our vote choice models. Rather, it is the failure to preserve or account for the heterogeneity of the paths by which people answer survey questions that is the real bogeyman of vote choice models.
Difficult Choices: An Evaluation of Heterogeneous Choice Models
Park, David K.
While the derivation and estimation of heterogeneous choice models appears straightforward, the properties of such models are not well understood. Using a series of Monte Carlo experiments, we focus on the properties of both heteroskedastic probit and heteroskedastic ordered probit models. We also test how robust these models are to both specification and measurement error. We find that estimates in heterogeneous choice models tend to be biased in all but ideal conditions, and can often lead to incorrect inferences.
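A single replication of such an experiment can be sketched as follows. This is an illustrative heteroskedastic probit on simulated data, not the authors' Monte Carlo design; all parameter values and names are assumptions:

```python
# Hedged sketch: simulate from a heteroskedastic probit and recover the
# parameters by maximum likelihood. Under correct specification (as here),
# the estimates should be close to the truth; the paper's point is that
# this breaks down under specification and measurement error.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)            # covariate in the choice (mean) equation
z = rng.normal(size=n)            # covariate in the variance equation
b0, b1, g1 = 0.5, 1.0, 0.5        # true parameters (illustrative)

# Latent y* = b0 + b1*x + e, with sd(e) = exp(g1*z); observe y = 1[y* > 0]
y = (b0 + b1 * x + np.exp(g1 * z) * rng.normal(size=n) > 0).astype(float)

def negll(theta):
    a, b, g = theta
    p = norm.cdf((a + b * x) / np.exp(g * z))  # index scaled by variance model
    p = np.clip(p, 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

res = minimize(negll, x0=np.zeros(3), method="BFGS")
print(res.x)  # estimates of (b0, b1, g1)
```

A full Monte Carlo study wraps this in a loop over many simulated datasets, varying the data-generating process to probe robustness, and tabulates bias in the estimates.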
How Robust Standard Errors Expose Methodological Problems They Do Not Fix
robust standard errors
clustered standard errors
heteroskedasticity-consistent standard errors
"Robust standard errors'' are used in a vast array of scholarship across all fields of empirical political science and most other social science disciplines. The popularity of this procedure stems from the fact that estimators of certain quantities in some models can be consistently estimated even under particular types of misspecification; and although classical standard errors are inconsistent in these situations, robust standard errors can sometimes be consistent. However, in applications where misspecification is bad enough to make classical and robust standard errors diverge, assuming that misspecification is nevertheless not so bad as to bias everything else requires considerable optimism. And even if the optimism is warranted, we show that settling for a misspecified model (even with robust standard errors) can be a big mistake, in that all but a few quantities of interest will be impossible to estimate (or simulate) from the model without bias. We suggest a different practice: Recognize that differences between robust and classical standard errors are like canaries in the coal mine, providing clear indications that your model is misspecified and your inferences are likely biased. At that point, it is often straightforward to use some of the numerous and venerable model checking diagnostics to locate the source of the problem, and then modern approaches to choosing a better model. With a variety of real examples, we demonstrate that following these procedures can drastically reduce biases, improve statistical inferences, and change substantive conclusions.