Results below are based on the search criterion 'replication'
Total number of records returned: 6
Verifying Evidence of "Congressional Enactments of Race-Gender"
Grant, J. Tobin
I report the results of a verification of Hawkesworth's 2003 "Congressional Enactments of Race-Gender" (CERG). This is a landmark analysis of race and gender in the U.S. Congress that is noteworthy for both its theory and its empirical evidence. A deeper look at the evidence and the context raises fundamental questions about the empirical validity of CERG's theory of race-gender in Congress. I conclude that racing-gendering in Congress is more nuanced than originally presented in CERG, and that further research is necessary to demonstrate empirically CERG's theory of Congress as a raced-gendered institution. This verification has important methodological implications, as it demonstrates why verification of empirical research, including interpretive research, should be a widely practiced methodology within political science.
An Introduction to the Dataverse Network as an Infrastructure for Data Sharing
We introduce a set of integrated developments in web application software, networking, data citation standards, and statistical methods designed to increase scholarly recognition for data contributions; put some of the universe of data and data sharing practices on firmer ground; and facilitate the public distribution of persistent, authorized, and verifiable data, with powerful and easy-to-use technology, even when the data are confidential or proprietary. Our goal is to solve some of the political and sociological problems of data sharing via technological means, with the result intended to benefit both the scientific community and the sometimes apparently contradictory goals of individual researchers. (More information on this project is available at http://TheData.org.)
The Robustness of Statistical Abstractions: A Look Under the Hood of Statistical Models and Software
McDonald, Michael P.
Models rest upon abstractions that are accepted, prima facie, as routine and robust. In the course of model development and estimation, political methodologists routinely assume the reliability of the computational abstractions that they use. All computational abstractions can fall short in their implementation; some because their implementation is complicated and the precision of computation is limited, others because they assume knowledge of solutions to mathematical problems that are inherently difficult to solve. We measure the accuracy of statistical abstractions as implemented in statistical packages popular among political methodologists, such as Gauss, Stata, SST, and Excel. We evaluate the use of these abstractions in the context of evaluating complex statistical procedures, such as Jonathan Nagler's (1994) scobit estimator and Gary King's (1997) solution to ecological inference. We find that widely used implementations of common statistical abstractions are at times prone to error. We show that errors in inference in complex models can result from failures to understand the implementation and limitations of computation. We then offer tools to test statistical results to improve the accuracy of many statistical implementations and to test the implementation-robustness of many statistical results. We conclude by offering recommendations to help users of statistical software avoid the pitfalls of computational abstractions and offer guidelines to aid replication.
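The kind of precision failure the abstract describes can be sketched in a few lines (an illustrative example, not code from the paper; the function names are ours). The one-pass "textbook" variance formula subtracts two large, nearly equal quantities and can lose every significant digit, while the algebraically equivalent two-pass formula is stable:

```python
def naive_variance(xs):
    # One-pass "textbook" formula: E[x^2] - (E[x])^2.
    # Subtracting two huge, nearly equal numbers cancels catastrophically.
    n = len(xs)
    return sum(x * x for x in xs) / n - (sum(xs) / n) ** 2

def two_pass_variance(xs):
    # Two-pass formula: compute the mean first, then square the
    # (small) deviations. Numerically stable.
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / n

# Small spread around a large mean; true population variance is 2/3.
data = [1e9, 1e9 + 1, 1e9 + 2]
print(naive_variance(data))     # -> 0.0 (all precision lost)
print(two_pass_variance(data))  # -> 0.6666666666666666
```

Both functions implement the same mathematical abstraction; only the implementation differs, which is exactly the gap between a model on paper and a model in software that the paper investigates.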
Economic Conditions and Presidential Elections
Willette, Jennifer R.
One of the more robust findings over the last 50 years in research on elections has been the effect of macroeconomic conditions on voting in U.S. presidential elections. An important contribution to that literature was made by Steven Weatherford in a 1978 article demonstrating that working-class voters are more sensitive to economic conditions than are middle-class voters in their vote choice. Weatherford's result was based on the 1956 through 1960 elections. We replicate Weatherford's result for 1960, and show that the substantive finding is extremely sensitive to the definition of class. When using occupation groups as the measure of class, we are able to essentially replicate Weatherford's result. However, using income as the measure of class we do not find any evidence to support the same finding for 1960. We then extend the analysis to cover the period 1956 through 1996 using both an income-based measure of class and an occupation-based measure of class. We show that there does not appear to be a clear pattern distinguishing levels of economic voting between working-class and middle-class voters, though using the occupation-based measure working-class voters appear more sensitive to the economy in recent elections. Finally, we offer a new theory of economic voting. We propose that voters vote based on the economic performance of their economic reference group, rather than on their own personal finances or on the state of the national economy.
I show herein how to write a publishable paper by beginning with the replication of a published article. This strategy seems to work well for class projects in producing papers that ultimately get published, helping to professionalize students into the discipline, and teaching them the scientific norms of the free exchange of academic information. I begin by briefly revisiting the prominent debate on replication our discipline had a decade ago and some of the progress made in data sharing since. (This paper is forthcoming in PS: Political Science and Politics. The current version is available at http://gking.harvard.edu.)
The difference between "significant" and "not significant" is not itself statistically significant
A common error in statistical analyses is to summarize comparisons by declarations of statistical significance or non-significance. There are a number of difficulties with this approach. First is the oft-cited dictum that statistical significance is not the same as practical significance. Another difficulty is that this dichotomization into significant and non-significant results encourages the dismissal of observed differences in favor of the usually less interesting null hypothesis of no difference. Here, we focus on a less commonly noted problem, namely that changes in statistical significance are not themselves significant. By this, we are not merely making the commonplace observation that any particular threshold is arbitrary; for example, only a small change is required to move an estimate from a 5.1% significance level to 4.9%, thus moving it into statistical significance. Rather, we are pointing out that even large changes in significance levels can correspond to small, non-significant changes in the underlying variables. We illustrate with a theoretical and an applied example.
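The abstract's central point can be checked numerically (a sketch in the spirit of the argument; the specific numbers are illustrative, not taken from the paper). One estimate clears the 5% threshold and another does not, yet the difference between them is itself far from significant:

```python
import math

def z_score(estimate, se):
    # Standard z statistic for a normal estimate.
    return estimate / se

def p_value(z):
    # Two-sided p-value for a normal z statistic:
    # 2 * (1 - Phi(|z|)) = erfc(|z| / sqrt(2)).
    return math.erfc(abs(z) / math.sqrt(2))

# Two independent estimates with the same standard error.
est_a, se_a = 25.0, 10.0   # z = 2.5 -> "significant" (p ~ 0.012)
est_b, se_b = 10.0, 10.0   # z = 1.0 -> "not significant" (p ~ 0.317)

# The difference between them, however, is itself not significant:
diff = est_a - est_b                    # 15.0
se_diff = math.sqrt(se_a**2 + se_b**2)  # ~14.14 for independent estimates
print(z_score(diff, se_diff))           # ~1.06, well below 1.96
```

Declaring the first result "significant" and the second "not significant" thus suggests a contrast between the two estimates that the data do not actually support.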