The results below are based on the search criteria 'Experimental Design'
Total number of records returned: 3
Bottom, William P.
Miller, Gary J.
McClurg, Scott D.
Game theory's best efforts have done little but verify the undecidability of coalitional problems. The typical solution concept specifies the hypothesized distribution for each of several viable coalition structures, but it cannot choose among those structures. For example, the bargaining set presumes that bargaining proceeds by objection and counter-objection until potential coalition members are indifferent between the coalitions that they pivot between. Thus, the bargaining set makes a clear distributional hypothesis, but it thereby gives up any leverage on which coalition will occur. In this paper, we explore how risk preferences and the nature of coalitional goods influence the coalition-building process. We test a variety of potential explanations with data collected in an experimental setting. Foremost among our conclusions is that the coalitions that form among inexperienced subjects are affected by their risk preferences. We further find that this effect disappears among experienced subjects. We conclude the paper by discussing some of the explanations for and questions stemming from our results.
Variance Identification and Efficiency Analysis in Randomized Experiments under the Matched-Pair Design
Average Treatment Effect
In his landmark article, Neyman (1923) introduced randomization-based inference in analyzing experiments under the completely randomized design. Under this framework, Neyman considered the statistical estimation of the sample average treatment effect and derived the variance of the standard estimator using the treatment assignment mechanism as the sole basis of inference. In this paper, I extend Neyman's analysis to randomized experiments under the matched-pair design, where experimental units are paired based on their pre-treatment characteristics and the randomization of treatment is subsequently conducted within each matched pair. I study variance identification for the standard estimator of average treatment effects and analyze the relative efficiency of the matched-pair design over the completely randomized design. I also show how to empirically evaluate the relative efficiency of the two designs using experimental data obtained under the matched-pair design. My randomization-based analysis clarifies some of the important questions raised in the literature and identifies a hidden and yet implausible assumption that is made for the efficiency analysis in a widely used textbook. Finally, the analytical results are illustrated with numerical and empirical examples.
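The contrast the abstract draws between the two designs can be sketched in a small simulation. Everything below (the data-generating process, the effect size of 0.5, pairing on a single sorted covariate) is a hypothetical illustration, not the paper's setup: the difference-in-means estimator with Neyman's conservative variance under complete randomization, versus the mean of within-pair differences under the matched-pair design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated potential outcomes (not from the paper): a
# pre-treatment covariate x drives the control outcome, and the
# treatment effect is a constant 0.5.
n = 100  # must be even so every unit can be paired
x = rng.normal(0, 1, n)
y0 = x + rng.normal(0, 0.2, n)
y1 = y0 + 0.5

# --- Completely randomized design ---
treat = np.zeros(n, dtype=bool)
treat[rng.choice(n, n // 2, replace=False)] = True
y = np.where(treat, y1, y0)  # observed outcomes
ate_cr = y[treat].mean() - y[~treat].mean()  # difference in means
# Neyman's conservative variance estimator: s1^2/n1 + s0^2/n0
var_cr = (y[treat].var(ddof=1) / treat.sum()
          + y[~treat].var(ddof=1) / (~treat).sum())

# --- Matched-pair design ---
# Pair units on the pre-treatment covariate, then randomize treatment
# within each pair (a coin flip decides which pair member is treated).
order = np.argsort(x)
pairs = order.reshape(-1, 2)
flip = rng.integers(0, 2, pairs.shape[0]).astype(bool)
t_idx = np.where(flip, pairs[:, 0], pairs[:, 1])
c_idx = np.where(flip, pairs[:, 1], pairs[:, 0])
diffs = y1[t_idx] - y0[c_idx]  # within-pair treated-minus-control differences
ate_mp = diffs.mean()
var_mp = diffs.var(ddof=1) / len(diffs)
```

Because pairing on x removes most of the covariate-driven outcome variation, the matched-pair variance estimate comes out far smaller than the completely randomized one here, which is the efficiency gain the abstract analyzes formally.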
Misunderstandings among Experimentalists and Observationalists about Causal Inference
average treatment effects
We attempt to clarify, and suggest how to avoid, several serious misunderstandings about and fallacies of causal inference in experimental and observational research. These issues concern some of the most basic advantages and disadvantages of each research design. Problems include improper use of hypothesis tests for covariate balance between the treated and control groups, and the consequences of using randomization, blocking before randomization, and matching after treatment assignment to achieve covariate balance. Applied researchers in a wide range of scientific disciplines seem to fall prey to one or more of these fallacies, and as a result make suboptimal design or analysis choices. To clarify these points, we derive a new four-part decomposition of the key estimation errors in making causal inferences. We then show how this decomposition can help scholars from different experimental and observational research traditions better understand each other's inferential problems and attempted solutions. (This paper is forthcoming in the Journal of the Royal Statistical Society, but we have some time for revisions and would value any comments anyone might have. This is a revised and much more general version of an earlier paper, "The Balance Test Fallacy in Causal Inference".)
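One of the fallacies the abstract names, the improper use of hypothesis tests for covariate balance, can be illustrated numerically: a balance t-statistic conflates the size of an imbalance with the sample size, whereas a standardized mean difference does not. The data, the 0.2-standard-deviation shift, and the `balance_stats` helper below are hypothetical illustrations, not the paper's decomposition.

```python
import numpy as np

rng = np.random.default_rng(1)

def balance_stats(x_t, x_c):
    """Return (standardized mean difference, two-sample t statistic)
    for a covariate in the treated (x_t) and control (x_c) groups."""
    diff = x_t.mean() - x_c.mean()
    pooled_sd = np.sqrt((x_t.var(ddof=1) + x_c.var(ddof=1)) / 2)
    smd = diff / pooled_sd  # scale-free, does not grow with n
    se = np.sqrt(x_t.var(ddof=1) / len(x_t) + x_c.var(ddof=1) / len(x_c))
    t = diff / se           # grows with n for a fixed imbalance
    return smd, t

# The same 0.2-sd covariate shift, at two very different sample sizes.
small_t = rng.normal(0.2, 1, 50)
small_c = rng.normal(0.0, 1, 50)
large_t = rng.normal(0.2, 1, 5000)
large_c = rng.normal(0.0, 1, 5000)

smd_small, t_small = balance_stats(small_t, small_c)
smd_large, t_large = balance_stats(large_t, large_c)
```

With the large sample the t statistic is highly "significant" while with the small sample it typically is not, even though the underlying imbalance is identical; the standardized mean difference stays near 0.2 in both cases, which is why sample-size-free balance measures are usually preferred over balance hypothesis tests.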