Results below are based on the search criterion 'smoothing'
Total number of records returned: 4
Age-Period-Cohort Analysis with Noisy, Lumpy Data
Brady, Henry E.
We have developed several relatively simple methods for age-period-cohort (APC) analysis with noisy, lumpy data. The first method, which uses additional information from the Census, does not work well under our data constraints because the age composition of the population does not vary enough over relatively short periods of time. The second method, approximating APC surfaces with polynomial functions, smooths the data too much; it is essentially a brute-force curve-fitting exercise that makes a very general assumption about the functional form of the APC surface and then fits it to the data. A third technique we evaluate starts instead with a theoretically informed model of how APC effects operate for a given dependent variable. This method allows for hypothesis testing and a reasonable amount of smoothing, though it probably does not smooth period effects enough, and it yields interesting results about age, period, and cohort effects. The last method we discuss briefly, which combines the third technique with additional smoothing, needs more development but may improve our estimates.
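The polynomial-surface approach (the second method) can be sketched in a few lines. Everything below — the data, the turnout-like rate, and the quadratic specification — is a hypothetical illustration, not the paper's actual data or model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: a rate observed on an age x period grid.
# Cohort is determined implicitly as period minus age.
ages = np.arange(18, 80)
periods = np.arange(1952, 2000, 4)
A, P = np.meshgrid(ages, periods, indexing="ij")

# A smooth "true" surface plus noise, standing in for noisy, lumpy data.
true = 0.4 + 0.004 * (A - 18) - 0.02 * np.sin((P - 1952) / 8.0)
y = (true + rng.normal(0, 0.05, true.shape)).ravel()

# Brute-force curve fitting: a quadratic polynomial in (centred) age and
# period; cohort effects enter only through the period-minus-age identity.
a = A.ravel() - A.mean()
p = P.ravel() - P.mean()
X = np.column_stack([np.ones_like(a), a, p, a**2, p**2, a * p]).astype(float)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta

# A low-order polynomial can over-smooth: the fitted surface cannot bend
# to follow local structure, so residuals may retain systematic variation.
print(round(float(np.std(y - fitted)), 3))
```

The residual standard deviation exceeding the known noise level is one symptom of the over-smoothing the abstract describes.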
Getting the Mean Right: Generalized Additive Models
Monte Carlo analysis
We examine the utility of the generalized additive model (GAM) as an alternative to the common linear model. Generalized additive models are flexible in that they allow the effect of each independent variable to be modelled non-parametrically while requiring that the effects of the independent variables combine additively. GAMs are common in the statistics literature but conspicuously absent in political science. The paper presents the basic features of the generalized additive model. Through Monte Carlo experimentation we show that there is little danger of the generalized additive model finding spurious structure. We use GAMs to reanalyze several political science data sets. These applications show that generalized additive models can improve standard analyses by guiding researchers as to the parametric shape of response functions. The technique also provides interesting insights about data, particularly in terms of modelling interactions.
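The backfitting idea underlying GAM estimation can be illustrated with a toy sketch: smooth the partial residuals of each variable in turn until the additive components stabilize. A crude running-mean smoother stands in here for the smoothing-spline machinery an actual GAM implementation uses; the data and all names are illustrative:

```python
import numpy as np

def running_mean_smooth(x, y, window=15):
    """Crude scatterplot smoother: running mean over x-sorted neighbours."""
    order = np.argsort(x)
    num = np.convolve(y[order], np.ones(window), mode="same")
    den = np.convolve(np.ones(len(y)), np.ones(window), mode="same")
    out = np.empty_like(y)
    out[order] = num / den
    return out

def fit_gam_backfit(X, y, n_iter=20):
    """Backfitting: estimate each additive component f_j by smoothing the
    partial residuals of variable j, holding the other components fixed."""
    n, d = X.shape
    alpha = y.mean()
    f = np.zeros((d, n))
    for _ in range(n_iter):
        for j in range(d):
            partial = y - alpha - f.sum(axis=0) + f[j]
            fj = running_mean_smooth(X[:, j], partial)
            f[j] = fj - fj.mean()  # centre each component for identifiability
    return alpha, f

rng = np.random.default_rng(1)
n = 500
X = rng.uniform(-2, 2, size=(n, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, n)

alpha, f = fit_gam_backfit(X, y)
fitted = alpha + f.sum(axis=0)
print(round(float(np.corrcoef(fitted, y)[0, 1]), 2))
```

Because each component is fit non-parametrically, the recovered shapes (a sine in the first variable, a parabola in the second) emerge from the data rather than being specified in advance — the "guidance as to parametric shape" the abstract refers to.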
Getting the Mean Right is a Good Thing: Generalized Additive Models
This is a substantial revision of the paper submitted as beck96. A shorter version of this paper is under consideration at a political science journal of note. Theory: Social scientists almost always use statistical models positing the dependent variable as a linear function of X, despite suspicions that the social and political world is not so parsimonious. Generalized additive models (GAMs) permit each independent variable to be modelled non-parametrically while requiring that the independent variables combine additively, striking a sensible balance between the flexibility of non-parametric techniques and the ease of interpretation and familiarity of linear regression. GAMs thus offer social scientists a practical methodology for improving on the extant practice of "linearity by default". Method: We present the statistical concepts and tools underlying GAMs (e.g., scatterplot smoothing, non-parametrics more generally, and accompanying graphical methods), and summarize issues pertaining to estimation, inference, and the statistical properties of GAMs. Monte Carlo experiments assess the validity of the tests of linearity accompanying GAMs. Re-analysis of published work in American politics, comparative politics, and international relations demonstrates the usefulness of GAMs in social science settings. Results: Our re-analyses of published work show that GAMs can extract substantive mileage beyond that yielded by linear regression, offering novel insights, particularly in terms of modelling interactions. The Monte Carlo experiments show there is little danger of GAMs spuriously finding non-linear structures. All data analysis, Monte Carlo experiments, and statistical graphs were generated using S-PLUS, Version 3.3. The routines and data are available at ftp://weber.uscd.edu/pub/nbeck/gam.
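The logic of the Monte Carlo check — when the truth really is linear, a non-parametric fit should gain almost nothing over OLS — can be sketched as follows. A toy running-mean smoother stands in for a GAM's non-parametric component; the data-generating process and all settings are illustrative, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(2)

def running_mean(x, y, window=25):
    """Crude scatterplot smoother: running mean over x-sorted neighbours."""
    order = np.argsort(x)
    num = np.convolve(y[order], np.ones(window), mode="same")
    den = np.convolve(np.ones(len(y)), np.ones(window), mode="same")
    out = np.empty_like(y)
    out[order] = num / den
    return out

# Monte Carlo: simulate truly linear data many times and measure how much
# residual sum of squares the smoother gains over a straight-line fit.
gains = []
for _ in range(200):
    x = rng.uniform(0, 1, 300)
    y = 1.0 + 2.0 * x + rng.normal(0, 0.5, 300)
    b = np.polyfit(x, y, 1)
    lin_rss = np.sum((y - np.polyval(b, x)) ** 2)
    sm_rss = np.sum((y - running_mean(x, y)) ** 2)
    gains.append((lin_rss - sm_rss) / lin_rss)

# A small average gain indicates the smoother is mostly fitting noise,
# i.e. little evidence of spurious non-linear structure.
print(round(float(np.mean(gains)), 3))
```

In a real GAM the analogous comparison is a formal test of the non-parametric fit against the linear one; the point of the simulation is that such gains stay near their chance level when linearity holds.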
Binary and Ordinal Time Series with AR(p) Errors: Bayesian Model Determination for Latent High-Order Markovian Processes
Auxiliary Particle Filter
Markov Chain Monte Carlo (MCMC)
Sampling Importance Resampling (SIR)
To directly and adequately correct for serial correlation in binary and ordinal response data, this paper proposes a probit model with errors following a pth-order autoregressive process, and develops simulation-based Bayesian methods to handle the computational challenges of posterior estimation, model comparison, and lag-order determination. Compared to extant methods, such as quasi-ML, GCM, and simulation-based ML estimators, the current method does not rely on the properties of the large variance-covariance matrix or the shape of the likelihood function. In addition, the present model efficiently handles high-order autocorrelated errors, which pose formidable computational difficulties for conventional methods. By applying a hybrid Gibbs and Metropolis-Hastings sampler, the posterior distributions of the parameters do not depend on initial observations. The auxiliary particle filter, complemented by fixed-lag smoothing, is extended to approximate Bayes factors for models with latent high-order Markov processes. The computational methods are tested with empirical data: energy cooperation policies of the International Energy Agency are analyzed in terms of their effects on global oil-supply security, and the current model with different lag orders is estimated and compared against competing models.
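The filtering step can be illustrated with a bootstrap particle filter — the simpler relative of the auxiliary particle filter the paper extends — applied to a stylized binary-response model with a latent AR(1) component. The model, data, and parameter values below are illustrative assumptions, not the paper's specification:

```python
import numpy as np
from math import erf

def norm_cdf(a):
    # Standard normal CDF via erf (avoids a scipy dependency).
    return 0.5 * (1.0 + np.vectorize(erf)(np.asarray(a, dtype=float) / np.sqrt(2.0)))

def particle_loglik(y, x, beta, rho, sigma, n_part=1000, seed=3):
    """Bootstrap particle filter log-likelihood for a stylized model:
    y_t = 1{x_t*beta + z_t + v_t > 0},  z_t = rho*z_{t-1} + sigma*eta_t,
    with v_t, eta_t standard normal."""
    rng = np.random.default_rng(seed)
    z = np.zeros(n_part)
    loglik = 0.0
    for t in range(len(y)):
        z = rho * z + sigma * rng.normal(size=n_part)   # propagate particles
        p1 = norm_cdf(x[t] * beta + z)                  # P(y_t = 1 | z_t)
        w = p1 if y[t] == 1 else 1.0 - p1               # observation weights
        loglik += np.log(w.mean() + 1e-300)             # marginal likelihood
        idx = rng.choice(n_part, size=n_part, p=w / w.sum())  # resample
        z = z[idx]
    return loglik

# Simulate illustrative data from the same stylized model.
rng = np.random.default_rng(4)
T = 100
x = rng.normal(size=T)
z_true = np.zeros(T)
for t in range(1, T):
    z_true[t] = 0.8 * z_true[t - 1] + 0.6 * rng.normal()
y = (0.5 * x + z_true + rng.normal(size=T) > 0).astype(int)

ll = particle_loglik(y, x, beta=0.5, rho=0.8, sigma=0.6)
print(round(float(ll), 1))
```

Running the filter at different parameter values yields likelihood estimates that could feed into Bayes-factor comparisons; the auxiliary variant improves on this bootstrap scheme by pre-selecting particles likely to explain the next observation, and extending it to AR(p) states replaces the scalar z with a length-p lag vector.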