Provides a comprehensive introduction to the range of polytomous models available within item response theory. Practical examples of major models using real data are provided, as is a chapter on choosing an appropriate model. Figures are used throughout to illustrate important elements as they are described.
Survey Questions is a highly readable guide to the principles of writing survey questions. The authors review recent research on survey questions, consider the lore of professional experience, and present the findings with the strongest implications for writing such questions.
In this volume the underlying logic and practice of maximum likelihood (ML) estimation is made clear by providing a general modelling framework that utilizes the tools of ML methods. This framework offers readers a flexible modelling strategy since it accommodates cases from the simplest linear models to the most complex nonlinear models that link a system of endogenous and exogenous variables with non-normal distributions. Using examples to illustrate the techniques of finding ML estimators and estimates, Eliason discusses: what properties are desirable in an estimator; basic techniques for finding ML solutions; the general form of the covariance matrix for ML estimates; the sampling distribution of ML estimators; the application of ML in the normal distribution as well as in other useful distributions; and some helpful illustrations of likelihoods.
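The basic technique Eliason describes, numerically maximizing a log-likelihood, can be sketched in a few lines of Python. This is a minimal illustration, not an example from the book: the simulated coin-flip data, the Bernoulli likelihood, and the ternary-search routine are all illustrative assumptions. It finds the maximum likelihood estimate of a Bernoulli probability and checks it against the closed-form solution (the sample proportion).

```python
import math
import random

random.seed(0)
# Simulated coin flips with true p = 0.3 (hypothetical example data).
data = [1 if random.random() < 0.3 else 0 for _ in range(1000)]

def log_likelihood(p, xs):
    """Bernoulli log-likelihood: sum of log P(x_i | p)."""
    return sum(math.log(p) if x else math.log(1.0 - p) for x in xs)

# The Bernoulli log-likelihood is concave in p, so a simple ternary
# search on (0, 1) converges to the unique maximizer.
lo, hi = 1e-6, 1.0 - 1e-6
for _ in range(200):
    m1 = lo + (hi - lo) / 3.0
    m2 = hi - (hi - lo) / 3.0
    if log_likelihood(m1, data) < log_likelihood(m2, data):
        lo = m1
    else:
        hi = m2
p_hat = (lo + hi) / 2.0
```

Here the numerical search recovers the same answer as the closed-form MLE, the sample proportion `sum(data) / len(data)`; the same maximize-the-likelihood logic carries over to models with no closed-form solution.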
Bootstrapping, a computational nonparametric technique for 're-sampling', enables researchers to draw a conclusion about the characteristics of a population strictly from the existing sample rather than by making parametric assumptions about the estimator. Using real data examples from per capita personal income to median preference differences between legislative committee members and the entire legislature, Mooney and Duval discuss how to apply bootstrapping when the underlying sampling distribution of the statistics cannot be assumed normal, as well as when the sampling distribution has no analytic solution. In addition, they show the advantages and limitations of four bootstrap confidence interval methods: normal approximation, percentile, bias-corrected percentile, and percentile-t. The authors conclude with a convenient summary of how to apply this computer-intensive methodology using various available software packages.
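The percentile method, one of the four interval methods the authors compare, can be sketched in plain Python. This is a minimal sketch with made-up data: the sample values and the function name `bootstrap_ci` are illustrative assumptions, not material from the book.

```python
import random
import statistics

random.seed(42)
# Hypothetical numeric sample (any observed sample works the same way).
sample = [31, 28, 45, 52, 38, 29, 61, 40, 33, 47, 55, 36, 42, 30, 49]

def bootstrap_ci(data, stat=statistics.median, n_boot=5000, alpha=0.05):
    """Percentile bootstrap: resample with replacement, recompute the
    statistic each time, and take quantiles of the replicates."""
    reps = sorted(stat(random.choices(data, k=len(data)))
                  for _ in range(n_boot))
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

low, high = bootstrap_ci(sample)
```

Note that no normality assumption enters anywhere: the interval comes entirely from the empirical distribution of the resampled medians, which is exactly the appeal of the method when the sampling distribution has no analytic form.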
This excellent introduction to stochastic parameter regression models is more advanced and technically difficult than other papers in this series. These models allow relationships to vary through time, rather than requiring them to be fixed, without forcing the analyst to specify and analyze the causes of the time-varying relationships. This volume will be most useful to those with a good working knowledge of standard regression models and who wish to understand methods which deal with relationships that vary slowly over time, but for which the exact causes of variation cannot be identified.
Feiring provides a well-written introduction to the techniques and applications of linear programming. He shows readers how to model, solve, and interpret appropriate linear programming problems. His carefully chosen examples provide a foundation for mathematical modelling and demonstrate the wide scope of the techniques.
Using an expository style that builds from simpler to more complex topics, this text explains how to measure the centre and variation of a single variable. It also considers ways to examine the distribution of variables and measure the spread of a variable.
In recent years the loglinear model has become the dominant form of categorical data analysis as researchers have expanded it in new directions. This book shows researchers the applications of one of these new developments - how uniting ordinary loglinear analysis and latent class analysis into a general loglinear model with latent variables can result in a modified LISREL approach. This modified LISREL model will enable researchers to analyze categorical data in the same way that they have been able to use LISREL to analyze continuous data. Beginning with an introduction to ordinary loglinear modelling and standard latent class analysis, the author explains the general principles of loglinear modelling with latent variables, the application of loglinear models with latent variables as a causal model as well as a tool for the analysis of categorical longitudinal data, the strengths and limitations of this technique, and finally, a summary of computer programs that are available for executing this technique.
Discusses data access, transformation and preparation issues, and how to select the appropriate analytic graphics techniques through a review of various GIS and common data sources, such as census products, Tiger files, and CD-ROM access. It provides illustrative output for sample data using selected software.
This volume offers social scientists a concise overview of multiple attribute decision making (MADM) methods, their characteristics and applicability, and methods for solving MADM problems. Real-world examples are used to introduce the reader to normative models for optimal decisions. The authors explore how MADM methods can be used for descriptive purposes to model: the existing decision-making process; noncompensatory and scoring methods; accommodation of soft data; construction of multiple-decision support systems; and the validity of methods. The advanced procedures of TOPSIS and ELECTRE are also presented.
Fuzzy set theory deals with sets or categories whose boundaries are blurry or, in other words, 'fuzzy.' This book presents an introduction to fuzzy set theory, focusing on its applicability to the social sciences. It provides a guide for researchers wishing to combine fuzzy set theory with standard statistical techniques and model-testing.
Covering the basics of the cohort approach to studying aging, social, and cultural change, this volume also critiques several commonly used (but flawed) methods of cohort analysis, and illustrates appropriate methods with analyses of personal happiness and attitudes toward premarital and extramarital sexual relations. Finally, the book describes the major sources of suitable data for cohort studies and gives the criteria for appropriate data. The Second Edition features:
- a chapter on the analysis of survey data, which includes a discussion of the problems posed by question-order effects when data from different surveys are used in a cohort analysis;
- an emphasis on the difference between linear and nonlinear effects;
- instruction on how to use available data from cohort studies.
Panel data - information gathered from the same individuals or units at several different points in time - are commonly used in the social sciences to test theories of individual and social change. This book highlights the developments in this technique in a range of disciplines and analytic traditions.
This book explores the issues underlying the effective analysis of interaction in factorial designs. It includes discussion of: different ways of characterizing interactions in ANOVA; interaction effects using traditional hypothesis testing approaches; and alternative analytic frameworks that focus on effect size methodology and interval estimation.
Derived from engineering literature that uses similar techniques to map electronic circuits and physical systems, this work utilizes a systems approach to modeling that offers social scientists a variety of tools that are both sophisticated and easily applied. It introduces a modeling tool to researchers in the social sciences.
Introduces the basis of the confidence interval framework and provides the criteria for 'best' confidence intervals, along with the trade-offs between confidence and precision. This book covers topics such as the transformation principle, confidence intervals, and the relationship between confidence interval and significance testing frameworks.
Ordinary regression analysis is not appropriate for investigating dichotomous or otherwise 'limited' dependent variables, but this volume examines three techniques -- linear probability, probit, and logit models -- which are well-suited for such data. It reviews the linear probability model and discusses alternative specifications of non-linear models. Using detailed examples, Aldrich and Nelson point out the differences among linear, logit, and probit models, and explain the assumptions associated with each.
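The key contrast between the linear probability model and the logit model can be illustrated with a short Python sketch. The data below and the hand-rolled gradient-ascent fit are hypothetical assumptions for illustration, not material from the book: unlike a linear fit, the logit's predicted probabilities always stay strictly between 0 and 1.

```python
import math

# Hypothetical data: x = hours studied, y = passed (1) or failed (0).
xs = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0]
ys = [0,   0,   0,   1,   0,   1,   1,   0,   1,   1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit the logit model by gradient ascent on its log-likelihood
# (the gradients are sum(y - p) and sum((y - p) * x)).
b0, b1, lr = 0.0, 0.0, 0.01
for _ in range(20000):
    g0 = sum(y - sigmoid(b0 + b1 * x) for x, y in zip(xs, ys))
    g1 = sum((y - sigmoid(b0 + b1 * x)) * x for x, y in zip(xs, ys))
    b0 += lr * g0
    b1 += lr * g1

# Every fitted probability lies in (0, 1); a linear probability model
# fit to the same data could predict values below 0 or above 1.
p = [sigmoid(b0 + b1 * x) for x in xs]
```

The sigmoid link is what enforces the probability bounds; a probit model does the same job with the normal CDF in place of the sigmoid.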
Reviews the main competing approaches to modeling multiple time series: simultaneous equations, ARIMA, error correction models, and vector autoregression. This book focuses on vector autoregression (VAR) models as a generalization of the other approaches mentioned. It also reviews arguments for and against using multi-equation time series models.
Combines time series and cross-sectional data to provide the researcher with an efficient method of analysis and improved estimates of the population being studied. By pooling the two kinds of data, this technique increases the effective sample size, which ultimately yields more precise estimates.
Offers an in-depth treatment of robust and resistant regression. This work, which is geared toward both future and practicing social scientists, takes an applied approach and offers readers empirical examples to illustrate key concepts. It includes a web appendix that provides readers with the data and the R-code for the examples used in the book.
Applied demography is a technique which can handle small geographic areas -- an approach which allows market segments and target populations to be studied in detail. This book provides the essential elements of applied demography in a clear and concise manner. It describes the kind of information that is available; who produces it; and how that information can be used. The sources mentioned are American, but the techniques for estimating have universal application. A background in elementary algebra is sufficient for this book. 'This is a handy, concise primer that summarizes various fundamentals of particular interest to the field (estimating total populations, household sizes, etc.)' -- Population Today, October 1984
The great advantage of time series regression analysis is that it can both explain the past and predict the future behavior of variables. This volume explores the regression (or structural equation) approach to the analysis of time series data. It also introduces the Box-Jenkins time series method in an attempt to bridge partially the gap between the two approaches.
Considers how "real-world" observations can be interpreted to convert them into data to be analyzed, so as to facilitate more effective use of scaling techniques. The text introduces the most appropriate scaling strategies for different research situations.
Meta-Analysis shows concisely, yet comprehensively, how to apply statistical methods to achieve a literature review of a common research domain. It demonstrates the use of combined tests and measures of effect size to synthesize quantitatively the results of independent studies for both group differences and correlations. Strengths and weaknesses of alternative approaches, as well as of meta-analysis in general, are presented.
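One standard way of combining effect sizes across studies, the fixed-effect inverse-variance method, can be sketched in a few lines of Python. The effect sizes and variances below are made up for illustration; they are not data from the book.

```python
# Hypothetical per-study effect sizes (e.g. standardized mean
# differences) and their sampling variances.
effects =   [0.30, 0.45, 0.10, 0.55, 0.25]
variances = [0.04, 0.09, 0.02, 0.12, 0.05]

# Weight each study by the inverse of its variance, so more precise
# studies count for more in the pooled estimate.
weights = [1.0 / v for v in variances]
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)

# Standard error and approximate 95% confidence interval for the
# pooled effect.
pooled_se = (1.0 / sum(weights)) ** 0.5
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
```

The pooled estimate necessarily falls between the smallest and largest per-study effects, pulled toward the most precisely estimated studies; random-effects variants add a between-study variance component to the weights.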
Provides readers with a clear and concise introduction to the why, what, and how of the comparative method.