Drawing on recent "event history" analytical methods from biostatistics, engineering and sociology, this book explains how longitudinal data can be used to study the causes of deaths, crimes, wars, and many other human-related events.
Offers researchers a guide for selecting the best statistical model to use, and discusses such topics as contextual analysis with absolute/relative effects and the choice between treating regression coefficients as fixed parameters or as random variables.
The second edition of this book provides a conceptual understanding of analysis of variance. It outlines methods for analysing variance that are used to study the effect of one or more nominal variables on a dependent, interval-level variable. The book presumes only an elementary background in significance testing and data analysis.
Discusses the innovative log-linear model of statistical analysis. This model makes no distinction between independent and dependent variables, but is used to examine relationships among categoric variables by analyzing expected cell frequencies.
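To make the expected-cell-frequency idea concrete, here is a minimal Python sketch (not from the book) that fits the independence log-linear model to a hypothetical two-way table and compares observed with expected counts:

```python
import numpy as np

# Hypothetical 2x3 contingency table of observed counts; both
# dimensions are categoric variables, neither designated "dependent".
observed = np.array([[30, 15, 25],
                     [20, 35, 10]])

n = observed.sum()
row_totals = observed.sum(axis=1, keepdims=True)
col_totals = observed.sum(axis=0, keepdims=True)

# Expected cell frequencies under the independence log-linear model
# log(m_ij) = mu + lambda_i(row) + lambda_j(col):
expected = row_totals @ col_totals / n

# The likelihood-ratio statistic G^2 summarizes how far the observed
# counts depart from the expected ones.
g2 = 2 * (observed * np.log(observed / expected)).sum()
print(expected)
print(round(g2, 2))
```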
This paper considers the possible effects of making inferences about individuals from aggregate data. It assumes a knowledge of regression analysis, and explores the utility of techniques designed to make the inferences in causal modelling more reliable, including a comparison between ecological regression models and ecological correlation.
How do we group different subjects on a variety of variables? Should we use a classification procedure in which only the concepts are classified (typology), one in which only empirical entities are classified (taxonomy), or a combination of both? Kenneth D Bailey addresses these questions and shows how classification methods can be used to improve research. Beginning with an exploration of the advantages and disadvantages of classification procedures, the book covers topics such as: clustering procedures including agglomerative and divisive methods; the relationship among various classification techniques; how clustering methods compare with related statistical techniques; classification resources; and software packages for use in clustering techniques.
Monte Carlo simulation is a method of evaluating substantive hypotheses and statistical estimators by developing a computer algorithm to simulate a population, drawing multiple samples from this pseudo-population, and evaluating estimates obtained from these samples. This book explains the logic behind the method and demonstrates its uses for research.
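The three steps described above translate almost directly into code. A minimal Python sketch, with a hypothetical pseudo-population and estimator (the sample mean):

```python
import numpy as np

rng = np.random.default_rng(42)

# Step 1: simulate a pseudo-population with a known parameter
# (a skewed exponential population whose true mean is 2.0).
population = rng.exponential(scale=2.0, size=100_000)

# Step 2: draw many samples and compute the estimator on each one.
n_samples, sample_size = 1_000, 50
estimates = np.array([
    rng.choice(population, size=sample_size, replace=True).mean()
    for _ in range(n_samples)
])

# Step 3: evaluate the estimates against the known truth.
print("mean of estimates:", estimates.mean())  # near 2.0 -> little bias
print("std of estimates:", estimates.std())    # sampling variability
```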
Repeated surveys allow researchers to analyze changes in society as a whole. This book includes: a discussion of the classic issue of how to separate cohort, period and age effects; methods for modelling aggregate trends; and methods for estimating cohort replacement's contribution to aggregate trends.
Although clustering--the classifying of objects into meaningful sets--is an important procedure, cluster analysis as a multivariate statistical procedure is poorly understood. This volume is an introduction to cluster analysis for professionals, as well as advanced students, new to the technique.
Empirical researchers have generally lacked a grounding in the methodology of Bayesian inference, and as a result applications are few; Iversen's volume provides an introduction. After outlining the limitations of classical statistical inference, the author proceeds through a simple example to explain Bayes' theorem and how it may overcome these limitations. Typical Bayesian applications are shown, together with the strengths and weaknesses of the Bayesian approach. This monograph thus serves as a companion volume to Henkel's Tests of Significance (QASS vol. 4).
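As an illustration of the kind of simple example such a volume works through, here is a minimal Bayesian update in Python, using a conjugate Beta prior for a binomial success probability (all numbers hypothetical):

```python
from scipy import stats

# Hypothetical data: 7 successes in 10 trials.
successes, trials = 7, 10

# Prior belief about the success probability: Beta(2, 2),
# a mild prior centered on 0.5 (an assumption for illustration).
prior_a, prior_b = 2, 2

# Bayes' theorem with a conjugate Beta prior yields a Beta posterior:
# Beta(prior_a + successes, prior_b + failures).
posterior = stats.beta(prior_a + successes, prior_b + trials - successes)

# Unlike a classical point estimate, the posterior is a full
# probability distribution over the unknown parameter.
print("posterior mean:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))
```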
Quantile Regression establishes the seldom recognized link between inequality studies and quantile regression models. Though separate methodological literatures exist for each subject matter, the authors explore the natural connections between this increasingly sought-after tool and research topics in the social sciences.
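A minimal sketch of why quantile regression suits inequality questions, using simulated heteroskedastic data and statsmodels' QuantReg (the data and parameters are hypothetical):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated data in which the noise grows with x, so the effect of x
# differs across the outcome distribution -- the inequality angle.
x = rng.uniform(0, 10, size=500)
y = 1.0 + 0.5 * x + rng.normal(scale=0.2 + 0.3 * x)

X = sm.add_constant(x)

# Fit several conditional quantiles; OLS would report only the
# conditional mean and hide the spread.
for q in (0.1, 0.5, 0.9):
    res = sm.QuantReg(y, X).fit(q=q)
    print(f"q={q}: slope = {res.params[1]:.3f}")
```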
Introducing content analysis methods from a social science perspective, this book focuses on the reliability and validity of coding procedures and their associated category schemes. It is intended for those who have a basic knowledge of research methods or data analysis.
A concise, introductory text on propensity score methods that is easy to comprehend by those who have a limited background in statistics, and is practical enough for researchers to quickly generalize and apply the methods.
Approaching the topic from the perspective of the social scientist interested in hypothesis-testing, this volume is an introduction to time-series methods and their application in social science research.
An introduction to a variety of techniques that may be used in the analysis of data from a panel study -- information obtained from a large number of entities at two or more points in time. The focus of this volume is on analysis rather than problems of sampling or design, and its emphasis is on application rather than theory.
Secondary analysis has assumed a central position in social science research as existing survey data and statistical computing programmes have become increasingly available. This volume presents strategies for locating survey data and provides a comprehensive guide to US social science data archives, describing several major data files. The book also reviews research designs for secondary analysis.
With empirical examples which are both plentiful and well chosen to teach the technique, this book provides a thorough guide to latent class scaling models for binary response variables.
A practical introduction to multi-level modelling, this title offers an introduction to HLM and illustrations of how to use this technique to build models for hierarchical and longitudinal data.
Expands on the 1982 edition, adding the basic developments in social network analysis of the past twenty-five years.
Several decades of psychometric research have led to the development of sophisticated models for multidimensional test data, and in recent years, multidimensional item response theory (MIRT) has become a burgeoning topic in psychological and educational measurement. Considered a cutting-edge statistical technique, MIRT rests on methodology that can be complex, and it therefore receives little attention in introductory IRT courses. However, author Wes Bonifay shows how MIRT can be understood and applied by anyone with a firm grounding in unidimensional IRT modeling. His volume includes practical examples and illustrations, along with numerous figures and diagrams. Brief snippets of R code are interspersed throughout the text (with the complete R code included on an accompanying website) to guide readers in exploring MIRT models, estimating the model parameters, generating plots, and implementing the various procedures and applications discussed throughout the book.
Aimed at readers with minimal experience in computer programming, this book provides a theoretical and methodological rationale for using ABM in the social sciences. It concludes with practical advice about how to design and create ABM, a discussion of validation procedures, and some guidelines about publishing articles based on ABM.
A fully updated and accessible overview of randomized response, a set of survey techniques for eliciting honest answers to sensitive questions.
Describes the mathematical and logical foundations of factor analysis at a level that does not presume advanced mathematical or statistical skills. It illustrates how to do factor analysis with several of the more popular packaged computer programs.
Carol A. Chapelle shows readers how to design validation research for tests of human capacities and performance. Any test that is used to make decisions about people or programs should have undergone extensive research to demonstrate that the scores are actually appropriate for their intended purpose. Argument-Based Validation in Testing and Assessment is intended to help close the gap between theory and practice, by introducing, explaining, and demonstrating how test developers can formulate the overall design for their validation research from an argument-based perspective.
Taking the reader step-by-step through the intricacies, theory and practice of regression analysis, Damodar N. Gujarati uses a clear style that doesn't overwhelm the reader with abstract mathematics.
Offers students a brief and accessible approach to systematically quantifying various types of narrative data they can collect during a research process.
This monograph is not statistical. It looks instead at pre-statistical assumptions about dependent variables and causal order. Professor Davis spells out the logical principles that underlie our ideas of causality and explains how to discover causal direction, irrespective of the statistical technique used. He stresses throughout that knowledge of the 'real world' is important and repeatedly challenges the myth that causal problems can be solved by statistical calculations alone.