A fully updated and accessible overview of randomized response.
Describes the mathematical and logical foundations of factor analysis at a level that does not presume advanced mathematical or statistical skills, and illustrates how to carry out the analysis with several of the more popular packaged computer programs.
Carol A. Chapelle shows readers how to design validation research for tests of human capacities and performance. Any test that is used to make decisions about people or programs should have undergone extensive research to demonstrate that the scores are actually appropriate for their intended purpose. Argument-Based Validation in Testing and Assessment is intended to help close the gap between theory and practice, by introducing, explaining, and demonstrating how test developers can formulate the overall design for their validation research from an argument-based perspective.
Taking the reader step-by-step through the intricacies, theory and practice of regression analysis, Damodar N. Gujarati uses a clear style that doesn't overwhelm the reader with abstract mathematics.
Offers students a brief and accessible approach to systematically quantifying various types of narrative data they can collect during a research process.
This monograph is not statistical. It looks instead at pre-statistical assumptions about dependent variables and causal order. Professor Davis spells out the logical principles that underlie our ideas of causality and explains how to discover causal direction, irrespective of the statistical technique used. He stresses throughout that knowledge of the 'real world' is important and repeatedly challenges the myth that causal problems can be solved by statistical calculations alone.
Introduces the elements of experimental design and analysis. The text covers such topics as the fundamental concept of variability, hypothesis testing, the extension of ANOVA to the multi-group situation, and random designs.
This volume shows how odds ratios can be used as a framework for understanding log-linear models. Moving systematically from the paradigmatic 2×2 case to more complicated cases, the author defines the odds ratio and demonstrates how it is a measure of association for tabular analysis.
Focusing on situations in which analysis of variance (ANOVA) involving the repeated measurement of separate groups of individuals is needed, this book attempts to reveal the advantages, disadvantages and counterbalancing issues of repeated measures situations.
Recent advances in statistical methodology and computer automation are making canonical correlation analysis available to more and more researchers. This volume explains the basic features of this sophisticated technique in an essentially non-mathematical introduction which presents numerous examples. Thompson discusses the assumptions, logic, and significance testing procedures required by this analysis, noting trends in its use and some recently developed extensions.
A book which summarizes many of the recent advances in the theory and practice of achievement testing, in the light of technological developments, and developments in psychometric and psychological theory. It provides an introduction to the two major psychometric models, item response theory and generalizability theory, and assesses their strengths for different applications. The book closes with some speculations about the future of achievement tests for the assessment of individuals, as well as monitoring of educational progress. '...the book contains valuable information for both beginners and for advanced workers who want an overview of recent work in achievement testing.' -- The Journal of the American Statistical Association, June 1985
The procedures collectively known as discriminant analysis allow a researcher to study the difference between two or more groups of objects with respect to several variables simultaneously, determining whether meaningful differences exist between the groups and identifying the discriminating power of each variable.
A presentation and critique of the use of multiple measures of theoretical concepts for the assessment of validity (using the multitrait-multimethod matrix) and reliability (using multiple indicators within a path-analytic framework).
Ordinal data can be rank ordered but cannot be assumed to have equal distances between categories. Using support by judges for civil rights measures and busing as the primary example, this paper indicates how such data can best be analyzed.
An advanced study which presumes a knowledge of multiple regression and factor analysis techniques, this paper considers two techniques for comparing entire sets of data, and develops the canonical correlation model as an extension of regression analysis in which there are several dependent variables.
As budgets tighten and costs increase, it is becoming even more necessary that workable social programmes are shown to be worthy of support. This book presents one approach to evaluation -- multiattribute utility technology -- which stresses that evaluations should be comparative, and that all the different constituencies served by a programme and its different goals have to be kept in mind.
Secondary analysis has assumed a central position in social science research as existing survey data and statistical computing programmes have become increasingly available. This volume presents strategies for locating survey data and provides a comprehensive guide to US social science data archives, describing several major data files. The book also reviews research designs for secondary analysis.
Written in nontechnical language, this popular and practical volume has been completely updated to bring readers the latest advice on major issues involved in longitudinal research. It covers: research design strategies; methods of data collection; and how longitudinal and cross-sectional research compare in terms of consistency and accuracy of results.
Outlines a set of techniques that enable a researcher to discover the "hidden structure" of large databases. These techniques use proximities, measures which indicate how similar or different objects are, to find a configuration of points which reflects the structure in the data.
A statistical method which will appeal to two groups in particular: those who are currently using the more traditional technique of exploratory factor analysis, and those who are interested in the analysis of covariance structures, commonly known as the LISREL model. The first group will find that this technique may be more appropriate to the analysis of their research problems, while the second group will find that confirmatory factor analysis is a useful first step to understanding the LISREL model, for this book and its companion volume, Covariance Structure Models, are designed to be read consecutively. The proofs presented are simple, but the reader must feel comfortable with matrix algebra in order to understand the model.
This volume introduces the theory, method, and applications of one type of conjoint analysis technique. These techniques are used to study individual judgement and decision processes. Based upon Information Integration Theory, metric conjoint analysis allows for the evaluation of multi-attribute alternatives using interval-level data. The model that justifies the use of metric conjoint methods, and the statistical techniques drawn from it, are the core of this monograph. Also described are applications of the model in marketing, psychology, economics, sociology, planning, and other disciplines, all of which relate to forecasting the decision-making behavior of individuals.
Offers researchers a guide for selecting the best statistical model to use, as well as discussing such topics as contextual analysis with absolute/relative effects, and the choice between regression coefficients as fixed parameters or as random variables.
The second edition of this book provides a conceptual understanding of analysis of variance. It outlines methods for analysing variance that are used to study the effect of one or more nominal variables on a dependent, interval-level variable. The book presumes only an elementary background in significance testing and data analysis.
Discusses the innovative log-linear model of statistical analysis. This model makes no distinction between independent and dependent variables, but is used to examine relationships among categorical variables by analyzing expected cell frequencies.
This paper considers the possible effects of making inferences about individuals from aggregate data. It assumes a knowledge of regression analysis, and explores the utility of techniques designed to make the inferences in causal modelling more reliable, including a comparison between ecological regression models and ecological correlation.
How do we group different subjects on a variety of variables? Should we use a classification procedure in which only the concepts are classified (typology), one in which only empirical entities are classified (taxonomy), or a combination of both? Kenneth D. Bailey addresses these questions and shows how classification methods can be used to improve research. Beginning with an exploration of the advantages and disadvantages of classification procedures, the book covers topics such as: clustering procedures, including agglomerative and divisive methods; the relationship among various classification techniques; how clustering methods compare with related statistical techniques; classification resources; and software packages for use in clustering techniques.