Though much has been written on methods for multivariate failure time data analysis, a unified approach to the topic has yet to emerge. This book aims to fill that gap through a novel focus on marginal hazard rates and cross ratio modeling. Readers will find the content useful for instruction, for application in collaborative research, and as a source of novel research concepts. Many of the illustrations come from population disease studies, which should interest epidemiologists and population scientists.
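For orientation, the cross ratio referred to here is commonly defined (following Oakes) from a bivariate survivor function S(t_1, t_2); the notation below is the standard one and may differ from the book's:

    \theta(t_1, t_2) = \frac{S(t_1, t_2)\, \partial^2 S(t_1, t_2) / \partial t_1\, \partial t_2}{\big(\partial S(t_1, t_2)/\partial t_1\big)\big(\partial S(t_1, t_2)/\partial t_2\big)},

with \theta \equiv 1 under independence of the two failure times and \theta > 1 indicating positive dependence.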
Components of variance are essential to statisticians and quantitative research scientists working in a variety of fields, including the biological, genetic, health, industrial, and psychological sciences. Co-authored by Sir David Cox, the pre-eminent statistician in the field, this book provides in-depth discussions that set forth the essential principles of the subject. It focuses on developing the models that form the basis for detailed analyses as well as on the statistical techniques themselves. The authors include a variety of examples from areas such as clinical trial design, plant and animal breeding, industrial design, and psychometrics.
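As a minimal illustration of the subject (a standard textbook example, not taken from this book), the balanced one-way random-effects model splits variability into between-group and within-group components:

    y_{ij} = \mu + a_i + e_{ij}, \qquad a_i \sim N(0, \sigma_a^2), \quad e_{ij} \sim N(0, \sigma_e^2),

so that \operatorname{Var}(y_{ij}) = \sigma_a^2 + \sigma_e^2, and estimating the two variance components \sigma_a^2 and \sigma_e^2 is the basic inferential task.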
Authors Bagdonavicius and Nikulin have developed a large and important class of survival analysis models that generalize most of the existing models. In a unified, systematic presentation that does not get bogged down in technical details, this monograph fully examines those models and explores areas of accelerated life testing usually only touched upon in the literature. In addition to the classical results, the authors devote considerable attention to models with time-varying explanatory variables and to methods of semiparametric estimation. They include goodness-of-fit tests for the most important models. This book is valuable both as a high-level textbook and as a professional reference.
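As a rough sketch of the model class involved (parameterization and sign conventions vary across texts, so treat this as generic rather than the authors' exact formulation), an accelerated life model rescales time by a covariate-dependent factor:

    S(t \mid x) = S_0\big(r(x)\, t\big), \qquad r(x) = e^{\beta^\top x};

with time-varying explanatory variables x(\cdot), the argument r(x)\, t is replaced by the accumulated exposure \int_0^t r(x(u))\, du.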
Bayesian methods in reliability cannot be fully utilized without understanding the essential differences between frequentist probability and subjective probability. Switching from the frequentist to the subjective approach requires that some fundamental concepts be rethought. This text details those differences and clarifies aspects of subjective probability that have a direct influence on modeling and drawing inference from failure and survival data. In particular, within a framework of Bayesian theory, the author considers the effects of different levels of information in the analysis of the phenomena of positive and negative aging.
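For reference, positive aging is conventionally read as an increasing failure (hazard) rate and negative aging as a decreasing one:

    \lambda(t) = \frac{f(t)}{S(t)}, \qquad \text{positive aging: } \lambda(t) \text{ non-decreasing}, \quad \text{negative aging: } \lambda(t) \text{ non-increasing};

this is the aging notion the blurb refers to.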
Multi-State Survival Models for Interval-Censored Data introduces methods to describe stochastic processes that consist of transitions between states over time. It is targeted at researchers in medical statistics, epidemiology, demography, and social statistics. One of the applications in the book is a three-state process for dementia and survival in the older population. This process is described by an illness-death model with a dementia-free state, a dementia state, and a dead state. Statistical modelling of a multi-state process can investigate potential associations between the risk of moving to the next state and variables such as age, gender, or education. A model can also be used to predict the multi-state process.

The methods are for longitudinal data subject to interval censoring. Depending on the definition of a state, it is possible that the time of the transition into a state is not observed exactly. However, when longitudinal data are available, the transition time may be known to lie in the time interval defined by two successive observations. Such an interval-censored observation scheme can be taken into account in the statistical inference.

Multi-state modelling is an elegant combination of statistical inference and the theory of stochastic processes. Multi-State Survival Models for Interval-Censored Data shows that the statistical modelling is versatile and allows for a wide range of applications.
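As a hedged sketch of the kind of computation involved (the transition intensities below are made-up illustrative values, and the book's models are richer, e.g. allowing time-dependent hazards and covariates), a time-homogeneous illness-death model can be evaluated via a matrix exponential, and an interval-censored transition contributes a transition probability rather than an exact-time density:

    import numpy as np
    from scipy.linalg import expm

    # Hypothetical intensities for a time-homogeneous illness-death model:
    # state 1 = dementia-free, state 2 = dementia, state 3 = dead (absorbing).
    # The rates below are made-up values for illustration only.
    q12, q13, q23 = 0.05, 0.02, 0.20
    Q = np.array([[-(q12 + q13), q12,  q13],
                  [0.0,          -q23, q23],
                  [0.0,           0.0, 0.0]])

    def transition_probs(dt, Q=Q):
        """P(dt) = expm(dt * Q): entry (r, s) is the probability of being in
        state s at time t + dt given state r at time t (Markov assumption)."""
        return expm(dt * Q)

    # Interval-censored contribution: a subject seen dementia-free at one
    # visit and with dementia 2 years later contributes P(2.0)[0, 1] to the
    # likelihood, since the exact onset time within the interval is unknown.
    print(transition_probs(2.0)[0, 1])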
This book addresses the issues that arise and the methodology that can be applied when the dependence between time series is described and modeled. It shows how to draw meaningful, applicable, and statistically valid conclusions from multivariate (or vector) time series data. The book presents several extensions to the standard autoregressive model.
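For context, the standard vector autoregression of order p that such extensions build on is

    y_t = c + A_1 y_{t-1} + \cdots + A_p y_{t-p} + \varepsilon_t, \qquad \varepsilon_t \sim \mathrm{WN}(0, \Sigma),

where y_t is a k-dimensional vector, each A_i is a k \times k coefficient matrix, and the off-diagonal entries of the A_i and \Sigma carry the cross-series dependence.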
This book shows how the Bayesian approach to inference is applicable to partially identified models (PIMs) and examines the performance of Bayesian procedures in partially identified contexts. Drawing on his many years of research in this area, the author presents a thorough overview of the statistical theory, properties, and applications of PIMs.
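A standard toy example of partial identification (Manski-style worst-case bounds; not necessarily the book's running example): if a binary outcome Y is unobserved for some units, the law of total probability bounds P(Y = 1) without point-identifying it:

    P(Y{=}1 \mid \text{obs})\, P(\text{obs}) \;\le\; P(Y{=}1) \;\le\; P(Y{=}1 \mid \text{obs})\, P(\text{obs}) + P(\text{miss}),

since P(Y{=}1 \mid \text{miss}) can be anywhere in [0, 1]. A Bayesian analysis of such a model yields a posterior whose spread reflects the identified interval rather than shrinking to a point.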
Exploring the advantages of the state-space approach, this book presents numerous computational procedures that can be applied to a previously specified linear model in state-space form. It discusses model estimation and signal extraction and describes many procedures to combine, decompose, aggregate, and disaggregate a state-space form.
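For orientation, "state-space form" here means the usual pairing of an observation equation with a state-transition equation (one common notation; conventions differ):

    y_t = Z_t \alpha_t + \varepsilon_t, \qquad \alpha_{t+1} = T_t \alpha_t + R_t \eta_t,

where \alpha_t is the unobserved state; the Kalman filter and smoother then provide the machinery for estimation and signal extraction.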
Nonparametric Models for Longitudinal Data with Implementations in R presents a comprehensive summary of major advances in nonparametric models and smoothing methods with longitudinal data. It covers methods, theories, and applications that are particularly useful for biomedical studies in the era of big data and precision medicine.
This book describes a variety of statistical models useful for the analysis of data arising from life history processes. Particular attention is paid to models useful for the study of chronic diseases, to better understand the dynamics of the disease process, the effects of fixed and time-varying covariates, and the use of models for prediction.
Kernel smoothing has greatly evolved since its inception to become an essential methodology in the Data Science tool kit for the 21st century. Its widespread adoption is due to its fundamental role for multivariate exploratory data analysis, as well as the crucial role it plays in composite solutions to complex data challenges.
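As a reminder of the basic object, the multivariate kernel density estimator with bandwidth matrix H is

    \hat f(x; H) = \frac{1}{n} \sum_{i=1}^{n} K_H(x - X_i), \qquad K_H(u) = |H|^{-1/2} K\big(H^{-1/2} u\big),

where K is a kernel such as the standard normal density; the selection of H is the central practical problem.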
This book shows how constrained principal component analysis (CPCA) offers a unified framework for regression techniques and PCA. Keeping the use of complicated iterative methods to a minimum, the book includes implementation details and many real application examples.
Exploring the recent achievements that have occurred since the mid-1990s, this book explains how to use modern algorithms to fit geometric contours to observed data in image processing and computer vision. The author covers all facets of the methods (geometric, statistical, and computational) and looks at how the numerical algorithms relate to one another.
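As a minimal, hedged example of fitting one simple geometric contour (a circle) by an algebraic least-squares criterion; this is a classical baseline in the circle-fitting literature, not necessarily the specific algorithms the book develops:

    import numpy as np

    def fit_circle_kasa(x, y):
        """Algebraic (Kasa-style) least-squares circle fit: solve the linear
        system for (D, E, F) in x^2 + y^2 + D*x + E*y + F = 0, then recover
        the center (a, b) and radius r."""
        A = np.column_stack([x, y, np.ones_like(x)])
        rhs = -(x**2 + y**2)
        (D, E, F), *_ = np.linalg.lstsq(A, rhs, rcond=None)
        a, b = -D / 2.0, -E / 2.0
        r = np.sqrt(a**2 + b**2 - F)
        return a, b, r

    # Noisy points near a circle of radius 2 centered at (1, -1):
    t = np.linspace(0.0, 2.0 * np.pi, 50)
    x = 1.0 + 2.0 * np.cos(t) + 0.01 * np.random.randn(50)
    y = -1.0 + 2.0 * np.sin(t) + 0.01 * np.random.randn(50)
    print(fit_circle_kasa(x, y))  # approximately (1.0, -1.0, 2.0)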
This volume highlights important areas of statistics and the most significant statistical advances within them. It is divided into four sections: statistics in the life and medical sciences; business and social science; the physical sciences and engineering; and theory and methods of statistics.
Sufficient dimension reduction was first introduced in the early 1990s as a set of graphical and diagnostic tools for regression with many predictors. Over the past two decades or so it has developed into a powerful theory and technique for handling high-dimensional data. This book introduces the main results and important techniques in this field.
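The core idea can be stated in one line: find a p \times d matrix B, with d much smaller than the number of predictors p, such that

    Y \perp\!\!\!\perp X \mid B^\top X,

i.e. the response Y depends on the predictor vector X only through the d linear combinations B^\top X; the smallest subspace spanned by such a B is the central subspace that the theory targets.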
In this age of big data, the number of features measured on a person or object can be large and might be larger than the number of observations. This book shows how the sparsity assumption allows us to tackle these problems and extract useful and reproducible patterns from big datasets. The authors cover the lasso for linear regression and its generalizations.
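For reference, the lasso estimate solves an \ell_1-penalized least-squares problem (scaling conventions for the loss term vary):

    \hat\beta = \arg\min_{\beta} \; \tfrac{1}{2n} \lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_1,

and the \ell_1 penalty sets many coefficients exactly to zero, which is the sparsity the blurb refers to.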
This book provides broad, up-to-date coverage of the Pareto model and its extensions. This edition expands several chapters to accommodate recent results and reflect the increased use of more computer-intensive inference procedures. It includes new material on multivariate inequality and new discussions of bivariate and multivariate income and size distributions.
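For reference, the classical (Type I) Pareto model has survival function

    \bar F(x) = (x / \sigma)^{-\alpha}, \qquad x \ge \sigma > 0, \; \alpha > 0,

whose polynomially heavy right tail is what suits it to income and size data; the extensions the book covers generalize this basic form.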
This book covers the theoretical developments and applications of sequential hypothesis testing and sequential quickest changepoint detection in a wide range of engineering and environmental domains. It reviews recent accomplishments in hypothesis testing and changepoint detection in both decision-theoretic (Bayesian) and non-decision-theoretic settings.
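As a small, hedged illustration of quickest changepoint detection (Page's CUSUM in log-likelihood-ratio form; the threshold and data below are illustrative, not drawn from the book):

    def cusum_alarm(xs, llr, h):
        """Page's CUSUM: W_n = max(0, W_{n-1} + llr(x_n)); alarm when W_n >= h.
        llr(x) is the log-likelihood ratio log(f1(x) / f0(x)) of post- vs
        pre-change densities; h trades false alarms against detection delay."""
        w = 0.0
        for n, x in enumerate(xs, start=1):
            w = max(0.0, w + llr(x))
            if w >= h:
                return n  # first alarm time
        return None  # no alarm on this sample

    # Example: detect a mean shift from N(0,1) to N(1,1); llr(x) = x - 0.5.
    print(cusum_alarm([0.1, -0.2, 1.3, 0.9, 1.8, 1.1], lambda x: x - 0.5, 2.0))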