This user-friendly 2003 book explains the techniques and benefits of semiparametric regression in a concise and modular fashion.
This eagerly awaited textbook covers everything the graduate student in probability wants to know about Brownian motion, as well as the latest research in the area. Starting with the construction of Brownian motion, the book then proceeds to sample path properties like continuity and nowhere differentiability. Notions of fractal dimension are introduced early and are used throughout the book to describe fine properties of Brownian paths. The relation of Brownian motion and random walk is explored from several viewpoints, including a development of the theory of Brownian local times from random walk embeddings. Stochastic integration is introduced as a tool and an accessible treatment of the potential theory of Brownian motion clears the path for an extensive treatment of intersections of Brownian paths. An investigation of exceptional points on the Brownian path and an appendix on SLE processes, by Oded Schramm and Wendelin Werner, lead directly to recent research themes.
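As a quick illustration of the construction described above, the sketch below (not taken from the book) approximates a standard Brownian path on [0, 1] by cumulative sums of independent Gaussian increments; the step count and seed are arbitrary choices.

```python
import numpy as np

# Illustrative sketch (not from the book): approximate standard Brownian motion
# on [0, 1] by cumulative sums of independent Gaussian increments, the
# discretisation behind the random-walk connection mentioned in the blurb.
rng = np.random.default_rng(0)
n = 10_000                        # number of time steps (arbitrary choice)
dt = 1.0 / n
increments = rng.normal(loc=0.0, scale=np.sqrt(dt), size=n)
path = np.concatenate([[0.0], np.cumsum(increments)])   # B(0) = 0
times = np.linspace(0.0, 1.0, n + 1)
print(path[-1])                   # B(1), approximately N(0, 1)
```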
This third edition of the popular guide to using R reflects recent improvements to the R system, including major advances in graphical user interfaces and graphics packages. It emphasizes hands-on analysis, graphical display and interpretation of data. Ideal for researchers, students of applied statistics, and practising statisticians.
An integrated development of models and likelihood that blends theory and practice, suitable for advanced undergraduate and graduate students, researchers and practitioners. Each chapter of this 2003 book contains a wide range of problems and exercises. A library of data sets accompanying the book is available over the web.
For every practising statistician who designs experiments, a coherent framework for the thinking behind good design. Also ideal for advanced undergraduate and beginning graduate courses. Examples, exercises and discussion questions are drawn from a wide range of real applications: from drug development, to agriculture, to manufacturing.
The first comprehensive introduction to techniques and problems in the field of spatial random networks, for graduate students and scientists. Motivated by applications to wireless data networks; both readable and rigorous. Models developed are also of interest in a broader context, ranging from engineering to social networks, biology, and physics.
In fields such as biology, medical sciences, sociology, and economics researchers often face the situation where the number of available observations, or the amount of available information, is sufficiently small that approximations based on the normal distribution may be unreliable. Theoretical work over the last quarter-century has led to new likelihood-based methods that lead to very accurate approximations in finite samples, but this work has had limited impact on statistical practice. This book illustrates by means of realistic examples and case studies how to use the new theory, and investigates how and when it makes a difference to the resulting inference. The treatment is oriented towards practice and comes with code in the R language (available from the web) which enables the methods to be applied in a range of situations of interest to practitioners. The analysis includes some comparisons of higher order likelihood inference with bootstrap or Bayesian methods.
Combines algebra and statistics to explore the interplay between symmetry-related research questions and their statistical analysis.
A self-contained graduate-level introduction to the statistical mechanics of disordered systems. In three parts, the book treats basic statistical mechanics; disordered lattice spin systems; and latest developments in the mathematical understanding of mean-field spin glass models. It assumes basic knowledge of classical physics and working knowledge of graduate-level probability theory.
This book explains in simple language how saddlepoint approximations make computations of probabilities tractable for complex models. No previous background in the area is required as the book introduces the subject from the very beginning. Many real data examples show the methods at work. For statisticians, biostatisticians, electrical engineers, econometricians, and applied mathematicians.
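To make the idea concrete, here is a small worked example of my own (not the book's): the classical saddlepoint approximation to the density of the mean of n independent Exp(1) variables, compared with the exact Gamma density.

```python
import numpy as np
from math import lgamma

# Toy example (mine, not the book's): saddlepoint approximation to the density
# of the mean of n iid Exp(1) variables.  K(s) = -log(1 - s); the saddlepoint
# s_hat solves K'(s_hat) = x.
def saddlepoint_density(x, n):
    # K'(s) = 1/(1-s)  =>  s_hat = 1 - 1/x ;  K(s_hat) = log(x) ;  K''(s_hat) = x**2
    s_hat = 1 - 1 / x
    return np.sqrt(n / (2 * np.pi * x**2)) * np.exp(n * (np.log(x) - s_hat * x))

def exact_density(x, n):
    # The mean of n Exp(1) variables is Gamma(shape=n, rate=n).
    return np.exp(n * np.log(n) + (n - 1) * np.log(x) - n * x - lgamma(n))

for x in (0.5, 1.0, 2.0):
    print(x, saddlepoint_density(x, n=10), exact_density(x, n=10))
```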
Choosing a model is central to all statistical work with data; this book is the first to synthesize research and practice from this active field. Model choice criteria are explained, discussed and compared, including the AIC, BIC, DIC and FIC. Real-data examples and exercises build familiarity with the methods.
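As a minimal illustration of how such criteria are used (the data and candidate models below are invented, not drawn from the book), this sketch computes AIC and BIC for two competing linear models under a Gaussian likelihood.

```python
import numpy as np

# Toy sketch: compare two linear models for the same data with AIC and BIC.
rng = np.random.default_rng(1)
n = 200
x = rng.uniform(-2, 2, size=n)
y = 1.0 + 0.5 * x + rng.normal(scale=1.0, size=n)   # true model is linear

def gaussian_ic(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n_obs, p = X.shape
    k = p + 1                                        # coefficients + error variance
    loglik = -0.5 * n_obs * (np.log(2 * np.pi * rss / n_obs) + 1)
    return 2 * k - 2 * loglik, k * np.log(n_obs) - 2 * loglik   # (AIC, BIC)

X1 = np.column_stack([np.ones(n), x])                # linear candidate
X2 = np.column_stack([np.ones(n), x, x**2, x**3])    # cubic candidate
print("linear (AIC, BIC):", gaussian_ic(X1, y))
print("cubic  (AIC, BIC):", gaussian_ic(X2, y))
```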
The coordinate-free, or geometric, approach to the theory of linear models is more insightful, more elegant, more direct, and simpler than the more common matrix approach. This book treats Model I ANOVA and linear regression models with non-random predictors in a finite-dimensional setting.
Describes the Bayesian approach to statistics at a level suitable for final year undergraduate and Masters students as well as statistical and interdisciplinary researchers. It is unusual in presenting Bayesian statistics with an emphasis on mainstream statistics, showing how to infer scientific, medical, and social conclusions from numerical data.
This introduction to wavelet analysis and wavelet-based statistical analysis of time series focuses on practical discrete time techniques, with detailed descriptions of the theory and algorithms needed to understand and implement the discrete wavelet transforms. The book contains numerous exercises and a website offering access to the time series and wavelet software.
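For readers who want a feel for the algorithms involved, here is a minimal sketch of one level of the Haar discrete wavelet transform (a toy example of my own, not the book's software), showing the split into approximation and detail coefficients and the perfect reconstruction.

```python
import numpy as np

# One level of the Haar DWT on a toy series of even length.
x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
approx = (x[0::2] + x[1::2]) / np.sqrt(2)    # scaled local averages
detail = (x[0::2] - x[1::2]) / np.sqrt(2)    # scaled local differences

# Perfect reconstruction from the two sets of coefficients.
x_rec = np.empty_like(x)
x_rec[0::2] = (approx + detail) / np.sqrt(2)
x_rec[1::2] = (approx - detail) / np.sqrt(2)
print(np.allclose(x, x_rec))                 # True
```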
Rigorous probabilistic arguments, built on the foundation of measure theory introduced eighty years ago by Kolmogorov, have invaded many fields. Students of statistics, biostatistics, econometrics, finance, and other changing disciplines now find themselves needing to absorb theory beyond what they might have learned in the typical undergraduate, calculus-based probability course. This 2002 book grew from a one-semester course offered for many years to a mixed audience of graduate and undergraduate students who have not had the luxury of taking a course in measure theory. The core of the book covers the basic topics of independence, conditioning, martingales, convergence in distribution, and Fourier transforms. In addition there are numerous sections treating topics traditionally thought of as more advanced, such as coupling and the KMT strong approximation, option pricing via the equivalent martingale measure, and the isoperimetric inequality for Gaussian processes. The book is not just a presentation of mathematical theory, but is also a discussion of why that theory takes its current form. It will be a secure starting point for anyone who needs to invoke rigorous probabilistic arguments and understand what they mean.
A textbook for students with some background in probability that quickly develops a rigorous theory of Markov chains and shows how to apply it in practice, for example to simulation, economics, optimal control, genetics and queues, with exercises and examples drawn from both theory and practice.
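A minimal simulation sketch in the spirit of such applications (the three-state transition matrix below is an arbitrary example, not from the book):

```python
import numpy as np

# Simulate a three-state Markov chain and compare empirical state frequencies
# with the stationary distribution from the leading left eigenvector.
rng = np.random.default_rng(2)
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])    # rows sum to 1 (assumed chain)

state, counts = 0, np.zeros(3)
for _ in range(100_000):
    state = rng.choice(3, p=P[state])
    counts[state] += 1

eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()
print("empirical :", counts / counts.sum())
print("stationary:", pi)
```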
Many electronic and acoustic signals can be modelled as sums of sinusoids and noise. However, the amplitudes, phases and frequencies of the sinusoids are often unknown and must be estimated in order to characterise the periodicity or near-periodicity of a signal and consequently to identify its source. This book presents and analyses several practical techniques used for such estimation. The problem of tracking slow frequency changes over time of a very noisy sinusoid is also considered. Rigorous analyses are presented via asymptotic or large sample theory, together with physical insight. The book focuses on achieving extremely accurate estimates when the signal to noise ratio is low but the sample size is large. Each chapter begins with a detailed overview, and many applications are given. Matlab code for the estimation techniques is also included. The book will thus serve as an excellent introduction and reference for researchers analysing such signals.
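One standard estimator in this setting is the periodogram maximiser; the sketch below (my own illustration, not the book's Matlab code) estimates a single frequency from a noisy sinusoid by searching a zero-padded frequency grid.

```python
import numpy as np

# Estimate the frequency of one noisy sinusoid by maximising the periodogram.
rng = np.random.default_rng(3)
n = 4096
t = np.arange(n)
f_true = 0.1234          # cycles per sample (assumed)
x = 2.0 * np.cos(2 * np.pi * f_true * t + 0.7) + rng.normal(scale=3.0, size=n)

freqs = np.fft.rfftfreq(8 * n)                    # zero-padded grid for finer resolution
periodogram = np.abs(np.fft.rfft(x, n=8 * n)) ** 2 / n
f_hat = freqs[np.argmax(periodogram[1:]) + 1]     # skip the DC bin
print(f"estimated frequency: {f_hat:.5f} (true {f_true})")
```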
This 2004 introduction to ways of modelling phenomena that occur over time is accessible to anyone with a basic knowledge of statistical ideas. Examples from the physical, biological and social sciences show how the principles can be put into practice; data sets and R code for these are supplied on the author's website.
This mathematically rigorous, practical introduction to the field of asymptotic statistics develops most of the usual topics of an asymptotics course, and also presents recent research topics such as empirical processes, the bootstrap, and semiparametric models.
Point-to-point vs hub-and-spoke. Questions of network design are real and involve many billions of dollars. Yet little is known about optimising design - nearly all work concerns optimising flow assuming a given design. This foundational book tackles optimisation of network structure itself, deriving comprehensible and realistic design principles. With fixed material cost rates, a natural class of models implies the optimality of direct source-destination connections, but considerations of variable load and environmental intrusion then enforce trunking in the optimal design, producing an arterial or hierarchical net. Its determination requires a continuum formulation, which can however be simplified once a discrete structure begins to emerge. Connections are made with the masterly work of Bendsoe and Sigmund on optimal mechanical structures and also with neural, processing and communication networks, including those of the Internet and the World Wide Web. Technical appendices are provided on random graphs and polymer models and on the Klimov index.
This book provides an accessible introduction to measure theory and stochastic calculus, and develops into an excellent users' guide to filtering. A complete resource for engineers, or anyone with an interest in implementation of filtering techniques. Three chapters concentrate on applications from finance, genetics and population modelling. Also includes exercises.
Bootstrap methods enable fairly sophisticated statistical calculations to be done by computer simulation. The range of application is broad: from biology and medicine through to econometrics and finance. Compared with other treatments, applications are thoroughly covered in this 1997 book, with an emphasis on practical implementation. Computer code is available on the supporting website.
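As a bare-bones illustration of the idea (not taken from the book's code), the following sketch bootstraps the standard error and a percentile interval for a sample median; the data here are simulated.

```python
import numpy as np

# Nonparametric bootstrap: resample the data with replacement, recompute the
# statistic, and summarise the resulting bootstrap distribution.
rng = np.random.default_rng(4)
data = rng.exponential(scale=2.0, size=80)     # invented data set

B = 5000
medians = np.array([np.median(rng.choice(data, size=data.size, replace=True))
                    for _ in range(B)])
print("bootstrap SE of median :", medians.std(ddof=1))
print("95% percentile interval:", np.percentile(medians, [2.5, 97.5]))
```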
This detailed introduction to distribution theory uses no measure theory, making it suitable for students in statistics and econometrics and researchers who use statistical methods. Backgrounds in calculus and linear algebra are important, and a course in elementary mathematical analysis is useful, but not required. An appendix summarizes the mathematical definitions and results used in the book.
Network science is one of the fastest growing areas in science and business. This classroom-tested, self-contained book is designed for master's-level courses and provides a rigorous treatment of random graph models for networks, featuring many examples of real-world networks for motivation and numerous exercises to build intuition and experience.
Recent years have seen an explosion in the volume and variety of data collected in scientific disciplines from astronomy to genetics and industrial settings ranging from Amazon to Uber. This graduate text equips readers in statistics, machine learning, and related fields to understand, apply, and adapt modern methods suited to large-scale data.
This classic introduction to probability theory for beginning graduate students is a comprehensive treatment concentrating on the results most useful for applications.
'Big data' poses challenges that require both classical multivariate methods and modern machine-learning techniques. This coherent treatment integrates theory with data analysis, visualisation and interpretation of the analysis. Problems, data sets and MATLAB code complete the package. It is suitable for master's/graduate students in statistics and working scientists in data-rich disciplines.
This accessible but rigorous introduction is written for advanced undergraduates and beginning graduate students in data science, as well as researchers and practitioners. It shows how a statistical framework yields sound estimation, testing and prediction methods, using extensive data examples and providing R code for many methods.
The data sciences are moving fast, and probabilistic methods are both the foundation and a driver. This highly motivated text brings beginners up to speed quickly and provides working data scientists with powerful new tools. Ideal for a basic second course in probability with a view to data science applications, it is also suitable for self-study.