Bayesian Statistical Methods provides data scientists with the foundational and computational tools needed to carry out a Bayesian analysis. The book focuses on Bayesian methods applied routinely in practice, including multiple linear regression, mixed effects models, and generalized linear models (GLMs). The authors include many examples with complete R code and comparisons with analogous frequentist procedures.
In addition to the basic concepts of Bayesian inferential methods, the book covers many general topics:
Advice on selecting prior distributions
Computational methods including Markov chain Monte Carlo (MCMC)
Model comparison and goodness-of-fit measures, including sensitivity to priors
Frequentist properties of Bayesian methods
Case studies covering advanced topics illustrate the flexibility of the Bayesian approach:
Semiparametric regression
Handling of missing data using predictive distributions
Priors for high-dimensional regression models
Computational techniques for large datasets
Spatial data analysis
The advanced topics are presented with sufficient conceptual depth that the reader will be able to carry out such analyses and weigh the relative merits of Bayesian and classical methods. A repository of R code, motivating data sets, and complete data analyses is available on the book's website.
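To give a flavor of this style of analysis, here is a minimal R sketch in the spirit of the book's examples (illustrative only, not taken from the book or its website): a conjugate beta-binomial analysis of a proportion alongside the analogous frequentist estimate. The data and the uniform prior are assumptions chosen for this example.

```r
# Illustrative sketch (not the book's code): beta-binomial analysis of a
# proportion, with the analogous frequentist estimate for comparison.
# The data (12 successes in 20 trials) and the uniform Beta(1, 1) prior
# are assumptions made for this example.
y <- 12; n <- 20

a <- 1; b <- 1                                 # Beta(a, b) prior

# Conjugate update: the posterior is Beta(a + y, b + n - y)
post_a <- a + y
post_b <- b + n - y

post_mean <- post_a / (post_a + post_b)               # posterior mean
cred_int  <- qbeta(c(0.025, 0.975), post_a, post_b)   # 95% credible interval

# Frequentist analogue: sample proportion and exact 95% confidence interval
mle      <- y / n
conf_int <- binom.test(y, n)$conf.int

print(c(posterior_mean = post_mean, sample_proportion = mle))
print(rbind(credible = cred_int, confidence = as.numeric(conf_int)))
```

The two intervals are numerically similar here but carry different interpretations: the credible interval is a probability statement about the parameter given the data, while the confidence interval is a statement about the procedure's long-run coverage.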
Table of Contents
1. Basics of Bayesian Inference
Probability background
Univariate distributions
Discrete distributions
Continuous distributions
Multivariate distributions
Marginal and conditional distributions
Bayes' Rule
Discrete example of Bayes' Rule
Continuous example of Bayes' Rule
Introduction to Bayesian inference
Summarizing the posterior
Point estimation
Univariate posteriors
Multivariate posteriors
The posterior predictive distribution
Exercises
2. From Prior Information to Posterior Inference
Conjugate Priors
Beta-binomial model for a proportion
Poisson-gamma model for a rate
Normal-normal model for a mean
Normal-inverse gamma model for a variance
Natural conjugate priors
Normal-normal model for a mean vector
Normal-inverse Wishart model for a covariance matrix
Mixtures of conjugate priors
Improper Priors
Objective Priors
Jeffreys prior
Reference priors
Maximum entropy priors
Empirical Bayes
Penalized complexity priors
Exercises
3. Computational Approaches
Deterministic methods
Maximum a posteriori estimation
Numerical integration
Bayesian Central Limit Theorem (CLT)
Markov chain Monte Carlo (MCMC) methods
Gibbs sampling
Metropolis-Hastings (MH) sampling
MCMC software options in R
Diagnosing and improving convergence
Selecting initial values
Convergence diagnostics
Improving convergence
Dealing with large datasets
Exercises
4. Linear Models
Analysis of normal means
One-sample/paired analysis
Comparison of two normal means
Linear regression
Jeffreys prior
Gaussian prior
Continuous shrinkage priors
Predictions
Example: Factors that affect a home's microbiome
Generalized linear models
Binary data
Count data
Example: Logistic regression for NBA clutch free throws
Example: Beta regression for microbiome data
Random effects
Flexible linear models
Nonparametric regression
Heteroskedastic models
Non-Gaussian error models
Linear models with correlated data
Exercises
5. Model Selection and Diagnostics
Cross validation
Hypothesis testing and Bayes factors
Stochastic search variable selection
Bayesian model averaging
Model selection criteria
Goodness-of-fit checks
Exercises
6. Case Studies Using Hierarchical Modeling
Overview of hierarchical modeling
Case study: Species distribution mapping via data fusion
Case study: Tyrannosaurid growth curves
Case study: Marathon analysis with missing data
7. Statistical Properties of Bayesian Methods
Decision theory
Frequentist properties
Bias-variance tradeoff
Asymptotics
Simulation studies
Exercises
Appendices
Probability distributions
Univariate discrete
Multivariate discrete
Univariate continuous
Multivariate continuous
List of conjugacy pairs
Derivations
Normal-normal model for a mean
Normal-normal model for a mean vector
Normal-inverse Wishart model for a covariance matrix
Jeffreys prior for a normal model
Jeffreys prior for multiple linear regression
Convergence of the Gibbs sampler
Marginal distribution of a normal mean under Jeffreys prior
Marginal posterior of the regression coefficients under Jeffreys prior
Proof of posterior consistency
Computational algorithms
Integrated nested Laplace approximation (INLA)
Metropolis-adjusted Langevin algorithm
Hamiltonian Monte Carlo (HMC)
Delayed Rejection and Adaptive Metropolis
Slice sampling
Software comparison
Example: Simple linear regression
Example: Random slopes model
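As a small taste of the computational material outlined in Chapter 3, the following sketch (again illustrative, not the book's own code) implements a two-step Gibbs sampler for the mean and variance of a normal sample, assuming a conjugate normal prior for the mean and an inverse-gamma prior for the variance; the simulated data and prior settings are assumptions made for the example.

```r
# Illustrative sketch (not the book's code): Gibbs sampler for the mean and
# variance of a normal sample. Priors: mu ~ N(mu0, tau0^2) and
# sigma^2 ~ InvGamma(a, b); all settings below are assumptions.
set.seed(1)
y <- rnorm(50, mean = 3, sd = 2)   # simulated data
n <- length(y); ybar <- mean(y)

mu0 <- 0; tau0sq <- 100            # prior for mu
a <- 0.01; b <- 0.01               # prior for sigma^2

S <- 5000                          # MCMC iterations
mu <- ybar; sig2 <- var(y)         # initial values
draws <- matrix(NA, S, 2, dimnames = list(NULL, c("mu", "sig2")))

for (s in 1:S) {
  # Full conditional of mu is normal
  v  <- 1 / (1 / tau0sq + n / sig2)
  m  <- v * (mu0 / tau0sq + n * ybar / sig2)
  mu <- rnorm(1, m, sqrt(v))

  # Full conditional of sigma^2 is inverse gamma:
  # draw a gamma variate and invert it
  sig2 <- 1 / rgamma(1, shape = a + n / 2, rate = b + 0.5 * sum((y - mu)^2))

  draws[s, ] <- c(mu, sig2)
}

# Posterior summaries after discarding a burn-in of 1000 draws
apply(draws[-(1:1000), ], 2, quantile, probs = c(0.025, 0.5, 0.975))
```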