![[(Cover) Mathematical Statistics with Applications.jpg]]

## 1 What Is Statistics?

- 1.1 Introduction
- 1.2 Characterizing a Set of Measurements: Graphical Methods
- 1.3 Characterizing a Set of Measurements: Numerical Methods
- 1.4 How Inferences Are Made
- 1.5 Theory and Reality
- 1.6 Summary
- References and Further Readings
- Supplementary Exercises

## 2 Probability

- 2.1 Introduction
- 2.2 Probability and Inference
- 2.3 A Review of Set Notation
- 2.4 A Probabilistic Model for an Experiment: The Discrete Case
- 2.5 Calculating the Probability of an Event: The Sample-Point Method
- 2.6 Tools for Counting Sample Points
- 2.7 Conditional Probability and the Independence of Events
- 2.8 Two Laws of Probability
- 2.9 Calculating the Probability of an Event: The Event-Composition Method
- 2.10 The Law of Total Probability and Bayes’ Rule
- 2.11 Numerical Events and Random Variables
- 2.12 Random Sampling
- 2.13 Summary
- References and Further Readings
- Supplementary Exercises

## 3 Discrete Random Variables and Their Probability Distributions

- 3.1 Basic Definition
- 3.2 The Probability Distribution for a Discrete Random Variable
- 3.3 The Expected Value of a Random Variable or a Function of a Random Variable
- 3.4 The Binomial Probability Distribution
- 3.5 The Geometric Probability Distribution
- 3.6 The Negative Binomial Probability Distribution (Optional)
- 3.7 The Hypergeometric Probability Distribution
- 3.8 The Poisson Probability Distribution
- 3.9 Moments and Moment-Generating Functions
- 3.10 Probability-Generating Functions (Optional)
- 3.11 Tchebysheff’s Theorem
- 3.12 Summary
- References and Further Readings
- Supplementary Exercises

## 4 Continuous Variables and Their Probability Distributions

- 4.1 Introduction
- 4.2 The Probability Distribution for a Continuous Random Variable
- 4.3 Expected Values for Continuous Random Variables
- 4.4 The Uniform Probability Distribution
- 4.5 The Normal Probability Distribution
- 4.6 The Gamma Probability Distribution
- 4.7 The Beta Probability Distribution
- 4.8 Some General Comments
- 4.9 Other Expected Values
- 4.10 Tchebysheff’s Theorem
- 4.11 Expectations of Discontinuous Functions and Mixed Probability Distributions (Optional)
- 4.12 Summary
- References and Further Readings
- Supplementary Exercises

## 5 Multivariate Probability Distributions

- 5.1 Introduction
- 5.2 Bivariate and Multivariate Probability Distributions
- 5.3 Marginal and Conditional Probability Distributions
- 5.4 Independent Random Variables
- 5.5 The Expected Value of a Function of Random Variables
- 5.6 Special Theorems
- 5.7 The Covariance of Two Random Variables
- 5.8 The Expected Value and Variance of Linear Functions of Random Variables
- 5.9 The Multinomial Probability Distribution
- 5.10 The Bivariate Normal Distribution (Optional)
- 5.11 Conditional Expectations
- 5.12 Summary
- References and Further Readings
- Supplementary Exercises

## 6 Functions of Random Variables

- 6.1 Introduction
- 6.2 Finding the Probability Distribution of a Function of Random Variables
- 6.3 The Method of Distribution Functions
- 6.4 The Method of Transformations
- 6.5 The Method of Moment-Generating Functions
- 6.6 Multivariable Transformations Using Jacobians (Optional)
- 6.7 Order Statistics
- 6.8 Summary
- References and Further Readings
- Supplementary Exercises

## 7 Sampling Distributions and the Central Limit Theorem

- 7.1 Introduction
- 7.2 Sampling Distributions Related to the Normal Distribution
- 7.3 The Central Limit Theorem
- 7.4 A Proof of the Central Limit Theorem (Optional)
- 7.5 The Normal Approximation to the Binomial Distribution
- 7.6 Summary
- References and Further Readings
- Supplementary Exercises

## 8 Estimation

- 8.1 Introduction
- 8.2 The Bias and Mean Square Error of Point Estimators
- 8.3 Some Common Unbiased Point Estimators
- 8.4 Evaluating the Goodness of a Point Estimator
- 8.5 Confidence Intervals
- 8.6 Large-Sample Confidence Intervals
- 8.7 Selecting the Sample Size
- 8.8 Small-Sample Confidence Intervals for μ and μ₁ − μ₂
- 8.9 Confidence Intervals for σ²
- 8.10 Summary
- References and Further Readings
- Supplementary Exercises

## 9 Properties of Point Estimators and Methods of Estimation

- 9.1 Introduction
- 9.2 Relative Efficiency
- 9.3 Consistency
- 9.4 Sufficiency
- 9.5 The Rao–Blackwell Theorem and Minimum-Variance Unbiased Estimation
- 9.6 The Method of Moments
- 9.7 The Method of Maximum Likelihood
- 9.8 Some Large-Sample Properties of Maximum-Likelihood Estimators (Optional)
- 9.9 Summary
- References and Further Readings
- Supplementary Exercises

## 10 Hypothesis Testing

- 10.1 Introduction
- 10.2 Elements of a Statistical Test
- 10.3 Common Large-Sample Tests
- 10.4 Calculating Type II Error Probabilities and Finding the Sample Size for Z Tests
- 10.5 Relationships between Hypothesis-Testing Procedures and Confidence Intervals
- 10.6 Attained Significance Levels (p-Values)
- 10.7 Some Comments on the Theory of Hypothesis Testing
- 10.8 Small-Sample Hypothesis Testing for μ and μ₁ − μ₂
- 10.9 Testing Hypotheses Concerning Variances
- 10.10 Power of Tests and the Neyman–Pearson Lemma
- 10.11 Likelihood Ratio Tests
- 10.12 Summary
- References and Further Readings
- Supplementary Exercises

## 11 Linear Models and Estimation by Least Squares

- 11.1 Introduction
- 11.2 Linear Statistical Models
- 11.3 The Method of Least Squares
- 11.4 Properties of Least-Squares Estimators: Simple Linear Regression
- 11.5 Inferences Concerning the Parameters βᵢ
- 11.6 Inferences for Linear Functions (Simple Regression)
- 11.7 Prediction Using Simple Linear Regression
- 11.8 Correlation
- 11.9 Practical Examples
- 11.10 Matrix Formulation
- 11.11 Linear Functions (Multiple Regression)
- 11.12 Inference in Multiple Regression
- 11.13 Prediction in Multiple Regression
- 11.14 Test for H₀: βg+1 = ⋯ = βk = 0
- 11.15 Summary and Concluding Remarks
- References and Further Readings
- Supplementary Exercises

## 12 Considerations in Designing Experiments

- 12.1 Elements Affecting Information in a Sample
- 12.2 Designing Experiments to Increase Accuracy
- 12.3 The Matched-Pairs Experiment
- 12.4 Elementary Experimental Designs
- 12.5 Summary
- References and Further Readings
- Supplementary Exercises

## 13 The Analysis of Variance

- 13.1 Introduction
- 13.2 The Analysis of Variance Procedure
- 13.3 One-Way ANOVA
- 13.4 ANOVA Table for One-Way Layout
- 13.5 Statistical Model for One-Way Layout
- 13.6 Additivity Proof (Optional)
- 13.7 Estimation in One-Way Layout
- 13.8 Randomized Block Design Model
- 13.9 ANOVA for Randomized Block Design
- 13.10 Estimation in Block Design
- 13.11 Selecting Sample Size
- 13.12 Simultaneous Confidence Intervals
- 13.13 ANOVA via Linear Models
- 13.14 Summary
- References and Further Readings
- Supplementary Exercises

## 14 Analysis of Categorical Data

- 14.1 Description of the Experiment
- 14.2 The Chi-Square Test
- 14.3 Goodness-of-Fit Test
- 14.4 Contingency Tables
- 14.5 r × c Tables with Fixed Totals
- 14.6 Other Applications
- 14.7 Summary and Concluding Remarks
- References and Further Readings
- Supplementary Exercises

## 15 Nonparametric Statistics

- 15.1 Introduction
- 15.2 General Two-Sample Shift Model
- 15.3 Sign Test (Matched Pairs)
- 15.4 Wilcoxon Signed-Rank Test
- 15.5 Rank Methods for Two Samples
- 15.6 Mann–Whitney U Test
- 15.7 Kruskal–Wallis Test
- 15.8 Friedman Test
- 15.9 Runs Test
- 15.10 Rank Correlation Coefficient
- 15.11 General Comments
- References and Further Readings
- Supplementary Exercises

## 16 Introduction to Bayesian Methods for Inference

- 16.1 Introduction
- 16.2 Bayesian Priors, Posteriors, and Estimators
- 16.3 Bayesian Credible Intervals
- 16.4 Bayesian Tests of Hypotheses
- 16.5 Summary and Additional Comments
- References and Further Readings

## Appendices

- Appendix 1 Matrices and Other Useful Mathematical Results
- Appendix 2 Common Probability Distributions, Means, Variances, and MGFs
- Appendix 3 Tables
- Appendix R