Financial and Actuarial Statistics: An Introduction

Dale S.Borowiak
University of Akron
Akron, Ohio, U.S.A.

MARCEL DEKKER, INC.    NEW YORK • BASEL

Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress.

ISBN 0-203-91124-5 (Master e-book ISBN)
ISBN 0-8247-4270-2 (Print Edition)

Headquarters
Marcel Dekker, Inc., 270 Madison Avenue, New York, NY 10016
tel: 212-696-9000; fax: 212-685-4540

This edition published in the Taylor & Francis e-Library, 2005. To purchase your own copy of this or any of Taylor & Francis or Routledge's collection of thousands of eBooks please go to www.eBookstore.tandf.co.uk.

Eastern Hemisphere Distribution
Marcel Dekker AG, Hutgasse 4, Postfach 812, CH-4001 Basel, Switzerland
tel: 41-61-260-6300; fax: 41-61-260-6333

World Wide Web
http://www.dekker.com/

The publisher offers discounts on this book when ordered in bulk quantities. For more information, write to Special Sales/Professional Marketing at the headquarters address above.

Copyright © 2003 by Marcel Dekker, Inc. All Rights Reserved.

Neither this book nor any part may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, microfilming, and recording, or by any information storage and retrieval system, without permission in writing from the publisher.

STATISTICS: Textbooks and Monographs

D.B.Owen Founding Editor, 1972–1991

Associate Editors
Statistical Computing/Nonparametric Statistics: Professor William R.Schucany, Southern Methodist University
Probability: Professor Marcel F.Neuts, University of Arizona
Multivariate Analysis: Professor Anant M.Kshirsagar, University of Michigan
Quality Control/Reliability: Professor Edward G.Schilling, Rochester Institute of Technology

Editorial Board
Applied Probability: Dr. Paul R.Garvey, The MITRE Corporation
Economic Statistics: Professor David E.A.Giles, University of Victoria
Experimental Designs: Mr. Thomas B.Barker, Rochester Institute of Technology

Multivariate Analysis: Professor Subir Ghosh, University of California-Riverside
Statistical Distributions: Professor N.Balakrishnan, McMaster University
Statistical Process Improvement: Professor G.Geoffrey Vining, Virginia Polytechnic Institute
Stochastic Processes: Professor V.Lakshmikantham, Florida Institute of Technology
Survey Sampling: Professor Lynne Stokes, Southern Methodist University
Time Series: Sastry G.Pantula, North Carolina State University

1. The Generalized Jackknife Statistic, H.L.Gray and W.R.Schucany
2. Multivariate Analysis, Anant M.Kshirsagar
3. Statistics and Society, Walter T.Federer
4. Multivariate Analysis: A Selected and Abstracted Bibliography, 1957–1972, Kocherlakota Subrahmaniam and Kathleen Subrahmaniam
5. Design of Experiments: A Realistic Approach, Virgil L.Anderson and Robert A.McLean
6. Statistical and Mathematical Aspects of Pollution Problems, John W.Pratt
7. Introduction to Probability and Statistics (in two parts), Part I: Probability; Part II: Statistics, Narayan C.Giri
8. Statistical Theory of the Analysis of Experimental Designs, J.Ogawa
9. Statistical Techniques in Simulation (in two parts), Jack P.C.Kleijnen
10. Data Quality Control and Editing, Joseph I.Naus

11. Cost of Living Index Numbers: Practice, Precision, and Theory, Kali S.Banerjee
12. Weighing Designs: For Chemistry, Medicine, Economics, Operations Research, Statistics, Kali S.Banerjee
13. The Search for Oil: Some Statistical Methods and Techniques, edited by D.B.Owen
14. Sample Size Choice: Charts for Experiments with Linear Models, Robert E.Odeh and Martin Fox
15. Statistical Methods for Engineers and Scientists, Robert M.Bethea, Benjamin S.Duran, and Thomas L.Boullion
16. Statistical Quality Control Methods, Irving W.Burr
17. On the History of Statistics and Probability, edited by D.B.Owen
18. Econometrics, Peter Schmidt
19. Sufficient Statistics: Selected Contributions, Vasant S.Huzurbazar (edited by Anant M.Kshirsagar)
20. Handbook of Statistical Distributions, Jagdish K.Patel, C.H.Kapadia, and D.B.Owen
21. Case Studies in Sample Design, A.C.Rosander
22. Pocket Book of Statistical Tables, compiled by R.E.Odeh, D.B.Owen, Z.W.Birnbaum, and L.Fisher
23. The Information in Contingency Tables, D.V.Gokhale and Solomon Kullback
24. Statistical Analysis of Reliability and Life-Testing Models: Theory and Methods, Lee J.Bain
25. Elementary Statistical Quality Control, Irving W.Burr
26. An Introduction to Probability and Statistics Using BASIC, Richard A.Groeneveld
27. Basic Applied Statistics, B.L.Raktoe and J.J.Hubert
28. A Primer in Probability, Kathleen Subrahmaniam
29. Random Processes: A First Look, R.Syski
30. Regression Methods: A Tool for Data Analysis, Rudolf J.Freund and Paul D.Minton

31. Randomization Tests, Eugene S.Edgington
32. Tables for Normal Tolerance Limits, Sampling Plans and Screening, Robert E.Odeh and D.B.Owen
33. Statistical Computing, William J.Kennedy, Jr., and James E.Gentle
34. Regression Analysis and Its Application: A Data-Oriented Approach, Richard F.Gunst and Robert L.Mason
35. Scientific Strategies to Save Your Life, I.D.J.Bross
36. Statistics in the Pharmaceutical Industry, edited by C.Ralph Buncher and Jia-Yeong Tsay
37. Sampling from a Finite Population, J.Hajek
38. Statistical Modeling Techniques, S.S.Shapiro and A.J.Gross
39. Statistical Theory and Inference in Research, T.A.Bancroft and C.-P.Han
40. Handbook of the Normal Distribution, Jagdish K.Patel and Campbell B.Read
41. Recent Advances in Regression Methods, Hrishikesh D.Vinod and Aman Ullah
42. Acceptance Sampling in Quality Control, Edward G.Schilling
43. The Randomized Clinical Trial and Therapeutic Decisions, edited by Niels Tygstrup, John M.Lachin, and Erik Juhl
44. Regression Analysis of Survival Data in Cancer Chemotherapy, Walter H.Carter, Jr., Galen L.Wampler, and Donald M.Stablein
45. A Course in Linear Models, Anant M.Kshirsagar
46. Clinical Trials: Issues and Approaches, edited by Stanley H.Shapiro and Thomas H.Louis
47. Statistical Analysis of DNA Sequence Data, edited by B.S.Weir
48. Nonlinear Regression Modeling: A Unified Practical Approach, David A.Ratkowsky
49. Attribute Sampling Plans, Tables of Tests and Confidence Limits for Proportions, Robert E.Odeh and D.B.Owen

50. Experimental Design, Statistical Models, and Genetic Statistics, edited by Klaus Hinkelmann
51. Statistical Methods for Cancer Studies, edited by Richard G.Cornell
52. Practical Statistical Sampling for Auditors, Arthur J.Wilburn
53. Statistical Methods for Cancer Studies, edited by Edward J.Wegman and James G.Smith
54. Self-Organizing Methods in Modeling: GMDH Type Algorithms, edited by Stanley J.Farlow
55. Applied Factorial and Fractional Designs, Robert A.McLean and Virgil L.Anderson
56. Design of Experiments: Ranking and Selection, edited by Thomas J.Santner and Ajit C.Tamhane
57. Statistical Methods for Engineers and Scientists: Second Edition, Revised and Expanded, Robert M.Bethea, Benjamin S.Duran, and Thomas L.Boullion
58. Ensemble Modeling: Inference from Small-Scale Properties to Large-Scale Systems, Alan E.Gelfand and Crayton C.Walker
59. Computer Modeling for Business and Industry, Bruce L.Bowerman and Richard T.O'Connell
60. Bayesian Analysis of Linear Models, Lyle D.Broemeling
61. Methodological Issues for Health Care Surveys, Brenda Cox and Steven Cohen
62. Applied Regression Analysis and Experimental Design, Richard J.Brook and Gregory C.Arnold
63. Statpal: A Statistical Package for Microcomputers—PC-DOS Version for the IBM PC and Compatibles, Bruce J.Chalmer and David G.Whitmore
64. Statpal: A Statistical Package for Microcomputers—Apple Version for the II, II+, and IIe, David G.Whitmore and Bruce J.Chalmer
65. Nonparametric Statistical Inference: Second Edition, Revised and Expanded, Jean Dickinson Gibbons
66. Design and Analysis of Experiments, Roger G.Petersen

67. Statistical Methods for Pharmaceutical Research Planning, Sten W.Bergman and John C.Gittins
68. Goodness-of-Fit Techniques, edited by Ralph B.D'Agostino and Michael A.Stephens
69. Statistical Methods in Discrimination Litigation, edited by D.H.Kaye and Mikel Aickin
70. Truncated and Censored Samples from Normal Populations, Helmut Schneider
71. Robust Inference, M.L.Tiku, W.Y.Tan, and N.Balakrishnan
72. Statistical Image Processing and Graphics, edited by Edward J.Wegman and Douglas J.DePriest
73. Assignment Methods in Combinatorial Data Analysis, Lawrence J.Hubert
74. Econometrics and Structural Change, Lyle D.Broemeling and Hiroki Tsurumi
75. Multivariate Interpretation of Clinical Laboratory Data, Adelin Albert and Eugene K.Harris
76. Statistical Tools for Simulation Practitioners, Jack P.C.Kleijnen
77. Randomization Tests: Second Edition, Eugene S.Edgington
78. A Folio of Distributions: A Collection of Theoretical Quantile-Quantile Plots, Edward B.Fowlkes
79. Applied Categorical Data Analysis, Daniel H.Freeman, Jr.
80. Seemingly Unrelated Regression Equations Models: Estimation and Inference, Virendra K.Srivastava and David E.A.Giles
81. Response Surfaces: Designs and Analyses, André I.Khuri and John A.Cornell
82. Nonlinear Parameter Estimation: An Integrated System in BASIC, John C.Nash and Mary Walker-Smith
83. Cancer Modeling, edited by James R.Thompson and Barry W.Brown
84. Mixture Models: Inference and Applications to Clustering, Geoffrey J.McLachlan and Kaye E.Basford
85. Randomized Response: Theory and Techniques, Arijit Chaudhuri and Rahul Mukerjee

86. Biopharmaceutical Statistics for Drug Development, edited by Karl E.Peace
87. Parts per Million Values for Estimating Quality Levels, Robert E.Odeh and D.B.Owen
88. Lognormal Distributions: Theory and Applications, edited by Edwin L.Crow and Kunio Shimizu
89. Properties of Estimators for the Gamma Distribution, K.O.Bowman and L.R.Shenton
90. Spline Smoothing and Nonparametric Regression, Randall L.Eubank
91. Linear Least Squares Computations, R.W.Farebrother
92. Exploring Statistics, Damaraju Raghavarao
93. Applied Time Series Analysis for Business and Economic Forecasting, Sufi M.Nazem
94. Bayesian Analysis of Time Series and Dynamic Models, edited by James C.Spall
95. The Inverse Gaussian Distribution: Theory, Methodology, and Applications, Raj S.Chhikara and J.Leroy Folks
96. Parameter Estimation in Reliability and Life Span Models, A.Clifford Cohen and Betty Jones Whitten
97. Pooled Cross-Sectional and Time Series Data Analysis, Terry E.Dielman
98. Random Processes: A First Look, Second Edition, Revised and Expanded, R.Syski
99. Generalized Poisson Distributions: Properties and Applications, P.C.Consul
100. Nonlinear Lp-Norm Estimation, Rene Gonin and Arthur H.Money
101. Model Discrimination for Nonlinear Regression Models, Dale S.Borowiak
102. Applied Regression Analysis in Econometrics, Howard E.Doran
103. Continued Fractions in Statistical Applications, K.O.Bowman and L.R.Shenton
104. Statistical Methodology in the Pharmaceutical Sciences, Donald A.Berry
105. Experimental Design in Biotechnology, Perry D.Haaland
106. Statistical Issues in Drug Research and Development, edited by Karl E.Peace

107. Handbook of Nonlinear Regression Models, David A.Ratkowsky
108. Robust Regression: Analysis and Applications, edited by Kenneth D.Lawrence and Jeffrey L.Arthur
109. Statistical Design and Analysis of Industrial Experiments, edited by Subir Ghosh
110. U-Statistics: Theory and Practice, A.J.Lee
111. A Primer in Probability: Second Edition, Revised and Expanded, Kathleen Subrahmaniam
112. Data Quality Control: Theory and Pragmatics, edited by Gunar E.Liepins and V.R.R.Uppuluri
113. Engineering Quality by Design: Interpreting the Taguchi Approach, Thomas B.Barker
114. Survivorship Analysis for Clinical Studies, Eugene K.Harris and Adelin Albert
115. Statistical Analysis of Reliability and Life-Testing Models: Second Edition, Lee J.Bain and Max Engelhardt
116. Stochastic Models of Carcinogenesis, Wai-Yuan Tan
117. Statistics and Society: Data Collection and Interpretation, Second Edition, Revised and Expanded, Walter T.Federer
118. Handbook of Sequential Analysis, B.K.Ghosh and P.K.Sen
119. Truncated and Censored Samples: Theory and Applications, A.Clifford Cohen
120. Survey Sampling Principles, E.K.Foreman
121. Applied Engineering Statistics, Robert M.Bethea and R.Russell Rhinehart
122. Sample Size Choice: Charts for Experiments with Linear Models: Second Edition, Robert E.Odeh and Martin Fox
123. Handbook of the Logistic Distribution, edited by N.Balakrishnan
124. Fundamentals of Biostatistical Inference, Chap T.Le
125. Correspondence Analysis Handbook, J.-P.Benzécri

126. Quadratic Forms in Random Variables: Theory and Applications, A.M.Mathai and Serge B.Provost
127. Confidence Intervals on Variance Components, Richard K.Burdick and Franklin A.Graybill
128. Biopharmaceutical Sequential Statistical Applications, edited by Karl E.Peace
129. Item Response Theory: Parameter Estimation Techniques, Frank B.Baker
130. Survey Sampling: Theory and Methods, Arijit Chaudhuri and Horst Stenger
131. Nonparametric Statistical Inference: Third Edition, Revised and Expanded, Jean Dickinson Gibbons and Subhabrata Chakraborti
132. Bivariate Discrete Distributions, Subrahmaniam Kocherlakota and Kathleen Kocherlakota
133. Design and Analysis of Bioavailability and Bioequivalence Studies, Shein-Chung Chow and Jen-pei Liu
134. Multiple Comparisons, Selection, and Applications in Biometry, edited by Fred M.Hoppe
135. Cross-Over Experiments: Design, Analysis, and Application, David A.Ratkowsky, Marc A.Evans, and J.Richard Alldredge
136. Introduction to Probability and Statistics: Second Edition, Revised and Expanded, Narayan C.Giri
137. Applied Analysis of Variance in Behavioral Science, edited by Lynne K.Edwards
138. Drug Safety Assessment in Clinical Trials, edited by Gene S.Gilbert
139. Design of Experiments: A No-Name Approach, Thomas J.Lorenzen and Virgil L.Anderson
140. Statistics in the Pharmaceutical Industry: Second Edition, Revised and Expanded, edited by C.Ralph Buncher and Jia-Yeong Tsay
141. Advanced Linear Models: Theory and Applications, Song-Gui Wang and Shein-Chung Chow
142. Multistage Selection and Ranking Procedures: Second-Order Asymptotics, Nitis Mukhopadhyay and Tumulesh K.S.Solanky

143. Statistical Design and Analysis in Pharmaceutical Science: Validation, Process Controls, and Stability, Shein-Chung Chow and Jen-pei Liu
144. Statistical Methods for Engineers and Scientists: Third Edition, Revised and Expanded, Robert M.Bethea, Benjamin S.Duran, and Thomas L.Boullion
145. Growth Curves, Anant M.Kshirsagar and William Boyce Smith
146. Statistical Bases of Reference Values in Laboratory Medicine, Eugene K.Harris and James C.Boyd
147. Randomization Tests: Third Edition, Revised and Expanded, Eugene S.Edgington
148. Practical Sampling Techniques: Second Edition, Revised and Expanded, Ranjan K.Som
149. Multivariate Statistical Analysis, Narayan C.Giri
150. Handbook of the Normal Distribution: Second Edition, Revised and Expanded, Jagdish K.Patel and Campbell B.Read
151. Bayesian Biostatistics, edited by Donald A.Berry and Dalene K.Stangl
152. Response Surfaces: Designs and Analyses, Second Edition, Revised and Expanded, André I.Khuri and John A.Cornell
153. Statistics of Quality, edited by Subir Ghosh, William R.Schucany, and William B.Smith
154. Linear and Nonlinear Models for the Analysis of Repeated Measurements, Edward F.Vonesh and Vernon M.Chinchilli
155. Handbook of Applied Economic Statistics, Aman Ullah and David E.A.Giles
156. Improving Efficiency by Shrinkage: The James-Stein and Ridge Regression Estimators, Marvin H.J.Gruber
157. Nonparametric Regression and Spline Smoothing: Second Edition, Randall L.Eubank
158. Asymptotics, Nonparametrics, and Time Series, edited by Subir Ghosh
159. Multivariate Analysis, Design of Experiments, and Survey Sampling, edited by Subir Ghosh

160. Statistical Process Monitoring and Control, edited by Sung H.Park and G.Geoffrey Vining
161. Statistics for the 21st Century: Methodologies for Applications of the Future, edited by C.R.Rao and Gábor J.Székely
162. Probability and Statistical Inference, Nitis Mukhopadhyay
163. Handbook of Stochastic Analysis and Applications, edited by D.Kannan and V.Lakshmikantham
164. Testing for Normality, Henry C.Thode, Jr.
165. Handbook of Applied Econometrics and Statistical Inference, edited by Aman Ullah, Alan T.K.Wan, and Anoop Chaturvedi
166. Visualizing Statistical Models and Concepts, R.W.Farebrother
167. Financial and Actuarial Statistics: An Introduction, Dale S.Borowiak

Additional Volumes in Preparation

For my loving wife, I love you always, thanks for the trip!

Preface

In the fields of financial and actuarial modeling, modern statistical techniques are playing an increasingly prominent role. Statistical analysis is now required in areas such as investment pricing models, options pricing, pension plan structuring and advanced actuarial modeling. After teaching two actuarial science courses I realized that both students and investigators need a strong statistical background in order to keep up with modeling advances in these fields. This book approaches both financial and actuarial modeling from a statistical point of view. The goal is to supplement the texts and writings that exist in actuarial science with statistical background, and to present modern statistical techniques such as saddlepoint approximations, scenario and simulation techniques and stochastic investment pricing models.

The aim of this book is to provide a strong statistical background for both beginning students and experienced practitioners. Beginning students will be introduced to topics in statistics and in financial and actuarial modeling from a unified point of view. A thorough introduction to financial and actuarial models, such as investment pricing models, discrete and continuous insurance and annuity models, pension plan modeling and stochastic surplus models, is given from a statistical science approach. Statistical techniques associated with these models, such as risk estimation, percentile estimation and prediction intervals, are discussed. Advanced topics related to statistical analysis, including single decrement modeling, saddlepoint approximations for aggregate models and resampling techniques, are discussed and applied to financial and actuarial models.

The audience for this book is made up of two sectors. Actuarial science students and financial investigators, both beginning and advanced, who desire thorough discussions of basic statistical concepts and techniques will benefit from the approach of this text, which introduces statistical principles in the context of financial and actuarial topics. This approach allows the reader to develop knowledge in both areas and understand the existing connections. Advanced readers, whether students in college undergraduate or graduate programs in mathematics, economics or statistics, or professionals advancing in financial and actuarial careers, will find the in-depth discussions of advanced modeling topics useful. Research discussions include approximations to aggregate sums, single decrement modeling, statistical analysis of investment pricing models and simulation approaches to stochastic status models.

The approach this text takes is unique in that it presents a unified structure for both financial and actuarial modeling. This is accomplished by applying the actuarial concept of financial actions being based on the survival or failure of predefined conditions, referred to as a status. Applying either a deterministic or stochastic feature to the general

status unifies financial and actuarial models into one structure. Basic statistical topics, such as point estimation, confidence intervals and prediction intervals, are discussed and techniques are developed for these models. The deterministic setting includes basic interest, annuity, investment pricing and aggregate insurance models. The stochastic status models include life insurance, life annuity, option pricing models and pension plans.

In Chapter 1 basic statistical concepts and functions, including an introduction to probability, random variables and their distributions, expectations, moment generating functions, estimation, aggregate sums of random variables, compound random variables, regression models and an introduction to autoregressive modeling, are presented in the context of financial and actuarial modeling. Financial computations such as interest and annuities, in both discrete and continuous modeling settings, are presented in Chapter 2. The concept of deterministic status models is introduced in Chapter 3; the basic loss model, along with statistical evaluation criteria, is presented and applied to single risk models including investment and option pricing models, collective aggregate models and stochastic surplus models. In Chapter 4 the discrete and continuous future lifetime random variables, along with the force of mortality, are introduced; in particular, multiple future lifetime and decrement models are discussed. In Chapter 5, through the concept of group survivorship modeling, future lifetime random variables are used to construct life models and life tables. Ultimate, select, multiple decrement and single decrement tables, along with statistical measurements, are presented. Stochastic status models, including actuarial life insurance and annuity models and their applications, make up the material of Chapter 6. Risk and percentile premiums, reserve calculations and common notations are discussed, along with more advanced topics such as computational relationships between models, general time period models, multiple decrement computations, pension plan models and models that include expenses. In Chapter 7 modern scenario and simulation techniques, along with associated statistical inference, are introduced and applied to both deterministic and stochastic status models; in particular, collective aggregate models, investment pricing models and stochastic surplus models are evaluated using simulation techniques. In Chapter 8 introductions to advanced statistical topics, such as mortality adjustment factors for increased risk cases and mortality trend modeling, are presented.

I would like to thank the people at Marcel Dekker Inc., in particular Maria Allegra, for their assistance in the production of this book.

Dale S.Borowiak

Contents

Preface
1 Statistical Concepts
2 Financial Computational Models
3 Deterministic Status Models
4 Future Lifetime Random Variable
5 Future Lifetime Models and Tables
6 Stochastic Status Models
7 Scenario and Simulation Testing
8 Further Statistical Considerations
Appendix: Standard Normal Tables
References
Index


1 Statistical Concepts

The modeling of financial and actuarial systems starts with the mathematical and statistical concept of variables. There are two types of variables used in financial and actuarial statistical modeling, referred to as non-stochastic or deterministic variables and stochastic variables. Stochastic variables are proper random variables that have an associated probability structure. Non-stochastic variables are deterministic in nature, without a probability attachment. Interest and annuity calculations based on fixed time periods are examples of non-stochastic variables. Examples of stochastic variables are the prices of stocks that are bought or sold, and insurance policy and annuity computations where payments depend upon stochastic events, such as accidents or deaths. The evaluation of stochastic variables requires the use of basic probability and statistical tools. This chapter presents the basic statistical concepts and computations that are utilized in the analysis of such data.

For the most part the concepts and techniques presented in this chapter are based on the frequentist approach to statistics, as opposed to the Bayesian perspective, and are limited to those that are required later in the analysis of financial and actuarial models. This is due in part to the lack of Bayesian model development and application in this area. The law of large numbers is relied upon to add validity to the frequentist probabilistic approach. Basic theories and concepts are applied in a unifying approach to both financial and actuarial modeling.

The basis of statistical evaluations and inference is probability. Therefore, we start this chapter with a brief introduction to probability and then proceed to the various statistical components. Standard statistical concepts such as discrete and continuous random variables, probability distributions, moment generating functions and estimation are discussed. Further, other topics, such as approximating the aggregate sum of random variables, regression modeling and an introduction to stochastic processes through autocorrelation modeling, are presented.

1.1 Probability

In this section we present a brief introduction to some basic ideas and concepts in probability. There are many texts in probability that give a broader background, but a review is useful since the basis of statistical inference is contained in probability theory. The results discussed are used either directly in the later part of this book or give insight


useful to later topics. Some of these topics may be review for the reader, and we refer to Larson (1995) and Ross (2002) for further background in basic probability.

For a random process let the set of all possible outcomes comprise the sample space, denoted Ω. Subsets of the sample space, consisting of some or all of the possible outcomes, are called events. Primarily we are interested in assessing the likelihood of events occurring. Basic set operations are defined on the events associated with a sample space. For events A and B the union of A and B, A∪B, is comprised of all outcomes in A, in B, or common to both A and B. The intersection of two events A and B is the set of all outcomes common to both A and B and is denoted A∩B. The complement of event A is the event that A does not occur and is denoted A^c.

In general, we wish to quantify the likelihood of particular events taking place. This is accomplished by defining a stochastic or probability structure over the set of events. For any event A, the probability of A, measuring the likelihood of occurrence, is denoted P(A). Taking an empirical approach, if the random process is observed repeatedly, then as the number of trials or samples increases the proportion of times A occurs within the trials approaches the probability of A, or P(A). This is called the long run relative frequency of event A. In various settings mathematical models are developed to determine this probability function.

There are certain mathematical properties that every probability function, more formally referred to as a probability measure, follows. A probability measure, P, is a real valued set function where

(i) P(A) ≥ 0 for all events A.
(ii) P(Ω) = 1.
(iii) If A1, A2,… is a collection of disjoint sets, i.e. Ai∩Aj = ∅ for i ≠ j, then

P(A1∪A2∪…) = P(A1) + P(A2) + ….  (1.1.1)

Conditions (i), (ii) and (iii) are called the axioms of probability, and (iii) is referred to as the countable additive property of a probability measure. These conditions form the basic structure of a probability system. In practice probability measures are constructed in two ways. The first is based on assumed functional structures derived from physical laws and is mathematically constructed. The second, more statistical in nature, relies on observed or empirical data. Both methods are used in financial and actuarial modeling. An illustrative example is now given.

Ex. 1.1.1. A survey of 125 people in a particular age group, or strata, is taken. Let J denote the number of future years an individual holds a particular stock. Here J is an integer future lifetime and is the number of full years a person retains possession of the stock. From the survey data a table of frequencies, given by f, for values of J is constructed as Table 1.1.1. The relative frequency concept is used to estimate probabilities, treating each surveyed individual as equally likely. For example the probability a person sells the stock in less than 1 year is the proportion P(J=0) = 2/125 = .016. The probability a stock is held for 5 or more years is P(J≥5) = 100/125 = .8.


The simple concepts presented in Ex. 1.1.1 introduce basic statistical ideas and notations, such as integer years, used in the development of financial and actuarial models. In later chapters model evaluation and statistical inferences are developed based on these basic ideas. Further, the concept of conditioning on observed outcomes and conditional probabilities is central to financial and actuarial calculations. For two events A and B, the conditional probability of A given that B has occurred is defined by

P(A|B) = P(A∩B)/P(B)  (1.1.2)

provided P(B) is not zero. This probability structure satisfies the previously

Table 1.1.1 Survey of Future Holding Lifetimes of a Stock

J    0    1    2    3    4    5 or more
f    2    3    4    6    10   100

stated probability axioms. The previous example is now extended to demonstrate conditioning on specified events.

Ex. 1.1.2. Consider the conditions of Ex. 1.1.1. Given that an individual holds the stock through the first year, the conditional probability of selling the stock in subsequent years is found using (1.1.2). For j ≥ 1,

P(J≥j|J≥1) = P(J≥j)/P(J≥1).  (1.1.3)

For example, the conditional probability of retaining possession of the stock for at least 5 years is P(J≥5|J≥1) = (100/125)/(123/125) = 100/123 = .8130.

The conditional probability concept can be utilized to compute joint probabilities corresponding to many events. For a collection of events A1, A2,…, An the probability of all Ai, 1 ≤ i ≤ n, occurring is

P(A1∩A2∩…∩An) = P(A1)P(A2|A1)…P(An|A1∩…∩An−1).  (1.1.4)

The idea of independence plays a central role in many applications. A collection of events A1, A2,…, An are completely independent, or just independent, if

P(A1∩A2∩…∩An) = P(A1)P(A2)…P(An).  (1.1.5)

It is a mathematical fact that events can be "pair-wise" independent but not completely independent. In practice formulas for the analysis of financial and actuarial actions are based on the ideas of conditioning and independence. A clear understanding of these concepts aids in the mastery of future statistical, financial and actuarial topics.
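The relative-frequency and conditioning calculations above are easy to mechanize. The following is a minimal Python sketch, not from the text (the variable names are ours), that reproduces the computations of Ex. 1.1.1 and Ex. 1.1.2 from the Table 1.1.1 frequencies:

```python
# Relative-frequency calculations for Table 1.1.1 (Ex. 1.1.1 and Ex. 1.1.2).
freq = {0: 2, 1: 3, 2: 4, 3: 6, 4: 10, 5: 100}  # key 5 stands for "5 or more"
n = sum(freq.values())                           # 125 people surveyed

P_J_ge_5 = freq[5] / n            # P(J >= 5) = 100/125 = .8
P_J_ge_1 = (n - freq[0]) / n      # P(J >= 1) = 123/125

# Conditional probability (1.1.2): P(J >= 5 | J >= 1)
print(P_J_ge_5)                   # 0.8
print(P_J_ge_5 / P_J_ge_1)        # 0.8130...
```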


There are some properties and formulas that follow from the axioms of probability. Two such properties are used in the application and development of statistical models. For event A let the complement be A^c. Then

P(A^c) = 1 − P(A).  (1.1.6)

Also, for two events A and B the probability of the union can be written as

P(A∪B) = P(A) + P(B) − P(A∩B).  (1.1.7)

It is sometimes useful to view these probability rules in terms of graphs of the sample space and the respective events, referred to as Venn diagrams. The Venn diagrams corresponding to rules (1.1.6) and (1.1.7) are given in Fig. 1.1.1.

Fig. 1.1.1 Probability Rules (1.1.6) and (1.1.7).

In Prob 1.1 the reader is asked to verify rules (1.1.6) and (1.1.7) using probability axiom property (iii), (1.1.1) and Fig. 1.1.1, by utilizing disjoint sets. These formulas have many applications, and two examples that apply these computations, as well as introduce two important actuarial multiple life settings, now follow.

Ex. 1.1.3. Two people, ages x and y, take out a financial contract that pays a benefit predicated on the survival of the two people, referred to as (x) and (y), for an additional n years. Let the events be A = {(x) lives past age x+n} and B = {(y) lives past age y+n}. We consider two different types of status where the events A and B are considered independent.

i) Joint Life Status requires both people to survive an additional n years. The probability of paying the benefit, using (1.1.5), is P(A∩B) = P(A)P(B).

ii) Last Survivorship Status requires at least one person to survive an additional n years. Using (1.1.5) and (1.1.7) the probability of paying the benefit is P(A∪B) = P(A) + P(B) − P(A)P(B).

In particular let the frequencies presented in Table 1.1.1 hold, where the two future lifetimes are given by J1 and J2. Thus, for any individual P(J≥3) = 116/125 = .928. From i) the probability both survive an additional 3 years is

P(A∩B) = (.928)(.928) = .8612.

From (1.1.7) the probability at least one of the two survives an additional 3 years is

P(A∪B) = .928 + .928 − .8612 = .9948.
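A short Python sketch, again ours rather than the text's, reproduces both status computations of Ex. 1.1.3 under the stated independence assumption:

```python
# Joint life and last-survivor probabilities of Ex. 1.1.3, assuming
# independent future lifetimes with P(J >= 3) = .928 for each life.
p = 116 / 125                    # P(J >= 3) for a single life

joint = p * p                    # both survive: P(A ∩ B) = P(A)P(B)
last = p + p - joint             # at least one survives, by rule (1.1.7)

print(round(joint, 4))           # 0.8612
print(round(last, 4))            # 0.9948
```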


These basic probabilistic concepts easily extend to more than two future lifetime variables.

Ex. 1.1.4. An insurance company issues insurance policies to a group of individuals. Over a short period, such as a year, the probability of a claim for any policy is .1. The probability of no claim in the first 3 years is found assuming independence and applying (1.1.5):

(.9)³ = .729.

Also, using (1.1.6), the probability of at least one claim in 3 years is

1 − .729 = .271.

This insurance setting is referred to as short term insurance, where the force of interest, introduced in Sec. 2.1, can be ignored. The same basic formulas appear in a variety of settings, and it is helpful to understand the basic structures. For conceptual understanding and applicability it is sometimes helpful to see formulas in a variety of financial and actuarial modeling settings. We now turn our attention to topics in both applied and theoretical statistics.

1.2 Random Variables

In financial and actuarial modeling there are two types of variables, stochastic and non-stochastic. Non-stochastic variables are completely deterministic, lacking any stochastic structure. Examples of these variables include the fixed benefit in a life insurance policy, the length of time in a mortgage loan or the amount of a fixed interest mortgage payment. Random variables are stochastic variables that possess some probabilistic or stochastic component. Random variables include the lifetime of a particular health status, the value of a stock after one year or the amount of a health insurance claim. In general notation, random variables are denoted by uppercase letters, such as X or T, and fixed constants take the form of lower case letters, like x and t. There are three types of random variables, characterized by the possible values they can assume. Along with the typical discrete and continuous random variables there are combinations of discrete and continuous variables, referred to as mixed random variables. For a discussion of random variables and corresponding properties we refer to Hogg and Tanis (2001, Ch 3 and Ch 4) and Rohatgi (1976, Ch 2).

In financial and actuarial modeling the time until a financial action occurs may be associated with a probability structure and therefore be stochastic. In actuarial science, conditions prior to initiation of the financial action are referred to as the holding of a status. The action is initiated when the status changes or fails to hold. We use this general concept of a status, along with its change or failure, to unite financial and actuarial modeling in a common framework. For example, with a life insurance policy the status is the act of the person surviving. During the person's lifetime the status, defined as survival, is said to hold. After the death of the person the status is said to fail and an


insurance benefit is paid. Similarly, in finance an investor may retain a particular stock, thereby keeping ownership or the "status" of the stock the same, until the price of the stock reaches a particular level. Upon reaching the desired price the status, or ownership of the stock, changes and the status is said to fail. In general the specific conditions that dictate one or more financial actions are referred to as a status, and the lifetime of a status is a random variable, which we denote by T.

1.2.1 Discrete Random Variables

A discrete random variable, X, can take on a countable number of values or outcomes. Associated with each outcome is a corresponding probability. The collection of these probabilities comprises the probability density function, pdf, denoted

f(x) = P(X = x)  (1.2.1)

for possible outcome values x. The support of f(x), denoted by S, is the domain set on which f(x) is positive. From the association between the random variable and the probability axioms (i), (ii) and (iii) we see that f(x) ≥ 0 for all x in S and the sum of f(x) over all elements in S is one. We note that in some texts the discrete probability density function is referred to as a probability mass function.

Fig. 1.2.1 Discrete pdf.

In many settings the analysis of a financial or actuarial model depends on the integer valued year a status fails, denoted by J. For example, an insurance policy may pay a fixed benefit at the end of the year of death. The variable J is the year of death as measured from the date the policy was issued, so that T = 0 at issue and J = 1, 2,…. We follow with examples in the context of life insurance that demonstrate these concepts and introduce standard probability measures and their corresponding pdfs.

Ex. 1.2.1. In the case of the death of an insurance policyholder within five years of the initiation of the policy, a fixed amount or benefit b is paid at the end of the year of death. If the policyholder survives five years the amount b is immediately paid. Let J denote the year a payment is made, so that J = 1, 2,…, 5 and the support is S = {1, 2, 3, 4, 5}. Let the probability of death in a given year be q and the probability of survival be p, so that 0 ≤ p ≤ 1 and q = 1 − p. The probability structure is contained in the pdf of J, which for demonstrational purposes takes the geometric random variable form, given by


f(j) = q p^{j−1} for j = 1, 2, 3, 4 and f(5) = p⁴.  (1.2.2)
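As a quick sanity check on (1.2.2) as reconstructed above, the following illustrative Python snippet (with an arbitrary choice of p, our own) verifies that the probabilities sum to one:

```python
# Verify the pdf (1.2.2) sums to one; p is an arbitrary illustrative choice.
p = 0.9
q = 1 - p

f = {j: q * p ** (j - 1) for j in range(1, 5)}  # payment at end of year j = 1,...,4
f[5] = p ** 4                                    # payment at year 5 otherwise

print(sum(f.values()))  # ≈ 1.0
```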

The pdf (1.2.2) can be used to assess the expected cost and statistical aspects of the policy. The graph of the pdf (1.2.2) is given in Fig. 1.2.1 and is typical of a discrete pdf, where the probabilities are represented as spikes at the support points of the pdf. Later in this text the expected cost is computed using the probability structure defined by the pdf along with the time value of money.

Ex. 1.2.2. Over a short time period a collection of m insurance policies is considered. For policy i, 1 ≤ i ≤ m, let the random variable Xi = 1 if a claim is made and Xi = 0 in the event of no claim. Also, for each i let P(Xi=1) = q and P(Xi=0) = p = 1 − q for 0 < q < 1.

Ex. 1.2.3. Let N be a Poisson random variable with parameter λ, so that f(n) = e^{−λ} λ^n/n! for n = 0, 1,….

1.2.2 Continuous Random Variables

A continuous random variable, X, takes on an uncountable number of possible values. Its pdf f(x) is a nonnegative function on the support S with total integral one, probabilities are computed as integrals of f(x), and the distribution function, df, is F(x) = P(X ≤ x).

Ex. 1.2.4. Let X be uniform on (a, b), with constant pdf f(x) = 1/(b−a) for a < x < b.

Ex. 1.2.5. Let X have an exponential pdf, f(x) = (1/θ) exp(−x/θ) for x > 0, where θ > 0. The exponential distribution has many applications (see Walpole, Myers and Myers (1998, p. 166)) and is frequently used in survival and reliability modeling.

Ex. 1.2.6. Let the future lifetime of a status, T, follow a Gaussian or normal distribution with mean µ and standard deviation σ, denoted by T ~ n(µ, σ²). The pdf associated with T is given by

f(t) = (2πσ²)^{−1/2} exp(−(t−µ)²/(2σ²))


where the support is S = (−∞, ∞). This pdf is symmetric about the mean µ, and to compute probabilities the transformation to the standard normal random variable is required. The standard normal random variable, denoted Z, is a normal random variable with mean 0 and variance 1. The Z-random variable associated with T = t is given by the transformation Z = (T−µ)/σ. The df for T is

F(t) = Φ((t−µ)/σ)  (1.2.11)

for any real valued t, where Φ is the df of the standard normal random variable. The evaluation of Φ in (1.2.11) is achieved using numerical approximation methods. Tabled values of Φ(t) for fixed t, such as given in Appendices A1 and A2, or computer packages are utilized to compute normal probabilities. For example let lifetime T be a normal random variable with parameters µ = 65 and σ = 10. The probability the age of an individual exceeds 80 is computed using (1.2.11) and Appendix A2 as

P(T > 80) = 1 − Φ((80−65)/10) = 1 − Φ(1.5) = 1 − .9332 = .0668.

Further, the probability an individual dies between ages 70 and 90 is found as

P(70 < T < 90) = Φ(2.5) − Φ(.5) = .9938 − .6915 = .3023.
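In place of the printed tables, Φ can be evaluated with standard software. The sketch below uses Python's math.erf via the identity Φ(z) = (1 + erf(z/√2))/2 to reproduce the two probabilities just computed; the function name Phi is our own:

```python
from math import erf, sqrt

def Phi(z):
    # Standard normal df via the error function: Phi(z) = (1 + erf(z/√2))/2.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu, sigma = 65.0, 10.0
print(1.0 - Phi((80 - mu) / sigma))                      # P(T > 80) ≈ .0668
print(Phi((90 - mu) / sigma) - Phi((70 - mu) / sigma))   # P(70 < T < 90) ≈ .3023
```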

We remark that the continuous nature of the above random variable, where the probability of attaining an exact value is negligible, is utilized.

1.2.3 Mixed Random Variables

Mixed random variables are a combination of both discrete and continuous random variables. If X is a mixed random variable the support is partitioned into two disjoint parts. One part is discrete in nature while the other part of the support is continuous. Applications of mixed-type random variables are rare in many fields, but this type of random variable is particularly useful in financial and actuarial modeling. Many authors attack mixed random variable problems in the context of statistical conditioning, while we present a straightforward approach. The simple example that follows demonstrates the versatility of this variable.

Ex. 1.2.7. Here an insurance policy pays claims between $100 and $500. The amount of the claim, X, is defined as a mixed random variable. The discrete part assigns probability .5 to X = 0 and probability .2 to each of X = $100 and X = $500. The continuous part is defined by a constant (or uniform) pdf over ($100, $500) with value .00025. Hence, the pdf is defined as

f(0) = .5, f(100) = .2, f(500) = .2 and f(x) = .00025 for 100 < x < 500.


The support of f(x) is decomposed into S = S1 ∪ S2, where S1 = {0, 100, 500} and S2 = (100, 500). Probabilities are computed using the procedures for discrete and continuous random variables. For example the condition that the total probability associated with X is one implies

.5 + .2 + .2 + .00025(500 − 100) = .9 + .1 = 1.

Also, the probability that the claim is at most $250 is the combination of discrete and continuous type calculations

P(X ≤ 250) = .5 + .2 + .00025(250 − 100) = .7375.
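The two computations of Ex. 1.2.7 can be checked with a few lines of Python; the dictionary encoding of the discrete part is our own device:

```python
# Mixed pdf of Ex. 1.2.7: point masses at 0, 100 and 500 plus a uniform
# density of .00025 on (100, 500).
mass = {0: 0.5, 100: 0.2, 500: 0.2}
density, lo, hi = 0.00025, 100, 500

print(sum(mass.values()) + density * (hi - lo))    # total probability = 1.0

# P(X <= 250): masses at 0 and 100 plus the density over (100, 250]
print(mass[0] + mass[100] + density * (250 - lo))  # 0.7375
```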

This example, although simple, demonstrates the possible types of mixed discrete and continuous random variables that can be constructed. There is a variation of the mixed-type random variable that utilizes both discrete and continuous random variables in defining the pdf. This plays a part in insurance modeling, and an example of this type of random variable structure follows.

Ex. 1.2.8. A one-year insurance policy pays a claim or benefit denoted by B in case of an accident. The probability of a claim in the first year is given by q. Given there is a claim, let B be a continuous random variable with pdf f(b). The overall claim variable can be written as X = IB, where the indicator function I = 1 if there is a claim and I = 0 if there is no claim. The pdf of X, approached from a conditioning point of view as introduced in (1.1.2), is

f(0) = 1 − q and f(x) = q f(b) for x = b > 0.  (1.2.12)

The probability the claim is greater than c > 0 is P(X > c) = qP(B > c). This situation of single insurance policies has many practical applications. One is the extension of models of the form (1.2.12) to a set or portfolio of many policies. These are referred to as collective risk models, and they are discussed in Sec. 3.4. Further, over longer periods of time adjustments must be made to account for the effect or force of interest. Much statistical work concerns the estimation of collective stochastic structures.

As we have seen in some of the examples, the pdf f(x) and the df F(x) may be functions of one or more parameters. In practice the experimenter may estimate the


unknown parameters from empirical data. Probabilistic and statistical aspects of such estimation must be accounted for in financial and actuarial models.

1.3 Expectations

The propensity of a random variable, or a function of a random variable, to take on particular outcomes is often important in financial and actuarial modeling. The expectation is one method of predicting and assessing stochastic outcomes of random variables. The expected value of a function g(X), if it exists, is denoted E{g(X)}. Often the expected values of properly selected functions are used to characterize the probability distribution associated with one or more random variables. The three types of random variables, discrete, continuous and mixed, produce different formulas for expected values. First, if X is discrete with support Sd and pdf f(x),

E{g(X)} = Σ_{Sd} g(x) f(x).  (1.3.1)

Second, if X is continuous and the pdf f(x) has support Sc,

E{g(X)} = ∫_{Sc} g(x) f(x) dx.  (1.3.2)

In the last case, if X is a mixed random variable the expected value is a combination of (1.3.1) and (1.3.2). If the support is S = Sd ∪ S c then

E{g(X)} = Σ_{Sd} g(x) f(x) + ∫_{Sc} g(x) f(x) dx.  (1.3.3)

In financial and actuarial modeling expectations play a central role. The central core of financial and actuarial risk analysis is the computation of expectations of properly chosen random variables. There are a few standard expectations that play an important role in analyzing data. Employing the identity function, g(x) = x, yields the expected value or mean of X, given by

µ = E{X}.  (1.3.4)

The mean of X is a weighted average, with respect to the probability structure, over the support and is a measure of the center of the pdf. If g(x) = X^r, for positive integer r, then the expected value E{X^r} is referred to as the rth moment, or a moment of order r, of X. It is a mathematical property that if moments of order r exist then moments of order s exist for s ≤ r. Central moments of order r, for positive integer r, are defined by E{(X−µ)^r}. The variance of X is the central moment with r = 2 and is denoted by Var{X} = σ²; after simplification the variance becomes

σ² = E{X²} − µ².  (1.3.5)


We note that existence of the second moment implies existence of the variance. The standard deviation of X is σ = (σ²)^{1/2}. The variance and standard deviation of a random variable measure the dispersion or variability associated with the random variable and the associated pdf. The discrete case computation is demonstrated in the next example.

Ex. 1.3.1. Let N be Poisson with parameter λ as described in Ex. 1.2.3. The mean of N is found using a Taylor series (see Prob. 1.3):

E{N} = Σ_{n≥0} n e^{−λ} λ^n/n! = λ.

In a similar manner E{N²} = λ² + λ, so that from (1.3.5), Var{N} = λ. Hence, for the Poisson random variable the mean and the variance are equivalent and completely determine the distribution.
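The Poisson identity mean = variance can be checked numerically. The following Python sketch, with an arbitrary illustrative value of λ (our choice), builds the pdf recursively and computes the first two moments by truncated sums:

```python
from math import exp

lam = 2.5                       # an illustrative Poisson mean (our choice)
N = 100                         # truncation point; the tail is negligible

f = [exp(-lam)]                 # f(0) = e^{-lam}
for n in range(1, N):
    f.append(f[-1] * lam / n)   # recursion f(n) = f(n-1) * lam / n

mean = sum(n * p for n, p in enumerate(f))
var = sum(n * n * p for n, p in enumerate(f)) - mean ** 2
print(round(mean, 6), round(var, 6))   # both ≈ 2.5
```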

From (1.3.5), the variance of X simplifies to (1.3.8) The special case of the uniform distribution over the unit interval has many applications and takes a=0 and b=1 and from (1.3.7) and (1.3.8) produces moments µ=1/2 and σ2=1/12.


Ex. 1.3.3. Let X have the exponential pdf given in Ex. 1.2.5. To find the mean of X we use integration by parts to find

µ = ∫_0^∞ x (1/θ) exp(−x/θ) dx = θ.

Using integration by parts twice (see Prob. 1.6) we find

E{X²} = ∫_0^∞ x² (1/θ) exp(−x/θ) dx = 2θ².

Hence, from (1.3.5),

σ² = 2θ² − θ² = θ².

In fact for positive integer r the general moment formula is given by E{X^r} = r! θ^r. Applying the general moment formula, the skewness and kurtosis, defined by (1.3.6), can be computed (see Prob. 1.8).

Ex. 1.3.4. In this example we consider the mixed variable case of Ex. 1.2.7. The supports for the discrete and continuous parts are defined by S1 = {0, 100, 500} and S2 = (100, 500), respectively. From (1.3.3) the mean takes the form

E{X} = 0(.5) + 100(.2) + 500(.2) + ∫_{100}^{500} x(.00025) dx = 20 + 100 + 30 = $150.
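Formula (1.3.3) for Ex. 1.3.4 can be verified directly, since the continuous part integrates in closed form; a brief Python check (our own) follows:

```python
# Mean of the mixed claim variable of Ex. 1.3.4 via (1.3.3).
mass = {0: 0.5, 100: 0.2, 500: 0.2}
density, lo, hi = 0.00025, 100, 500

discrete_part = sum(x * p for x, p in mass.items())   # 0 + 20 + 100 = 120
continuous_part = density * (hi**2 - lo**2) / 2       # ∫ x(.00025) dx = 30

print(discrete_part + continuous_part)                # 150.0
```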

Hence, to represent a typical value of X close to the center of the pdf we might use the mean or expectation of $150.00.

Two additional general formulas are used to compute the expected value of a function of X when X ≥ 0. Let X have pdf f(x) with support S and df F(x), and let G(x) be monotone with G(x) ≥ 0. There are two cases to consider, continuous and discrete random variables. If X is continuous we assume G(x) is differentiable with (d/dx)G(x) = g(x), and assuming E{G(X)} exists the expectation is

E{G(X)} = G(0) + ∫_0^∞ g(x)(1 − F(x)) dx.  (1.3.9)

In the case X is discrete with corresponding support on the nonnegative integers then, if the expectation exists, the expected value of G(X) is

E{G(X)} = G(0) + Σ_{x≥0} δ(G(x)) P(X > x)  (1.3.10)

where δ(G(x)) = G(x+1) − G(x). The proofs of these expectation formulas are outlined in Prob. 1.7.


Ex. 1.3.5. Let the number of claims over a period of time be N, so that the support is S = {0, 1,…}. The pdf of N is assumed to take the form of the discrete geometric distribution introduced in Ex. 1.2.1. The general pdf is given by

f(x) = q p^x for x = 0, 1,…

for constants p and q = 1 − p, where 0 < p < 1.

1.6 Conditional Distributions

In many settings the distribution of a random variable X is conditioned on the survival of X past a point c in its support. For X with pdf f(x) and survival function S(x) = P(X ≥ x), the resulting truncated pdf is

f(x|X ≥ c) = f(x)/S(c)  (1.6.1)

for x ≥ c.

Ex. 1.6.1. For the exponential pdf of Ex. 1.2.5, conditioning on the event {X > c} produces a pdf that is a function of only the future lifetime X − c and is independent of the values of c and the parameter θ.

Ex. 1.6.2. Let X be a discrete geometric random variable with pdf given in Ex. 1.3.5. The survival function can be shown to be

S(x) = P(X ≥ x) = p^x  (1.6.3)

for x = 0, 1,…. For a fixed positive integer c the truncated distribution (1.6.1) becomes

f(x|X ≥ c) = q p^{x−c}  (1.6.4)

for x = c, c+1,…. As with the exponential distribution in the previous example, the conditional distribution takes the form of the initial distribution. Hence, the geometric random variable exhibits a lack of memory property in the discrete random variable setting.

In most financial and actuarial models conditioning is applied where, unlike the previous two examples, the conditioning affects the future distribution. It is common to have financial actions conditioned on statuses and their associated survival functions. For example, a stock may be sold if its price reaches or exceeds a particular value. For x > c the conditional survival function is

S(x|X > c) = S(x)/S(c)  (1.6.5)

provided x > c and c is in the support of the pdf of X. The conditioning concept and related formulas are central to many financial and actuarial calculations presented in later chapters.
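The lack of memory property can be verified numerically; the sketch below (our own, with arbitrary p and c) checks that the truncated pdf (1.6.4) coincides with the original pdf shifted by c:

```python
# Lack-of-memory check for the geometric pdf f(x) = q p^x, x = 0, 1, ...
p, c = 0.7, 4                      # arbitrary illustrative values
q = 1 - p

f = lambda x: q * p ** x           # pdf of Ex. 1.3.5
S = lambda x: p ** x               # survival function (1.6.3), P(X >= x)

for x in range(c, c + 5):
    truncated = f(x) / S(c)        # truncated pdf (1.6.1) at point c
    assert abs(truncated - f(x - c)) < 1e-12   # matches the shifted pdf
print("lack of memory verified")
```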


1.7 Joint Distributions

In the modeling of real data, such as that found in financial and actuarial fields, more than one variable is often required. The situation where we have two random variables, X and Y, is now considered; the discussion can be extended to the general multiple variable setting. Generally, these variables can be of any type, discrete, continuous or mixed, and the joint pdf is denoted f(x, y). The basic concepts and formulas relating to one random variable are now extended to the multivariable case.

The initial concept in multivariable random variable modeling is the probability that (X, Y) falls in a defined set A. There are the following three possibilities, depending on the type of random variables involved:

(i) Discrete X and Y: P((X, Y) ∈ A) = Σ_{(x,y)∈A} f(x, y).
(ii) Continuous X and Y: P((X, Y) ∈ A) = ∫∫_{(x,y)∈A} f(x, y) dy dx.
(iii) Discrete X and continuous Y: P((X, Y) ∈ A) = Σ_x ∫_{y:(x,y)∈A} f(x, y) dy.

Here (i), (ii) and (iii) define the probability structure of the joint random variable. Other statistical concepts, such as dependence and independence, extend to the case of more than one random variable. Further, the distributions of single variables and relationships among the variables can be explored. The marginal distributions are the distributions of the individual variables alone. Similar to the formulas for probabilities there are three possible cases. The marginal pdfs, denoted g(x) and h(y), are given by:

(iv) Discrete X and Y: g(x) = Σ_y f(x, y) and h(y) = Σ_x f(x, y).
(v) Continuous X and Y: g(x) = ∫ f(x, y) dy and h(y) = ∫ f(x, y) dx.
(vi) Discrete X and continuous Y: g(x) = ∫ f(x, y) dy and h(y) = Σ_x f(x, y).

Applications of these are encountered in various financial modeling and actuarial science settings. Relationships between the variables are often important in statistical modeling, and for these we need the concept of conditional distributions presented in Sec. 1.6, applied in the joint random variable setting. The conditional distributions and independence conditions follow the same pattern as those introduced in the probability measure setting of Sec. 1.1. The conditional pdf of X given Y = y explores the distribution of X while Y is held fixed at value y and is

f(x|y) = f(x, y)/h(y).  (1.7.1)

Further, X and Y are called independent if either

f(x, y) = g(x)h(y)  (1.7.2)

or

f(x|y) = g(x)  (1.7.3)


over the support associated with the random variables. We remark that this definition of independence is analogous to the independence of sets definition given in (1.1.5). The above definitions, formulas and concepts hold for all types of random variables. Two examples are now presented.

Ex. 1.7.1. Let X and Y have joint pdf with support S = {(x, y): x > 0, y > 0} given by

f(x, y) = (1/(θ1θ2)) exp(−x/θ1 − y/θ2)

for positive θ1 and θ2. Using condition (v) of the joint distributional setting we have

g(x) = (1/θ1) exp(−x/θ1) and h(y) = (1/θ2) exp(−y/θ2).

From criterion (1.7.2) we see that X and Y are independent and the marginal distributions are both exponential in type.

Ex. 1.7.2. Insurance structures are often defined separately for different groups of individuals. We consider an insurance policy where there are two risk categories or strata, J = 1 or 2, for claims. The amount of the claims, in thousands of dollars, is denoted by X, with corresponding pdfs f(x|1) and f(x|2) defined on their supports for J = 1 and J = 2. The frequencies of the risk categories are defined by P(J=1) = .6 and P(J=2) = .4. The joint pdf is a function of the two random variables J and X. Similar to the insurance modeling examples of Ex. 1.2.8 and Ex. 1.3.7, the pdf is

f(j, x) = P(J = j) f(x|j) for j = 1, 2.  (1.7.4)

The marginal pdf of X is computed as

h(x) = .6 f(x|1) + .4 f(x|2).  (1.7.5)

Probabilities of basic events, such as the overall probability that a claim is at most 3, can be found using (1.7.5).


Common statistical measurements, such as expectations and variances, can be computed for such piecewise-defined distributions using the standard rules. These basic statistical concepts and formulas extend to the setting of multiple random variables.

In the case of two random variables certain standard moments are useful to compute. For a function g(x, y) the expected value, if it exists, is denoted E{g(X, Y)}. If X and Y are discrete with corresponding pdf f(x, y) and support S the expectation is

E{g(X, Y)} = Σ_S g(x, y) f(x, y).  (1.7.6)

In the continuous case we have

E{g(X, Y)} = ∫∫_S g(x, y) f(x, y) dy dx  (1.7.7)

while in the mixed variable case, with X discrete and Y continuous, the expectation takes the form

E{g(X, Y)} = Σ_x ∫ g(x, y) f(x, y) dy.  (1.7.8)

In these computations we assume all the moments exist. Applying (1.7.6), (1.7.7) and (1.7.8), means and variances can be computed, but the relationship between variables must be considered in forming the proper summation and integral limits.

A measure of the linear relationship between X and Y is the covariance between X and Y, denoted by Cov{X, Y} or σxy. If the means of X and Y are given by µx and µy then the covariance is defined as

σxy = E{(X−µx)(Y−µy)} = E{XY} − µxµy.  (1.7.9)

We remark that if X and Y are independent then E{XY} = µxµy and σxy = 0. A scaled or normed measure of variable association based on (1.7.9) is the correlation coefficient, denoted by ρ. This measure or parameter is used in correlation modeling, and its sample estimate is used as a diagnostic tool in regression modeling (see Sec. 1.13).

Many applications in financial and actuarial modeling involve the sum of more than one random variable. Let two random variables X and Y have means µx and µy, variances σx² and σy², and covariance σxy. Since the expectation acts as a linear operator, for the sum S = X + Y (see (1.7.6), (1.7.7) and (1.7.8)) the expectation is

E{S} = µx + µy.  (1.7.10)

Further, applying the definition of variance, (1.3.5), we write the variance of S as

Var{S} = σx² + σy² + 2σxy.  (1.7.11)
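Formulas (1.7.10) and (1.7.11) can be illustrated by simulation. The Python sketch below builds an artificial dependent pair (the joint structure, Y = X plus independent noise, is ours, purely for illustration) and compares the simulated variance of the sum with the right side of (1.7.11):

```python
import random

# Simulation check of (1.7.11) for a dependent pair (X, Y).
random.seed(1)
xs = [random.gauss(0, 1) for _ in range(200_000)]
ys = [x + random.gauss(0, 1) for x in xs]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
vx = sum((x - mx) ** 2 for x in xs) / n
vy = sum((y - my) ** 2 for y in ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

s = [x + y for x, y in zip(xs, ys)]
ms = sum(s) / n
vs = sum((t - ms) ** 2 for t in s) / n

print(round(vs, 3))                   # Var{X + Y} from the simulation
print(round(vx + vy + 2 * cov, 3))    # right side of (1.7.11); the two agree
```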


We note that if the variables are independent, (1.7.11) indicates that the variance of the sum of variables is the sum of the separate variance terms. This concept extends to the general multiple random variable case in the same manner.

Expectations of functions of multiple random variables can be approached using the concept of statistical conditioning. When two variables are involved, expectations are simplified by conditioning on one of the random variables inside the probability structure of the other. Let X and Y be random variables with joint pdf f(x, y) = h(y) f(x|y), and consider the mean and variance of functions of the form g(x, y) = v(y)w(x, y). Here, for any type of random variable,

E{g(X, Y)} = E{v(Y) E{w(X, Y)|Y}}.  (1.7.12)

Also, the variance of g(X, Y) is

Var{g(X, Y)} = E{v(Y)² E{w(X, Y)²|Y}} − (E{v(Y) E{w(X, Y)|Y}})².  (1.7.13)

Further, if w(x, y) = w(x) and X and Y are independent then

E{g(X, Y)} = E{v(Y)} E{w(X)}  (1.7.14)

and

Var{g(X, Y)} = E{v(Y)²} E{w(X)²} − (E{v(Y)})² (E{w(X)})².  (1.7.15)

The general statistical theory required to derive the conditioning arguments is given in theoretical statistical texts such as Mood, Graybill and Boes (1974). The derivation of these formulas is considered in Prob. 1.18. Formulas (1.7.14) and (1.7.15) can be used to derive (1.3.14) and (1.3.16). We remark that the concepts and ideas presented in this section for the two variable setting can easily be extended to more than two random variables. For a review of joint distributions and their manipulations see Hogg and Craig (1995, Chapter 2), Hogg and Tanis (2001, Chapter 5) and Bowers et al. (1997, Chapter 9).

1.8 Sampling Distributions and Estimation

There are two conceptual views in statistical modeling referred to, respectively, as the frequentist and Bayesian approaches. In both approaches the financial and actuarial actions are modeled utilizing one or more statistical distributions. These distributions are functions of unknowns called parameters. In frequentist statistics the parameters are considered to be fixed constants, and information about the parameters, such as point estimates and bounds, is entirely a function of observed data. In Bayesian statistics the parameters themselves are modeled as random variables associated with specified prior probability distributions, and probability estimation and statistical inference take on a more mathematical flavor. In this section, topics in parameter and statistical estimation


based on the frequentist approach are discussed. For a review of Bayesian methods we refer to Bickel and Doksum (2001, Sec. 1.2) and Hogg and Craig (1995, Sec. 8.1). Throughout this text the frequentist approach to statistical modeling is applied. Based on the information contained in sample data, such as financial or actuarial records, unknown parameters are estimated using well-chosen statistics. The computed values of the statistics depend on the samples that are observed. In this way one or more probability distributions are imposed on observed statistics. The probability distribution associated with a statistic is referred to as the sampling distribution of the statistic. Much work in frequentist statistics concerns the selection and evaluation of efficient statistics through analysis of their associated sampling distributions.

The topic of statistical estimation for unknown parameters, as well as useful functions of parameters, is very broad, and we discuss some of the areas that are applied in financial and actuarial modeling. In this section the typical topics of point and interval estimation along with percentile measurements are presented. For a basic review of the principles of estimation we refer to Rohatgi (1976), Hogg and Tanis (2001) and Hogg and Craig (1995).

1.8.1 Point Estimation

In the study of frequentist statistical theory parameters are considered fixed constants that characterize the distribution of a random variable. In this chapter some of the parameters discussed include the mean, µ, and the variance and standard deviation, σ² and σ. In financial and actuarial modeling applications, other relevant quantities, such as percentile points required for model analysis, are investigated. Generally, statistics are defined as functions based on sample data and are used, in part, to estimate parameters. A point estimator is a single valued statistic that is used as an estimate of a parameter. Some common statistics are now discussed.

A random sample is a collection of random variables, X1,…, Xn, where each random variable is independent and comes from the same distribution and the sample size n is fixed. This is referred to as independent and identically distributed from pdf f(x), which we denote by Xi ~ iid f(x) for i≥1. This is the mathematical structure conveyed in the notion of random sampling. The sample mean is given by

X̄ = (X1 + X2 + … + Xn)/n  (1.8.1)

and the sample variance is

S² = Σ_{i=1}^n (Xi − X̄)²/(n − 1)  (1.8.2)

The sample standard deviation is S=(S²)^{1/2}. These point estimators possess many desirable statistical properties based on their associated sampling distributions, and a full discussion of these is outside of our investigation of financial and actuarial modeling. Sample moments, defined by (1/n) Σ_{i=1}^n Xi^r for positive integer r, can be used to estimate central moments like the skewness and kurtosis defined by (1.3.6). The sample moments


are substituted for the theoretical moments, E{Xr}, in these formulas. In statistical estimation this is referred to as the method of moments procedure and is supported by the convergence in probability of these estimators. For example, substitution in (1.3.6) yields the sample or estimated skewness and kurtosis, denoted by their sample or plug-in estimators.

Another common applied modeling setting involves the modeling of proportions. Let Xi be a Bernoulli variable, where Xi=1 or Xi=0 depending on whether an event, say A, is realized. In this case, applying (1.8.1), the sample proportion is defined as

p̂ = (X1 + X2 + … + Xn)/n  (1.8.3)

and measures the proportion of times, out of the sample size n, that event A has occurred. The sample proportion is used to estimate the theoretical likelihood or probability of event A occurring. As the sample size increases, (1.8.3) converges to the theoretical probability associated with event A. This is the weak law of large numbers and is the intuitive concept behind the long run relative frequency approach to probability that is central to frequentist probability and statistics.

There are many properties associated with efficient statistics, such as consistency, sufficiency and unbiasedness. A statistic, W, is unbiased for parameter θ if

E{W} = θ  (1.8.4)

We remark that X̄, S² and p̂ are unbiased for µ, σ² and P(A). The unbiased property states that the center or mean of the sampling distribution associated with the statistic matches the parameter to be estimated. To choose between unbiased estimators for a given parameter we select the estimator with the smallest variance. These estimators are referred to as minimum variance unbiased or best estimators.

As an example of estimation suppose we have a random sample X1, X2,…, Xn. By this we mean Xi ~ iid with common pdf f(x) for i=1, 2,…, n. Let the sample mean be given by (1.8.1). Using the linear aspects of expectation we find

E{X̄} = (1/n) Σ_{i=1}^n E{Xi} = µ  (1.8.5)

Hence, the sample mean is an unbiased estimator of the parameter mean. Further, from (1.8.3) we note that the sample proportion is an unbiased estimate of the population proportion.

Ex. 1.8.1. The lifetimes of a sample of n=10 people, past an initial age, include 11, 15, 21, 25, 21, 30, 32, 39 and 42, and the sums and sums of squares are ΣX=278 and ΣX²=8,646. From (1.8.1) and (1.8.2) the sample mean and variance are

X̄ = 278/10 = 27.8  and  S² = (8,646 − (278)²/10)/9 = 101.9

The sample standard deviation is s=(101.9)^{1/2}=10.095. Many techniques and formulas in statistics utilize these basic statistics.
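A minimal sketch of (1.8.1) and (1.8.2) in Python, using only the summary totals of Ex. 1.8.1:

n, sum_x, sum_x2 = 10, 278.0, 8646.0
xbar = sum_x / n                            # sample mean (1.8.1): 27.8
s2 = (sum_x2 - sum_x ** 2 / n) / (n - 1)    # sample variance (1.8.2): about 101.96
s = s2 ** 0.5                               # sample standard deviation: about 10.1
print(xbar, s2, s)                          # matches Ex. 1.8.1 up to rounding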


Another useful property of point estimators is consistency. A statistic is consistent for a parameter if, as the sample size increases, the statistic converges to the parameter being estimated. In this situation statistical theory supports the substitution of the statistic for the unknown parameter. This is utilized in the construction of limiting distributions for useful statistics. Examples of consistent estimators are the sample mean, variance, standard deviation and sample proportion. For a review of point estimators and related properties we refer to Hogg and Craig (1995, Ch. 6 and Ch. 7) and Bickel and Doksum (2001, Ch. 2).

1.8.2 Percentiles and Prediction Intervals

Percentiles give a relative measure of data point relationships that is commonly used in many fields of study. For a random variable or statistic, X, we now define percentile values. For α, 0≤α≤1, the (1−α)100th percentile, denoted by x1−α, is defined by the relation

P(X ≤ x1−α) = 1 − α  (1.8.6)

We note that the 50th percentile, x.5, is the median of a continuous distribution. An interval that covers a fixed proportion of the distribution associated with a random variable X, where the random variable is used for investigation and analysis, is referred to as a prediction interval. For α, 0≤α≤1, from (1.8.6) an example of a prediction interval is given by

P(xα/2 ≤ X ≤ x1−α/2) = 1 − α  (1.8.7)

Therefore, the prediction interval written as [xα/2, x1−α/2] contains probability 1−α. We remark that one-sided prediction intervals can also be constructed. This statistical interval construction is closely related to the idea of tolerance intervals for statistics and distributions, where coverage probabilities are defined.

Ex. 1.8.2. In this example we consider the general normal random variable conditions of Ex. 1.2.6. The lifetime of a status, T, is assumed normal with mean µ and standard deviation σ. To construct a prediction interval with probability 1−α we set

P(tα/2 ≤ T ≤ t1−α/2) = 1 − α  (1.8.8)

Using (1.2.11) to standardize T we find the prediction interval

[µ + zα/2 σ, µ + z1−α/2 σ]  (1.8.9)

where Φ(zα)=α. For example, if µ=1,000 and σ=100 we find the 95th percentile t.95. Approximating from the Z tables in Appendix A2 we find z.95=1.645, so that

t.95 = 1,000 + 1.645(100) = 1,164.5

We remark that intervals of this form are symmetric about the mean and, in the normal random variable case, possess the smallest width among intervals with the same coverage probability.
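The percentile and prediction interval of Ex. 1.8.2 can be computed directly; a short sketch using scipy's normal distribution, with µ=1,000 and σ=100 as in the example:

from scipy.stats import norm

mu, sigma, alpha = 1000.0, 100.0, 0.05
t95 = mu + norm.ppf(0.95) * sigma             # 95th percentile, about 1,164.5
lower = mu + norm.ppf(alpha / 2) * sigma      # (1.8.9) lower limit: 804.0
upper = mu + norm.ppf(1 - alpha / 2) * sigma  # (1.8.9) upper limit: 1,196.0
print(t95, (lower, upper))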


1.8.3 Parameter Interval Estimation

Point estimates lack information about their variability and reliability. To address this, interval estimates similar to prediction intervals, incorporating the variability and reliability, are constructed for unknown parameters. An interval estimate for parameter θ is [a, b], where we estimate a≤θ≤b. The probability the interval contains the unknown parameter is called the confidence coefficient or confidence level. Hence, a confidence interval for θ with confidence coefficient 1−α satisfies

P(a ≤ θ ≤ b) = 1 − α  (1.8.10)

We remark that the parameter is a fixed value while the interval depends on the selected observations and is therefore a random set. The theoretical construction of these intervals is outside the scope of this book (see Rohatgi (1976, Secs. 11.2 and 11.3)), but we follow with an illustrative example from basic statistics.

Ex. 1.8.3. A random sample of size n is taken from a normal distribution with unknown mean µ. The sample mean X̄ and sample standard deviation, s, are found. Under large sample size conditions the statistic

Z = (X̄ − µ)/(s/n^{1/2})

is distributed approximately as a standard normal random variable. A confidence interval for µ with confidence coefficient 1−α is

[X̄ − z1−α/2 s/n^{1/2}, X̄ + z1−α/2 s/n^{1/2}]  (1.8.11)

In the case where the standard deviation σ is unknown and the sample size n is small, the t random variable is applied to form a confidence interval for µ. In that case the (1−α/2)100th percentile value based on a t distribution with n−1 degrees of freedom is used in place of the standard normal percentile in (1.8.11).

Confidence intervals for unknown parameters possess interpretive qualities. In statistical inference the interval estimates, such as (1.8.9) and (1.8.11), constructed for model assessment statistics are used to shed light on the structure being modeled. This is done for financial and actuarial models throughout this text.

1.9 Aggregate Sums of Independent Variables

In practice we often observe a series of independent random variables X1,…, Xm, where the number of variables may be fixed, denoted by m, or stochastic, represented by the random variable N. For example, these could be the future lifetimes of a group of people or the current values of a number of stocks held in a portfolio. In financial and actuarial analysis, applications involving the aggregate sum, denoted Sm, are referred to as collective risk or aggregate modeling. If these variables hold over a short time period the force of interest, discussed in Chapter 2, can be ignored. We follow with mathematical and statistical investigations of these models starting with the two random variable setting.


In this section let the number of variables be fixed at m. For positive integer m we consider the distribution of the aggregate sum

Sm = X1 + X2 + … + Xm

The two variable case, m=2, is presented first and can be extended to the multiple variable situation. To compute the distribution of the sum of two independent variables X and Y we first discuss the classical convolution method. The df for S=X+Y is defined as

Fs(s) = P(X + Y ≤ s)  (1.9.1)

for constant s. In the general setting, conditioning on Y=y, for the discrete case we find

Fs(s) = Σ_y P(X ≤ s − y | Y = y) P(Y = y)  (1.9.2)

and for continuous X and Y

Fs(s) = ∫ P(X ≤ s − y | Y = y) fy(y) dy  (1.9.3)

Further, if X and Y are independent then the dfs (1.9.2) and (1.9.3) become

Fs(s) = Σ_y Fx(s − y) fy(y)  and  Fs(s) = ∫ Fx(s − y) fy(y) dy  (1.9.4)

Here (1.9.4) gives the formulas for the convolution of the dfs Fx(x) and Fy(y), which we denote Fx*Fy. Taking derivatives or differences yields the pdf. The corresponding pdfs in the discrete and continuous cases are, respectively,

fs(s) = Σ_y fx(s − y) fy(y)  and  fs(s) = ∫ fx(s − y) fy(y) dy  (1.9.5)

An example of the convolution method in the continuous setting is given in Prob. 1.19. As mentioned, the convolution process can be extended to the situation of n independent variables. Let Sn=X1+…+Xn, where the df of Xi is Fi and the df of X1+…+Xk is F(k) for positive integer k≤n. Iteratively we find the convolution df by using

F(i)(s) = Σ_y F(i−1)(s − y) fi(y)  (1.9.6)

for i=2, 3,…, n, where F(1)=F1. A computational example, analogous to the example in Bowers et al. (1997, p. 35), is now given.


Ex. 1.9.2. Let X1 and X2 be independent and S=X1+X2. The pdfs are f1(x) and f2(x), and the df of X1 is F1(x). The defining probabilities are given below:

x     f1(x)   f2(x)   F1(x)
0     .4      .3      .4
1     .4      .4      .8
2     .1      .1      .9
3     .1      .1      1.0
4     .0      .1      1.0
5     .0      .0      1.0

The distribution of S is computed using the convolution formula (1.9.4). Here,

F(2)(0) = F1(0)f2(0) = .4(.3) = .12  and  F(2)(1) = F1(1)f2(0) + F1(0)f2(1) = .8(.3) + .4(.4) = .40

Continuing the pattern, F(2)(2)=.9(.3)+.8(.4)+.4(.1)=.63, F(2)(3)=.78, F(2)(4)=.91, F(2)(5)=.97, F(2)(6)=.99 and F(2)(7)=1. The pdf of S is found by taking differences of consecutive values of F(2); a numerical check is given in the sketch below. In Prob. 1.20 the process is extended to the convolution with a third random variable, computing the df F(3). In a theoretical sense the convolution process can be used to find the distribution of the sum of independent variables as long as the number of variables is not too large. In many practical settings, such as the case of many random variables, approximations to the distribution of aggregate sums are used in financial and actuarial modeling. In this chapter we present three such approximations, namely, the celebrated Central Limit Theorem, the Haldane Type A approximation and the statistical saddlepoint approximation.

This section ends with theoretical distributional considerations for a sum of independent random variables. To find the distribution of the sum of random variables, theoretical techniques based on transformations or dfs can be used. We discuss another procedure that utilizes the mgf. Let the sum corresponding to m independent random variables be Sm=ΣXi, where the mgf of Xi is Mi(t). In this setting the mgf of Sm reduces to

MSm(t) = M1(t)M2(t)…Mm(t)  (1.9.7)
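A minimal numerical check of the convolution in Ex. 1.9.2; numpy's convolve implements the discrete form of (1.9.5):

import numpy as np

f1 = np.array([0.4, 0.4, 0.1, 0.1, 0.0, 0.0])  # pdf of X1 on 0,...,5
f2 = np.array([0.3, 0.4, 0.1, 0.1, 0.1, 0.0])  # pdf of X2 on 0,...,5

f_s = np.convolve(f1, f2)     # pdf of S = X1 + X2, as in (1.9.5)
F_s = np.cumsum(f_s)          # df of S
print(np.round(F_s[:8], 2))   # [0.12 0.4  0.63 0.78 0.91 0.97 0.99 1.  ]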

The mgf associated with a random variable, if it exists, is unique. Thus, if the mgf of S matches the mgf of a known distribution then the distribution of S must be the


distribution associated with the matched mgf. If Xi, 1≤i≤m, are independent identically distributed, iid, each with mean µ, standard deviation σ and mgf M(t), then from (1.9.7)

MSm(t) = [M(t)]^m  (1.9.8)

This form of the mgf is useful in finding distributions of sums resulting from random samples. This matching of mgfs procedure is commonly used, in connection with the Continuity Theorem, to prove the Central Limit Theorem. It can be shown that the mgf of the standardized sum, employing a Taylor series expansion, converges to the standard normal mgf. The Central Limit Theorem is discussed in Sec. 1.11.1. Here the mgf (1.9.8) can be used to find the moments of Sm through the derivative formula (1.4.6). Taking the derivatives of (1.9.8) and evaluating at t=0 we find

E{Sm} = mµ  and  Var{Sm} = mσ²  (1.9.9)

Hence, for the sample mean X̄ = Sm/m, the mean and variance are

E{X̄} = µ  and  Var{X̄} = σ²/m  (1.9.10)

These moments can be used to calculate or approximate relevant probabilities. In the case where the aggregate sum is modeled by a normal random variable, we standardize the sum to produce the df

P(Sm ≤ c) = Φ((c − mµ)/(m^{1/2}σ))  (1.9.11)

for constant c. The next example demonstrates the application of the normal distribution.

Ex. 1.9.3. Let X1,…, Xm be a random sample, or iid, from a normal distribution with mean µ and standard deviation σ. From (1.4.7) and (1.9.8) the mgf of Sm is

MSm(t) = exp(mµt + mσ²t²/2)  (1.9.12)

Since (1.9.12) takes the form of a normal mgf, Sm is a normal random variable. Further, from the mgf we see that Sm has the mean and variance given in (1.9.9), and probabilities can be computed using (1.9.11). The technique employed in Ex. 1.9.3 to find the distribution of the aggregate sum works for other distributions, such as the Poisson, binomial and gamma random variables (see Prob. 1.21). For other distributions approximations, such as the Central Limit Theorem, are used to estimate the distribution.
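A short sketch of (1.9.11) with a Monte Carlo check; the parameter values m=25, µ=100, σ=20 and c=2,600 are illustrative only:

import numpy as np
from scipy.stats import norm

m, mu, sigma, c = 25, 100.0, 20.0, 2600.0
exact = norm.cdf((c - m * mu) / (m ** 0.5 * sigma))   # (1.9.11): about .8413

rng = np.random.default_rng(1)
sums = rng.normal(mu, sigma, size=(100000, m)).sum(axis=1)
print(exact, (sums <= c).mean())    # the Monte Carlo value is close to .8413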


1.10 Order Statistics

In many applications there is a natural arrangement of random variables. We consider m continuous random variables given by X1, X2,…, Xm, where the order statistics arise from an ordered rearrangement of the corresponding observed variables. The order statistics are denoted by X(1) < X(2) < … < X(m).

… >0 and σ(h, r)>0.


Ex. 1.11.3. Consider the portfolio of 25 policies introduced in Ex. 1.11.2. The claims, denoted B, occur in 10% of the time periods, and are normal variables with mean and standard deviation given by µB=$1,000 and σB=$200, respectively. We wish to approximate the probability the aggregate sum exceeds $5,000. Using the CLT, P(S25>5,000)=1−Φ((5,000−2,500)/1532.97)=1−Φ(1.63082)=.05146. To apply the HAA we first find a form for the 3rd central moment in this situation. Noting the claim variable is X=IB, the central third moment, similar to the derivation of the variance in (1.3.16), is

E{(X − µx)³} = q E{(B − µx)³} + (1 − q)(−µx)³  (1.11.4)

where q is the claim probability and µx=qµB.

The normality of B and the computations of Ex. 1.11.2 imply E{(X−µx)³}=82,800,000 and kus=.574604. Further, r=.613188, h=.6876413, µ(h, r)=.9392614 and σ(h, r)=.4082815. The desired HAA approximation follows.

In this example the CLT and the HAA are close in computed survival probability. The exact value, obtained using a simulation resampling method, is approximated with a high degree of accuracy in Chap. 7 and is computed to be .06616. Hence both approximations have a large relative error of about (.06616−.05)/.06616=.24425.

1.11.3 Saddlepoint Approximation

The previous two approximation techniques depend only on the moments associated with the iid random variables. We now turn to an approximation method that utilizes information contained in the entire distribution. Since their introduction by Daniels (1954), saddlepoint approximations, denoted SPA, have been utilized to approximate tail probabilities corresponding to sums of independent random variables. For an in-depth discussion of the accuracy of saddlepoint approximations we refer to Field and Ronchetti (1990). The approximation is shown to be accurate for small sample sizes, even as small as one. Further, in the case of data from a normal distribution the SPA reduces to the CLT approximation. For this reason the SPA can be viewed as an extension of the CLT to the case of small sample sizes. Saddlepoint approximations have been applied to a variety of situations; for further references see articles by Goutis and Casella (1999), Huzurbazar (1999), Butler and Sutton (1998), Tsuchiya and Konishi (1997) and Wood, Booth and Butler (1993).

We present the simplest setting where there are independent identically distributed random variables X1,…, Xm, where m is fixed. Unlike other saddlepoint approximation developments that utilize the cumulants of hypothesized distributions, this discussion is based on the associated moments. The moment generating function of X1 is assumed to exist and is denoted by M1(β), where E{X1}=µ and Var{X1}=σ². The corresponding moment generating function for Z=(X−µ)/σ is


Mz(β) = exp(−βµ/σ) M1(β/σ)  (1.11.5)

For a fixed value of t, let β solve equation (1.11.6). Further, let the constants be defined as in (1.11.7). For constant s the saddlepoint approximation, denoted SPA, for percentile calculations of the form P(Sm≤s) is given by (1.11.8),

where t=(s−mµ)/(m^{1/2}σ) and Φ(x) is the standard normal distribution function. Tail probabilities are computed using the complement method (1.1.6), and approximate prediction intervals can be constructed. In application, for a chosen s the associated t value is computed, and a numerical method, such as Newton's method or the secant method (see Stewart (1995, p. 170) or Burden and Faires (1997, Chapter 2)), may be required to solve for β in (1.11.6). The saddlepoint approximation is found by substitution of (1.11.7) into (1.11.8). We remark that if the distribution of the individual random variables is iid normal, with any fixed mean and variance, the SPA yields exact standard normal probabilities.

Ex. 1.11.4. We demonstrate the SPA and compare it to the CLT and HAA in the case of the exponential random variable with pdf given by f(x)=(1/θ)exp(−x/θ) for support S=(0, ∞). The mean is µ=θ, the variance is σ²=θ², and the mgf is M1(β)=(1−βθ)^{−1}. For a fixed value of s we find

t = (s − mθ)/(m^{1/2}θ)

and solving (1.11.6) for β we compute (1.11.7). Applying the SPA to the exponential distribution, the required constants are computed with the formulas in (1.11.9).

In this case the distribution of the sum is known to be gamma (see Prob. 1.21c). For different sample sizes the exact percentile points are computed using the gamma distribution with parameters α=m and θ=1. The cumulative probabilities associated with these points using the CLT, HAA and SPA are then found, and results for sample sizes of m=1 and m=2 are given in Table 1.11.1.


From Table 1.11.1 we see that the HAA and SPA outperform the CLT. This is to be expected since these approximations use more information, specifically, information about the skewness of the random variables. The CLT is most efficient in the case of symmetric random variables. The most accurate method is the SPA, yielding reliable percentile approximations for the exponential distribution even for sample sizes of one and two.

Table 1.11.1 CLT, HAA and SPA Percentile Approximations

                  Sample Size m=1                 Sample Size m=2
Percentile    .99     .95     .90     .75     .99     .95     .90     .75
CLT           .9998   .9770   .9036   .6504   .9995   .9738   .9092   .6879
HAA           .9900   .9513   .9023   .7512   .9899   .9506   .9012   .7511
SPA           .9900   .9498   .8997   .7502   .9900   .9499   .8998   .7499
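The SPA row of Table 1.11.1 can be closely reproduced numerically. The sketch below uses the standard Lugannani-Rice form of the saddlepoint approximation, stated in terms of the cumulant generating function rather than the (1.11.5) through (1.11.8) notation, applied to sums of m iid exponential(θ=1) variables; its output agrees with the SPA entries to within about one unit in the fourth decimal.

import numpy as np
from scipy.stats import norm, gamma

def spa_exponential_cdf(s, m):
    # Cumulant generating function of the exponential(1) sum: K(t) = -m*log(1-t).
    t_hat = 1.0 - m / s                          # saddlepoint: solves K'(t) = s
    K = -m * np.log(1.0 - t_hat)
    w = np.sign(t_hat) * np.sqrt(2.0 * (t_hat * s - K))
    u = t_hat * np.sqrt(m / (1.0 - t_hat) ** 2)  # t_hat * sqrt(K''(t_hat))
    return norm.cdf(w) + norm.pdf(w) * (1.0 / w - 1.0 / u)

for m in (1, 2):
    for p in (0.99, 0.95, 0.90, 0.75):
        s = gamma.ppf(p, a=m)                    # exact Gamma(m, 1) percentile
        print(m, p, round(float(spa_exponential_cdf(s, m)), 4))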

In general the SPA requires computation of (1.11.5), (1.11.6) and (1.11.7), which may be cumbersome. These computations can be somewhat eased by the reduction given in Prob. 1.26. The SPA has applications in financial and actuarial modeling and has been extended to the case of life table data with uniform distributions within each year by Borowiak (2001).

1.12 Compound Random Variables

Generally, a compound random variable is a random variable that is composed of more than one random variable. We consider the structure of an aggregate sum of iid random variables where the number of independent random variables is itself a random variable. In this section the statistical properties and applications of compound random variables are explored using the techniques and formulas presented in previous sections. The theoretical distribution can be investigated using statistical conditioning in connection with other statistical models, such as hierarchical or Bayesian models. Compound random variables have applications in actuarial and financial modeling, where examples include investment portfolio analysis and collective risk modeling (see Bowers et al. (1997, Chapter 12)).

1.12.1 Expectations of Compound Variables

Let the random variables X1, X2,…, XN be independent from the same distribution and let N be a discrete random variable. Let E{X1}=µ1, E{X1²}=µ2 and Var{X1}=σ². The random variable of interest is the aggregate sum


SN = X1 + X2 + … + XN  (1.12.1)

where the pdf of N is given by P(N=n) with support SN. The mean and variance of (1.12.1) can be found using conditioning arguments. We assume X1,…, XN and N are independent, and the joint pdf is given by f(n, sn)=P(N=n) f(sn|N=n). From this the expectation of the aggregate sum is

E{SN} = E{N} µ1  (1.12.2)

Further, using the conditioning argument on N, the variance is

Var{SN} = E{N} σ² + Var{N} µ1²  (1.12.3)

These formulas are used to construct statistical inference such as confidence and prediction intervals. The derivations of (1.12.2) and (1.12.3) are considered in Prob. 1.29. The mgf corresponding to the compound variable SN can also be found using a conditioning argument. Let the mgf of Xi be M(t) for i=1,…, N. The mgf of (1.12.1) is

MSN(t) = E{exp(tSN)} = E{[M(t)]^N}  (1.12.4)

We can also show that (1.12.2) and (1.12.3) can be found from the mgf (1.12.4) by taking the usual derivatives (see Prob. 1.29). Two illustrative examples follow, where the second describes the much-investigated compound Poisson random variable.

Ex. 1.12.1. Let N be discrete geometric with pdf given in Ex. 1.3.5, where P(N=n)=pq^n for n=0, 1,…. Applying the summation formula (1.3.11) to (1.12.4), the mgf is computed as

MSN(t) = p[1 − qM(t)]^{−1}  (1.12.5)

Taking the derivative of (1.12.5) we find the mean is E{SN}=pqµ1/(1−q)². The distribution with associated mgf (1.12.5) is referred to as a compound geometric random variable.

Ex. 1.12.2. A collection of insurance policies produces N claims, where N is modeled by a Poisson random variable (see Ex. 1.2.3) with parameter λ. The distribution of SN is said to be a compound Poisson random variable. Since E{N}=Var{N}=λ, from (1.12.2) and (1.12.3) we find

E{SN} = λµ1  and  Var{SN} = λ(σ² + µ1²) = λµ2  (1.12.6)

From (1.12.4) the mgf of SN is


MSN(t) = exp(λ[M(t) − 1])  (1.12.7)

This mgf is used in the next section to validate and construct limiting distributions for the compound Poisson random variable. These distributions are employed in statistical inference methods.

1.12.2 Limiting Distributions for Compound Variables

Limiting distributions exist for some compound distributions. We give two limiting distribution approximations for the compound Poisson distribution. The first utilizes the standard normal distribution and is similar to the CLT, while the second applies the saddlepoint approximation approach. Other approximation approaches exist, such as the discretizing method given by Panjer (1981). We assume the number of random variables follows a Poisson distribution and we let µi=E{X1^i} for i=1, 2. From (1.12.6) we form the standardized variable

ZN = (SN − λµ1)/(λµ2)^{1/2}  (1.12.8)

The mgf of ZN can be written as

Mz(t) = exp(−tλµ1/(λµ2)^{1/2}) MSN(t/(λµ2)^{1/2})  (1.12.9)

This development assumes the mgf of Xi exists, and using a Taylor series expansion (see Prob. 1.3) we have

M(t) = 1 + µ1t + µ2t²/2 + µ3t³/6 + …  (1.12.10)

Putting (1.12.7) and (1.12.10) into (1.12.9) yields

Mz(t) = exp(t²/2 + o(λ))  (1.12.11)

where o(λ) denotes terms that approach zero as λ approaches infinity. Hence, as λ approaches infinity Mz(t) approaches the mgf of the standard normal distribution, exp(t²/2). By the Continuity Theorem, ZN defined by (1.12.8) converges to a standard normal random variable as λ approaches infinity. Hence, the limiting distribution of (1.12.8), for large λ, is standard normal.

Ex. 1.12.3. Let the amounts of accident claims, Xi, be independent with mean µ=100 and variance σ²=10,000. Let N be Poisson with mean λ=50. Considering the sum of the claims SN, from (1.12.6), E{SN}=5,000 and Var{SN}=1,000,000. The approximate probability the sum of the claims is less than 7,000, using the limiting standard normal distribution, is

P(SN < 7,000) ≈ Φ((7,000 − 5,000)/1,000) = Φ(2) = .9772


Also, from (1.8.8) a 95% prediction interval, using z.975=1.96, for SN is

5,000 − 1.96(1,000) ≤ SN ≤ 5,000 + 1.96(1,000)

Thus, prediction limits for the aggregate sum SN are $3,040 on the low side and $6,960 on the high side.

The saddlepoint approximation approach can also be applied to the compound Poisson distribution when λ is large. If the required functions are known, the SPA of Sec. 1.11 can be directly applied. We present a three moment SPA. Applying the approximation (1.12.10), including only the first three terms, to (1.12.7) and (1.12.9), we approximate the mgf of

ZN = (SN − λµ1)/(λµ2)^{1/2}

by

Mz(t) ≈ exp(t²/2 + t³µ3/(6λ^{1/2}µ2^{3/2}))  (1.12.12)

The required SPA calculations are now found. For a fixed value of t=(s−λµ1)/(λµ2)^{1/2}, from (1.11.6), we need to solve for β in t=β+β²µ3/[2λ^{1/2}µ2^{3/2}]. The solution is found to be

β = [−1 + (1 + 2tµ3/(λ^{1/2}µ2^{3/2}))^{1/2}] λ^{1/2}µ2^{3/2}/µ3  (1.12.13)

In addition, (1.11.7) becomes

c = exp(βt − β²/2 − β³µ3/(6λ^{1/2}µ2^{3/2}))  and  σ(β) = (1 + βµ3/(λ^{1/2}µ2^{3/2}))^{1/2}  (1.12.14)

The SPA, (1.11.8), can now be computed for any t=(s−λµ1)/(λµ2)^{1/2}. This application is the topic of the next example.

Ex. 1.12.4. In this example we demonstrate the SPA for the compound Poisson distribution where X is distributed as a gamma random variable, given in Prob. 1.21c), with parameters α=β=1, and λ=10. We compute the probability the aggregate sum is at most 15. The exact probability can be found by conditioning on N=n and using the fact that, for fixed n, Sn is distributed as a gamma random variable with parameters α=n and β=1. We compute with the aid of a computer package the exact cumulative probability

P(SN ≤ 15) = .86584

To apply the SPA, using (1.12.13) and (1.12.14), we find β=.86631, c=1.68308 and σ=1.2574. The SPA yields the approximation .860929, which is very close to the true value, with a relative error of only (.86584−.860929)/.86584=.00567. Limiting distributions exist for other compound random variables and can be sought using the mgf and the Continuity Theorem. Bowers et al. (1997, Chapter 11) presents limiting distributions for the compound Poisson and compound negative binomial distributions.
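The exact value in Ex. 1.12.4 can be verified numerically; a minimal sketch, computing the df both by conditioning on N and by Monte Carlo (the summation bound of 60 terms is ample for λ=10):

import numpy as np
from scipy.stats import poisson, gamma

lam, s0 = 10.0, 15.0

# Exact df by conditioning on N = n; the n = 0 term contributes P(N=0).
exact = poisson.pmf(0, lam) + sum(
    poisson.pmf(n, lam) * gamma.cdf(s0, a=n) for n in range(1, 60))

# Monte Carlo check: N ~ Poisson(10), X_i ~ exponential(1), i.e. gamma with alpha=beta=1.
rng = np.random.default_rng(7)
counts = rng.poisson(lam, size=200000)
sums = np.array([rng.exponential(1.0, size=k).sum() for k in counts])

print(exact, (sums <= s0).mean())   # both close to .86584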


1.13 Regression Modeling

One of the most widely used statistical techniques is linear regression. Linear regression applied to a collection of variables can demonstrate relationships among the variables and model predictive structures. In this section we present a basic introduction to simple linear regression that will be utilized in the modeling and analysis of financial systems. This is not meant to be a comprehensive discussion of the subject, but to give the flavor of the interaction between regression and financial estimation. For an introduction to the theory and application of linear regression modeling we refer to Myers (1986) and Draper and Smith (1981).

In simple linear regression modeling there are two variables. The independent or predictor variable, denoted by X, impacts the dependent or response variable, denoted Y. The empirical data take the form of ordered pairs (xj, yj), for j=1, 2,…, n, corresponding to observed outcomes of the variables. A linear relationship between the variables is assumed and the simple linear regression model is

yj = β0 + β1xj + ej  (1.13.1)

where β0 and β1 are the intercept and slope parameters and ej is the error term. For model estimation we assume the ej are iid from a continuous distribution that has mean zero and constant variance for j=1, 2,…, n. For inference, such as hypothesis testing and confidence intervals, the additional assumption that the errors are normally distributed with constant variance is required.

The simple linear regression model is applied to observed pairs of data points, and the modeling assumptions can be assessed for accuracy. A plot of the data points, referred to as a scatter plot, is used as an initial check for linearity. An example of a scatter plot and the graph of an estimating line is given in Fig. 1.13.1. The normality assumption can be investigated through statistical techniques such as goodness of fit tests and hazard or probability plotting. The model given by (1.13.1), containing one predictor variable, is referred to as a simple linear regression model. In general, other predictor variables may be added to the model, resulting in a multiple linear regression model. In the case of multiple linear regression, topics of variable selection and influence, collinearity and testing become important. In this text we apply only simple linear regression models to financial and actuarial data. The estimation of the intercept and slope parameters is investigated first, followed by standard statistical inference techniques.

1.13.1 Least Squares Estimation

To estimate the parameters β0 and β1 the mathematical method of least squares is applied. Least squares estimators have desirable statistical properties. Under general regularity conditions Jennrich (1969) has shown general least squares estimators to be asymptotically normal as the sample size increases. In this method the sum of squares error is used as a measure of fit of the estimated line to the data and is given by


SSE = Σ_{j=1}^n (yj − β0 − β1xj)²  (1.13.2)

Taking the partial derivatives of (1.13.2), the least squares estimators are found and they take the form

b1 = Sxy/Sxx  and  b0 = ȳ − b1x̄  (1.13.3)

where Sxy=Σxjyj−(Σxj)(Σyj)/n, Sxx=Σxj²−(Σxj)²/n, x̄=Σxj/n and ȳ=Σyj/n (see Prob. 1.30). Applying the estimators (1.13.3) to (1.13.1), the estimated or fitted regression line is

ŷ = b0 + b1x  (1.13.4)

One of the primary applications of (1.13.4) is the prediction of y at future values of x. The accuracy of such a prediction depends on the accuracy of the model, through the modeled shape, and the efficiency of estimation. One of the oldest measures of the linear fit of the estimated regression model is the correlation coefficient, denoted ρ. This parameter arises when both X and Y are random variables and the random vector (X, Y) comes from the bivariate normal distribution. The parameter ρ measures the linear association between X and Y. Theoretically, −1≤ρ≤1, with proximity to −1 or +1 indicating a linear relationship. It is a mathematical fact that ρ²=1 if and only if the probability that Y is a linear function of X is one (see Rohatgi (1976, p. 175)). The sample estimate of this parameter is called the sample correlation coefficient and is computed as

Table 1.13.1 Stock Price Over One Year

Month   1      2      3      4      5      6      7      8      9      10     11     12
Price   10.5   11.31  12.75  12.63  12.17  12.56  12.17  12.56  14.69  15.13  12.75  13.44

r = Sxy/(SxxSyy)^{1/2}  (1.13.5)

where Syy=Σyj²−(Σyj)²/n. Here, r measures the linear relationship between X and Y, where −1≤r≤1, and the closer |r| is to one the closer the points lie to a straight line. Another useful diagnostic statistic connected to the correlation coefficient is the coefficient of determination, defined by r². The statistic (1.13.5) is used to assess the accuracy of the estimated regression line (1.13.4). An example is now given.

Ex. 1.13.1: The price of a stock is recorded at the start of each month for a year. The data are listed in Table 1.13.1. We would like to predict the stock price over time, and we apply the regression of Y, the stock price, on X, the month. The least squares estimates are found to be

b1 = Sxy/Sxx = 36.52/143 = .255  and  b0 = ȳ − b1x̄ = 11.0617


and the estimated regression line is

ŷ = 11.0617 + .255x

The plot of the data listed in Table 1.13.1 is referred to as a scatter plot of the data. Fig. 1.13.1 shows a scatter plot of the data along with the least squares line. We see the data are increasing over time, but the linear fit of the data is somewhat suspect. The estimated slope, given by .255, indicates the price of the stock is increasing over time. Further, the sample correlation coefficient is found to be r=.724, indicating a fairly linear relationship. Based on this regression model we would expect the price of the stock after 6 months to be 11.0617+6(.255)=12.59. Questions about the efficiency of these types of point estimates arise, and we now consider statistical inference topics associated with regression models.

Fig. 1.13.1 Regression of Stock Prices
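A sketch reproducing the least squares computations of Ex. 1.13.1, and the slope interval of Ex. 1.13.2 below, from the data of Table 1.13.1:

import numpy as np
from scipy.stats import t as t_dist

x = np.arange(1.0, 13.0)
y = np.array([10.5, 11.31, 12.75, 12.63, 12.17, 12.56,
              12.17, 12.56, 14.69, 15.13, 12.75, 13.44])
n = len(x)

Sxx = (x ** 2).sum() - x.sum() ** 2 / n           # 143
Sxy = (x * y).sum() - x.sum() * y.sum() / n       # 36.52
Syy = (y ** 2).sum() - y.sum() ** 2 / n
b1 = Sxy / Sxx                                    # slope (1.13.3): about .255
b0 = y.mean() - b1 * x.mean()                     # intercept: about 11.0617
r = Sxy / np.sqrt(Sxx * Syy)                      # (1.13.5): about .724

s2 = ((y - (b0 + b1 * x)) ** 2).sum() / (n - 2)   # about .846
half = t_dist.ppf(0.975, n - 2) * np.sqrt(s2 / Sxx)
print(b0, b1, r, (b1 - half, b1 + half))          # slope CI about (.084, .427)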

1.13.2 Regression Model Based Inference

Desirable properties of the least squares estimators, such as unbiasedness and minimum variance, are well known and do not rely on the form of the distribution of the error term. For a review of theoretical topics in linear models we refer to Searle (1971) and Myers and Milton (1998). For inference techniques we require that the error terms are normally distributed with zero mean and common standard deviation σ, i.e. in (1.13.1) we assume ej ~ iid n(0, σ²) for j=1, 2,…, n. Under this assumption inference procedures rely on the resulting normality of the least squares estimators. Further, when normality of the errors is assumed the least squares estimators coincide with the maximum likelihood estimators. Normality of the error term in the linear regression model implies that many statistics used in the analysis of these models also have normal distributions. As an example, the least squares estimator of the slope and the predicted response associated with a chosen point xo are both normal random variables, where


b1 ~ n(β1, σ²/Sxx)  and  b0 + b1xo ~ n(β0 + β1xo, σ²[1/n + (xo − x̄)²/Sxx])  (1.13.6)

To construct inference based on the above we require an estimate of the variance σ² that is statistically independent of the estimated parameters. Such an estimator, based on the fitted model, is

S² = Σ_{j=1}^n (yj − b0 − b1xj)²/(n − 2)

where we denote S=(S²)^{1/2}. Using standard distribution theory we form confidence intervals for parameters and useful functions of parameters. The (1−α)100th percentile point for a t random variable with d degrees of freedom is denoted by t(1−α, d). From (1.13.6) a confidence interval for β1 with confidence coefficient 1−α is

b1 ± t(1−α/2, n−2) S/Sxx^{1/2}  (1.13.7)

In a similar manner, a confidence interval for the predicted value of Y based on X=xo can be constructed. Letting yo=β0+β1xo, the confidence interval is

b0 + b1xo ± t(1−α/2, n−2) S [1 + 1/n + (xo − x̄)²/Sxx]^{1/2}  (1.13.8)

Other confidence intervals, such as for the regression line or for a corresponding value of x, can be constructed. In the financial and actuarial applications discussed in this text we apply only the confidence intervals given in (1.13.7) and (1.13.8). The width of a confidence interval can be used to judge the accuracy of the estimation. More accuracy is signified by tighter or shorter confidence intervals. Different factors affect the width of the interval. In addition to the choice of confidence coefficient, the sample size has an effect, with larger samples resulting in better accuracy. Further, in new point estimation, the farther the prediction point's x-value is from the mean of the observed x-values, the wider the interval. This structure implies a penalty for estimation far in the future.

Ex. 1.13.2: We consider the stock prices and regression model of Ex. 1.13.1. After running an analysis we find s²=.846332 and Sxx=143. The confidence interval with confidence coefficient .95 for the slope parameter, from (1.13.7), is .083971≤β1≤.42679. From this we see that the price of the stock is increasing over time, but the slope of the increase is not certain, as seen by the large width of the confidence interval for β1. Further, the estimated price in six months, or at future time x=13, is estimated as 12.59. Computing a 95% confidence interval for the price from (1.13.8) we find the confidence interval


10.15≤y≤15.02. Again, the large width of the confidence interval implies that the estimated stock price after 6 months is uncertain.

1.14 Autoregressive Systems

In the practice of modeling financial and actuarial systems, dependent random variables play an important role. In applications we may observe a series of random variables X1, X2,…, where the individual random variables are dependent. Variables such as interest and financial return rates are often modeled using dependent variable systems. Many dependent variable techniques, such as time series, moving average and autoregressive modeling, exist. In this section an introduction to dependent modeling is presented, where one such modeling procedure, namely autoregressive modeling of order one, is discussed. For a discussion of dependent random variable models we refer to Box and Jenkins (1976). In a general autoregressive system of order k, with observed x1, x2,…, the relation between the variables is defined by

Xj − µj = φ1(Xj−1 − µj−1) + … + φk(Xj−k − µj−k) + ej  (1.14.1)

where the autoregressive coefficients φ1,…, φk and the µj are constants, and the error terms ej are independent random variables for j≥1. It is clear from (1.14.1) that the random variables Xj are dependent. In our discussion only the autoregressive system of order one, taking k=1 and writing φ for φ1, is considered. An autoregressive process or system of order one, denoted AR(1), takes the form of (1.14.1) with k=1 and can be written in the form

Xj − µj = φ(Xj−1 − µj−1) + ej  (1.14.2)

where the error terms ej are independent from the same distribution with zero mean and variance Var{ej}=σ². From (1.14.2) we solve iteratively to find

Xj − µj = ej + φej−1 + … + φ^{j−1}e1  (1.14.3)

From this the mean and variance of the marginal distributions are given by

E{Xj} = µj  and  Var{Xj} = σ²(1 + φ² + … + φ^{2(j−1)})  (1.14.4)
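A minimal AR(1) sketch, simulating (1.14.2) and checking the marginal variance in (1.14.4) together with the least squares estimate of φ given below in (1.14.8); the parameter values are illustrative:

import numpy as np

rng = np.random.default_rng(3)
n, phi, sigma = 10000, 0.6, 1.0

z = np.zeros(n)                   # z[j] plays the role of Zj = Xj - mu_j
z[0] = rng.normal(0.0, sigma)     # X1 - mu1 = e1
for j in range(1, n):
    z[j] = phi * z[j - 1] + rng.normal(0.0, sigma)   # (1.14.2)

# For large j, Var{Xj} in (1.14.4) approaches sigma^2/(1 - phi^2) = 1.5625.
print(z[100:].var())

phi_hat = (z[1:] * z[:-1]).sum() / (z[:-1] ** 2).sum()   # cf. (1.14.8)
print(phi_hat)   # near 0.6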


The moments given in (1.14.4) are functions of unknown parameters that need to be estimated before these formulas can be utilized. After estimation, approximate inference techniques, such as confidence intervals, can be applied. We remark that alternative conditions to (1.14.2) on the observed sequence of random variables lead to moving average dependent variable models. In moving average modeling, conditions relating the observed variables, Xj−µj, and the error terms are imposed for j≥1. Mixtures of autoregressive and moving average models lead to ARMA and ARIMA models. For an introduction and exposition of these techniques see Box and Jenkins (1976, Chapter 3).

In an AR(1) system we estimate the parameters φ and σ² using the method of least squares. Let Zi=Xi−µi for i≥1. In matrix notation (1.14.3) is written as

Z = Pe  (1.14.5)

where Z=(Z1,…, Zn)′, e=(e1,…, en)′ and P is comprised of elements pij where

pij = φ^{i−j} for j ≤ i, and pij = 0 for j > i  (1.14.6)

The inverse of P, denoted P^{−1}, can be shown to consist of elements given by the formula

(P^{−1})ii = 1, (P^{−1})i,i−1 = −φ, and all other elements zero  (1.14.7)
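A quick numerical check of (1.14.6) and (1.14.7), with illustrative n and φ:

import numpy as np

n, phi = 5, 0.6
P = np.array([[phi ** (i - j) if j <= i else 0.0 for j in range(n)]
              for i in range(n)])
# The inverse has ones on the diagonal, -phi on the first subdiagonal, zeros elsewhere.
print(np.round(np.linalg.inv(P), 10))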

In the absence of a presumed trend in the means, the means are estimated by µ̂j = X̄ for 1≤j≤n. To estimate the unknown parameters φ and σ², least squares is applied, where the error sum of squares is

e′e = (P^{−1}Z)′(P^{−1}Z)

The least squares point estimate for φ is found to be

φ̂ = Σ_{j=2}^n ZjZj−1 / Σ_{j=2}^n Zj−1²  (1.14.8)

Using (1.14.5), (1.14.7) and (1.14.8), we compute the fitted errors ê=P̂^{−1}Z, with ê1=Z1 and êj=Zj−φ̂Zj−1 for j≥2, and write the least squares estimator of σ² as

σ̂² = ê′ê/n  (1.14.9)

In the computation of (1.14.9) it is useful to note that


(1.14.10) and (1.14.11). The distributional properties of the AR(1) fitted model are not easy to assess. Based on the asymptotic normality of the least squares estimators, approximate inference can be constructed in some cases. We now follow with an approximate procedure. An ad hoc approximate confidence interval for a new response can be formed using the AR(1) model when the means at the individual locations are treated as fixed. The estimators (1.14.8) and (1.14.9) are used to produce the interval estimate (1.14.12) for a new value Xn+m.

We remark that to employ these AR(1) formulas an assumed model for the mean values, µj for j≥1, is required. Confidence and prediction intervals, such as given in (1.14.12), can be evaluated for their accuracy using simulation resampling techniques as discussed in Chapter 7. An example of AR(1) modeling is now given.

Ex. 1.14.1. In this example we consider the stock price data of Ex. 1.13.1. We apply the AR(1) model where we take the mean at the different locations to be fixed at µi=11+.2i for i≥1. Utilizing formulas (1.14.8) to (1.14.11) we compute the estimates

Also, the 95% confidence interval (1.14.12) at the future time corresponding to 18 months computes to be 10.05408≤x18≤15.41925. Comparing these findings with the least squares analysis of Ex. 1.13.2, we observe that the AR(1) model resulted in a slightly larger variance point estimate and a wider resulting confidence interval.

Autoregressive models can be used to model the aggregate sums used in collective risk models. Let the aggregate sum consisting of correlated AR(1) random variables be

Sn = X1 + X2 + … + Xn

for fixed constant n. We find the sum can be written in terms of the errors as

Sn = Σ_{j=1}^n µj + Σ_{i=1}^n (1 + φ + … + φ^{n−i}) ei  (1.14.13)


Also, the mean and variance of Sn are computed to be

E{Sn} = Σ_{j=1}^n µj  and  Var{Sn} = σ² Σ_{i=1}^n (1 + φ + … + φ^{n−i})²  (1.14.14)

We remark that the constants µj model the trend in the variables over locations or time. Utilizing (1.14.14), ad hoc interval estimates, similar to (1.14.12), can be constructed. These interval estimates can be validated by modern simulation and resampling methods, such as those presented in Chapter 7. The section closes with an example that is used in conjunction with the simulation methods presented in Chapter 7.

Ex. 1.14.2. We consider AR(1) settings where the error terms are normally distributed with a common variance. If the errors are independent with common variance we observe that, marginally, Xj is normal with mean and variance given by (1.14.4). In a more general setting, if we take (X1, X2,…, Xn) to be multivariate normal under (1.14.2), then the conditional distribution of Xj given Xj−1,…, X1 depends only on Xj−1. Here X1 is normal with E{X1}=µ1 and Var{X1}=σ², and using normal theory Xj|Xj−1=xj−1 is distributed normal where

E{Xj|Xj−1=xj−1} = µj + φ(xj−1 − µj−1)  and  Var{Xj|Xj−1=xj−1} = σ²  (1.14.15)

This conditioning strategy is useful in certain financial settings where values at certain time intervals, such as stock prices, depend on the previous values. The formulas given in (1.14.15) are utilized in simulation procedures where a series of variable outcomes are sequentially generated.

Problems

1.1. Let A and B be events and use the axioms of probability, by constructing disjoint

events, to prove:

a) …

1.2. For real number a and positive integer n consider the series and associated partial sums defined by

Σ_{i=0}^∞ a^i  and  Sn = Σ_{i=0}^n a^i,

respectively. The infinite series exists if |a|<1. Show for j=s formula (1.3.11). Hint: First find Sn−Sn+1.

1.3. For a continuously differentiable function f(x) the general Taylor series is

f(x) = f(a) + f⁽¹⁾(a)(x−a)/1! + f⁽²⁾(a)(x−a)²/2! + …

a) Find the Taylor series approximation for exp(x) with a=0.
b) For the Poisson distribution given in Ex. 1.2.3 find the mean and variance.
c) For the geometric distribution given in Ex. 1.2.1 find the mgf and use it to compute the mean and variance.

1.4. For random variable X find the mean and variance, µ and σ², for pdf
a) f(x)=6x(1−x) for S={x|0≤x≤1},
b) f(x)=1/5 for S={1, 2,…, 5},
c) f(x)=(1/2)^x for S={1, 2,…}.

1.5. a) Let X~U[0, 1] with pdf given in Ex. 1.2.4. Find i) F(x), ii) S(x), iii) µ and σ², iv) P(X>1/4), v) P(X>1/2|X>1/4).
b) Let X~n(100, 100). Use (1.2.11), Appendix A1 and Appendix A2 to compute i) P(X>115), ii) P(X≤83), iii) P(85