Introduction to Estimating Economic Models
The book’s comprehensive coverage of the application of econometric methods to empirical analysis of economic issues is impressive. It uncovers the missing link between textbooks on economic theory and econometrics, and highlights the powerful connection between economic theory and empirical analysis through examples of rigorous experimental design. The use of data sets for estimation derived with the Monte Carlo method helps facilitate the understanding of the role of hypothesis testing applied to economic models. Topics covered in the book are: consumer behavior, producer behavior, market equilibrium, macroeconomic models, qualitative-response models, panel data analysis and time-series analysis. Key econometric models are introduced, specified, estimated and evaluated. The treatment of methods of estimation in econometrics and the discipline of hypothesis testing makes it a must-have for graduate students of economics and econometrics, and it will aid their understanding of how to estimate economic models and evaluate the results in terms of policy implications.

Atsushi Maki is presently with the Department of Economics, Tokyo International University, Japan. He is Professor Emeritus of Economics at Keio University, Japan. Previously, he was Professor of Economics (1987–2009) at the Faculty of Business and Commerce, Keio University. He has been a visiting scholar at several universities, including Harvard University and the Australian National University, and has taught as a visiting professor at several universities and institutions, including Osaka University, ESSEC (France) and KSMS (Kenya). His main fields are empirical analysis of consumer behavior and market behavior.
For his distinguished and invaluable contributions to scholarship, education and society, Professor Maki has received many honours, including the Japanese Ministry of Education, Science and Culture Travel Award from the Ministry of Education, Japan, and the Koizumi Travel Award from Keio University. He has also been awarded the following research grants: the Abe Fellowship grant from the Social Science Research Council (SSRC), the American Council of Learned Societies (ACLS) and the Japan Foundation Center for Global Partnership (CGP), 2001; and the Harvard-Yenching Fellowship grant, Harvard-Yenching Institute, Harvard University, 2001.
Routledge Advanced Texts in Economics and Finance
1. Financial Econometrics (Peijie Wang)
2. Macroeconomics for Developing Countries, second edition (Raghbendra Jha)
3. Advanced Mathematical Economics (Rakesh Vohra)
4. Advanced Econometric Theory (John S. Chipman)
5. Understanding Macroeconomic Theory (John M. Barron, Bradley T. Ewing and Gerald J. Lynch)
6. Regional Economics (Roberta Capello)
7. Mathematical Finance: Core theory, problems and statistical algorithms (Nikolai Dokuchaev)
8. Applied Health Economics (Andrew M. Jones, Nigel Rice, Teresa Bago d’Uva and Silvia Balia)
9. Information Economics (Urs Birchler and Monika Bütler)
10. Financial Econometrics, second edition (Peijie Wang)
11. Development Finance: Debates, dogmas and new directions (Stephen Spratt)
12. Culture and Economics: On values, economics and international business (Eelke de Jong)
13. Modern Public Economics, second edition (Raghbendra Jha)
14. Introduction to Estimating Economic Models (Atsushi Maki)
Introduction to Estimating Economic Models Atsushi Maki
First published 2011 by Routledge
2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN
Simultaneously published in the USA and Canada by Routledge
270 Madison Avenue, New York, NY 10016

Routledge is an imprint of the Taylor & Francis Group, an informa business

This edition published in the Taylor & Francis e-Library, 2011. To purchase your own copy of this or any of Taylor & Francis or Routledge’s collection of thousands of eBooks please go to www.eBookstore.tandf.co.uk.

© 2011 Atsushi Maki

The right of Atsushi Maki to be identified as author of this work has been asserted by him in accordance with the Copyright, Designs and Patents Act 1988. All rights reserved. No part of this book may be reprinted or reproduced or utilized in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging in Publication Data
Maki, Atsushi, 1948–
Introduction to estimating economic models / Atsushi Maki.
p. cm.—(Routledge advanced texts in economics and finance)
1. Economics—Methodology. 2. Economics—Econometric models. I. Title.
HB131.M335 2010
330.01′5195—dc22
2010021806

ISBN 0-203-83949-8 Master e-book ISBN
ISBN: 978–0–415–58986–4 (hbk) ISBN: 978–0–415–58987–1 (pbk) ISBN: 978–0–203–83949–2 (ebk)
To my mentor, the late Hendrik S. Houthakker
Contents

List of figures
List of tables
Preface

1 Introduction
  1.1 Overview
  1.2 Experimental design
  1.3 Introduction of stochastic concept
  1.4 Data sets derived from the Monte Carlo method

2 Consumer behavior
  2.1 Theory of consumer behavior
  2.2 Model
  2.3 How to generate a data set by the Monte Carlo method
  2.4 Examples
  Bibliography

3 Producer behavior
  3.1 Theory of producer behavior
  3.2 Models
  3.3 How to generate a data set by the Monte Carlo method
  3.4 Examples
  Bibliography

4 Market equilibrium models
  4.1 Theory of market equilibrium
  4.2 Identification problem
  4.3 Models: competitive, oligopolistic, and monopolistic markets
  4.4 How to generate a data set by the Monte Carlo method
  4.5 Examples
  Bibliography

5 Macroeconomic models
  5.1 General models
  5.2 Empirical models
  5.3 How to generate a data set by the Monte Carlo method
  5.4 Examples
  Bibliography

6 Microeconomic analysis using micro-data: qualitative-response models
  6.1 Qualitative-response models
  6.2 How to generate a data set by the Monte Carlo method
  6.3 Examples
  6.4 The merits of micro-data sets revisited
  Bibliography

7 Microeconomic analysis using panel data
  7.1 Models
  7.2 How to generate a data set by the Monte Carlo method
  7.3 Examples
  Bibliography

8 Macroeconomic time-series analysis
  8.1 Characteristics of time series and time-series models
  8.2 How to generate a data set by the Monte Carlo method
  8.3 Examples
  Bibliography

9 Summary and conclusion

Index
Figures

1.1 The relation between observation and theory
1.2 Orthodox procedure for empirical economic analysis
1.3 Present procedure
2.1 Dual approach in consumer-demand theory
2.2 Heteroskedasticity (case 1)
2.3 Heteroskedasticity (case 2)
2.4 Heteroskedasticity (pooled LESUNIR1 with LESUNIR2)
2.5 Auto-correlation
2.6 Auto-correlation (enlargement)
2.7 The relationship between the constant-utility and Laspeyres price indexes
2.8 Constant-utility price index, Fisher index, and Törnqvist index
2.9 Constant-utility price index and Laspeyres price index
2.10 Forecasting (p1 = 1.1, p2 = 1.1)
2.11 Forecasting (p1 = 0.7, p2 = 1.5)
3.1 Production and cost
3.2 Production and cost for increasing returns to scale
4.1 Individual demand and supply curves and market demand and supply curves
4.2 Equilibrium in an imperfect market
4.3 Equilibrium price path in an imperfect market
4.4 Under-identifiable demand and supply curves
4.5 Identifiable supply curve and non-identifiable demand curve
4.6 Identifiable demand and supply curves
4.7 Demand and supply curves with disturbance terms
4.8 The influence of no disturbance term on the supply curve
5.1 Haavelmo bias
5.2 IS-LM curves
5.3 The liquidity trap and full employment level
6.1 Linear probability and S-shaped probability curves
6.2 Graphical illustration of the probit model
6.3 Graphical illustration of the double-hurdle model
6.4 Probit (latent variable) standard error = 10
6.5 Probit (observation) standard error = 10
6.6 Probit (latent variable) standard error = 30
6.7 Probit (observation) standard error = 30
6.8 Tobit (latent variable) standard error = 10
6.9 Tobit (observation) standard error = 10
6.10 Tobit (latent variable) standard error = 30
6.11 Tobit (observation) standard error = 30
8.1 Several time series (divergent process, random walk, and stationary process)
8.2 Time series (case (c))
8.3 Time series (case (e))
8.4 Time series (case (h))
8.5 Spurious regression (scatter diagram)
8.6 Time series (case (i))
Tables

1.1 An example of experimental design
2.1 Information on data sets
2.2 Results of the stability test (structural change test)
2.3 Results according to the two data sets with no structural changes
2.4 Results according to the two data sets with structural changes
2.5 Structural change test with auto-correlation
2.6 Data sets used for heteroskedasticity test
2.7 Results of heteroskedasticity
2.8 Results using the data sets with auto-correlation
2.9 Estimation results by considering the existence of auto-correlation
2.10 Mean and standard error in the estimates for sample size of 20
2.11 Results with non-normal distributions and the normality test by Jarque–Bera test
2.12 Information on structural parameters, exogenous variables, and shocks
2.13 Calculated b1, b2, a1, a2, and a3
2.14 Hypothesis testing
2.15 Results of simultaneous estimation for three commodities
2.16 Estimation results for equations (59) and (60)
2.17 Elasticity of demand
3.1 Structural parameters and variables
3.2 List of constants and variables
3.3 Virtual data sets
3.4 Multi-collinearity
3.5 Linear homogeneous production function
3.6 Estimation results
3.7 Estimation results
3.8 Estimation results
3.9 Parameters for CES production function
3.10 Degree of approximation of the magnitude of the standard error of the production function and the elasticity of substitution
3.11 Degree of approximation regarding the magnitude of the standard error of the production function, the elasticity of substitution, and the correlation between capital and labor
4.1 Virtual data set for competitive market (unidentifiable case)
4.2 Competitive model (identifiable case) for parameters
4.3 Virtual data set for conjectural variation
4.4 Virtual data (monopoly, linear model)
4.5 Virtual data (monopoly, log-linear model)
4.6 Estimates and hypothesis testing (competitive market, unidentifiable model)
4.7 Estimation results (competitive market, identifiable model)
4.8 Estimates and hypothesis testing (competitive market)
4.9 Estimates and hypothesis testing (oligopoly)
4.10 Monopoly (linear model)
4.11 Monopoly (log-linear model)
5.1 Haavelmo bias
5.2 IS-LM model: virtual data
5.3 AD-AS analysis: virtual data
5.4 Mundell–Fleming model (small open economy): virtual data
5.5 Mundell–Fleming model (large open economy): data sets
5.6 Neoclassical growth model: data set
5.7 Haavelmo bias: estimation
5.8 Haavelmo bias: hypothesis testing
5.9 IS-LM model: estimation
5.10 IS-LM model: hypothesis testing
5.11 IS-LM model: liquidity trap and full employment
5.12 AD-AS analysis: estimation and hypothesis testing
5.13 AD-AS analysis: estimation and hypothesis testing (balanced budget)
5.14 Mundell–Fleming model (small open economy): estimation
5.15 Mundell–Fleming model (large open economy): estimation and hypothesis testing
5.16 Estimation results of a neoclassical growth model
6.1 Probit and logit models: virtual data
6.2 Censored tobit model: virtual data
6.3 Truncated tobit model: virtual data
6.4 Heckman’s two-step method: virtual data
6.5 Double hurdle model: virtual data
6.6 Probit and logit models: estimation results
6.7 Censored tobit model: estimation results
6.8 Truncated tobit model: estimation results
6.9 Heckman’s two-step method: estimation results
6.10 Estimates and hypothesis testing for the double-hurdle model
7.1 Pooling time-series and cross-section data
7.2 Analysis of variance for complete two-way cross-classification model (model I)
7.3 Analysis of variance for complete two-way cross-classification model (model II)
7.4 Virtual data used for ANOVA
7.5 Virtual data set for the fixed-effects model
7.6 Virtual data set for the random-effects model
7.7 Results by ANOVA
7.8 Estimation results of the fixed-effects model
7.9 Estimation results of the random-effects model
8.1 VAR models: virtual data
8.2 Spurious regression: virtual data
8.3 VAR models: estimation results
8.4 Spurious regression: adjusted coefficient of determination
8.5 Unit root test
8.6 Co-integration
8.7 ECM
Preface
This book teaches the application of econometric methods to empirical analysis of a range of economic issues. It provides a missing link between textbooks on economic theory and econometrics by emphasizing the powerful connection between economic theory and empirical analysis. Students are taught about this connection by studying examples of rigorous experimental design that link theoretical models with stochastic concepts and observations. It teaches methods of estimation in econometrics and the discipline of hypothesis testing. By constructing the data sets used for estimation with the Monte Carlo method, students readily understand the role of hypothesis testing applied to economic models. Namely, if an estimating equation for regression analysis or an economic model is correctly specified, the structural parameters of the model can be correctly estimated by econometric estimation. This can be verified by hypothesis testing. The topics covered in this textbook are: consumer behavior, producer behavior, market equilibrium models, macroeconomic models, microeconomic models using micro-data and panel data, and macroeconomic time-series models. Some key econometric models are introduced, specified, estimated, and evaluated. By studying the book, students of economics and econometrics will understand how to estimate economic models, how to confirm the accuracy of the estimates compared with the true values of the model, and how to evaluate the results in terms of policy implications.

One of the objectives of empirical economic analysis is to understand the true relationship through sample observations. Usually we don’t know the true relationship in the real world a priori. Introduction to Estimating Economic Models, however, uses data sets derived by the Monte Carlo method that allow us to know the true relationship and the true values of the structural parameters a priori.
For example, when explaining whether or not there is structural change in observation data, existing textbooks usually calculate the F-value derived from actual economic statistics such as the Consumer Expenditure Survey or National Income and Product Accounts. According to the F-value, it is possible to know whether or not there is structural change in the data. In this example, students can understand how to use the Chow test, but cannot confirm whether or not there is structural change in the economy.
Let us examine a standard econometrics textbook regarding the demand for food. On the topic of the stability test, it presents one example on the demand for food. As a true relationship, it specifies the estimating equation as

log q = α + β1 log p + β2 log y

where q is per capita food consumption, p is the price of food, and y is per capita income for the years 1927–41 and 1948–62. It calculates the Chow test statistic in order to test for structural change between the two periods and, based on the F-value, it concludes that there is structural change between the periods. However, students in economics will question the specification of the estimating equation and also the conclusion. This is because the estimating equation does not include the price effect of other commodities, which constitutes an important factor in consumer demand theory. As a result, they cannot tell whether the estimating equation derives a true relationship based on economic theory. In addition, they have questions regarding the conclusion. Namely, an estimating equation used in econometric analysis may be arbitrary and lack a foundation in economic theory. In short, economics students can be confused by an example that lacks a theoretical basis. In the present book, on the other hand, the data sets are derived by the Monte Carlo method. Thus, we know a priori whether there is or is not a structural change in the data set. Using observations from the Monte Carlo method, we estimate the model applying regression analysis and test the null hypothesis that there is no structural change. As we know the conclusion a priori, students can confirm that hypothesis testing, using the Chow test, has the power to test the null hypothesis, thereby gaining an understanding of the meaning of structural change.
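The Chow test logic described above can be sketched in a few lines of code. The following is an illustrative Monte Carlo script in Python (the book itself uses TSP); the model, parameter values, and sample sizes are my own arbitrary choices, not taken from the book. It generates two sub-samples with and without a slope shift and computes the Chow F-statistic in each case.

```python
import numpy as np

rng = np.random.default_rng(0)

def rss(X, y):
    """Residual sum of squares from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    return float(e @ e)

def chow_f(x1, y1, x2, y2):
    """Chow F-statistic for a break between two sub-samples,
    for the model y = b0 + b1*x + error (k = 2 parameters)."""
    X1 = np.column_stack([np.ones_like(x1), x1])
    X2 = np.column_stack([np.ones_like(x2), x2])
    Xp = np.vstack([X1, X2])
    yp = np.concatenate([y1, y2])
    k = 2
    rss_pooled = rss(Xp, yp)                    # restricted: one regime
    rss_split = rss(X1, y1) + rss(X2, y2)       # unrestricted: two regimes
    dof = len(yp) - 2 * k
    return ((rss_pooled - rss_split) / k) / (rss_split / dof)

n = 100
x_a, x_b = rng.uniform(0, 10, n), rng.uniform(0, 10, n)

# Case 1: no structural change (same intercept and slope in both periods)
y_a = 1.0 + 1.0 * x_a + rng.normal(0, 1, n)
y_b = 1.0 + 1.0 * x_b + rng.normal(0, 1, n)
f_stable = chow_f(x_a, y_a, x_b, y_b)

# Case 2: structural change (the slope shifts from 1.0 to 2.0)
y_c = 1.0 + 2.0 * x_b + rng.normal(0, 1, n)
f_break = chow_f(x_a, y_a, x_b, y_c)

print(f_stable, f_break)  # small F in the stable case, very large F with the break
```

Comparing each statistic with the 5 per cent critical value of the F(2, 196) distribution (about 3.0) reproduces the book's point: because the data are generated ourselves, we know in advance which null hypothesis should be rejected.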
In Introduction to Estimating Economic Models, I give two examples regarding structural change by the Monte Carlo method; one is a case of structural change in the data set and the other is an example of no structural change. Thus students learn the value and technique of hypothesis testing; namely, the null hypothesis that there is no structural change is rejected in the former case and is not rejected in the latter. This book uses economic models as a starting point for studying econometric methods. This gives students a direct link between economics and econometrics. Most other textbooks simply use econometric models as a starting point. The economic models are usually relegated to the application section and are accorded minor importance. Therefore, students are not well trained in linking economic models with econometric estimation and testing methods. This book addresses this problem. As explained above, by using the Monte Carlo method, I conduct an ideal experiment in the econometric laboratory. This helps students understand the importance of economic theory and of experimental design connecting model and observation. Thus, I provide a link between economic theory and applied econometric work that is often missing in standard textbooks. Introduction to Estimating Economic Models includes examples using the Monte Carlo method, estimation results, and references. It does not provide detailed discussion of statistical and econometric theories because these subjects are covered in econometric textbooks such as Baltagi (2008), Cameron and
Trivedi (2005), Greene (2008), Maddala and Lahiri (2009), Stock and Watson (2007), Wooldridge (2002) and Wooldridge (2009). The present text is prepared as a supplement to such standard economics and econometrics textbooks. I learned FORTRAN when I was an undergraduate student in Japan and I studied TSP software when I was a graduate student. Since then, I have used TSP, and many of the results presented in the text are obtained by using TSP. I have been conducting empirical analysis on consumer demand for more than thirty years, and for the last twenty years I have been interested in using simulation techniques for market equilibrium models. By conducting both empirical analysis and simulation analysis in my career, I found a new approach to teaching economic modeling and econometric methods to graduate students. I would like to thank Fumio Hayashi, Hendrik Houthakker, Dale Jorgenson and Michael McAleer for their encouragement, and Eitaro Aiyoshi, Trevor Breusch, Richard Cornes, Thesia Garner, Jeffrey Kingston, Yukinobu Kitamura, Shigeru Nishiyama, Satoshi Ohira, Les Oxley, Murray Smith, Michael Veall, Kenji Wada, and Tom Wells for their comments and suggestions.
References

Baltagi, B. H. (2008) Econometrics, Fourth edition, Berlin Heidelberg: Springer-Verlag.
Cameron, A. C. and P. K. Trivedi (2005) Microeconometrics: Methods and Applications, Cambridge: Cambridge University Press.
Greene, W. H. (2008) Econometric Analysis, Sixth edition, Upper Saddle River, NJ: Prentice-Hall.
Maddala, G. S. and K. Lahiri (2009) Introduction to Econometrics, Fourth edition, Chichester: John Wiley.
Stock, J. H. and M. W. Watson (2007) Introduction to Econometrics, Second edition, Boston: Addison-Wesley.
Wooldridge, J. M. (2002) Econometric Analysis of Cross-Section and Panel Data, Cambridge, MA: MIT Press.
Wooldridge, J. M. (2009) Introductory Econometrics: A Modern Approach, Fourth edition, Mason, OH: South-Western.
1 Introduction
1.1 Overview

One of the objectives of empirical economic analysis is to understand the true relationship between variables through observations of data samples. This text is about the application of econometric methods to empirical analysis of a range of economic issues. By emphasizing the connection between economic theory and empirical analysis, this book fills the gap between textbooks on economic theory and econometrics. Students will engage with examples of rigorous experimental design that link theoretical models with stochastic concepts and observations. They will also learn methods of estimation in econometrics. In addition, by using data sets derived by the Monte Carlo method, they learn about the role of hypothesis testing in econometric models. Usually we don’t know the true relationships of variables a priori. However, in this textbook, we know the true relationships and true values of the parameters in a model a priori. Careful study leads to the conclusion: if a model is correctly specified, including the specification of the distribution of random variables, and the model is estimated by a suitable method, then we can obtain estimates of the parameters of the true relationship from the sample observations. By using the Monte Carlo method, I conduct an ideal experiment in the econometric laboratory, one that aims to show the importance of economic theory and experimental design connecting model and observation. This textbook covers consumer behavior, producer behavior, market equilibrium models, macroeconomic models, qualitative-response models using micro-data, fixed-effects and random-effects models utilizing panel data, and macroeconomic time-series analyses.
Through careful study, students of economics and econometrics will come to understand how to estimate econometric models and confirm the accuracy of the estimates compared with the true values in the model by applying hypothesis testing, and will also learn how to evaluate related policy implications.
1.2 Experimental design

Experimental design in economics is similar to designing experimental instruments in the natural sciences. In the empirical analysis of economics, models are estimated using data sets in order to construct a structure, test the applicability of the structure through theoretical analysis and statistical testing, and conduct forecasting to test the robustness of the structure. The three fundamental aspects of experimental design are:

1 determining the unit;
2 determining the observational period;
3 establishing the correspondence between theoretical and observational variables.
Determining the unit involves deciding the theoretical unit of a model, such as the individual household, individual firm, or government, or the representative household or representative firm. Aggregated data rely on the representative concept, while micro-data sets rely on the individual unit. The observational period could be a day, week, or month, or it could be quarterly, semi-annual, or annual. For example, security markets’ sales data covering market prices and quantities are available every minute, while the System of National Accounts (SNA) is published on a quarterly and annual basis, and corporate financial data are published semi-annually. The observational unit is strongly dependent on the data sets used. As for the correspondence between theoretical and observational variables, in theory we usually write p for price, but in empirical analysis, price is strictly defined by the data, such as the price of food as published in the Family Expenditure Survey.

The principal purpose of experimental design is to ensure the reproducibility of empirical results. Anyone using the same data and methods should be able to produce the same results. The second purpose of experimental design concerns the development of economic science. Whether an experiment is successful or not, the results are made available to the general public and other researchers. In published papers, the goal is to add to previous research and suggest avenues for further study.

To elucidate experimental design, let’s consider an estimation of the consumer-demand function, one involving a time series for household behavior for food consumption. It is important to construct a model according to economic theory. The foundation of consumer behavior is the assumption that the consumer has a utility function and consumes goods and services so as to maximize utility under the budget constraint. Here, consumers behave as price-takers.
In this framework, exogenous variables are income and prices, and endogenous variables are quantities consumed. The constants to be estimated are parameters of the utility function. The utility function is specified as the linear expenditure system (LES) type utility function; for simplicity, as an example, total expenditure is divided into two categories, namely food and other items. The utility function is:

u = β1 log(x1 - α1) + β2 log(x2 - α2)  (1)

and the budget constraint is:

y = p1x1 + p2x2  (2)

where α1, α2, β1, and β2 are the parameters of the utility function, p1 and p2 are the prices of food and other items, respectively, and y is income. To maximize the utility function under the budget constraint, we use the Lagrange multiplier method:

V = β1 log(x1 - α1) + β2 log(x2 - α2) + λ(y - p1x1 - p2x2)  (3)

The necessary conditions for maximization are:

∂V/∂x1 = β1/(x1 - α1) - λp1 = 0
∂V/∂x2 = β2/(x2 - α2) - λp2 = 0
∂V/∂λ = y - p1x1 - p2x2 = 0  (4)

Solving the system for the expenditures p1x1 and p2x2, we get:

p1x1 = β1/(β1 + β2)y + β2/(β1 + β2)α1p1 - β1/(β1 + β2)α2p2
p2x2 = β2/(β1 + β2)y - β2/(β1 + β2)α1p1 + β1/(β1 + β2)α2p2  (5)

As each expenditure is a linear function of y, p1, and p2, the demand system is called the LES. The demand function of item 1 (food) is obtained by dividing by p1:

x1 = β1/(β1 + β2)(y/p1) + β2α1/(β1 + β2) - β1α2/(β1 + β2)(p2/p1)  (6)

and the demand function of item 2 (others) is obtained as:

x2 = β2/(β1 + β2)(y/p2) - β2α1/(β1 + β2)(p1/p2) + β1α2/(β1 + β2)  (7)
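As a quick numerical check on equations (5)–(7), the closed-form LES demand can be compared with a brute-force maximization of the utility function along the budget line. This Python sketch is my own illustration (the parameter values β1 = 0.4, β2 = 0.6, α1 = 2, α2 = 5 and the price/income point are arbitrary, not taken from the book):

```python
import numpy as np

# Arbitrary LES parameters and an arbitrary price/income point
b1, b2, a1, a2 = 0.4, 0.6, 2.0, 5.0
p1, p2, y = 1.0, 2.0, 30.0

# Closed-form LES demand, equation (6) rearranged:
# x1 = a1 + b1/(b1+b2) * (y - a1*p1 - a2*p2)/p1
supernumerary = y - a1 * p1 - a2 * p2        # income above subsistence
x1_les = a1 + b1 / (b1 + b2) * supernumerary / p1
x2_les = a2 + b2 / (b1 + b2) * supernumerary / p2

# Brute force: evaluate u on a fine grid along the budget line y = p1*x1 + p2*x2
x1_grid = np.linspace(a1 + 1e-6, (y - a2 * p2) / p1 - 1e-6, 200001)
x2_grid = (y - p1 * x1_grid) / p2
u = b1 * np.log(x1_grid - a1) + b2 * np.log(x2_grid - a2)
x1_best = x1_grid[np.argmax(u)]

print(x1_les, x2_les, x1_best)  # 9.2, 10.4, and a grid point very close to 9.2
```

The grid maximizer agrees with the closed form, and the two expenditures add up to income, confirming that the demand system in (5) satisfies the budget constraint.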
To estimate a model, the researcher obtains data from publications or the Internet. In terms of experimental design, we first have to decide on the experimental unit. As we are interested in household behavior here, the observational unit is the household. On the other hand, if we are interested in individual consumption, the unit is the individual. When there is mutual interdependence among household members, it is better to use the household as the consumption unit. For example, parents usually pay the educational expenditure for their children, so attributing such consumption to the individual would be misleading. Using the individual as the unit for consumption of housing would also be misleading, as all members of the household are collectively consuming housing. Therefore, except in rare cases, when studying household behavior the experimental unit should be the household rather than the individual. In terms of time intervals, for example, if we use the Family Income and Expenditure Survey (FIES) in Japan, the time unit is monthly or annual. If we use the SNA, it is quarterly or annual.
Table 1.1 An example of experimental design

Family Income and Expenditure Survey
  Unit: households
  Unit period: annual
  Correspondence between theoretical and observational variables:
    p1: food price index (a)
    p2: price index for others (a)
    p1x1: food expenditure (b)
    p2x2: expenditure for other items (b)
    y: total expenditure (p1x1 + p2x2) (b)
  Seasonal adjustment: no

National Income Statistics
  Unit: representative consumer (per capita)
  Unit period: quarterly
  Correspondence between theoretical and observational variables:
    p1: food price deflator (c)
    p2: non-food price deflator (c)
    p1x1: per capita food expenditure (c)
    p2x2: per capita non-food expenditure (c)
    y: p1x1 + p2x2
    n: population size
  Seasonal adjustment: X-11, X12ARIMA

Notes
a From Annual Report on Consumer Price Index.
b From FIES.
c From SNA.
As for the correspondence between theoretical and observational variables, the theoretical variables in the model are p1, p2, x1, x2, and y. We have to pick up such observation data from the FIES or SNA. The prices and quantities consumed are gathered from the Annual Report on Consumer Prices and the FIES, respectively. The correspondence between theoretical and observation variables is:

• p1x1 in the model: FIES (food expenditure of all households);
• p2x2 in the model: FIES (non-food expenditure of all households, y - p1x1);
• p1 in the model: Annual Report on Consumer Price Index (food price index of all households);
• p2 in the model: Annual Report on Consumer Price Index (non-food price index of all households);
• y in the model: FIES (total expenditure of all households).
The quantities consumed (x1 and x2 here) are obtained by dividing expenditure pixi by its own price (pi). Hence we get p1x1/p1 and p2x2/p2. Examples of experimental design are depicted in Table 1.1. The table depicts two cases: the FIES on an annual basis and the SNA on a quarterly basis. The following is an unsuitable example of experimental design: to estimate the demand function for food nationwide, p1 and p2 are selected only from the Tokyo metropolitan area instead of all households in Japan. This is unsuitable because Tokyo households do not necessarily have the same demand function for food as households around the nation and thus the results would be misleading. Here’s another example: We want to estimate consumer demand for air conditioners for
all households in Japan and choose as one of the independent variables in the model the number of nights when the temperature exceeded 25°C. If we choose this figure from the Tokyo area in some years, Osaka in other years, and Sapporo in still other years, the estimated results will be meaningless and have no practical implication, even if the estimated demand function is statistically satisfactory. Normally, we don’t choose such experimental designs. But sometimes we find empirical results based on data sets that are not consistent with each other, and we need to be aware of the pitfalls of unsuitable experimental design.

To design an experiment on household consumer demand, we next consider the causal order of variables. In applied econometric analysis, regression analysis is the most common method of causal analysis. In regression analysis, there is a dependent variable on the left-hand side of the equation, and independent variables on the right-hand side. Here we may wonder about the selection of the independent and dependent variables. In consumer-demand analysis, does the left-hand side indicate the variable price or the quantity demanded? We find the answer by considering the dependent variable according to economic theory. The demand function of x1 is:

x1 = β1/(β1 + β2)(y/p1) + β2α1/(β1 + β2) - β1α2/(β1 + β2)(p2/p1)

Now we set B0 as β1/(β1 + β2), B1 as β2α1/(β1 + β2), and B2 as -β1α2/(β1 + β2). These are parameters, and the previous equation becomes:

x1 = B0(y/p1) + B1 + B2(p2/p1)  (8)

We can consider three alternatives for estimation:

(I) x1 = B0(y/p1) + B1 + B2(p2/p1);
(II) (y/p1) = C0 + C1x1 + C2(p2/p1);
(III) (p2/p1) = D0(y/p1) + D1x1 + D2.

After introducing a stochastic variable and applying the least-squares method to the equation, we can estimate the parameters of the models. Now, the three alternatives indicated as (I), (II), and (III) above are not identical in relation to theory and estimation. In theory, exogenous variables are prices and income, and the endogenous variable is quantity demanded. Therefore, it is necessary for regression analysis to specify that the right-hand-side variables are exogenous. Further, it is assumed that exogenous and random variables are mutually independent. This is true only in case (I). Therefore, the reasonable estimating equation is case (I).
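To see concretely why case (I) is the reasonable specification, one can generate a virtual data set from equation (8) with exogenous y, p1, and p2 and check that least squares recovers the true parameters. The following Python sketch is in the spirit of the book's Monte Carlo experiments but is my own illustration: the true values B0 = 0.4, B1 = 1.2, B2 = -2.0, the sampling distributions, and the sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(42)

# True structural parameters of equation (8)
B0, B1, B2 = 0.4, 1.2, -2.0
n = 1000

# Exogenous variables, drawn independently of the disturbance
y = rng.uniform(20.0, 40.0, n)    # income
p1 = rng.uniform(0.8, 1.2, n)     # price of food
p2 = rng.uniform(1.5, 2.5, n)     # price of other items
eps = rng.normal(0.0, 0.1, n)     # error in the equation

# Virtual observations generated from the true relationship (case (I))
x1 = B0 * (y / p1) + B1 + B2 * (p2 / p1) + eps

# OLS: regress x1 on (y/p1) and (p2/p1) with a constant
X = np.column_stack([y / p1, np.ones(n), p2 / p1])
b0_hat, b1_hat, b2_hat = np.linalg.lstsq(X, x1, rcond=None)[0]

print(b0_hat, b1_hat, b2_hat)  # estimates close to 0.4, 1.2, -2.0
```

Because the right-hand-side variables are exogenous and independent of the disturbance, the estimates center on the true values; reversing the roles of the variables as in cases (II) or (III) would put an endogenous variable on the right-hand side and break this property.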
1.3 Introduction of stochastic concept

Models are a kind of simplification of reality. Accordingly, there is no perfect correspondence between economic statistics that describe reality and a given model. As everyone knows, economic data is not obtained through a controlled experiment. Constructing an empirical model to analyze economic data requires
Figure 1.1 The relation between observation and theory: observation (economic activities) is linked to theory through (a) systematic parts (theoretical value) and (b) non-systematic parts (random variable).
the inclusion of many factors simultaneously. But to include many factors, numerous data points are needed. When only a limited number of samples are available, we have to consider other criteria in constructing a model. We need to determine the dominant factors in the model that explain the endogenous variables. In reality, there are thousands of independent variables influencing the dependent variable, but it is necessary to consider only a couple of variables that influence the dependent variable, combining the remaining factors into one variable, namely a random variable with a constant mean and finite variance. Considering the distribution of the random variable, the smaller the variance of the stochastic distribution, the better the model. Though the factors are numerous, some of them are specified as systematic factors derived from the model as a set of independent variables on the right-hand side, while the others are combined into a random variable (see Figure 1.1). In regression analysis, a true relationship is specified as y = β0 + β1x1 + β2x2. Introducing the stochastic variable ε to link theory and observational data makes the regression equation y = β0 + β1x1 + β2x2 + ε. We then apply the ordinary least-squares method. There are two kinds of random variables in a model. One is an error in the equation, the other an error in the variables. In econometric methodology, random variables are most often introduced as errors in equations in regression analysis. The permanent income hypothesis specified by Milton Friedman is a typical model introducing errors in the variables. Friedman's model includes permanent income, transitory income, and observed income as income factors, and permanent consumption, transitory consumption, and observed consumption as consumption factors. Here, the observed variables are assumed to be stochastic.
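The errors-in-variables case can be sketched as follows (assuming numpy; all numbers are hypothetical, chosen for illustration rather than taken from Friedman): consumption responds to permanent income, but the regressor we observe also contains transitory income, so the OLS slope is biased toward zero.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

k = 0.9                                # marginal propensity to consume (assumed)
y_perm = rng.normal(1000, 100, n)      # permanent income
y_trans = rng.normal(0, 50, n)         # transitory income, acts like measurement error
y_obs = y_perm + y_trans               # observed income
c = k * y_perm + rng.normal(0, 10, n)  # observed consumption

# Regressing consumption on OBSERVED income attenuates the slope toward
# k * var(y_perm) / (var(y_perm) + var(y_trans)) = 0.9 * 10000/12500 = 0.72
X = np.column_stack([np.ones(n), y_obs])
b, *_ = np.linalg.lstsq(X, c, rcond=None)
print(b[1])
```

The fitted slope lands near 0.72 rather than the true propensity 0.9, which is the attenuation that errors in the variables produce.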
1.4 Data sets derived from the Monte Carlo method

The objective of applied econometrics is to examine what happens in the real world by analyzing statistical data in a theoretical framework. Concretely, we need to specify a function to explain real data according to our hypothesis, and to test whether or not the function is stable and whether or not our null hypothesis is rejected. After we confirm the viability of the model by statistical methods, we can engage in forecasting or simulation for policy purposes. This is the orthodox procedure for empirical economic analysis, as shown in Figure 1.2.
Figure 1.2 Orthodox procedure for empirical economic analysis: data (economic statistics; data accuracy) → model (correct specification or mis-specification) → estimation (estimation method; assumptions of the random variable) → structure (constructing the null hypothesis; right or wrong) → hypothesis testing (testing the null; if rejected, back to the model for modification) → forecasting.
Every researcher engaged in applied econometrics has conducted research according to this procedure. However, students who want to master applied econometric analysis usually have doubts, misgivings, and many questions, e.g.: (1) Do the estimating equations describe the true relationship of the observed data? (2) Are the assumptions of random variables applicable to the observed data in the real world? Regarding the first question, there is the possibility of mis-specification of the model for the observed data. As for the second question, analysis of economic statistics considers the movement of many economic factors simultaneously. Let's take consumer behavior again. Actual consumption data is affected by many factors other than the effect of income and prices on consumption expenditure. For example, factors related to household characteristics (such as the members of the household) and factors related to household wealth might strongly influence household consumption behavior. Because of these unknown systematic factors, the random variable in the model may not have the desirable characteristics indicated in econometric textbooks. Usually students have little experience with empirical analysis and do not intuitively trust all of the empirical results. They are skeptical about the accuracy of data, the possibility of mis-specification of the estimating equation, the estimation methods in particular, and the empirical results as a whole. To address this skepticism: if a model is correctly specified, including the specification of the distribution of the random variables, and the model is estimated by a suitable method, then we can obtain estimates of the parameters of the true relationship from the sample observations.
Figure 1.3 Present procedure: model (true relationship known a priori) → Monte Carlo method → virtual data → estimation using virtual data → structure → hypothesis testing (testing the null) → forecasting.
Here we don’t use real economic statistics, though. Instead, we use virtual data produced by the Monte Carlo method and estimate the model using such data. This analytical procedure is indicated in Figure 1.3. There are several merits to using this method as a means of learning econometric analysis. The first is that we know the true relationship, including the information of the random variable, a priori, and thus we can exclude the possibility of mis-specification of the model. Using a virtual data set, we teach students that when the model is correctly specified, we can estimate the parameters of the model satisfactorily. That is, when the model is correct, we can estimate the true relationship from the data samples. In addition, if the model is mis-specified, it will be clearly evident from the Durbin–Watson statistics or other kinds of statistical tests. The second reason to use virtual data is to show the viability of hypothesis testing. In this book, I explain several types of hypothesis testing through various kinds of analysis. For example, I consider the test of structural change and give two examples of results using the Monte Carlo method: one is structural change evident in the virtual data set and the other is the absence of such structural change. Students thereby learn by applying suitable techniques of hypothesis testing that one result is rejected correctly and the other is not rejected correctly. Though these procedures are explained in detail in every section, I will explain here the general procedures for generating virtual data by the Monte Carlo method. Let’s consider the LES demand functions. The LES utility function when classified into two clusters of commodities is expressed as: u = β1 log(x1 - α1) + β2 log(x2 - α2)
(9)
where β1, β2, α1, and α2 are parameters, and β1 + β2 = 1 by normalization. Then, the LES demand functions for the two clusters of commodities are, respectively:

p1x1 = β1y + β2α1p1 - β1α2p2
p2x2 = β2y - β2α1p1 + β1α2p2
(10)
Exogenous variables are income, y, and the prices p1 and p2, and endogenous variables are the quantities x1 and x2. The estimated parameters are b1, with a true value of β1; b2, with a true value of β2; a1, with a true value of α1; and a2, with a true value of α2. In this text, we distinguish true values from estimated values as follows: Greek letters are population parameters or true values, and English letters are estimated values. Because of the budget constraint, the two expenditure equations sum to the budget identity, so estimating only one of the two equations is sufficient for our purposes. Here, the equation for category 1 is estimated. Consumer demand x1 is obtained by dividing p1x1 of equation (10) by p1. Introducing a random shock ε in equation (10), the demand function of category 1 is:

x1 = β1(y/p1) + β2α1 - β1α2(p2/p1) + ε
(11)
The x1 is divided into two parts: the part fundamental to the model, namely β1(y/p1) + β2α1 - β1α2(p2/p1), and the non-predictable part, namely ε. The virtual data generated by the Monte Carlo method is constructed according to the following three steps:

(a) Determine β1, β2, α1, and α2.
(b) Generate random numbers for the exogenous variables y, p1, and p2 and for the random variable ε, and fix the values of the exogenous variables and the realized values of the stochastic random variable.
(c) Calculate the data of the endogenous variable x1 by using the parameters obtained in (a), the set of exogenous variables and the realized values of the random variable obtained in (b), and equation (11).

After obtaining the data set (x1, y, p1, p2), we estimate the structural parameters determined in (a). Then we evaluate the results by hypothesis testing to determine whether or not the null hypothesis (for example, in the structural change test) is confirmed by the data. In this text, the true data set and the method for generating virtual data sets are presented in every chapter.
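The three steps can be sketched as a minimal numpy program, using the benchmark parameter values and distributions the book adopts later (the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100

# (a) Determine beta1, beta2, alpha1, alpha2 (the book's benchmark values)
beta1, beta2, alpha1, alpha2 = 0.4, 0.6, 100.0, -100.0

# (b) Generate the exogenous variables and the realized disturbances
y = rng.normal(1000, 100, n)   # income
p1 = rng.normal(1, 0.2, n)     # price of category 1
p2 = rng.normal(1, 0.3, n)     # price of category 2
e = rng.normal(0, 10, n)       # realized values of the random variable

# (c) Calculate the endogenous variable from equation (11)
x1 = beta1 * (y / p1) + beta2 * alpha1 - beta1 * alpha2 * (p2 / p1) + e

data = np.column_stack([x1, y, p1, p2])  # the virtual data set (x1, y, p1, p2)
print(data.shape)
```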
2 Consumer behavior
This chapter explains the analysis of consumer behavior. Section 2.1 describes the theory of consumer demand and introduces analytical tools for understanding utility maximization, elasticity of demand, and the dual approach in consumer-demand theory. Section 2.2 discusses the consumer-demand model specified in terms of linear expenditure system (LES) demand functions. Section 2.3 describes how to make a data set using the Monte Carlo method. Section 2.4 discusses some ways to estimate consumer demand and test models. Section 2.4.1 considers structural change, Section 2.4.2 presents the concept of heteroskedasticity, Section 2.4.3 discusses auto-correlation, Section 2.4.4 describes the normality test, and Section 2.4.5 explains the cross-equation restriction. Section 2.4.6 introduces readers to the indirect utility function of the LES demand system, the cost function, Hicksian demands, and measurement of the constant-utility price index. It also discusses the dual approach to measuring consumer demand and the theory and application of the Consumer Price Index (CPI). The CPI is one of the most important variables in the field of policy formulation and plays an important role in helping us understand the link between macro- and micro-economic activities. Section 2.4.7 explains the effect of mis-specification, Section 2.4.8 considers forecasting based on correctly specified and mis-specified models, and Section 2.4.9 discusses the magnitude of the elasticities of demand derived from LES demand functions and mis-specified models.
2.1 Theory of consumer behavior

Utility maximization is the fundamental assumption in the theory of consumer behavior. The utility function is:

u = u(x1, x2, …, xn)
(1)
where u is the utility indicator and xi is the quantity consumed of the i-th item. The budget constraint is: y = ∑i pixi
(2)
where y is total expenditure, called income, and pi is the price of the i-th item. The utility maximization principle assumes that a consumer behaves so as to maximize his or her utility within a given budget constraint. Utility maximization behavior is analyzed by using the Lagrange multiplier method. The evaluation function is:

V = u(x1, x2, …, xn) + λ(y - ∑i pixi)
(3)
The first-order conditions for utility maximization are:

∂V/∂xi = ∂u/∂xi - λpi = 0
∂V/∂λ = y - ∑i pixi = 0
(i = 1, 2, …, n) (4)
The first equation is: λ = (∂u/∂xi)/pi = (∂u/∂xj)/pj
(i ≠ j)
and is called the law of equal marginal utilities per dollar, meaning that the ratio of the marginal utilities is equal to the ratio of prices at a maximum of the utility indicator. The second equation in (4) is the budget constraint. The Marshallian demand function is derived by solving the equations of the first-order conditions as a function of income and prices, as follows:

xi = xi(y, p1, p2, …, pn)
(5)
Marshallian demand functions are derived by applying utility maximization under the condition of budget constraints. Alfred Marshall introduced the concept of elasticity of demand in the nineteenth century, based on an idea from physics. In economics we have two kinds of price elasticity of demand, own-price and cross-price elasticity, as well as income elasticity of demand. Own-price elasticity of demand is the percentage by which the demand for an item changes when the price of the item changes by 1 percent (other things being equal). As the amount of goods and services demanded decreases as their prices increase, due to the downward-sloping demand schedule, the ratio between the change in quantity and that of price is negative. When the absolute value of the price elasticity is greater than unity (i.e., the original value is less than -1), the goods are classified as price-elastic goods and luxury goods. On the other hand, when the absolute value of the price elasticity of demand is between zero and unity, the goods are classified as price-inelastic goods and necessary goods. For example, cereals are classified as basic necessities in many countries. This is because after estimating the demand function for cereals we observe that the demand for cereals is inelastic, meaning the value of the price elasticity is between -1 and 0. This means that when the price of cereals increases by one percent, the demand for cereals decreases by less than one percent (e.g., 0.5 percent
when the price elasticity of demand for cereals is -0.5). On the other hand, when the price of cereals decreases by 1 percent, the demand for cereals increases by 0.5 percent. As the demand for cereals is stable with regard to price fluctuations, we call cereals necessary goods. When we specify the demand function as a linear function of income and price, we can derive the elasticity of demand through the following process. The demand function is:

q = a + by + cp
(b > 0, c < 0)
(6)
where q is quantity demanded, y is income, p is the market price, and a, b, and c are parameters in the demand function. The price elasticity of demand is defined as:

∂log q/∂log p = (∂q/∂p)(p/q) = c(p/q)
(7)
We define income elasticity as:

∂log q/∂log y = (∂q/∂y)(y/q) = b(y/q)
(8)
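A quick numerical check of formulas (7) and (8); the parameter values below are assumed for illustration and are not from the text:

```python
# Point elasticities of the linear demand function q = a + b*y + c*p.
a, b, c = 10.0, 0.05, -2.0   # assumed demand parameters (b > 0, c < 0)
y, p = 1000.0, 10.0          # assumed income and market price

q = a + b * y + c * p              # quantity demanded: 10 + 50 - 20 = 40
price_elasticity = c * (p / q)     # equation (7): c(p/q), about -0.5
income_elasticity = b * (y / q)    # equation (8): b(y/q), about 1.25
print(price_elasticity, income_elasticity)
```

With these numbers the good is price-inelastic (|price elasticity| < 1) while its income elasticity exceeds the absolute price elasticity, illustrating how the two formulas classify goods.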
When an item is a necessary good, the income elasticity of the item is less than the absolute value of the price elasticity. On the other hand, when an item is a luxury good, the income elasticity is greater than the absolute value of the price elasticity of demand (cf. Wold 1953). Since Marshall's contribution to economic thought it has been important to estimate the price elasticity of demand using the consumer-demand function. The reason is that the value of the elasticity does not depend on the units of price and quantity, i.e., the value is a dimension-free number. Therefore the value is useful for making international comparisons of demand for items. Consider the difference in consumer demand for gasoline between Japan and the United States. In Japan, the quantity of gasoline is measured in liters and its price is measured in yen, while in the United States the quantity is measured in gallons and the price is measured in dollars. If we compare the demand function for gasoline between Japan and the United States in order to determine the difference in the price effect, we have to consider the difference in the units used for quantity and price. However, the concept of elasticity is not affected by differences in the measurement of quantity or monetary units, and thus there is no need to convert these variables. As noted earlier, the figure of elasticity is the percentage by which demand decreases when the price increases by 1 percent. Imagine that the price elasticity of demand for gasoline is -0.8 both in Japan and the United States. In Japan this indicates that the price of gasoline increased from 100 yen to 110 yen and that the quantity demanded decreased from 100 liters to 92 liters. That is,

Elasticity of demand = ∂log q/∂log p = (∆q/q)/(∆p/p) = ((92 - 100)/100)/((110 - 100)/100) = -0.8
where q is quantity demanded and p is the market price. In the United States this indicates that the price of gasoline increased from one dollar to one dollar and ten cents, and that the quantity demanded decreased from 50 gallons to 46 gallons. That is,

Elasticity of demand = ((46 - 50)/50)/((1.1 - 1.0)/1.0) = -0.8

We consider the elasticity of demand, focusing on the price and quantity units, in Japan as:

(∆q/q)/(∆p/p) = (liter/liter)/(yen/yen) = 1 = dim(0)

and in the United States as:

(∆q/q)/(∆p/p) = (gallon/gallon)/(dollar/dollar) = 1 = dim(0)

We can thus ignore the price and quantity units and conduct international comparisons without any transformation of them. Let's now turn to the dual approach in consumer-demand theory. Here we'll introduce the concepts of direct utility, indirect utility, cost function (or minimum expenditure function), Marshallian demands and Hicksian demands. The direct utility function was already introduced in equation (1), where utility is a function of the quantities consumed. We will follow Deaton and Muellbauer (1980) in defining direct utility and the other concepts as:

Direct utility: u = f(x)
Indirect utility: u = ψ(y, p)
Cost function: y = C(u, p)
Marshallian demands: x = g(y, p)
Hicksian demands: x = h(u, p)
Budget constraint: y = p · x

where u is the utility indicator (not directly observable), x is the vector of commodities and services (directly observable), p is the vector of the corresponding prices (directly observable), and y is income (directly observable). The above equations explain the relationship between the left-hand-side variable and the right-hand-side arguments. For example, the cost function is the relationship between income (y) and a function of the utility indicator and prices. These concepts are connected with each other by the mechanism indicated in Figure 2.1. We can see that the Marshallian demands are obtained by utility maximization under the budget constraint.
By substituting Marshallian demands in the direct utility function, we obtain the indirect utility function. On the other hand, the Hicksian demands are obtained by cost minimization under the constraint of constant levels of utility. Inserting the Hicksian demands into the budget constraint, we get the cost function. The indirect utility function is the inverse function of the cost function.
Figure 2.1 Dual approach in consumer-demand theory: utility maximization of the direct utility function u = u(x1, x2, …, xn) under the budget constraint y = ∑i pixi yields the Marshallian demands xi = xi(y, p1, p2, …, pn), which give the indirect utility function u = ψ(y, p); in the dual problem, cost minimization of C = ∑i pixi under the constant level of utility u0 = u(x1, x2, …, xn) yields the Hicksian demands xi = xi(u, p1, p2, …, pn), which give the cost function C = C(u, p). Roy's identity recovers the Marshallian demands from the indirect utility function, Shephard's lemma recovers the Hicksian demands from the cost function, and the indirect utility function is the inverse function of the cost function.
The Hicksian demands h(u, p) are also obtained by Shephard's lemma: differentiating the cost function with respect to the price pi yields the quantity consumed, as:

h(u, p): ∂C(u, p)/∂pi = xi
(9)
On the other hand, by applying Roy's identity to the indirect utility function, we get the Marshallian demands g(y, p) as:

xi = -(∂ψ(y, p)/∂pi)/(∂ψ(y, p)/∂y)
(10)
As the utility indicator itself is not directly observable, the estimation of the parameters of the utility function or the cost function is conducted using the Marshallian demand functions. This is because the right-hand-side and the left-hand-side variables in the Marshallian demands are all observable: the left-hand variable is quantity consumed, and the right-hand variables are prices and income. After estimating the parameters of the demand functions, it is easy to calculate the direct utility function, the indirect utility function, the cost function, and the Hicksian demands.
2.2 Model

The direct utility function for the LES proposed by Stone (1954) is:

u = ∑i βi log(xi - αi)
(11)
where βi’s and αi’s are the parameters of the utility function and the sum of the βi’s is normalized as unity (∑i βi = 1). The Marshallian expenditure functions are the Linear Expenditure System expressed as: pixi = αipi + βi(y - ∑jαjpj)
(i = 1, 2, …, n)
(12)
Now let’s consider the indirect utility function, the cost function and the Hicksian demands. The indirect utility function is obtained by using the Marshallian demands. From equation (12), we can get the following relationship: xi - αi = βi(y - ∑j αjpj)/pi
(13)
When we substitute equation (13) into equation (11), we derive the indirect utility function in terms of income and prices as:

u = ∑i βi log(βi(y - ∑j αjpj)/pi) = log((y - ∑j αjpj) Πj βj^βj/Πj pj^βj)
(14)
Or, after applying a monotonic transformation to the original form of equation (14), we get:

v = exp(u) = (y - ∑j αjpj) Πj βj^βj/Πj pj^βj
(15)
This is also the indirect utility function of the LES demand system. By applying Roy's identity to the indirect utility function, we derive the Marshallian demands as:

xi = -(∂ψ(y, p)/∂pi)/(∂ψ(y, p)/∂y)
(16)
In the LES demand system, we derive the Marshallian demands as:

xi = -((-αi Πj βj^βj/∏k pk^βk) + (y - ∑j αjpj)(Πj βj^βj)(-βi/pi)/∏k pk^βk)/((Πj βj^βj)/∏k pk^βk)
= αi + (βi/pi)(y - ∑j αjpj) (17)

Therefore, we can obtain the well-known specification of the LES expenditure function as:

pixi = αipi + βi(y - ∑j αjpj)
(i =1, 2, …, n)
(18)
Next let’s consider the cost function of the LES demand system. Equation (15) is rewritten as y - ∑j αjpj = vΠj pjβj/Πj βjβj. When we write y as C(u, p) at the equilibrium point, the cost function is:
C(u, p) = ∑i αipi + (v/Πj βj^βj)∏k pk^βk
(19)
where C(u, p) is the minimum expenditure given the constant level of utility and prices. The Hicksian demands h(u, p) are obtained by differentiating the cost function with respect to the price pi as:

∂C(u, p)/∂pi = xi
(20)
This equation is called Shephard’s lemma. In the case of the LES demand system, the Hicksian demand is derived by applying Shephard’s lemma as: ∂C(u, p)/∂pi = αi + (v/Πj βjβj)(βi/pi)∏k pkβk = xi
(21)
Hicksian demand is also obtained by applying the cost minimization principle under the given level of utility indicator. The evaluation function under the constant utility level, u0, is: V = ∑i pixi - λ(∑i βi log(xi - αi) - u0)
(22)
The first-order conditions for cost minimization are:

∂V/∂xi = pi - λ(βi/(xi - αi)) = 0 (i = 1, 2, …, n)
∂V/∂λ = ∑i βi log(xi - αi) - u0 = 0
(23)
The first equation of (23) is rearranged as:

λ = (pi(xi - αi))/βi = (pj(xj - αj))/βj
(i ≠ j)
(24)
Using the above equations, we get: xj - αj = (pi/pj)(βj/βi)(xi - αi)
(i ≠ j)
(25)
Substituting equation (25) into the second equation of (23), we obtain:

βi log(xi - αi) + ∑j≠i βj log((pi/pj)(βj/βi)(xi - αi)) = ∑j βj log(xi - αi) + ∑j≠i βj log((pi/pj)(βj/βi)) = u0

Therefore, the Hicksian demands, whose arguments are the utility indicator and prices, are obtained as:

pi(xi - αi) = βi exp(u)(∏k pk^βk/Πj βj^βj)

Finally, equation (26) becomes:

pixi = piαi + βi exp(u)(∏k pk^βk/Πj βj^βj)
(26)
or,

xi = αi + (v/Πj βj^βj)(βi/pi)∏k pk^βk
(27)
This is mathematically equivalent to the Hicksian demand of equation (21) derived from Shephard's lemma. We have thus confirmed that the Hicksian demands derived by cost minimization and by Shephard's lemma are mathematically identical. As for the dual approach in consumer-demand theory, at the equilibrium point we get the same solutions for quantities consumed, income, and prices by employing the direct utility function, the indirect utility function, or the cost function. The merit of the dual approach in empirical analysis is that by specifying any one of the direct utility, indirect utility, or cost (minimum expenditure) functions, we can derive the Marshallian demands. In addition, the Marshallian demand functions can be estimated because all the data used for estimation (quantities consumed, prices, and income) are observable.
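This equivalence is easy to check numerically. The sketch below (assuming numpy; the two-good benchmark parameters and the utility level are illustrative) compares the Hicksian demand of equation (21) with a central-difference derivative of the cost function of equation (19), i.e., Shephard's lemma:

```python
import numpy as np

beta = np.array([0.4, 0.6])        # benchmark beta1, beta2
alpha = np.array([100.0, -100.0])  # benchmark alpha1, alpha2
v = 500.0                          # v = exp(u), an arbitrary utility level

def cost(p):
    """LES cost function, eq. (19): sum(alpha*p) + (v / prod(beta^beta)) * prod(p^beta)."""
    return alpha @ p + (v / np.prod(beta ** beta)) * np.prod(p ** beta)

def hicksian(p, i):
    """LES Hicksian demand, eq. (21)."""
    return alpha[i] + (v / np.prod(beta ** beta)) * (beta[i] / p[i]) * np.prod(p ** beta)

p = np.array([1.0, 1.2])
h = 1e-6
for i in range(2):
    dp = np.zeros(2)
    dp[i] = h
    fd = (cost(p + dp) - cost(p - dp)) / (2 * h)  # numerical dC/dp_i
    print(fd, hicksian(p, i))                     # the two columns agree
```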
2.3 How to generate a data set by the Monte Carlo method

We will now explain the method of generating data by the Monte Carlo method, including how to make virtual data for the quantities consumed (x1 and x2), the prices of commodities and services (p1 and p2), and income (y). The LES utility function is specified as:

u = β1 log(x1 - α1) + β2 log(x2 - α2)
(28)
The parameters of the utility function are determined a priori as β1 = 0.4, β2 = 0.6, α1 = 100, and α2 = -100 as the benchmark. The series of exogenous variables, y, p1, and p2, are obtained by generating normal random numbers. To create a series of income (y), we generate random numbers from the normal distribution with a mean of 1,000 and a standard deviation of 100. To create a series of p1, we generate random numbers from the normal distribution with a mean of 1 and a standard deviation of 0.2. To create a series of p2, data is obtained from the normal distribution with a mean of 1 and a standard deviation of 0.3. These series are written as: y ∼ N(1,000, 100²), p1 ∼ N(1, 0.2²), and p2 ∼ N(1, 0.3²), where N is the abbreviation for the normal distribution. The first category of the LES expenditure function is described as:

p1x1 = β1y + β2α1p1 - β1α2p2
(29)
and the first category of the LES demand function is: x1 = β1(y/p1) + β2α1 - β1α2(p2/p1)
(30)
Now, a random variable is introduced in order to make a data set for conducting regression analysis:
x1 = β1(y/p1) + β2α1 - β1α2(p2/p1) + ε
(31)
where ε is the random variable. This random variable is the combined effect of the many other factors affecting consumption demand for the first category of the LES demand function. It is a stochastic variable specified by a normal distribution with a constant mean and variance. The realized value of ε is called the residual and is denoted by e. It is obtained by generating random numbers from the normal distribution with, say, a mean of 0 and a standard deviation of 10. It is included in equation (31) as:

x1 = β1(y/p1) + β2α1 - β1α2(p2/p1) + e
(32)
Therefore, e is the realized value obtained from ε ∼ N(0, 10²). We generated 100 samples for each variable. Therefore, 100 sets of the realized values of the random variable and 100 sets of the variables y, p1, p2, and x1 were obtained by utilizing the Monte Carlo method. Table 2.1 shows the eight fundamental categories of virtual data sets.
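Putting the pieces together, the generation-plus-estimation cycle can be sketched as follows (assuming numpy; the seed is arbitrary, so estimates vary slightly across runs but stay close to the true values):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100

# True structural parameters (benchmark data set LESR1)
beta1, beta2, alpha1, alpha2 = 0.4, 0.6, 100.0, -100.0

# Exogenous variables and realized disturbances, as specified in the text
y = rng.normal(1000, 100, n)
p1 = rng.normal(1, 0.2, n)
p2 = rng.normal(1, 0.3, n)
e = rng.normal(0, 10, n)

# Endogenous variable from equation (32)
x1 = beta1 * (y / p1) + beta2 * alpha1 - beta1 * alpha2 * (p2 / p1) + e

# OLS on x1 = B0 + B1*(y/p1) + B2*(p2/p1) + u,
# where B0 = beta2*alpha1, B1 = beta1, B2 = -beta1*alpha2
X = np.column_stack([np.ones(n), y / p1, p2 / p1])
(B0, B1, B2), *_ = np.linalg.lstsq(X, x1, rcond=None)

# Recover the utility-function parameters from the regression coefficients
b1 = B1
a1 = B0 / (1 - B1)
a2 = -B2 / B1
print(b1, a1, a2)  # estimates of beta1, alpha1, alpha2
```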
2.4 Examples

2.4.1 Structural change

Structural change is a familiar term that often appears in government reports and newspapers. It refers to a dynamic process of transformation in an economic system. A typical use of the term can be seen in the following passage: "In Japan, the high economic growth era ended in the early 1970s, and due to structural change the Japanese economy entered into an era of slow economic growth. The turning point is usually traced to the first oil price shock in 1973. In the 1985 Plaza Accord, Japan agreed to a sharp appreciation in the value of the yen to correct trade imbalances, and subsequently there were also structural changes in the Japanese economy as a consequence of globalization. In the early 1990s, Japan's asset bubble burst and further structural change took place." In common parlance, structural change results from significant events such as the oil shock, the Plaza Accord, or the bursting of the bubble. It is important to emphasize that popular use of the term differs from that in economics and econometrics. Structural change in economics and econometrics refers to change in the parameters of the structural equations. More concretely, structural change can be explained by reference to consumer demand. The parameters of the consumer-demand model are those of the utility function. The variables are income, prices, and quantities consumed; the first two are exogenous variables and the last is an endogenous variable. When changes in consumption patterns are explained by changes in the exogenous variables, but not by changes in the parameters of the utility function, there is no structural change in terms of how the concept is defined in economics and econometrics. The reason why there is no structural change is that the parameters
Table 2.1 Information on data sets

Data set       β1    β2    α1    α2     y               p1           p2           ε          ρ
(1) LESR1      0.4   0.6   100   -100   N(1000, 100²)   N(1, 0.2²)   N(1, 0.3²)   N(0, 10²)
(2) LESR2      0.45  0.55  100   -100   N(1000, 100²)   N(1, 0.2²)   N(1, 0.3²)   N(0, 10²)
(3) LESR11     0.4   0.6   100   -100   N(1000, 100²)   N(1, 0.2²)   N(1, 0.3²)   N(0, 10²)
(4) LESUNIR1   0.4   0.6   100   -100   [500, 1500]     [0.5, 1.5]   [0.4, 1.6]   N(0, 10²)
(5) LESUNIR3   0.45  0.55  100   -100   [500, 1500]     [0.5, 1.5]   [0.4, 1.6]   N(0, 10²)  0.8
(6) LESSER     0.4   0.6   100   -100   N(1000, 100²)   N(1, 0.2²)   N(1, 0.3²)   N(0, 10²)  -0.8
(7) LESSER1    0.4   0.6   100   -100   N(1000, 100²)   N(1, 0.2²)   N(1, 0.3²)   N(0, 10²)
(8) LESR4      0.4   0.6   100   -100   N(1000, 100²)   N(1, 0.2²)   N(1, 0.3²)   N(0, 5²)
of the utility function remain unchanged. In this case, the changes in consumption patterns are explained by the changes in the exogenous variables of income and relative prices. The rapidly decreasing Engel's coefficient during the high economic growth era of the 1950s and 1960s is fully explained by rising income and changes in relative prices between commodities and services. Hence, this is not an example of structural change. When substitution between commodities and services in the 1980s is fully explained by rising income and changes in relative prices among commodities and services, there is also no structural change. From an economist's point of view, structural change occurs only when the parameters of the utility function change for one reason or another. That is, structural change is determined based on the model. Changes in consumption patterns caused by a shifting combination of commodities and services resulting from changes in exogenous variables do not constitute structural change from the perspective of economists. Next let's consider the model, regression analysis, and hypothesis testing as ways to identify structural change. The LES utility function, as the benchmark, is specified as:

u = β1 log(x1 - α1) + β2 log(x2 - α2)
(33)
The parameters of the utility function are determined a priori as β1 = 0.4, β2 = 0.6, α1 = 100, and α2 = -100 as the benchmark. The parameters of the utility function after structural change has occurred are indicated by the difference between the parameter set of β1, β2, α1, and α2 and the benchmark case. This is indicated in column (2) of Table 2.1. Comparing the parameters of the utility function in the benchmark with the parameters after structural change, we see that β1 changed from 0.4 to 0.45 and β2 changed from 0.6 to 0.55, while there were no differences in the values of the parameters α1 and α2. When we consider the changes of β1 and β2 from the perspective of economic theory, we see that the marginal budget share for item 1 increases, which affects the magnitude of the income elasticity. After changing the values of β1 and β2, a virtual data sample of 100 observations is generated by the Monte Carlo method, applying the same procedure we used in the benchmark case. After pooling these two data sets, the size of the sample is 200 observations. Now a structural change test is conducted utilizing the ordinary least-squares (OLS) estimation method. The random variable is homoskedastic, serially independent, and independent of the independent variables. Therefore, the value estimated by regression analysis is the best linear unbiased estimator (BLUE) according to the Gauss–Markov theorem. To assess structural change, the Chow test is used. With the Chow test (cf. Maddala and Lahiri 2009), there are two sets of observations: group 1 and group 2. We thus consider two regression equations:
Group 1 (sample size n1):
y = α1 + β11x1 + β12x2 + ··· + β1kxk + u
(34)
Group 2 (sample size n2):
y = α2 + β21x1 + β22x2 + ··· + β2kxk + v
(35)
The null hypothesis of the model is:
H0: β11 = β21, β12 = β22, ··· , β1k = β2k, α1 = α2
(36)
We define RSS1, RSS2, RRSS, and URSS as follows: RSS1 is the residual sum of squares for group 1, RSS2 is the residual sum of squares for group 2, RRSS is the residual sum of squares for the pooled data of groups 1 and 2, and URSS = RSS1 + RSS2. When the null hypothesis is correct, the statistic
F = ((RRSS - URSS)/(k + 1))/(URSS/(n1 + n2 - 2k - 2))
follows the F-distribution with (k + 1, n1 + n2 - 2k - 2) degrees of freedom. To conduct the structural change test, the demand function for the first category is estimated by the OLS method. The regression equation is:
x1i = B0 + B1(y/p1)i + B2(p2/p1)i + ui   (i = 1, 2, …, 100)
(37)
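The F statistic above can be sketched in a few lines of numpy. The data below are illustrative simulated groups (one pair with identical coefficients, one with a shifted slope), assumptions made for demonstration rather than the book's LESR data sets:

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_rss(X, y):
    """Residual sum of squares from an OLS fit with an intercept."""
    Z = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    r = y - Z @ beta
    return float(r @ r)

def chow_f(X1, y1, X2, y2):
    """Chow statistic F = ((RRSS - URSS)/(k+1)) / (URSS/(n1+n2-2k-2))."""
    k = X1.shape[1]
    n1, n2 = len(y1), len(y2)
    urss = ols_rss(X1, y1) + ols_rss(X2, y2)          # RSS1 + RSS2
    rrss = ols_rss(np.vstack([X1, X2]), np.concatenate([y1, y2]))
    return ((rrss - urss) / (k + 1)) / (urss / (n1 + n2 - 2 * k - 2))

# two groups with identical coefficients, and a second group with a shifted slope
X1, X2 = rng.uniform(0, 1, (100, 2)), rng.uniform(0, 1, (100, 2))
e1, e2 = rng.normal(0, 0.1, 100), rng.normal(0, 0.1, 100)
y_same1 = 1 + 2 * X1[:, 0] + 3 * X1[:, 1] + e1
y_same2 = 1 + 2 * X2[:, 0] + 3 * X2[:, 1] + e2
y_diff2 = 1 + 4 * X2[:, 0] + 3 * X2[:, 1] + e2       # slope on x1 changes

f_same = chow_f(X1, y_same1, X2, y_same2)   # small: no structural change
f_diff = chow_f(X1, y_same1, X2, y_diff2)   # large: change is detected
```

A small F is compared against the F(k + 1, n1 + n2 - 2k - 2) critical value in practice; here the contrast between the two statistics is the point.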
where ui is a random variable called a shock. The results of the structural change test are indicated in Table 2.2; the estimates of the regression parameters B0, B1, and B2 are indicated in Table 2.2(a). The standard error is 10.7 for the first group and 11.5 for the second group, corresponding to the standard deviation of the normal random number, ε, which is 10. The correspondence between the regression coefficients and the structural parameters is:
B0 = β2α1, B1 = β1, B2 = -β1α2
(38)
Therefore, the estimated parameters of the utility function are derived from the regression coefficients as:
β1 = B1, α1 = B0/(1 - B1), α2 = -B2/B1
(39)
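Equations (37)–(39) can be put together in a short Monte Carlo sketch: generate benchmark-style virtual data, fit the OLS regression (37), and recover the structural parameters via (39). The uniform ranges for income and prices below are assumptions for illustration (they echo the benchmark ranges reported later in Table 2.6):

```python
import numpy as np

rng = np.random.default_rng(42)
b1, b2, a1, a2 = 0.4, 0.6, 100.0, -100.0   # benchmark utility parameters

n = 100
y  = rng.uniform(500, 1500, n)      # income
p1 = rng.uniform(0.5, 1.5, n)       # price of item 1
p2 = rng.uniform(0.4, 1.6, n)       # price of item 2
e  = rng.normal(0, 10, n)           # shock, N(0, 10^2)

# LES demand for item 1: x1 = b2*a1 + b1*(y/p1) - b1*a2*(p2/p1) + e
x1 = b2 * a1 + b1 * (y / p1) - b1 * a2 * (p2 / p1) + e

# OLS of x1 on y/p1 and p2/p1, as in equation (37)
Z = np.column_stack([np.ones(n), y / p1, p2 / p1])
(B0, B1, B2), *_ = np.linalg.lstsq(Z, x1, rcond=None)

# recover the structural parameters via equation (39)
b1_hat = B1
a1_hat = B0 / (1 - B1)
a2_hat = -B2 / B1
```

With these draws the recovered values should sit close to the true (0.4, 100, -100), mirroring the accuracy reported in Table 2.2(b).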
Table 2.2(b) shows the results of hypothesis testing for the structural parameters by the Wald test, chi-square test, Chow test, White test, and Jarque–Bera test. In Table 2.2(a), the coefficient B0 is 60.62 and its t-value is 13.2 for the first group. The coefficient of determination, R2, is 0.991, and the
Table 2.2  Results of the stability test (structural change test)

(a) Estimates of regression

      Group 1 (LESR1)   Group 2 (LESR2)   Pooled
B0    60.62 (13.2)      53.59 (10.3)      56.92 (6.40)
B1    0.4009 (81.0)     0.4527 (86.4)     0.4244 (44.6)
B2    39.27 (12.6)      43.88 (13.1)      44.26 (7.3)
SE    10.7              11.5              29.4
R2    0.991             0.991             0.942
D-W   2.16              2.21              0.45

Note: Figures in parentheses are t-values.

(b) Parameters of the utility function, P-value, and other test statistics

            Group 1 (LESR1)   Group 2 (LESR2)   Pooled
(i)
b1          0.4009 [0.851]    0.4528 [0.597]    0.4243 [0.010]
b2          0.5990 [0.851]    0.5472 [0.597]    0.5756 [0.010]
a1          101.19 [0.865]    97.94 [0.806]     98.87 [0.937]
a2          -97.97 [0.810]    -96.92 [0.700]    -104.3 [0.780]
χ2(3)       0.684 [0.876]     0.361 [0.948]     181.8 [0.000]
(ii)
b1          0.4009 [0.851]    0.4528 [0.597]    0.4243 [0.007]
b2          0.5990 [0.851]    0.5472 [0.597]    0.5756 [0.007]
a1          101.19 [0.865]    97.94 [0.806]     98.87 [0.937]
a2          -97.97 [0.810]    -96.92 [0.700]    -104.3 [0.780]
χ2(3)       0.684 [0.876]     0.361 [0.948]     181.8 [0.000]
White test  0.611 [0.987]     3.994 [0.550]     78.2 [0.000]
J-B test    2.03 [0.362]      3.28 [0.194]      11.2 [0.000]
Chow test                                       392.4 [0.000]

Note: Figures in brackets are P-values.
Durbin–Watson statistic is 2.16; the former indicates a close correspondence between observed and fitted values, and the latter indicates no auto-correlation. In Table 2.2(b), we find that the estimated value of β1, namely b1, is 0.4009. We know a priori that the true value is β1 = 0.40. Testing the null hypothesis H0: β1 = 0.40 by the Wald test gives a P-value of 0.851, shown in brackets. Let us now consider the relation between the P-value and the significance level utilized for hypothesis testing. When we a priori set the significance level at 0.05, as usual, the null hypothesis is not rejected if the P-value is greater than
0.05. In the above case, the P-value is 0.851, so the null hypothesis of β1 = 0.40 is not rejected. When we conduct hypothesis testing for the parameters β2, α1, and α2, the estimated values b2, a1, and a2 derived by regression analysis are 0.5990, 101.19, and -97.97, respectively, as indicated in Table 2.2(b). The true values used for the Monte Carlo method are 0.6, 100, and -100, indicating that the accuracy of the estimates appears high. Applying the Wald test, the P-values of H0: β2 = 0.6, H0: α1 = 100, and H0: α2 = -100 are 0.851, 0.865, and 0.810, respectively. Therefore, we can obtain suitable estimates of the utility function by applying regression analysis, and we have confirmed that the estimates are statistically equivalent to the true values of the utility function. Using the Wald test, we tested the null hypothesis for each parameter separately. We can also check the validity of the estimates as a whole: for this purpose, we test the joint null hypothesis H0: β1 = 0.4, α1 = 100, α2 = -100, utilizing the χ2-test. The P-value of this null hypothesis is 0.876, as seen in Table 2.2, indicating a high level of accuracy for the estimates b1, a1, and a2. The results for group 2 are similar to those for group 1, as indicated in Table 2.2(a). The true value of β1 is 0.45, the estimated value b1 is 0.4528, and the P-value for the null hypothesis H0: β1 = 0.45 is 0.597, so the null hypothesis is not rejected. The estimated values of β2, α1, and α2 are 0.5472, 97.94, and -96.92, respectively, while the true values are 0.55, 100, and -100, indicating that we can derive statistically plausible estimates of the true values of the utility function from the samples.
The situation is different for the pooled data of groups 1 and 2, as is clear from the third column of Table 2.2(b). The estimated value of β1, namely b1, is 0.4243.
As we know a priori that the true value of β1 is 0.4 for group 1 and 0.45 for group 2, the estimated value of 0.4243 is almost the average of these two figures. The P-value of the null hypothesis H0: β1 = 0.4 is 0.010, so this null hypothesis is rejected at the significance level of 0.05, and the null hypothesis H0: β1 = 0.45 is also rejected at that level. As for the Chow test, the null hypothesis is that there is no structural change in the sample. The Chow statistic indicated in the third column of Table 2.2(b) is 392.4 and the corresponding P-value is 0.000, so the null hypothesis of no structural change is rejected at the significance level of 0.05: according to the Chow test, there is clearly structural change in the pooled data. Moreover, the Durbin–Watson statistic is 0.45, indicating positive auto-correlation and raising the possibility of mis-specification of the model. As we a priori know the true parameter values and the specification of the utility function, we can say that the estimating equation for the pooled data of groups 1 and 2 is mis-specified, as the Durbin–Watson statistic indicates.
Next we consider three further examples of testing for structural change. The first involves pooling two virtual data sets that include no structural change, derived from cases (1) and (3) in Table 2.1.
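A small simulation illustrates why the pooled slope lands between the two group values. The data-generating details below are a hedged reconstruction of the text's setup (benchmark β1 = 0.40 versus post-change β1 = 0.45), not the exact LESR draws:

```python
import numpy as np

rng = np.random.default_rng(1)

def les_x1(b1, a1, a2, n):
    """Simulate LES demand for item 1 with benchmark-style exogenous draws."""
    y  = rng.uniform(500, 1500, n)
    p1 = rng.uniform(0.5, 1.5, n)
    p2 = rng.uniform(0.4, 1.6, n)
    e  = rng.normal(0, 10, n)
    x1 = (1 - b1) * a1 + b1 * y / p1 - b1 * a2 * p2 / p1 + e
    return np.column_stack([np.ones(n), y / p1, p2 / p1]), x1

def slope(Z, x1):
    beta, *_ = np.linalg.lstsq(Z, x1, rcond=None)
    return float(beta[1])               # the estimate of beta_1

Z1, x1a = les_x1(0.40, 100, -100, 100)  # group 1: benchmark
Z2, x1b = les_x1(0.45, 100, -100, 100)  # group 2: after structural change
Zp, xp = np.vstack([Z1, Z2]), np.concatenate([x1a, x1b])
b_pooled = slope(Zp, xp)                # a compromise between 0.40 and 0.45
```

Each group recovers its own β1 accurately, while the pooled fit averages the two regimes, which is exactly the pattern Table 2.2(b) reports.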
Table 2.3  Results according to the two data sets with no structural changes

(a) Estimates of regression

      Group 1 (LESR1)   Group 2 (LESR11)   Pooled
B0    60.62 (13.2)      59.85 (14.0)       60.19 (19.3)
B1    0.4009 (81.0)     0.4058 (86.4)      0.4035 (125.9)
B2    39.27 (12.6)      36.17 (14.0)       37.57 (18.9)
SE    10.7              9.78               10.2
R2    0.991             0.992              0.991
D-W   2.16              2.21               2.19

Note: Figures in parentheses are t-values.

(b) Parameters of the utility function

            Group 1 (LESR1)   Group 2 (LESR11)   Pooled
b1          0.4009 [0.851]    0.4058 [0.170]     0.4035 [0.273]
b2          0.5990 [0.851]    0.5942 [0.170]     0.5965 [0.273]
a1          101.19 [0.865]    100.73 [0.913]     100.92 [0.849]
a2          -97.97 [0.810]    -89.13 [0.107]     -93.13 [0.193]
χ2(3)       0.684 [0.876]     6.06 [0.108]       5.05 [0.168]
Chow test                                        0.443 [0.722]
White test  0.611 [0.987]     3.16 [0.675]       2.34 [0.800]
J-B test    2.03 [0.362]      1.57 [0.455]       2.38 [0.303]

Note: Figures in brackets are P-values.
By pooling these data sets, we can check whether the Chow test assesses the data correctly in a case where there is no structural change. As indicated in Table 2.3(b), the P-value of the Chow test is 0.722, so the null hypothesis of no structural change is not rejected at the 0.05 level. The second example involves combining two virtual data sets in which both the income levels and the parameters of the utility functions differ, as in cases (4) and (5) in Table 2.1: the income level in case (4) ranges from $500 to $1,500, while in case (5) it ranges from $1,500 to $2,500, and the values of β1 and β2 also differ between the two cases. We conduct the Chow test to determine whether structural change took place. As indicated in Table 2.4(b), the P-value of the Chow test is 0.000, so the null hypothesis of no structural change is rejected. The third example involves combining two data sets: cases (1) and (6), where (6) includes positive auto-correlation, and cases (1) and (7), where (7) includes negative auto-correlation (see Table 2.1). The utility functions have the same parameters, but one data set includes auto-correlation in the disturbance term. In Table 2.5, we can see that the P-value obtained from the Chow test is 0.883 for
Table 2.4  Results according to the two data sets with structural changes

(a) Estimates of regression

      Group 1 (LESR1)   Group 2 (LESR2)   Pooled
B0    60.48 (21.6)      58.34 (16.4)      25.79 (4.6)
B1    0.4014 (157.7)    0.4465 (239.0)    0.4687 (164.6)
B2    38.47 (18.2)      47.44 (18.5)      17.50 (74.2)
SE    10.7              11.2              30.6
R2    0.997             0.999             0.994
D-W   2.08              1.98              1.42

Note: Figures in parentheses are t-values.

(b) Parameters of the utility function

            Group 1 (LESUNI1)   Group 2 (LESUNI3)   Pooled
(i)
b1          0.4015 [0.560]      0.4466 [0.065]      0.4687 [0.000]
b2          0.5985 [0.560]      0.5534 [0.065]      0.5313 [0.000]
a1          101.05 [0.812]      105.42 [0.378]      48.54 [0.000]
a2          -95.83 [0.455]      -106.23 [0.298]     -37.34 [0.000]
χ2(3)       0.727 [0.867]       5.30 [0.151]        1202.0 [0.000]
(ii)
b1          0.4015 [0.560]      0.4466 [0.065]      0.4687 [0.000]
b2          0.5985 [0.560]      0.5534 [0.065]      0.5313 [0.000]
a1          101.05 [0.812]      105.42 [0.378]      48.54 [0.000]
a2          -95.83 [0.455]      -106.23 [0.298]     -37.34 [0.000]
χ2(3)       0.727 [0.867]       5.30 [0.151]        248.5 [0.000]
White test  6.03 [0.302]        4.75 [0.446]        24.79 [0.000]
J-B test    0.921 [0.631]       2.48 [0.289]        22.58 [0.000]
Chow test                                           448.7 [0.000]

Note: Figures in brackets are P-values.
positive auto-correlation and 0.908 for negative auto-correlation, indicating that there is no structural change. Using the Chow test, we can thus verify whether or not there is structural change in the parameters of the model, apart from the characteristics of the random variables: the null hypothesis in Table 2.5 places no restrictions on the characteristics of the disturbance term.

2.4.2 Heteroskedasticity

When conducting cross-sectional analysis, one may find heteroskedasticity due to differences in the number of households in different income classes. When
Table 2.5  Structural change test with auto-correlation

(a) Estimates of regression

      ρ = 0.8          ρ = -0.8
B0    59.60 (13.6)     58.94 (23.3)
B1    0.3998 (83.6)    0.4013 (143.8)
B2    40.80 (12.8)     39.99 (21.1)
SE    14.6             10.9
R2    0.983            0.994
D-W   0.90             1.81

Note: Figures in parentheses are t-values.

(b) ρ = 0.8; data sets (LESR1, LESSER)

b1       b2       a1      a2       Chow test
0.3998   0.6002   99.31   -102.1   0.218 [0.883]

Note: The estimated ρ by the pooled data is 0.547.

(c) ρ = -0.8; data sets (LESR1, LESSER1)

b1       b2       a1      a2       Chow test
0.4029   0.5971   97.99   -96.19   0.182 [0.908]

Note: The estimated ρ by the pooled data is -0.569.
we draw a sample of size n from a population with mean µ and variance σ², the sampling distribution of the sample mean has mean µ and variance σ²/n. When the number of households differs across income classes, the class means therefore have different variances; this is a case of heteroskedasticity in aggregated cross-section data. In time-series data, when considering the relationship between consumption and income, we sometimes see the variance of consumption increase as income increases; this is a case of heteroskedasticity in time-series data. To test for heteroskedasticity, we prepare two sets of virtual data. The true parameters generating the virtual data sets are indicated in Table 2.6. In one virtual data set, the variance of consumption increases with income; we use cases (1) and (2) in Table 2.6 to generate this virtual data set. In cases (1) and (2), the realized value, e, drawn from the distribution of ε is not heteroskedastic but homoskedastic. To transform the data from homoskedastic to heteroskedastic, we construct the data for the two cases as follows. Using the parameters of the utility function and the set of variables indicated in Table 2.6, the consumption expenditure on item 1 is calculated as p1x1 = β1y + (1 - β1)α1p1 - β1α2p2. Using the value of the expenditure p1x1 and the realized value of εi, namely ei, the series of consumption demand x1i is obtained as:
x1i = (p1x1/p1)i + (y/500)ei
(40)
Table 2.6  Data sets used for heteroskedasticity test

      (1) LESUNIR1   (2) LESUNIR2
β1    0.4            0.4
β2    0.6            0.6
α1    100            100
α2    -100           -100
y     [500, 1500]    [1500, 2500]
p1    [0.5, 1.5]     [0.5, 1.5]
p2    [0.4, 1.6]     [0.4, 1.6]
ε     N(0, 10²)      N(0, 20²)
In this case, the variance of the disturbance term increases as income increases. Instead of the disturbance term (y/500)ei, we also consider the alternative disturbance term (y/500)²ei and calculate the consumption demand as:
x1i = (p1x1/p1)i + (y/500)²ei
(41)
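The constructions in equations (40) and (41) can be sketched as follows; the exogenous draws mirror the Table 2.6 ranges for case (1), and the seed is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(7)
b1, b2, a1, a2 = 0.4, 0.6, 100.0, -100.0

n = 100
y  = rng.uniform(500, 1500, n)
p1 = rng.uniform(0.5, 1.5, n)
p2 = rng.uniform(0.4, 1.6, n)
e  = rng.normal(0, 10, n)           # homoskedastic draw from N(0, 10^2)

# systematic part of demand for item 1 (committed plus supernumerary terms)
x1_sys = b2 * a1 + b1 * y / p1 - b1 * a2 * p2 / p1

x1_lin  = x1_sys + (y / 500) * e        # equation (40): spread grows with income
x1_quad = x1_sys + (y / 500) ** 2 * e   # equation (41): spread grows faster
```

Sorting the observations by income and comparing the spread of the disturbances in the lower and upper halves reproduces the widening scatter seen in Figures 2.2 and 2.3.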
The magnitude of the variance is larger in equation (41) than in equation (40). The scatter diagram is shown in Figure 2.2 for case (1) and in Figure 2.3 for case (2): in Figure 2.2 the random disturbance is specified as (y/500)e, and in Figure 2.3 as (y/500)²e. The x-axis indicates income and the y-axis the random disturbances. Comparing cases (1) and (2), the scatter of the random disturbances is broader in case (2). Next, we consider two different income intervals: the variance is the same within each interval but differs between the two intervals. This virtual data is constructed by pooling cases (1) and (2) in Table 2.6, so the income range is separated into two intervals with different variances of the stochastic disturbance term, ε. Figure 2.4 shows the pooled data combining cases (1) and (2): the income level of $1,500 is the separating point, and above $1,500 the spread of the disturbances increases. The estimation results by the regression method are indicated in Table 2.7. The regression equation is:
x1i = B0 + B1(y/p1)i + B2(p2/p1)i + ui
(i = 1, 2, …, 100)
(42)
where ui is a random variable that has the characteristic of heteroskedasticity. To test for heteroskedasticity in the disturbance term, we apply White's heteroskedasticity test, which we explain using equation (42). Define the residuals as vi = x1i - (B0 + B1(y/p1)i + B2(p2/p1)i). We regress v²i on all the explanatory variables together with their squares and cross-products, i.e., on (y/p1), (p2/p1), (y/p1)², (p2/p1)², and (y/p1)(p2/p1), and define wi as the difference between v²i and its fitted value from this auxiliary regression. When the
Figure 2.2  Heteroskedasticity (case 1). [Scatter of residuals (vertical axis) against income (horizontal axis).]

Figure 2.3  Heteroskedasticity (case 2). [Scatter of residuals against income, with a visibly wider spread.]
null hypothesis of homoskedasticity is correct, the test statistic, computed as n times the coefficient of determination of this auxiliary regression, follows the chi-squared distribution with k degrees of freedom, namely χ²(k), where k is the number of regressors in the auxiliary regression for v²i. In the present case, k is 5. The White statistic for the regression equation of case (1) is 12.98 and the corresponding P-value is 0.023. Thus the null hypothesis – that the disturbance is homoskedastic – is rejected at the significance level of 0.05. In the
Figure 2.4  Heteroskedasticity (pooled LESUNIR1 with LESUNIR2). [Scatter of residuals against income; the spread widens above the income level of $1,500.]
Table 2.7  Results of heteroskedasticity

(a) Estimates of regression

      Data set (LESUNI1)   Data set (LESUNI1)   Pooled LESUNIR1,
      from eq. (40)        from eq. (41)        LESUNIR2
B0    58.90 (9.7)          54.10 (3.6)          63.09 (20.7)
B1    0.4040 (73.2)        0.4108 (30.3)        0.3991 (257.0)
B2    38.11 (8.3)          38.03 (3.4)          38.21 (16.8)
SE    23.1                 56.8                 15.9
R2    0.988                0.936                0.997
D-W   2.11                 2.13                 1.85

Note: Figures in parentheses are t-values.

(b) Structural parameters and the related P-values

            Data set (LESUNI1)   Data set (LESUNI1)   Pooled LESUNIR1,
            from eq. (40)        from eq. (41)        LESUNIR2
b1          0.4040 [0.460]       0.4108 [0.422]       0.3991 [0.554]
b2          0.5959 [0.460]       0.5891 [0.422]       0.6009 [0.554]
a1          98.84 [0.905]        91.83 [0.734]        104.99 [0.309]
a2          -94.31 [0.636]       -92.57 [0.797]       -95.75 [0.008]
χ2(3)       0.869 [0.832]        1.238 [0.743]        1.495 [0.683]
White test  12.98 [0.023]        14.58 [0.012]        15.77 [0.008]

Note: Figures in brackets are P-values.
regression equation of case (2), in which the disturbances are larger than in the first case, the P-value of White's test is 0.012, indicating that the null hypothesis of no heteroskedasticity is also rejected at the 0.05 level. In the case of separating the income interval into two, the P-value of White's heteroskedasticity test is 0.008, and hence the null hypothesis is again rejected at the significance level of 0.05. These results confirm that when the disturbance is heteroskedastic, we can correctly detect that heteroskedasticity using White's test.
Handling heteroskedasticity is an essential topic in standard econometrics textbooks. The estimates obtained by the OLS method are unbiased and efficient, according to the Gauss–Markov theorem, when the random variable is homoskedastic. When the data are heteroskedastic, the OLS estimates are unbiased but not efficient. Under heteroskedasticity, we can correct the covariance matrix by White's method; for estimation, the generalized least-squares (GLS) and feasible generalized least-squares (FGLS) methods are proposed instead of OLS. When an estimator is not efficient, it is difficult to conduct a significance test on an estimate: the null hypothesis that a parameter is zero is difficult to reject due to the large variance of the estimates, and with such large variance the confidence interval becomes broader. In econometrics it is crucial to produce efficient estimates, but in economic modeling the most important goal is to determine the true parameters of the model. Under heteroskedasticity we cannot obtain statistically efficient estimates of the parameters of the model by applying the OLS method to the existing data sets.
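The auxiliary-regression form of White's test can be sketched as follows. This implements the common n·R² version of the statistic, and the data-generating numbers below are illustrative stand-ins (loosely echoing the text's coefficients), not the book's LESUNI data sets:

```python
import numpy as np

rng = np.random.default_rng(3)

def white_stat(X, y):
    """White's statistic: n * R^2 from regressing squared OLS residuals on
    the regressors, their squares, and their cross-product (k = 5 here).
    Under homoskedasticity it is approximately chi-squared with k d.o.f."""
    n = len(y)
    Z = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    v2 = (y - Z @ beta) ** 2
    A = np.column_stack([np.ones(n), X[:, 0], X[:, 1],
                         X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
    g, *_ = np.linalg.lstsq(A, v2, rcond=None)
    rss = np.sum((v2 - A @ g) ** 2)
    tss = np.sum((v2 - v2.mean()) ** 2)
    return n * (1 - rss / tss)

# illustrative data in the spirit of regression (42)
n = 200
X = np.column_stack([rng.uniform(300, 3000, n),   # stands in for y/p1
                     rng.uniform(0.3, 3.0, n)])   # stands in for p2/p1
e = rng.normal(0, 10, n)
y_hom = 60 + 0.4 * X[:, 0] + 40 * X[:, 1] + e                # homoskedastic
y_het = 60 + 0.4 * X[:, 0] + 40 * X[:, 1] + (X[:, 0] / 500) * e

w_hom = white_stat(X, y_hom)   # on the chi2(5) scale under the null
w_het = white_stat(X, y_het)   # large when the spread grows with X[:, 0]
```

Comparing the two statistics against the χ²(5) critical value (11.07 at the 5 percent level) reproduces the qualitative pattern of Table 2.7(b).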
In this case, we use the OLS estimates as a set of tentative estimates of the model. The true values of the structural parameters are β1 = 0.4, β2 = 0.6, α1 = 100, and α2 = -100. The estimate of β1, namely b1, is 0.4040; that of β2, namely b2, is 0.5959; that of α1 is 98.84; and that of α2 is -94.31 (see Table 2.7). On the other hand, the OLS estimates under homoskedasticity are b1 = 0.4015, b2 = 0.5985, a1 = 101.05, and a2 = -95.83 (see Table 2.4), so there is little difference between the two sets of estimates.
Various methods are employed to exclude heteroskedasticity. For example, using quantile data is useful because the households are divided into ten equal-sized groups, so each income class contains the same number of households. For time-series data, it is possible to correct for heteroskedasticity by including other variables in addition to income and prices, such as assets, age of the household head, life-stage, retirement age, and so on.

2.4.3 Auto-correlation

In econometric theory, when we apply the OLS method to data whose disturbance term is characterized by auto-correlation, the estimates are unbiased but not efficient. In this section, we construct a model that has auto-correlation of the first order, then compare the estimates of two data sets: one without auto-correlation and one with auto-correlation. Then we will estimate the parameters and the
auto-correlation coefficient using various methods. This exercise will show us how to deal with auto-correlation. To generate a data set with first-order auto-correlation, we first determine the parameters of the utility function: β1 = 0.4, β2 = 0.6, α1 = 100, and α2 = -100. As for the exogenous variables, income y, the price p1 of item 1, and the price p2 of item 2 are obtained by generating random numbers. Income is drawn from a normal distribution with mean 1,000 and standard deviation 100, y ~ N(1000, 100²); the price of item 1 is generated by p1 ~ N(1, 0.2²); the price of item 2 by p2 ~ N(1, 0.3²); and the realized value of the random variable is generated from ε ~ N(0, 10²), the normal distribution with mean zero and variance 10². After generating the realized values, ei, we calculate the series ui from the ei's and the value of the auto-correlation parameter ρ as:
ui = ρui-1 + ei
(i = 1, 2, …, 100)
(43)
where u0 is fixed at 0. The series of the random variable ui then meets the conditions of first-order auto-correlation. We consider two cases: positive auto-correlation with ρ = 0.8 and negative auto-correlation with ρ = -0.8. Columns (6) and (7) of Table 2.1 report the values used for generating the virtual data sets. The series ui is plotted in Figures 2.5 and 2.6. In Figure 2.5, due to the positive auto-correlation of the disturbances, the periodicity of the disturbances is longer than in the case of ρ = 0, while with negative auto-correlation of -0.8 it is shorter than in the case of ρ = 0. Figure 2.6 focuses on the period from 41 to 72 in order to clarify how the tendency differs with the value of ρ. We estimate the regression coefficients using these virtual data sets. First, we estimate the parameters ignoring auto-correlation in a data set that includes the effect of auto-correlation. Next we estimate the model taking the existence of auto-correlation into account. We compare several estimation methods to check the estimates of the parameters. The following estimation methods are applied: the maximum likelihood (ML) method, the ML with grid method, the Cochrane–Orcutt method, and the Hildreth–Lu method. Finally, with regard to the property of unbiasedness, we generate 80 data sets of 20 observations each out of the sample of 100, estimate the parameters by the OLS method, and calculate the mean and variance of the estimates. When we apply the OLS method to a data set that includes auto-correlation, we get estimates that are unbiased but not efficient. As the test statistic for auto-correlation, we use the Durbin–Watson statistic, defined as:
d = ∑(vt - vt-1)²/∑vt²
(44)
where v is the residual. The Durbin–Watson statistic, d, ranges between 0 (perfect positive auto-correlation) and 4 (perfect negative auto-correlation); when there is no auto-correlation, d is 2.
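Equations (43) and (44) can be sketched directly; note that, as a simplification, d is applied below to the simulated disturbances themselves rather than to regression residuals:

```python
import numpy as np

rng = np.random.default_rng(5)

def ar1_series(rho, n=100, sigma=10.0):
    """u_i = rho * u_{i-1} + e_i with u_0 = 0, as in equation (43)."""
    e = rng.normal(0, sigma, n)
    u = np.zeros(n)
    for i in range(n):
        u[i] = (rho * u[i - 1] if i > 0 else 0.0) + e[i]
    return u

def durbin_watson(v):
    """d = sum (v_t - v_{t-1})^2 / sum v_t^2, equation (44)."""
    return float(np.sum(np.diff(v) ** 2) / np.sum(v ** 2))

# d is roughly 2(1 - rho): near 0, 2, and 4 for the three cases
d_pos  = durbin_watson(ar1_series(0.8))
d_zero = durbin_watson(ar1_series(0.0))
d_neg  = durbin_watson(ar1_series(-0.8))
```

The three values track the pattern reported in Table 2.8, where the pooled regressions give d of 0.41 and 3.58 for ρ = 0.8 and ρ = -0.8 respectively.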
Figure 2.5  Auto-correlation. [Residuals plotted over time (periods 1–100) for ρ = 0.8, ρ = -0.8, and ρ = 0.]

Figure 2.6  Auto-correlation (enlargement). [The same series over periods 41 to 72.]
Table 2.8  Results using the data sets with auto-correlation

(a) Estimates of regression

      ρ = 0.8          ρ = -0.8
B0    58.39 (7.8)      56.57 (9.6)
B1    0.3983 (47.8)    0.4046 (67.1)
B2    42.89 (7.2)      38.29 (8.7)
SE    17.7             15.5
R2    0.976            0.987
D-W   0.41             3.58

Note: Figures in parentheses are t-values.

(b) Parameters of the utility function

      LESSER (ρ = 0.8)   LESSER1 (ρ = -0.8)
b1    0.3983 [0.839]     0.4046 [0.440]
b2    0.6017 [0.839]     0.5954 [0.440]
a1    97.04 [0.797]      95.03 [0.588]
a2    -107.70 [0.634]    -94.64 [0.645]
D-W   0.4148             3.583

Note: Figures in brackets are P-values.
Table 2.8 displays the estimates obtained by the OLS method, together with the Durbin–Watson statistic used to test for auto-correlation. When the auto-correlation parameter is 0.8, the Durbin–Watson statistic is 0.4148; when it is -0.8, the Durbin–Watson statistic is 3.58. The first case shows positive auto-correlation and the second negative auto-correlation. We now introduce several methods for estimation in the presence of first-order auto-correlation: the ML, ML with grid, Cochrane–Orcutt, and Hildreth–Lu methods. The results are presented in Table 2.9. We can see from the results that through these methods we were able to derive suitable estimates of the parameters of the utility function together with the auto-correlation coefficient ρ. Table 2.10 shows the results of estimates generated by the OLS method using 80 data sets of sample size 20. The average of the estimates remains close to the true values we fixed a priori: the mean of b1 is 0.3950, close to the true value of β1 = 0.4. This result looks reasonable. We can get an unbiased estimate by applying the OLS method when the auto-correlation parameter ρ is stable. Therefore, if we are interested in the values of the structural parameters, it is sufficient to estimate them by applying the OLS method to a data set with auto-correlation. On the other hand, when we are interested in obtaining efficient estimates in order to know the significance level of the regression coefficients, we use methods
Table 2.9  Estimation results by considering the existence of auto-correlation: maximum likelihood method, maximum likelihood with grid method, Cochrane–Orcutt method and Hildreth–Lu method

(a) ρ = 0.8

       ML         ML with grid   C-O        H-L
b1     0.4030     0.4030         0.4030     0.4030
b2     0.5970     0.5970         0.5970     0.5970
a1     85.73      85.72          86.46      86.50
a2     -109.69    -109.70        -109.73    -109.73
estρ   0.7984     0.8000         0.8053     0.8000
       (0.0595)   (0.0595)       (0.0595)   (0.0603)
D-W    1.954      1.957          1.959      1.948

Note: Figures in parentheses are standard errors.

(b) ρ = -0.8

       ML         ML with grid   C-O        H-L
b1     0.4022     0.4022         0.4022     0.4022
b2     0.5978     0.5978         0.5978     0.5978
a1     98.29      98.29          98.28      98.28
a2     -96.25     -96.25         -96.25     -96.24
estρ   -0.79996   -0.8000        -0.8055    -0.8000
       (0.0593)   (0.0593)       (0.0593)   (0.0600)
D-W    2.228      2.228          2.215      2.224

Note: Figures in parentheses are standard errors.
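Of these methods, the Cochrane–Orcutt iteration is the easiest to sketch: alternate between estimating ρ from the OLS residuals and re-estimating the coefficients on quasi-differenced data. The model below (y = 5 + 2x + u with ρ = 0.8) is an illustrative assumption, not the book's LES data:

```python
import numpy as np

rng = np.random.default_rng(9)

def cochrane_orcutt(X, y, n_iter=20):
    """Cochrane-Orcutt iteration for a linear model with AR(1) errors.
    X holds the regressors without an intercept column; returns (beta, rho),
    where beta[0] is the intercept in the original parameterization."""
    n = len(y)
    Z = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rho = 0.0
    for _ in range(n_iter):
        v = y - Z @ beta                              # residuals, original scale
        rho = float(v[1:] @ v[:-1] / (v[:-1] @ v[:-1]))
        Zs = Z[1:] - rho * Z[:-1]                     # quasi-differenced design
        ys = y[1:] - rho * y[:-1]
        beta, *_ = np.linalg.lstsq(Zs, ys, rcond=None)
    return beta, rho

# illustrative model y = 5 + 2x + u with AR(1) disturbances, rho = 0.8
n = 200
x = rng.uniform(0, 10, n)
e = rng.normal(0, 1.0, n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.8 * u[t - 1] + e[t]
y = 5 + 2 * x + u
beta_hat, rho_hat = cochrane_orcutt(x.reshape(-1, 1), y)
```

Because the quasi-differenced intercept column equals the constant (1 - ρ), its fitted coefficient is directly the original intercept, so no rescaling is needed.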
Table 2.10  Mean and standard error in the estimates for sample size of 20

                 b1       b2       a1      a2
Mean             0.3950   0.6050   90.96   -131.41
Standard error   0.0285   0.0285   24.99   71.02
other than the OLS method, such as the ML, ML with grid, Cochrane–Orcutt, and Hildreth–Lu methods. When we conduct forecasting using the existing structural parameters, it is necessary to confirm the stability of the lag structure of the auto-correlation. Here we can consider two further problems: Why is the lag structure stable, and how can systematic factors be extracted from the lag structure? Whether we are satisfied with testing for a stable lag structure or wish to identify systematic factors in the lag structure is strongly affected by our research interests. In the case of creating a model of consumer behavior, we have two
options when we obtain stable estimates with auto-correlation in the disturbance term: we can be satisfied with the estimates and do no further research, or we can try to extract systematic factors from the disturbances in order to satisfy the assumption of independence between the random variables. We are interested in modifying existing models to explain the data more accurately: for example, we may introduce new variables so that the disturbance term no longer shows auto-correlation, because we want unbiased and efficient estimates. The second option thus involves modifying the existing experimental design.

2.4.4 Normality

Under the usual assumptions about random variables, a random variable specified as a shock is assumed to be homoskedastic and independent of other random variables; independence between the exogenous variables and the random disturbances is also assumed. Under these assumptions, the OLS estimator has the minimum variance among the linear unbiased estimators. Adding the assumption of normality, we conduct testing using the normal distribution, t-distribution, F-distribution, and χ2-distribution. We use the Jarque–Bera statistic to test the normality of the shock in regression models. The test statistic is written as:
JB = n[S²/6 + (K - 3)²/24]
(45)
where S is skewness and K is kurtosis. When the null hypothesis of normality is correct, the value of JB follows χ²(2). Tables 2.2, 2.3, and 2.4 show the results of the normality test. Table 2.3 indicates that normality is not rejected in the three data sets with no structural change (the two separate data sets and the pooled one). But when structural change exists, the pooled data does not pass the normality test, though each of the other two data sets does. Next, we apply the normality test to three different distributions for the disturbances. We consider models in which the structural parameters are the same but the distribution is not normal: the Cauchy, exponential, and t-distributions. The results are displayed in Table 2.11, and confirm that for non-normal distributions normality is rejected by the Jarque–Bera test: the P-values for the Cauchy, exponential, and t-distributions are all 0.000, indicating that the Jarque–Bera test is useful for testing normality.

2.4.5 Cross-equation restriction: extension of the model from two to three categories

In the previous models, total expenditure is divided into two clusters of items. With two categories there is only one estimating equation, due to Walras' law.
Table 2.11  Results with non-normal distributions and the normality test by Jarque–Bera test

(a) Estimates of regression

      Cauchy           Exponential       t-distribution
B0    64.48 (20.6)     60.24 (272.8)     59.97 (123.3)
B1    0.3982 (115.0)   0.3998 (1632.7)   0.4002 (742.2)
B2    38.21 (15.2)     39.96 (224.8)     39.59 (101.1)
SE    8.2              0.58              1.28
R2    0.996            0.999             0.999
D-W   2.33             1.90              2.03

Note: Figures in parentheses are t-values.

(b)

          Cauchy           Exponential      t-distribution
b1        0.3983 [0.620]   0.3998 [0.498]   0.4002 [0.657]
b2        0.6017 [0.620]   0.6002 [0.498]   0.5998 [0.657]
a1        107.17 [0.135]   100.39 [0.257]   99.9989 [0.999]
a2        -95.94 [0.553]   -99.95 [0.924]   -98.92 [0.311]
J-B test  973.6 [0.000]    574.8 [0.000]    44.02 [0.000]

Note: Figures in brackets are P-values.
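The Jarque–Bera statistic of equation (45) is simple to sketch and apply to normal and non-normal draws; the sample size of 1,000 below is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(13)

def jarque_bera(v):
    """JB = n*(S^2/6 + (K - 3)^2/24), with S and K the sample skewness
    and kurtosis; approximately chi2(2) under normality."""
    n = len(v)
    m = v - v.mean()
    s2 = np.mean(m ** 2)
    S = np.mean(m ** 3) / s2 ** 1.5
    K = np.mean(m ** 4) / s2 ** 2
    return n * (S ** 2 / 6 + (K - 3) ** 2 / 24)

jb_norm = jarque_bera(rng.normal(0, 10, 1000))     # small: normality holds
jb_exp  = jarque_bera(rng.exponential(10, 1000))   # large: skewed
jb_t    = jarque_bera(rng.standard_t(3, 1000))     # large: fat tails
```

Comparing each value with the χ²(2) critical value (5.99 at the 5 percent level) reproduces the pattern of Table 2.11: normal shocks pass, the skewed and fat-tailed alternatives are rejected.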
Therefore, we used single-equation regression models to get the parameters of the utility function. But when the model is extended to three or more categories, it is necessary to estimate the model by a simultaneous estimation method. In this section, we explain the simultaneous estimation of such a model and consider cross-equation restrictions. First we explain the extended model. The utility function is:
u = β1 log(x1 - α1) + β2 log(x2 - α2) + β3 log(x3 - α3)
(46)
and the budget constraint is: y = p1x1 + p2x2 + p3x3
(47)
After applying utility maximization under the budget constraint, we derive the consumer expenditure functions for the three categories as:
p1x1 = β1y + (1 - β1)α1p1 - β1α2p2 - β1α3p3
p2x2 = β2y - β2α1p1 + (1 - β2)α2p2 - β2α3p3
p3x3 = β3y - β3α1p1 - β3α2p2 + (1 - β3)α3p3
(48)
Table 2.12  Information on structural parameters, exogenous variables, and shocks

β1     β2     β3     α1     α2    α3
0.4    0.25   0.35   100    0     -100

y             p1           p2           p3
[500, 1500]   [0.5, 1.5]   [0.4, 1.6]   [0.45, 1.55]

ε1           ε2           ρε1ε2
N(0, 10²)    N(0, 10²)    0
where the β's are normalized so that β1 + β2 + β3 = 1. The above LES expenditure functions are written in general as:

pixi = βiy + αipi - βi∑j αjpj = αipi + βi(y - ∑j αjpj)    (i = 1, 2, …, n)
(49)
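Equation (49) and the adding-up property implied by the budget constraint can be checked with a short sketch (the parameter values are those of Table 2.12; the income and price values are illustrative):

```python
# Parameter values from Table 2.12.
beta  = [0.4, 0.25, 0.35]        # must sum to 1
alpha = [100.0, 0.0, -100.0]

def les_expenditures(y, p):
    """Expenditures p_i x_i from equation (49):
    p_i x_i = alpha_i p_i + beta_i * (y - sum_j alpha_j p_j)."""
    committed = sum(a * pj for a, pj in zip(alpha, p))   # committed income
    return [a * pj + b * (y - committed)
            for a, b, pj in zip(alpha, beta, p)]

y, p = 1000.0, [1.0, 1.2, 0.9]
exp = les_expenditures(y, p)
# Adding-up: because the betas sum to one, sum_i p_i x_i = y.
print([round(e, 2) for e in exp], round(sum(exp), 2))
# [496.0, 247.5, 256.5] 1000.0
```

The committed expenditures αipi are paid first; the remaining supernumerary income is then split in the fixed shares βi, so total spending always exhausts income.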
In the LES model, the term αipi is referred to as the committed expenditure for the i-th item, the term ∑j αjpj as committed income, and the term (y - ∑j αjpj) as supernumerary income. In the LES system, consumer expenditure pixi thus consists of the committed expenditure αipi plus a constant share βi of supernumerary income.

The LES expenditure functions build the idea of a subsistence level into the specification, represented by αipi. Consumption for maintaining a subsistence level may be divided among expenditures for food, water, housing, and other categories of goods and services in the real world. Once there is some income at hand after securing minimum food for subsistence, consumers buy more food at a fixed rate of supernumerary income. The more consumption increases, the more the standard of living rises: after buying rice and vegetables for sustenance, consumers buy steaks or go to a restaurant. In considering housing for a family with two or three children, one bedroom is the minimum space to live in, but consumers desire a larger space, two or three bedrooms, to live more comfortably. It is important to note that we cannot say with certainty that a given consumer buys items for subsistence in a first stage and then buys extra "quality of life" items in a second stage according to the level of supernumerary income; we are concerned with overall observed tendencies.

The values of the parameters and variables are indicated in Table 2.12. In the case of three clusters of items, it is sufficient to obtain the structural parameters by estimating two of the three equations: two equations are independent and the third is determined by the budget constraint. The estimating regression equations are:

x1 = A0 + A1(y/p1) + A2(p2/p1) + A3(p3/p1) + ε1
(50)
x2 = B0 + B1(y/p2) + B2(p1/p2) + B3(p3/p2) + ε2
(51)
Table 2.13  Calculated b1, b2, a1, a2, and a3

Equation   b1       b2       a1       a2     a3
(50)       0.3986            99.20    2.07   -104.9
(51)                0.2499   104.38   0.33   -106.1
where the Ai and Bi are expressed in terms of the structural parameters as:

A0 = (1 - β1)α1    A1 = β1    A2 = -β1α2    A3 = -β1α3
B0 = (1 - β2)α2    B1 = β2    B2 = -β2α1    B3 = -β2α3
(52)
When the OLS method is applied to equations (50) and (51), the regression parameters A0, A1, A2, and A3 in equation (50) and B0, B1, B2, and B3 in equation (51) are estimated. From these, all the parameters of the utility function can be obtained as:

β1 = A1
β2 = B1
α1 = A0/(1 - A1) = -B2/B1
α2 = -A2/A1 = B0/(1 - B1)
α3 = -A3/A1 = -B3/B1
(53)
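The full experiment can be sketched end to end: generate data by the Monte Carlo method with the settings of Table 2.12, estimate equations (50) and (51) by OLS, and recover the structural parameters via equations (53). This is a simplified version of the text's procedure; the seed and helper names are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
# True structural parameters (Table 2.12).
b1, b2, b3 = 0.4, 0.25, 0.35
a1, a2, a3 = 100.0, 0.0, -100.0

# Exogenous variables drawn uniformly over the ranges in Table 2.12.
y  = rng.uniform(500, 1500, n)
p1 = rng.uniform(0.5, 1.5, n)
p2 = rng.uniform(0.4, 1.6, n)
p3 = rng.uniform(0.45, 1.55, n)

# Generate x1 and x2 from equations (50) and (51) with N(0, 10^2) shocks.
x1 = (1-b1)*a1 + b1*(y/p1) - b1*a2*(p2/p1) - b1*a3*(p3/p1) + rng.normal(0, 10, n)
x2 = (1-b2)*a2 + b2*(y/p2) - b2*a1*(p1/p2) - b2*a3*(p3/p2) + rng.normal(0, 10, n)

def ols(dep, *regs):
    """OLS with an intercept; returns the coefficient vector."""
    X = np.column_stack([np.ones(len(dep))] + list(regs))
    return np.linalg.lstsq(X, dep, rcond=None)[0]

A = ols(x1, y/p1, p2/p1, p3/p1)   # A0, A1, A2, A3
B = ols(x2, y/p2, p1/p2, p3/p2)   # B0, B1, B2, B3

# Recover structural parameters via equations (53).
beta1, beta2 = A[1], B[1]
beta3 = 1 - (beta1 + beta2)            # cross-equation restriction on the betas
alpha1_50 = A[0] / (1 - A[1])          # alpha1 from equation (50)
alpha1_51 = -B[2] / B[1]               # a second estimate of alpha1, from (51)
print(round(beta1, 3), round(beta2, 3), round(beta3, 3))
print(round(alpha1_50, 1), round(alpha1_51, 1))
```

As in Table 2.13, the two estimates of α1 differ because the equations are estimated one at a time rather than simultaneously.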
From equations (53) we can get two sets of estimates for α1, α2, and α3 by using either equation (50) or equation (51). The parameter β3 is calculated from the constraint β1 + β2 + β3 = 1, namely β3 = 1 - (β1 + β2). For example, the estimates of α1 are 99.20 from equation (50) and 104.38 from equation (51), as indicated in Table 2.13. There are two estimates of α1 because we estimated the regression coefficients equation by equation rather than simultaneously. Let us consider hypothesis testing as follows:

(I.1)    H0: α1(1) = α1(2)
(I.2)    H0: α2(1) = α2(2)
(I.3)    H0: α3(1) = α3(2)
(II.1.1) H0: α1(1) = 100
(II.1.2) H0: α1(2) = 100
(II.2.1) H0: α2(1) = 0
(II.2.2) H0: α2(2) = 0
Table 2.14  Hypothesis testing

(a) Type I

          I.1       I.2       I.3       (I.1, I.2, I.3)
P-value   [0.642]   [0.793]   [0.920]   [0.933]

(b) Type II

          II.1.1    II.2.1    II.3.1    (II.1.1, II.2.1, II.3.1)
P-value   [0.855]   [0.790]   [0.447]   [0.896]
          II.1.2    II.2.2    II.3.2    (II.1.2, II.2.2, II.3.2)
P-value   [0.679]   [0.967]   [0.555]   [0.929]
(II.3.1) H0: α3(1) = -100
(II.3.2) H0: α3(2) = -100

The P-values are indicated in Table 2.14. From Table 2.14(a) we see that the two sets of estimates of a1, a2, and a3 are statistically equal, and from Table 2.14(b) that they are not different from 100, 0, and -100, respectively. For the simultaneous estimation, the estimating equations are:

x1 = β1(y/p1) + (1 - β1)α1 - β1α2(p2/p1) - β1α3(p3/p1) + ε1
x2 = β2(y/p2) - β2α1(p1/p2) + (1 - β2)α2 - β2α3(p3/p2) + ε2
(54)
where ε1 and ε2 are jointly distributed with zero means and covariance matrix Σ. Table 2.15 shows the results obtained by the seemingly unrelated regression (SUR) method and the three-stage least-squares (3SLS) method. According to the table, the estimates obtained by these methods are similar, and hypothesis testing shows that they are not different from the true values.

2.4.6 Measurement of the Consumer Price Index

Prices of many commodities fluctuate between two periods, and we want to summarize the average rate of price movement overall in a price index. The most familiar price index is the CPI. Next we explain the relationship between the CPI and the constant-utility price index based on utility theory. The CPI published by the Bureau of Labor Statistics in the United States or by the Statistics Bureau in Japan is based on the Laspeyres price index. The Laspeyres price index is specified as:

PL = ∑i p1ix0i/∑i p0ix0i
(55)
where PL is the Laspeyres price index, p1i is the price of the i-th item in the current period, p0i is the price of the i-th item in the base period, and x0i is the quantity of
Table 2.15  Results of simultaneous estimation for three commodities

(a) Estimation results

      SUR               3SLS
b1    0.3980 (162.2)    0.3978 (155.8)
b2    0.2494 (124.6)    0.2481 (117.4)
a1    99.26 (24.6)      99.95 (23.8)
a2    -0.2959 (0.0)     1.38 (0.4)
a3    -104.08 (21.2)    -105.39 (20.6)

Note: Figures in parentheses are t-values.
(b) Hypothesis testing (P-value)

        SUR       3SLS      True value
b1      [0.421]   [0.502]   0.40
b2      [0.765]   [0.768]   0.25
a1      [0.853]   [0.873]   100
a2      [0.921]   [0.927]   0
a3      [0.404]   [0.443]   -100
χ2(5)   [0.927]   [0.929]

Note: The null hypothesis related to χ2(5) is H0: β1 = 0.4, β2 = 0.25, α1 = 100, α2 = 0, and α3 = -100.
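The SUR idea behind Table 2.15 can be sketched with a minimal feasible-GLS estimator. This is an illustration only: it estimates an unrestricted linear system, whereas equations (54) impose cross-equation restrictions on the coefficients, and the toy data below are invented rather than the text's data set:

```python
import numpy as np

def sur_fgls(ys, Xs):
    """One-step feasible GLS for a seemingly unrelated regression system.
    ys: list of (n,) response vectors; Xs: list of (n, k_i) design matrices."""
    n = len(ys[0])
    # Round 1: equation-by-equation OLS residuals estimate the error covariance.
    resid = [yv - Xi @ np.linalg.lstsq(Xi, yv, rcond=None)[0]
             for yv, Xi in zip(ys, Xs)]
    Sigma = np.cov(np.vstack(resid), bias=True)
    # Stack the system: ybig = [y1; y2; ...], Xbig block-diagonal,
    # and Cov(stacked errors) = Sigma kron I_n.
    ks = [Xi.shape[1] for Xi in Xs]
    Xbig = np.zeros((len(ys) * n, sum(ks)))
    col = 0
    for i, Xi in enumerate(Xs):
        Xbig[i*n:(i+1)*n, col:col+Xi.shape[1]] = Xi
        col += Xi.shape[1]
    ybig = np.concatenate(ys)
    Oinv = np.kron(np.linalg.inv(Sigma), np.eye(n))
    XtOi = Xbig.T @ Oinv
    return np.linalg.solve(XtOi @ Xbig, XtOi @ ybig)

# Toy two-equation system with correlated errors.
rng = np.random.default_rng(1)
n = 200
x = rng.uniform(0.0, 1.0, n)
X = np.column_stack([np.ones(n), x])
e = rng.multivariate_normal([0.0, 0.0], [[0.01, 0.006], [0.006, 0.01]], n)
y1 = 1.0 + 2.0 * x + e[:, 0]
y2 = -1.0 + 0.5 * x + e[:, 1]
b = sur_fgls([y1, y2], [X, X])
print(np.round(b, 2))
```

A known property worth noting: when every equation has identical regressors, as here, SUR collapses numerically to equation-by-equation OLS; the efficiency gain over OLS appears when the regressors differ across equations and the errors are correlated.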
the i-th item in the base period. From equation (55), we can see that the Laspeyres price index is the ratio of total expenditures between the current and base periods, computed with the same set of quantities. The common quantities of the base year are called the market basket.

Assume there are only two commodities, A and B. In the base period, a household purchases 0.5 unit of commodity A for $10 (the unit price of A is $20) and 2 units of commodity B for $20 (the unit price of B is $10). In the current period, the same household purchases 1 unit of commodity A for $15 (the unit price of A is $15) and 1 unit of commodity B for $15 (the unit price of B is $15). In both periods the total expenditure on commodities A and B is $30. This does not mean there has been no price change, however, because the quantity of commodity A is 0.5 unit in the base period and 1 unit in the current period, while the quantity of commodity B is 2 units in the base period and 1 unit in the current period. A comparison of total expenditures therefore does not reflect changes in prices. In order to compare prices, the quantities purchased must be the same in the base and current periods.

We first calculate a price index based on the base period, where the quantities purchased are 0.5 unit of commodity A and 2 units of commodity B. In the current period, if a consumer buys 0.5 unit of commodity A and 2 units
Figure 2.7  The relationship between the constant-utility and Laspeyres price indexes. (Laspeyres price index = 37.50/30; constant-utility price index = (37.50 - α)/30.)
of commodity B, it costs $7.50 and $30, respectively. The total expenditure is thus $30 in the base period and $37.50 in the current period, so the ratio is 1.25 (37.50/30): prices increased by 25 percent on the base-year market basket.

Next, we consider a price index based on the current period. In the current period, a consumer buys 1 unit of commodity A and 1 unit of commodity B. Valued at base-period prices, this market basket costs $30 (1 × $20 + 1 × $10), and valued at current prices it also costs $30. The ratio between current and base-year total expenditure is 1.00, so there is no inflation based on the current year's market basket. A price index using a market basket fixed in a specified base year is called a Laspeyres price index, while one using a market basket fixed in the current year is called a Paasche price index.

Using the above example, we can explain the constant-utility price index based on economic theory. First we consider the relationship between the Laspeyres and constant-utility price indices based on the base period. The fact that a consumer purchased 0.5 unit of commodity A and 2 units of commodity B means the following: the consumer has an income of $30 and, facing a price of $20 per unit for commodity A and $10 per unit for commodity B, she/he chose to maximize utility by purchasing 0.5 unit of commodity A and 2 units of commodity B. Figure 2.7 illustrates the situation using the indifference curve and the budget constraint. Point P is the equilibrium and also indicates the market basket in the base period; at point P the consumer's satisfaction is indicated by u0.
We now consider the current price level. In the current period, the prices of commodities A and B change: commodity A becomes cheaper, declining from $20 to $15 per unit, while commodity B becomes more expensive, rising from $10 to $15 per unit. In Figure 2.7, the change in relative prices is indicated by the change in the slope of the budget line. The lines AA, BB, and CC all correspond to the new relative prices of the current year. Line AA shows an income of $30, an amount insufficient to buy 0.5 unit of commodity A and 2 units of commodity B. Line BB passes through point P and corresponds to an income of $37.50, but it is not tangent to the indifference curve u0: it crosses it at two points, P and Q. This means there is no need to spend $37.50 to maintain the utility level u0; the income indicated by budget line CC is optimal (the minimum expenditure level given by the cost function) for maintaining the utility level u0. At point R on line CC, because the price of commodity B has risen, the consumer forgoes buying as much of commodity B and instead buys more of commodity A to maintain the utility level u0. Comparing income levels, line AA indicates an income of $30, BB an income of $37.50, and CC an income between $30 and $37.50. The constant-utility price index is defined as the ratio between the current and base incomes needed to maintain the same level of satisfaction (utility in economics); in this example, it is the ratio between the income of CC and $30. The constant-utility price index is calculated directly by applying the dual approach explained in Figure 2.1. After determining the parameters of the utility function by estimating Marshallian demands, the constant-utility price index, PIND (ut+1 = u0), is obtained as follows:

PINDt+1(ut+1 = u0) = C(u0, pt+1)/C(u0, pt) = C(u0, pt+1)/yt
(56)
The difference between the constant-utility price index and the Laspeyres price index is that the constant-utility price index does not have a fixed market basket. With changes in relative prices, a consumer buys more of a commodity that becomes relatively cheap, and less of another, in order to allocate income in a way that maintains the same level of satisfaction; this is the substitution effect in consumer behavior. Significantly, the Laspeyres price index is overvalued relative to the constant-utility price index based on the base-year market basket, while the Paasche price index is undervalued relative to the constant-utility price index based on the current-year market basket. How is the constant-utility price index derived? At equilibrium, C(u, p) = y is satisfied. For the LES specification, the utility level at the equilibrium point is:

u = (y - ∑i αipi)/∏k pk^βk
(57)
The constant-utility price index is then defined as the ratio of the costs of attaining the same utility level at the t-th and (t+1)-th period prices:
Pt = C(ut, pt+1)/C(ut, pt) = C(ut, pt+1)/yt
(58)
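The worked two-commodity example and equations (57)–(58) can be checked numerically. The Laspeyres and Paasche calculations below use the $30/$37.50 example exactly; the constant-utility calculation assumes the LES cost function implied by inverting (57), C(u, p) = ∑i αipi + u∏k pk^βk, with the three-good parameter values of Table 2.12 and illustrative prices:

```python
import math

# Two-commodity example from the text: base prices (20, 10) and
# quantities (0.5, 2); current prices (15, 15) and quantities (1, 1).
p0, q0 = [20.0, 10.0], [0.5, 2.0]
p1, q1 = [15.0, 15.0], [1.0, 1.0]

laspeyres = sum(p * q for p, q in zip(p1, q0)) / sum(p * q for p, q in zip(p0, q0))
paasche   = sum(p * q for p, q in zip(p1, q1)) / sum(p * q for p, q in zip(p0, q1))
print(laspeyres, paasche)   # 1.25 1.0, as in the text

# Constant-utility index for an LES consumer, equations (57) and (58);
# the parameter values are the three-good settings of Table 2.12.
beta  = [0.4, 0.25, 0.35]
alpha = [100.0, 0.0, -100.0]

def les_cost(u, p):
    """LES cost function C(u, p) = sum_i alpha_i p_i + u * prod_k p_k**beta_k."""
    return sum(a * pk for a, pk in zip(alpha, p)) + \
           u * math.prod(pk ** b for b, pk in zip(beta, p))

def constant_utility_index(y_t, p_t, p_t1):
    u_t = (y_t - sum(a * pk for a, pk in zip(alpha, p_t))) / \
          math.prod(pk ** b for b, pk in zip(beta, p_t))     # equation (57)
    return les_cost(u_t, p_t1) / y_t                         # equation (58)

print(round(constant_utility_index(1000.0, [1.0, 1.0, 1.0], [1.1, 1.2, 0.9]), 4))
```

When the two price vectors are identical the index is exactly 1, and the index responds to relative-price changes through the βk-weighted price term rather than through a fixed market basket.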
One then uses the estimates of the utility function and the exogenous variables to calculate the constant-utility price index. The market basket, on the other hand, plays the central role in constructing the statistical price indices: the Laspeyres, Paasche, Törnqvist, and Fisher indices. The formulations are:

Laspeyres: PL = ∑(p1/p0)w0
Paasche: PP = ∑(p1/p0)w1
Fisher: PF = √(PL PP)
Törnqvist: PT = ∏(p1/p0)^((w1+w0)/2)

Diewert (1976) showed that the Törnqvist price index based on the market basket of goods and the constant-utility price index based on utility are equivalent when the utility function is specified as a transcendental logarithmic (translog) utility function. Figure 2.8 displays the Törnqvist, Fisher, and constant-utility price indices. Figure 2.9 depicts the Laspeyres price index with the first period as base together with the constant-utility price index; notice the large gap between these two indices. The data used for the estimations in these figures are generated by the Monte Carlo method, and the price fluctuations are large compared with observed data. Comparing the three indices, however, we see that the Törnqvist and Fisher price indices are close to the constant-utility price index.

2.4.7 Mis-specification

We have checked the accuracy of econometric estimation methods through hypothesis testing for structural change, heteroskedasticity, auto-correlation, and normality. We have also compared the OLS estimates with those derived from other types of estimation methods. Generally speaking, even when we use estimation methods more complex than OLS, the gap between the OLS estimates and those obtained by the complex methods is small, as is the gap between true values and estimates. In this section, we consider the gap between estimates derived from a correctly specified estimating equation and those derived from a mis-specified one.
We know a priori that the LES demand system is the correctly specified demand function:

x1 = B0 + B1(y/p1) + B2(p2/p1) + ε1
(59)
We also know from consumer-demand theory that quantity demanded is a function of income and prices. Instead of the true relationship, we specify the demand function as:

x1 = A0 + A1(y/((p1 + p2)/2)) + A2(p2/((p1 + p2)/2)) + e1
(60)
Figure 2.8  Constant-utility price index, Fisher index, and Törnqvist index.
Figure 2.9  Constant-utility price index and Laspeyres price index.
Table 2.16  Estimation results for equations (59) and (60)

             Equation (59)     Equation (60)
B0 / A0      60.48 (21.6)      697.88 (27.4)
B1 / A1      0.4014 (157.7)    0.3932 (30.6)
B2 / A2      38.47 (18.2)      -558.1 (26.2)
R2           0.997             0.940
SE           10.69             52.01
D-W          2.08              2.15
White test   [0.302]           [0.000]
J-B test     [0.631]           [0.001]

Note: Figures in parentheses are t-values and those in brackets are P-values.
The data used for estimation are virtual data set (1) in Table 2.1. Table 2.16 shows the estimation results for the correct specification and the mis-specification. The degree of fit evaluated by the coefficient of determination is similar for the two: 0.997 in the correct specification and 0.940 in the mis-specification. The Durbin–Watson statistic is 2.08 in the correct specification and 2.15 in the mis-specification. On the basis of these two statistics alone, we cannot tell which specification is better. On the other hand, the regression coefficients differ, and there is a large gap between the two specifications in the test statistics for heteroskedasticity and normality. The P-value for White's heteroskedasticity test is 0.302 in the correct specification, so the null hypothesis of homoskedasticity is not rejected, while in the mis-specified model it is 0.000 and the null hypothesis is rejected. According to the Jarque–Bera statistics, the null hypothesis of normality is not rejected in the correctly specified model (P-value 0.631) but is rejected in the mis-specified model (P-value 0.001), as shown in Table 2.16. These results suggest that when the characteristics of the disturbances conflict with the assumptions of the model, we should consider the possibility of mis-specification of the estimating equations.

2.4.8 Forecasting for correctly specified and mis-specified models

In terms of forecasting, the gap between the values predicted from the mis-specified model and the true values is large. The predicted values of the correctly specified model are calculated as:

x10 = B0 + B1(y0/p10) + B2(p20/p10)
(61)
where B0, B1, and B2 are the regression coefficients obtained from the existing data set, and (y0/p10) and (p20/p10) are the predicted values of the exogenous variables. The variance of the forecast value is:

σ²(1 + 1/100) + (y0/p10 - mean(y/p1))² var(b1)
  + 2(y0/p10 - mean(y/p1))(p20/p10 - mean(p2/p1)) cov(b1, b2)
  + (p20/p10 - mean(p2/p1))² var(b2)
(62)
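Equation (62) is the scalar, deviation-from-mean expansion (with n = 100) of the standard matrix formula for OLS prediction variance, var = σ²(1 + x0′(X′X)⁻¹x0). A hedged sketch in that matrix form, with illustrative names and invented data rather than the text's data set:

```python
import numpy as np

def forecast_interval(X, yv, x0, z=1.96):
    """OLS point forecast and approximate 95 percent interval for a new
    regressor row x0, using var = s^2 * (1 + x0' (X'X)^{-1} x0)."""
    n, k = X.shape
    b = np.linalg.lstsq(X, yv, rcond=None)[0]
    resid = yv - X @ b
    s2 = resid @ resid / (n - k)                        # unbiased error variance
    v = s2 * (1.0 + x0 @ np.linalg.solve(X.T @ X, x0))  # forecast variance
    f = x0 @ b
    return f, f - z * np.sqrt(v), f + z * np.sqrt(v)

rng = np.random.default_rng(2)
n = 100
u = rng.uniform(1.0, 2.0, n)
X = np.column_stack([np.ones(n), u])
yv = 3.0 + 5.0 * u + rng.normal(0.0, 1.0, n)
f, lo, hi = forecast_interval(X, yv, np.array([1.0, 1.5]))
print(round(float(f), 2), lo < f < hi)
```

The interval is narrowest when x0 is near the sample means of the regressors and widens as x0 moves away from them, which is exactly the pattern Figures 2.10 and 2.11 illustrate.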
Now the predicted value of the mis-specified model is:

x10 = A0 + A1(y0/((p10 + p20)/2)) + A2(p20/((p10 + p20)/2))
(63)
And the variance of the forecast value of the mis-specified model is:

σ²(1 + 1/100) + (y0/((p10 + p20)/2) - mean(y/((p1 + p2)/2)))² var(a1)
  + 2(y0/((p10 + p20)/2) - mean(y/((p1 + p2)/2)))(p20/((p10 + p20)/2) - mean(p2/((p1 + p2)/2))) cov(a1, a2)
  + (p20/((p10 + p20)/2) - mean(p2/((p1 + p2)/2)))² var(a2)
(64)
The results are shown in Figures 2.10 and 2.11. Figure 2.10 shows the forecast values and 95 percent confidence intervals when p1 = 1.1 and p2 = 1.1. The forecast from equation (61) is better than that from equation (63), and its confidence interval is also smaller. Moreover, as Figure 2.11 shows, when the prices are far from their averages, namely p1 = 0.7 and p2 = 1.5, the forecast from the true relationship is much better. When the predicted values of the exogenous variables are not far from the averages of these variables, the forecasts from both the correctly specified and the mis-specified models are close to the theoretical values, though the confidence interval of the mis-specified model is larger. If, on the other hand, the predicted values of the exogenous variables are far from the averages, the forecasts from the mis-specified model are far from the theoretical values obtained from the correctly specified model. Comparing the results from the correctly specified and mis-specified models, we can appreciate the importance of using a correctly specified model for forecasting.

2.4.9 Elasticity of demand

The demand functions of the LES specification for items 1 and 2, respectively, are:

p1x1 = β1y + β2α1p1 - β1α2p2
p2x2 = β2y - β2α1p1 + β1α2p2
(65)

Therefore, the price elasticity of demand for the first item is:
(Figure: consumption expenditure plotted against income, showing true values, estimated values, and 95 percent confidence bounds for Case 1, the correctly specified model, and Case 2, the mis-specified model.)
Figure 2.10  Forecasting (p1 = 1.1, p2 = 1.1).
∂log x1/∂log p1 = (∂x1/∂p1)(p1/x1) = -β1(y - α2p2)/(p1x1)
(66)
and that of the second item is:

∂log x2/∂log p2 = (∂x2/∂p2)(p2/x2) = -β2(y - α1p1)/(p2x2)
(67)
The income elasticity of demand for the first item of the LES demand function is obtained as:

∂log x1/∂log y = (∂x1/∂y)(y/x1) = β1y/(p1x1)
(68)
and that of the second item is obtained as:

∂log x2/∂log y = (∂x2/∂y)(y/x2) = β2y/(p2x2)
(69)
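Equations (66)–(69) can be evaluated numerically. The two-good parameter values below (β1 = 0.4, β2 = 0.6, α1 = 100, α2 = -100) are an assumption inferred from the estimates in Table 2.16, since B0 ≈ 60 = (1 - β1)α1 and B2 ≈ 38.5 ≈ -β1α2; the evaluation point y = 1000, p1 = p2 = 1 is also illustrative:

```python
# Assumed two-good LES parameters (inferred from Table 2.16 estimates).
b1, b2, a1, a2 = 0.4, 0.6, 100.0, -100.0
y, p1, p2 = 1000.0, 1.0, 1.0

e1 = b1 * y + b2 * a1 * p1 - b1 * a2 * p2    # p1*x1 from equations (65)
e2 = b2 * y - b2 * a1 * p1 + b1 * a2 * p2    # p2*x2

eta_p1 = -b1 * (y - a2 * p2) / e1            # price elasticity, equation (66)
eta_p2 = -b2 * (y - a1 * p1) / e2            # price elasticity, equation (67)
eta_y1 = b1 * y / e1                         # income elasticity, equation (68)
eta_y2 = b2 * y / e2                         # income elasticity, equation (69)

# Item 1: income elasticity 0.8, price elasticity -0.88 (price inelastic).
# Item 2: income elasticity 1.2, price elasticity -1.08 (price elastic).
print(round(eta_y1, 3), round(eta_p1, 3))
print(round(eta_y2, 3), round(eta_p2, 3))
```

These magnitudes match the values reported for the true specification in Table 2.17, which supports the assumed parameter values.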
We calculated the price elasticity of demand and the income elasticity of demand using the various data sets. We know a priori the parameters in the utility function
(Figure: consumption expenditure plotted against income, showing true values, estimated values, and 95 percent confidence bounds for Case 1, the correctly specified model, and Case 2, the mis-specified model.)
Figure 2.11  Forecasting (p1 = 0.7, p2 = 1.5).
and that the demand for the first item is price inelastic, while that for the second item is price elastic. In Table 2.17 the price and income elasticities are evaluated both at the true values of pixi and at the estimated values of pixi. As we expected a priori, the price elasticity of demand is inelastic for the first item for all the data sets, and elastic for the second item for all the data sets, as indicated in Table 2.17(a). The income elasticity for the first item is stable at about 0.80, and the price elasticity at about -0.88. The income elasticity for the second item is also stable at about 1.2, and the price elasticity at around -1.1. On the other hand, the elasticity of demand for the mis-specified model is clearly different from that derived from the true relationship, as indicated in Table 2.17(b). Here the specification of the mis-specified demand functions is:

x1 = A0 + A1(y/((p1 + p2)/2)) + A2(p2/((p1 + p2)/2))
x2 = B0 + B1(y/((p1 + p2)/2)) + B2(p2/((p1 + p2)/2))
(70)
The price elasticities of demand for items 1 and 2 in the mis-specified model, respectively, are:

∂log x1/∂log p1 = -2(A1y + A2p2)p1/((p1 + p2)²x1)
∂log x2/∂log p2 = 2(B2p1 - B1y)p2/((p1 + p2)²x2)
(71)
Table 2.17  Elasticity of demand

(a) True specification

Data set                            LESR1     LESR2     LESR11    LESUNIR1
Income elasticity of commodity 1:
  True value                        0.801     0.817     0.796     0.801
  Estimated value                   0.802     0.822     0.804     0.803
Price elasticity of commodity 1:
  True value                        -0.880    -0.900    -0.878    -0.881
  Estimated value                   -0.879    -0.903    -0.878    -0.880
Income elasticity of commodity 2:
  True value                        1.197     1.223     1.205     1.197
  Estimated value                   1.197     1.217     1.198     1.195
Price elasticity of commodity 2:
  True value                        -1.078    -1.101    -1.082    -1.079
  Estimated value                   -1.077    -1.099    -1.076    -1.076

Data set                            LESUNIR3  LESSER    LESSER1
Income elasticity of commodity 1:
  True value                        0.900     0.801     0.805
  Estimated value                   0.895     0.798     0.815
Price elasticity of commodity 1:
  True value                        -0.945    -0.881    -0.883
  Estimated value                   -0.942    -0.884    -0.890
Income elasticity of commodity 2:
  True value                        1.098     1.198     1.191
  Estimated value                   1.104     1.200     1.181
Price elasticity of commodity 2:
  True value                        -1.044    -1.080    -1.077
  Estimated value                   -1.046    -1.085    -1.070

(b) Elasticity of demand for mis-specified model

Data set                            LESUNIR1  Mis-specified
Income elasticity of commodity 1:
  True value                        0.801     0.713
  Estimated value                   0.803     0.713
Price elasticity of commodity 1:
  True value                        -0.881    -0.147
  Estimated value                   -0.880    -0.147
Income elasticity of commodity 2:
  True value                        1.197     1.119
  Estimated value                   1.195     1.121
Price elasticity of commodity 2:
  True value                        -1.079    0.080*
  Estimated value                   -1.076    0.080*

Note: * Indicates positive before taking the minus sign.
and the income elasticities of demand for items 1 and 2, respectively, are:

∂log x1/∂log y = 2A1y/((p1 + p2)x1)
∂log x2/∂log y = 2B1y/((p1 + p2)x2)
(72)
It is true that the difference in income elasticity between the true and mis-specified models is not large: 0.80 versus 0.71 for the first item and 1.2 versus 1.1 for the second. But the gap in price elasticity between the true and mis-specified models is large. The price elasticity for the first item is -0.88 in the true specification but -0.14 in the mis-specified model. Moreover, the price elasticity for the second item is 0.08 in the mis-specified model. In the case of mis-specification, the income elasticity of demand is smaller than in the true specification for both items; the price elasticity of demand for the first item is smaller in absolute value than in the true specification; and the price elasticity of demand for the second item is positive, which contradicts a downward-sloping demand function. Since we do not know a priori the true relationship in the real world, we have to keep testing null hypotheses on different kinds of data sets in order to confirm the stability of the estimated relationship.
Bibliography

Allen, R. G. D., and A. L. Bowley (1935) Family Expenditure, London: P. S. King.
Christensen, L. R., D. W. Jorgenson, and L. J. Lau (1975) "Transcendental logarithmic utility functions," American Economic Review, 65, 367–383.
Deaton, A., and J. Muellbauer (1980) Economics and Consumer Behavior, New York: Cambridge University Press.
Diewert, W. E. (1976) "Exact and superlative index numbers," Journal of Econometrics, 4, 115–145.
Hicks, J. R. (1939) Value and Capital, Oxford: Oxford University Press.
Houthakker, H. S. (1960) "Additive preferences," Econometrica, 28, 244–257.
Maddala, G. S., and K. Lahiri (2009) Introduction to Econometrics, Chichester, West Sussex: Wiley.
Samuelson, P. A. (1947) Foundations of Economic Analysis, Cambridge, MA: Harvard University Press.
Stone, R. (1954) "Linear expenditure systems and demand analysis: an application to the pattern of British demand," Economic Journal, 64, 511–527.
Wold, H. (1953) Demand Analysis, Uppsala: Almqvist & Wiksell.
3 Producer behavior
In this chapter, we consider several aspects of producer behavior. It is important to understand producer behavior because along with consumer behavior and market equilibrium it is one of the main fields of microeconomics and constitutes a critical component of economic activity. Section 3.1 explains the theory of producer behavior. Section 3.2 describes a model of producer behavior using the Cobb–Douglas production function. Section 3.3 explains how to generate a data set by the Monte Carlo method. In Section 3.4 we discuss some examples of analyzing producer behavior. Section 3.4.1 considers the problem of multi-collinearity – one of the most important issues related to inference in multiple-regression analysis – illustrated through the estimation of the Cobb–Douglas production function. Here we explain the symptoms of multi-collinearity from a statistical point of view, and check the changes in the values of the estimated parameters in several cases where there are correlations between two independent variables. Section 3.4.2 considers an example of excluding multi-collinearity by applying theoretical restrictions through reducing the number of independent variables. Here we introduce the estimating equations for linear homogeneous production functions. Section 3.4.3 discusses the direct estimation of the parameters of the Cobb–Douglas production function and the indirect estimation method of the production function through the estimation of the cost function. Here we show the difference in causal order for two approaches; the variable of quantity produced becomes a dependent variable in the case of the direct estimation of the production function, while it becomes an independent variable in the case of the estimation of the cost function using the same specification of the production function. 
Section 3.4.4 describes the estimation of the cost functions derived from cost minimization behavior by producers; Section 3.4.5 considers the economies of scale in the production function; and Section 3.4.6 estimates the parameters of the Cobb–Douglas production function as an approximation of the constant elasticity of substitution (CES) production function and evaluates the usefulness of the Cobb–Douglas production function as an approximation of the CES production function.
52  Producer behavior
3.1 Theory of producer behavior The fundamental relation guiding producer behavior is the production function, which determines the relation between inputs and output. More specifically, the production function is the relationship between inputs of labor and capital and the amount of production; the causal order is from inputs to output. The cost function is obtained by using the cost minimization principle under the condition of constant production levels. After combining the cost function and sales function, the profit function is defined as the difference between sales and cost functions. Optimal output is determined by applying the profit maximization principle to the profit function. Therefore, cost minimization is a necessary condition for profit maximization. The production function is written as: q = f(x1, x2, …, xm)
(1)
where q is the amount of production and x1, x2, …, xm are factor inputs necessary for production. Factors of production are land, labor, and capital. The definition of the cost equation is: C = ∑irixi
(2)
where C is cost and ri is the unit cost of the i-th factor input. The cost minimization hypothesis stipulates that a firm chooses a combination of factor inputs so as to minimize cost under the conditions of a constant production level and fixed factor input prices. To obtain the optimal amounts of factor inputs under the condition of a constant level of production, the Lagrange multiplier method is utilized. The evaluation function is:

V = ∑irixi + λ(q0 - f(x1, x2, …, xm))
(3)
The first-order conditions for cost minimization are: ∂V/∂xi = ri - λ∂f/∂xi = 0 (i = 1, 2,…, m) ∂V/∂λ = q0 - f(x1, x2, …, xm) = 0
(4)
From the first equation of (4), we get: 1/λ = (∂f/∂xi)/ri
(i = 1, 2,…, m)
(5)
This is called the law of equal marginal products per unit cost of input (or the least-cost rule). Using equation (5) and the second equation of (4), factor demand functions are obtained as: xi = g(r1, r2, …, rm | q0)
(6)
where q0 is a parameter. Substituting xi into equation (2), the cost function is obtained as a function of the factor input prices r1, r2, …, rm and the constant quantity q0:

C = ∑i rixi = h(r1, r2, …, rm | q0)
(7)
This is the cost function satisfied by cost minimization under the condition of constant production levels. When we change the levels of production while keeping the least-cost rule, the cost function is changed from equation (7) to equation (8) as: C = H(r1, r2, …, rm, q)
(8)
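The cost-minimization steps of equations (3)–(8) can be checked numerically for the Cobb–Douglas case q = x1^a x2^b (the functional form used later in this chapter; the parameter and price values below are illustrative):

```python
import math

def cd_cost_min(q, r1, r2, a=0.4, b=0.6):
    """Conditional factor demands minimizing r1*x1 + r2*x2 subject to
    x1**a * x2**b = q (the problem of equations (3)-(4))."""
    # The first-order conditions give x2 = (b*r1)/(a*r2) * x1, hence
    # x1 = (q * (a*r2/(b*r1))**b) ** (1/(a+b)).
    x1 = (q * (a * r2 / (b * r1)) ** b) ** (1.0 / (a + b))
    x2 = (b * r1) / (a * r2) * x1
    return x1, x2, r1 * x1 + r2 * x2   # cost function value, equation (7)

q, r1, r2 = 10.0, 2.0, 3.0
x1, x2, C = cd_cost_min(q, r1, r2)

# The solution reproduces the output target and satisfies the least-cost
# rule (5): equal marginal product per dollar across inputs.
mp1 = 0.4 * x1 ** (0.4 - 1.0) * x2 ** 0.6
mp2 = 0.6 * x1 ** 0.4 * x2 ** (0.6 - 1.0)
print(round(x1, 6), round(x2, 6), round(C, 6))   # 10.0 10.0 50.0
```

Evaluating the minimized cost over a grid of q values traces out the cost function C = H(r1, r2, q) of equation (8) for fixed factor prices.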
This is the cost function in which the variables are factor input prices and quantity produced. By applying the profit maximization principle, the optimal production level of the firm can be determined. The profit function is: π = pq - C(q)
(9)
where π is profit and p is the market price. Differentiating both sides by q and setting dπ/dq as zero, we get the following equation: dπ/dq = p + q(dp/dq) - dC/dq = 0
(10)
The solution of the quantity of q satisfies the optimal production level with maximum profit. This equation is the same as the MR (marginal revenue) = MC (marginal cost) condition, namely: MR = p(1 + (q/p)dp/dq) = dC/dq = MC
(11)
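Equations (10) and (11) can be illustrated with a linear inverse demand curve and constant marginal cost (the numerical values are illustrative, not from the text):

```python
# Inverse demand p(q) = a - b*q and cost C(q) = c*q (illustrative values).
# Equation (10): dpi/dq = p + q*(dp/dq) - dC/dq = a - 2*b*q - c = 0,
# so the profit-maximizing output is q* = (a - c) / (2*b).
a, b, c = 100.0, 2.0, 20.0
q_star = (a - c) / (2.0 * b)
p_star = a - b * q_star

mr = p_star + q_star * (-b)       # MR = p + q*(dp/dq), equation (11)
mc = c                            # MC = dC/dq
print(q_star, p_star, mr == mc)   # 20.0 60.0 True
```

At the optimum, marginal revenue exactly equals marginal cost; with dp/dq = 0 the same condition collapses to p = MC, the competitive case discussed next.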
When dp/dq in equation (11) is equal to zero, the market is competitive, and the equilibrium condition of the competitive market becomes p = dC/dq: market price equals marginal cost.

There are three issues to bear in mind here about producer behavior: (1) the profit maximization hypothesis; (2) the difference between "short-run" and "long-run"; (3) the plausibility of the linear homogeneity of the production function.

Undergraduate management and accounting majors have asked me about the relevance of the profit maximization principle, arguing that modern corporations usually don't focus entirely on maximizing profits. Thus, they wonder why economic analysis assumes that firms are engaged solely in maximizing profits. It is true that corporate executives do not publicly state that they aim only to maximize profit. Classes in corporate governance teach that corporations have various functions and responsibilities involving stockholders, employees, society, the environment, consumers, and so on. Hence it may seem that economic analysis is
too narrowly focused on profits. It is important to bear in mind, however, that economic analysis extends beyond the scope of individual firms and managers. Firms may have different strategies and priorities, but a common goal is profit maximization. In economics, ensuring corporate survival is paramount, and this forces firms to adopt profit maximization strategies. There is also another perspective on profit maximization in economic theory: it is treated as an analytical principle. When corporate behavior is explained sufficiently well by the profit-maximizing principle, the principle is relevant even if firms in the real world do not focus solely on profit maximization.

Next, let’s consider the difference between the short-run and the long-run. We usually think of the short-run as a short period of time, say one day, one week, or at most one year, and of the long-run as a long period, say five years, ten years or one hundred years. In economics these two concepts have precise definitions. The short-run in microeconomics refers to the period during which capital stock is constant; that is, the capital stock in the production function is treated as fixed. The long-run, on the other hand, is defined as the period during which the amount of capital stock varies and a firm must consider its optimal investment behavior. This is because choosing the optimal capital stock from the perspective of cost minimization is mathematically identical to choosing the optimal allocation of investment in a given period. In macroeconomics, the terms short-run and long-run have different meanings: the short-run is the period during which the process of price adjustment continues, and the long-run is the period when price adjustments are completed. To avoid confusion, it is important to bear in mind that the two terms are defined differently in microeconomics and macroeconomics.
Let’s now consider the linear homogeneity of the production function. As an example, say there is a factory that produces 10 units per month using 10 laborers and 10 units of capital. When building a new factory that has the same amount of laborers and capital as the existing factory, we might assume that 10 units of output will be produced in the new factory. This illustrates the concept of linear homogeneity: when all the factor inputs are doubled, the amount of production also doubles. It is also necessary to consider the economic meaning of linear homogeneity as a theoretical concept aside from its importance for estimation techniques. Vilfredo Pareto used the example of “Another Paris” to explain linear homogeneity. Consider a person who lives in Paris. If there is another Paris where the same circumstances exist as in the Paris where he currently lives, and he moves to this other Paris, he will continue to have the same lifestyle. We will observe the same behavior in both the true Paris and the alternative Paris because all of the characteristics and circumstances of the former are replicated in the latter. This example suggests the possibility of a linear homogeneous production frontier. When one firm allocates capital stock and human resources just as another firm does, we assume that the amount of output is the same for the two firms. In
mathematical form, the production function is a relationship between output and inputs depicted as: Q = f(K, N)
(12)
where Q is the quantity produced, K is the amount of capital stock, and N is the number of laborers. If each input is multiplied by λ as: f(λK, λN)
(13)
the amount of output would become λQ. This is the mathematical representation of the linear homogeneous production function. When labor and capital are increased at the same rate λ, there are three possibilities. One is “Another Paris,” meaning that the amount of product becomes λQ, as if there were λ firms of similar size. A second is that the amount of production is greater than λQ: due to increasing returns to scale, the total amount of production increases more than λ times. The third possibility is that the amount of production is less than λQ: because of restrictions imposed by inputs other than capital and labor, efficiency is reduced. In this case, not all factors of production are increased at the rate λ. Beyond K and N, the factors of production include technological progress, the efficiency parameter (φ(N)), and space (S); the efficiency parameter and space are not increased by λ. The validity of the linear homogeneous production function is testable by analyzing data. Finally, linear homogeneity corresponds to constant returns to scale. When output increases less than proportionately with the inputs, we speak of decreasing returns to scale; when it increases more than proportionately, of increasing returns to scale. In general: λ^k q = f(λx1, λx2, …, λxm)
(14)
This is called homogeneity of degree k.
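The degree of homogeneity can be checked numerically. A minimal sketch with hypothetical Cobb–Douglas parameters, using f(λK, λN) = λ^(β1+β2) f(K, N):

```python
import math

def f(K, N, A=1.0, b1=0.2, b2=0.8):
    """Cobb-Douglas technology Q = A * K**b1 * N**b2."""
    return A * K ** b1 * N ** b2

lam, K, N = 3.0, 10.0, 10.0
# Recover the degree k from f(lam*K, lam*N) = lam**k * f(K, N).
degree = math.log(f(lam * K, lam * N) / f(K, N)) / math.log(lam)
assert abs(degree - 1.0) < 1e-9     # beta1 + beta2 = 1: linear homogeneity

# With beta1 = 0.3, beta2 = 0.9 (as in Table 3.3, case 1) the degree is 1.2.
degree_irs = (math.log(f(lam * K, lam * N, b1=0.3, b2=0.9)
                       / f(K, N, b1=0.3, b2=0.9)) / math.log(lam))
assert abs(degree_irs - 1.2) < 1e-9
```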
3.2 Models

In the case of competitive markets for labor and capital, the behavioral principle for firms is to determine the amounts of labor and capital that minimize the cost of labor and capital while maintaining a constant level of production. In the goods and services market, firms determine the amount of production that maximizes profits. Cost minimization is a necessary condition for profit maximization. We use the Cobb–Douglas production function in our model here: Q = AK^β1N^β2
(15)
where Q is the quantity of output, N is labor input, K is capital input, and A, β1, and β2 are parameters to be estimated. There are two methods to estimate A, β1, and β2. The first is to estimate the production function directly, as specified in equation (15). The second is to derive the parameters of the production function from estimates of the cost function. In estimating the cost function, we use the cost minimization principle to incorporate market conditions. The assumption of cost minimization is that the firm determines the combination of labor and capital inputs that minimizes cost at a given level of quantity produced. Therefore, a constant quantity (Q0), wages (w), and interest rates (r) are treated as exogenous variables. The definition of cost is: C = wN + rK
(16)
We describe cost minimization behavior under a constant level of production in mathematical form as: V = wN + rK + λ(Q0 - AK^β1N^β2)
(17)
To get the optimal conditions for N, K, and λ, we derive the first-order conditions for minimization through the following equations:
∂V/∂N = w - λβ2AK^β1N^(β2-1) = 0
∂V/∂K = r - λβ1AK^(β1-1)N^β2 = 0
∂V/∂λ = Q0 - AK^β1N^β2 = 0
(18)
From the equations in (18), we get the first-order condition as: wN/β2 = rK/β1
(19)
This is the law of equal marginal productivity per dollar for the Cobb–Douglas production function. The labor and capital demand functions are obtained from the equations in (18) and (19) as:
N = (1/A)^(1/(β1 + β2)) (β1/β2)^(-β1/(β1 + β2)) (w/r)^(-β1/(β1 + β2)) Q0^(1/(β1 + β2))
K = (1/A)^(1/(β1 + β2)) (β1/β2)^(β2/(β1 + β2)) (w/r)^(β2/(β1 + β2)) Q0^(1/(β1 + β2))
(20)
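A quick numerical check (parameter and price values assumed purely for illustration) that the closed-form demands in equation (20) reproduce the target output and satisfy condition (19):

```python
# Assumed values for A, beta1, beta2, w, r, Q0 (not taken from the text).
A, b1, b2 = 1.0, 0.2, 0.8
w, r, Q0 = 2.0, 0.1, 100.0
s = b1 + b2

# Factor demands from equation (20).
N = (1/A)**(1/s) * (b1/b2)**(-b1/s) * (w/r)**(-b1/s) * Q0**(1/s)
K = (1/A)**(1/s) * (b1/b2)**(b2/s) * (w/r)**(b2/s) * Q0**(1/s)

assert abs(A * K**b1 * N**b2 - Q0) < 1e-6   # production constraint holds
assert abs(w*N/b2 - r*K/b1) < 1e-6          # equal marginal productivity per dollar (19)
```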
Estimating equations (20) by the regression method yields the estimates of the Cobb–Douglas production function. The causal order is from w, r, and Q0 to N and K. As described above, the causal order is different for the production and cost functions. We will now specify the cost function and estimate its parameters. In the case of variable levels of production, we use the following equations:
N = constant1 (w/r)^(-β1/(β1 + β2)) Q^(1/(β1 + β2))
K = constant2 (w/r)^(β2/(β1 + β2)) Q^(1/(β1 + β2))
(21)
Taking logarithms of both sides of equations (21):
n = log(constant1) - [β1/(β1 + β2)] log(w/r) + [1/(β1 + β2)] q
k = log(constant2) + [β2/(β1 + β2)] log(w/r) + [1/(β1 + β2)] q
(22)
Therefore, when 1/(β1 + β2) = K0, -β1/(β1 + β2) = K1, and β2/(β1 + β2) = K2, we get β1 = -K1/K0 and β2 = K2/K0. In this way the structural parameters β1 and β2 are estimated in the case of variable levels of production. When using the Cobb–Douglas production function, the cost function is written as: C = (1/A)^(1/(β1 + β2)) [(β1/β2)^(-β1/(β1 + β2)) (w/r)^(-β1/(β1 + β2)) w + (β1/β2)^(β2/(β1 + β2)) (w/r)^(β2/(β1 + β2)) r] Q^(1/(β1 + β2))
(23)
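With the same assumed values as before, one can confirm that the cost function (23), obtained by substituting the demands (20) into C = wN + rK, agrees with the definition of cost:

```python
# Assumed values (not from the text).
A, b1, b2 = 1.0, 0.2, 0.8
w, r, Q = 2.0, 0.1, 100.0
s = b1 + b2

# Cost-minimizing factor demands, equation (20).
N = (1/A)**(1/s) * (b1/b2)**(-b1/s) * (w/r)**(-b1/s) * Q**(1/s)
K = (1/A)**(1/s) * (b1/b2)**(b2/s) * (w/r)**(b2/s) * Q**(1/s)
C_def = w * N + r * K                        # definition of cost

# Closed-form cost function, equation (23).
C_closed = (1/A)**(1/s) * ((b1/b2)**(-b1/s) * (w/r)**(-b1/s) * w
                           + (b1/b2)**(b2/s) * (w/r)**(b2/s) * r) * Q**(1/s)
assert abs(C_def - C_closed) < 1e-6
```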
It is a highly nonlinear function including A, β1, β2, and wage and interest rates. To estimate equation (23), linear approximation is necessary, as it is difficult to estimate equation (23) in the nonlinear form to get the estimates of A, β1, β2. Generally, to estimate the cost function, we specify it as a function of the wage rate, capital costs, and quantity, and then estimate it directly. First, we specify the production function in general as: Q = f(N, K)
(24)
Again, the definition of cost is: C = Nw + Kr
(25)
Then, under the condition of cost minimization, capital demand and labor demand functions are derived as the function of wage, capital costs and quantity as: K = K (w, r, Q) N = N (w, r, Q)
(26)
Adding equations (26) to the cost definition, we get: C = Nw + Kr = C(w, r, Q)
(27)
Thus, cost is a function of wages, unit capital cost and quantity produced. This is the general cost function. In Section 3.4.4 we estimate cost functions.
3.3 How to generate a data set by the Monte Carlo method

The Cobb–Douglas-type production function is again specified as: Q = AK^β1N^β2
(28)
Table 3.1 Structural parameters and variables

Case   k            n             α     β1    β2    ε
1      N(15, 4^2)   N(12, 3^2)    1.0   0.2   0.8   N(0, 2^2)
2      N(15, 4^2)   N(12, 0.6^2)  1.0   0.2   0.8   N(0, 2^2)
3      N(15, 4^2)   N(12, 3^2)    1.0   0.2   0.8   N(0, 1^2)
In order to explain the effect of multi-collinearity later in Section 3.4.1, we will be using this production function. Multi-collinearity is a situation in which there is high correlation between independent variables, and there is a correlation between capital input and labor input in the Cobb–Douglas production function. Let’s consider the following estimating equation: qi = α + β1ki + β2ni + εi
(i = 1, 2, …, n)
(29)
Equation (29) is the logarithmic transformation of equation (28) with a disturbance term added, where qi = log(Qi), ni = log(Ni), and ki = log(Ki). Also, εi is assumed to be IIN(0, σ^2). To test for the effect of multi-collinearity, we determine the coefficients α, β1, and β2 a priori. We generate the data of ni and ki from random numbers with correlations between ni and ki. Then, using the values of α, β1, and β2, which are the intercept and the parameters of ki and ni, respectively, the series of qi is calculated according to the changes in the correlation between ni and ki. The realized value of εi is obtained from normal random numbers of constant mean and variance and denoted as ei. Table 3.1 shows the structural parameters α, β1, and β2, the variables k and n, and the random disturbance ε. We consider the correlations between labor and capital in 13 cases: from 0 to 0.9 at intervals of 0.1, plus 0.99, 0.995, and 0.999. A series of qi, the logarithm of Qi, is calculated by the following equation: qi = α + β1ki + β2ni + ei
(30)
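The data-generating step can be sketched as follows (the seed and sample size are assumptions; parameters are those of case 1 in Table 3.1):

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, rho = 100, 0.9
alpha, beta1, beta2 = 1.0, 0.2, 0.8

# Draw (k, n) as correlated normals: k ~ N(15, 4^2), n ~ N(12, 3^2), corr = rho.
cov = [[4.0 ** 2, rho * 4.0 * 3.0],
       [rho * 4.0 * 3.0, 3.0 ** 2]]
k, n = rng.multivariate_normal([15.0, 12.0], cov, size=n_obs).T

e = rng.normal(0.0, 2.0, size=n_obs)      # realized disturbance, N(0, 2^2)
q = alpha + beta1 * k + beta2 * n + e     # equation (30)

assert abs(np.corrcoef(k, n)[0, 1] - rho) < 0.1   # sample correlation near rho
```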
In Table 3.1, we can see that the parameters α, β1, and β2 are the same for the three cases, ki has the same mean and variance in each case, and the variances of ni and εi differ across cases. We now estimate cost functions by using the virtual data shown in Table 3.2, which lists two data sets for α, β1, β2, log(r/w), q, ε1, and ε2. After defining the parameters of production and determining quantity, the relative price of wages and interest rates, and the realized values of the random disturbances, we get the following equations for capital and labor:
k = -[1/(β1 + β2)] logA + [β2/(β1 + β2)] log(β1/β2) + [β2/(β1 + β2)] log(w/r) + [1/(β1 + β2)] q + e1
n = -[1/(β1 + β2)] logA - [β1/(β1 + β2)] log(β1/β2) - [β1/(β1 + β2)] log(w/r) + [1/(β1 + β2)] q + e2
(31)
Table 3.2 List of constants and variables

           (1)           (2)
α          1.0           1.0
β1         0.2           0.2
β2         0.8           0.8
q          N(12, 3^2)    N(12, 3^2)
lrw        N(0, 2^2)     N(0, 2^2)
ε1         N(0, 1^2)     N(0, 2^2)
ε2         N(0, 1^2)     N(0, 2^2)
ρε1ε2      0             0
r          [0.01, 0.2]   [0.01, 0.2]
Table 3.2 depicts the values of the constants and the variables, where logA = α and lrw = log(r/w). For the disturbance terms, two cases are considered for the variances of the random disturbances. The data sets for q, log(r/w), n, and k are obtained by the above method. Using the data of k and n, the cost data is obtained by the equation C = w exp(n) + r exp(k). The interest rate (r) is drawn from a uniform distribution between 0.01 and 0.2, and the wage rate (w) is obtained using the data of log(r/w), namely the data of lrw in Table 3.2.

We have explained the process of making virtual data on labor and capital for the cost function. Let us now look at the scatter diagram between cost (C) and quantity (Q). Figure 3.1(a) plots all 100 sample points of C and Q. In the figure there is one point at the upper-right-hand side and a few points in the lower-left-hand corner; the points near the origin overlap. When we enlarge the region near the origin, we obtain the scatter in Figure 3.1(b): there are many points near the lower-left-hand corner. Figure 3.1(c) shows all 100 points with C and Q in logarithmic form. When looking at a real data set from the Census of Industry, sometimes there is one point far from the origin. That point represents a large-scale firm in an industry, while there are many small or medium-size firms near the origin. With the double logarithmic scale of logQ and logC indicated in Figure 3.1(c), it is easy to see all the points in one figure.

In the above example, the virtual data was generated under the assumption of a linear homogeneous production function, so economies of scale were excluded. Now we consider economies of scale. After assuming economies of scale in the production function, we test whether or not the scale effect is captured by the estimates.
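The cost-data construction described above can be sketched as follows (the seed and sizes are assumptions; A = e so that α = logA = 1, as in Table 3.2):

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs = 100
A, b1, b2 = np.e, 0.2, 0.8           # alpha = log(A) = 1.0
s = b1 + b2

q   = rng.normal(12.0, 3.0, n_obs)   # log of quantity
lrw = rng.normal(0.0, 2.0, n_obs)    # log(r/w)
e1  = rng.normal(0.0, 1.0, n_obs)
e2  = rng.normal(0.0, 1.0, n_obs)
r   = rng.uniform(0.01, 0.2, n_obs)  # interest rate
w   = r / np.exp(lrw)                # wage recovered from lrw = log(r/w)

lwr = -lrw                           # log(w/r), as used in equation (31)
k = -np.log(A)/s + (b2/s)*np.log(b1/b2) + (b2/s)*lwr + q/s + e1
n = -np.log(A)/s - (b1/s)*np.log(b1/b2) - (b1/s)*lwr + q/s + e2

C = w * np.exp(n) + r * np.exp(k)    # cost data
assert np.all(C > 0)
```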
Table 3.3 shows the data set. Case (1) sets the parameters β1 and β2 as 0.3 and 0.9, respectively, indicating β1 + β2 = 1.2. This is an example of economies of scale. In case (2), the parameters β1 and β2 are 0.2 and 0.7, respectively, indicating β1 + β2 = 0.9. This is an example of decreasing returns to scale.
Figure 3.1 (a) Production and cost; (b) production and cost (enlarged map); (c) production and cost (logarithmic form). [Figure not reproduced: scatter plots of cost against production.]
Table 3.3 Virtual data sets

           (1)           (2)
α          1.0           1.0
β1         0.3           0.2
β2         0.9           0.7
q          N(12, 3^2)    N(12, 3^2)
lrw        N(0, 2^2)     N(0, 2^2)
ε1         N(0, 1^2)     N(0, 1^2)
ε2         N(0, 1^2)     N(0, 1^2)
ρε1ε2      0             0
r          [0.01, 0.2]   [0.01, 0.2]
3.4 Examples

3.4.1 Multi-collinearity

It is well known that because of multi-collinearity, which occurs when there is high correlation between independent variables, estimates based on regression analysis are unstable. This makes it difficult to determine the influence of these independent variables on the dependent variable. We consider the mathematical background of multi-collinearity using the following model: yi = β0 + β1x1i + β2x2i + εi
(i = 1, 2, …, n)
(32)
The structural parameters β0, β1, and β2 are estimated by the ordinary least-squares (OLS) method, and the estimates of the structural parameters are denoted by b0, b1, and b2, respectively. The first-order conditions are:
∂∑i ei^2/∂b0 = 0
∂∑i ei^2/∂b1 = 0
∂∑i ei^2/∂b2 = 0
(33)
where ei is residual as: ei = yi - (b0 + b1x1i + b2x2i)
(34)
The first-order conditions of (33) become:
∂∑i ei^2/∂b0 = 2∑i (yi - b0 - b1x1i - b2x2i)(-1) = 0
∂∑i ei^2/∂b1 = 2∑i (yi - b0 - b1x1i - b2x2i)(-x1i) = 0
∂∑i ei^2/∂b2 = 2∑i (yi - b0 - b1x1i - b2x2i)(-x2i) = 0
In order to solve for b0, b1, and b2, we introduce the following notation: Syxk = ∑i yixki - (1/n)∑i yi ∑i xki
(k = 1, 2)
(35)
Sxjxk = ∑i xjixki - (1/n)∑i xji ∑i xki
(j, k = 1, 2)
(36)
Now we can get the values of b0, b1, and b2 as:
b0 = (1/n)∑i yi - b1(1/n)∑i x1i - b2(1/n)∑i x2i
b1 = (Syx1Sx2x2 - Syx2Sx1x2)/(Sx1x1Sx2x2 - Sx1x2Sx1x2)
b2 = (Syx2Sx1x1 - Syx1Sx1x2)/(Sx1x1Sx2x2 - Sx1x2Sx1x2)
(37)
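A sketch (simulated data, assumed seed) verifying the closed-form expressions (37) against a direct least-squares fit:

```python
import numpy as np

rng = np.random.default_rng(11)
n_obs = 200
x1 = rng.normal(0.0, 1.0, n_obs)
x2 = rng.normal(0.0, 1.0, n_obs)
y = 1.0 + 0.5 * x1 + 2.0 * x2 + rng.normal(0.0, 1.0, n_obs)

def S(u, v):
    """Deviation-from-mean cross moment, as in (35) and (36)."""
    return u @ v - u.sum() * v.sum() / n_obs

den = S(x1, x1) * S(x2, x2) - S(x1, x2) ** 2
b1 = (S(y, x1) * S(x2, x2) - S(y, x2) * S(x1, x2)) / den
b2 = (S(y, x2) * S(x1, x1) - S(y, x1) * S(x1, x2)) / den
b0 = y.mean() - b1 * x1.mean() - b2 * x2.mean()

# Reference fit on the full design matrix.
X = np.column_stack([np.ones(n_obs), x1, x2])
ref = np.linalg.lstsq(X, y, rcond=None)[0]
assert np.allclose([b0, b1, b2], ref)
```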
When we divide the numerators and denominators of b1 and b2 in equations (37) by the product Sx1x1Sx2x2, we can rewrite the equations in (37) as:
b1 = ((Syx1Sx2x2 - Syx2Sx1x2)/(Sx1x1Sx2x2))/(1 - r12^2)
b2 = ((Syx2Sx1x1 - Syx1Sx1x2)/(Sx1x1Sx2x2))/(1 - r12^2)
(38)
where r12^2 is the ratio of Sx1x2Sx1x2 to Sx1x1Sx2x2, that is, the square of the correlation coefficient between x1 and x2 (the coefficient of determination from regressing one of the independent variables on the other). The value of r12^2 lies between 0 and 1; when it equals unity, the correlation coefficient is either -1 or 1. When the denominators of equations (38) become 0 (namely, when r12^2 becomes unity), the values of b1 and b2 are indeterminate. The variances of b1 and b2, respectively, are:
var(b1) = σ^2/(Sx1x1(1 - r12^2))
var(b2) = σ^2/(Sx2x2(1 - r12^2))
(39)
As r12^2 approaches 1, the denominators become smaller and the variances of b1 and b2 become larger. Accordingly, the confidence intervals become broader, the t-values become smaller, and the null hypothesis that an estimate is zero becomes difficult to reject. These symptoms are typical of multi-collinearity. The possibility of multi-collinearity is high when the number of independent variables is large, which increases the chance of high correlation between two of them.

We explain the symptoms of multi-collinearity using a production function. The production function describes the production process where output is denoted as Q and inputs are labor (N) and capital (K). The independent variables are N and K, and the dependent variable is Q; the causal order runs from N and K to Q. Table 3.1 in Section 3.3 depicts the structural parameters α, β1, and β2, the variables k and n, and the random disturbance ε. In Table 3.1, the parameters α, β1, and β2 are the same for the three cases, ki has the same mean and variance in each case, and the variances of ni and εi differ across cases. In accordance with the differences in the variances of ni, we observe differences in the estimates of b1 and b2. Between cases 1 and 3, the variance of εi differs, and there are also differences in the estimates of b1 and b2. These results are indicated in Table 3.4.
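The variance inflation in (39) can be illustrated by simulation (seed and sample size are assumptions): as ρ approaches 1, the standard error of b1 inflates, so its t-value collapses:

```python
import numpy as np

def se_b1(rho, n_obs=100, seed=42):
    """Standard error of b1 in q = a + b1*k + b2*n + e for a given corr(k, n)."""
    rng = np.random.default_rng(seed)
    cov = [[16.0, rho * 12.0], [rho * 12.0, 9.0]]     # sd(k) = 4, sd(n) = 3
    k, n = rng.multivariate_normal([15.0, 12.0], cov, size=n_obs).T
    q = 1.0 + 0.2 * k + 0.8 * n + rng.normal(0.0, 2.0, n_obs)
    X = np.column_stack([np.ones(n_obs), k, n])
    b = np.linalg.lstsq(X, q, rcond=None)[0]
    resid = q - X @ b
    s2 = resid @ resid / (n_obs - 3)                  # residual variance
    return float(np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1]))

assert se_b1(0.999) > se_b1(0.9) > se_b1(0.0)   # se(b1) grows with rho
```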
Table 3.4 has two panels for each of cases 1 to 3. The first panel indicates the estimates of β1 and β2, their t-values, and the coefficient of determination. The second panel indicates the P-values for the four null hypotheses, namely:
H0: β1 + β2 = 1
H0: β1 = 0.2 and β2 = 0.8
H0: β1 = 0.2
H0: β2 = 0.8
When we look at the estimates b1 and b2, b2 is negative in case 1 when the correlation between the independent variables, indicated by ρ, is greater than 0.995. In case 2, b1 is negative when the value of ρ is 0.99, and b2 is negative when the value of ρ is greater than 0.995. In case 3, b1 or b2 is negative when the value of ρ is greater than 0.995. Looking at the t-values in case 1, the t-value of b1 is 1.7 when the value of ρ is 0.9 and the t-value of b1 is 0.2 when the value of ρ is 0.99, indicating that b1 is not significant at the 5 percent level of significance. In case 2, the estimate of b2 looks strange even when the value of ρ is 0. In case 3, the t-values look small when the value of ρ is greater than 0.99. Let's now consider the cases in which the t-value is smaller than 2. In case 1, the t-value of b1 is 1.7 when ρ = 0.9; the t-values are less than 2 for the estimate b1 for ρ = 0.9, b2 for ρ = 0.995, and b2 for ρ = 0.999. In case 2, the t-values of the estimate b2 are less than 2 for ρ = 0.2, ρ = 0.5, ρ = 0.7, and larger values of ρ. In case 3, the t-value of b1 for ρ = 0.99 is 0.1, that of b1 for ρ = 0.995 is 0.6, and that of b1 for ρ = 0.999 is 0.2. Based on these findings, it is reasonable to assume that the possibility of multi-collinearity is small when the correlation coefficient between the two independent variables is smaller than the multiple correlation coefficient of the regression equation. This tendency was proposed by L. R. Klein and is called Klein's rule.
In case 1, the multiple correlation coefficient of the regression is about 0.8, because its square, the coefficient of determination, is about 0.6. The corresponding correlation coefficient is about 0.5 in case 2, and about 0.95 in case 3. In case 1, the t-values of the estimates become smaller when the value of ρ is greater than 0.9. In case 2, the t-values of b2 are less than 2 for ρ = 0.2 and for most larger values of ρ. Finally, in case 3, the t-values of b1 are less than 2 from ρ = 0.99. These findings are consistent with Klein's rule. In evaluating the t-values among these three cases, we can also consider another source of multi-collinearity, as seen in case 2: when the variance of the logarithm of N, namely n, becomes smaller, the estimates of the structural parameters become unstable even if the correlation between n and the logarithm of K, namely k, is small. Table 3.4(1.2) indicates the P-values for the four null hypotheses. Some P-values are less than 0.05, and such P-values appear more often as the value of ρ becomes larger. Regarding the test for linear homogeneity, the P-value is less than 0.05 for ρ = 0.3 in case 2. This finding
Table 3.4 Multi-collinearity

(1.1) Case 1: Estimates of b1 and b2, t-values of the parameters, and adjusted coefficients of determination

ρ        b1             b2             R2
0.0      0.154 (2.9)    0.783 (10.0)   0.548
0.1      0.212 (4.5)    0.890 (11.6)   0.610
0.2      0.167 (3.1)    0.848 (11.0)   0.634
0.3      0.225 (4.1)    0.765 (11.3)   0.686
0.4      0.224 (3.0)    0.840 (10.2)   0.582
0.5      0.116 (2.9)    0.833 (10.2)   0.677
0.6      0.258 (3.6)    0.703 (8.3)    0.687
0.7      0.187 (2.5)    0.857 (9.4)    0.722
0.8      0.225 (3.0)    0.628 (6.1)    0.668
0.9      0.190 (1.7)    0.767 (5.5)    0.760
0.99     0.008 (2.5)    0.921 (2.0)    0.747
0.995    1.20 (2.0)     -0.55 (0.6)    0.675
0.999    2.67 (2.2)     -2.45 (1.5)    0.689

(1.2) Case 1: P-values

ρ        H0: β1+β2=1   H0: β1=0.2, β2=0.8   H0: β1=0.2   H0: β2=0.8
0.0      [0.476]       [0.655]              [0.396]      [0.828]
0.1      [0.252]       [0.480]              [0.797]      [0.237]
0.2      [0.837]       [0.754]              [0.549]      [0.531]
0.3      [0.899]       [0.842]              [0.638]      [0.611]
0.4      [0.493]       [0.790]              [0.737]      [0.624]
0.5      [0.997]       [0.837]              [0.559]      [0.678]
0.6      [0.563]       [0.524]              [0.414]      [0.257]
0.7      [0.484]       [0.762]              [0.858]      [0.524]
0.8      [0.020]       [0.068]              [0.728]      [0.091]
0.9      [0.484]       [0.740]              [0.931]      [0.815]
0.99     [0.875]       [0.932]              [0.764]      [0.789]
0.995    [0.094]       [0.234]              [0.094]      [0.089]
0.999    [0.047]       [0.113]              [0.038]      [0.040]

(2.1) Case 2: Estimates of b1 and b2, t-values of the parameters, and adjusted coefficients of determination

ρ        b1             b2             R2
0.0      0.199 (4.1)    1.271 (4.1)    0.259
0.1      0.207 (3.8)    0.784 (2.1)    0.164
0.2      0.127 (2.8)    0.454 (1.5)    0.091
0.3      0.155 (3.0)    1.553 (4.6)    0.251
0.4      0.174 (3.1)    0.893 (2.4)    0.207
0.5      0.215 (3.6)    0.445 (1.0)    0.172
0.6      0.190 (3.1)    0.889 (2.5)    0.231
0.7      0.200 (2.7)    0.919 (1.9)    0.210
0.8      0.347 (4.9)    0.126 (0.2)    0.438
0.9      0.175 (1.6)    0.826 (1.1)    0.281
0.99     -0.101 (0.2)   2.40 (1.0)     0.206
0.995    0.933 (1.5)    -3.99 (0.9)    0.319
0.999    0.668 (0.6)    -2.40 (0.3)    0.274

(2.2) Case 2: P-values

ρ        H0: β1+β2=1   H0: β1=0.2, β2=0.8   H0: β1=0.2   H0: β2=0.8
0.0      [0.128]       [0.312]              [0.999]      [0.128]
0.1      [0.983]       [0.989]              [0.886]      [0.966]
0.2      [0.149]       [0.095]              [0.106]      [0.239]
0.3      [0.033]       [0.070]              [0.384]      [0.025]
0.4      [0.845]       [0.895]              [0.640]      [0.798]
0.5      [0.418]       [0.720]              [0.799]      [0.419]
0.6      [0.806]       [0.966]              [0.873]      [0.797]
0.7      [0.779]       [0.947]              [0.993]      [0.799]
0.8      [0.227]       [0.094]              [0.036]      [0.170]
0.9      [0.996]       [0.899]              [0.822]      [0.971]
0.99     [0.506]       [0.350]              [0.383]      [0.485]
0.995    [0.254]       [0.383]              [0.225]      [0.250]
0.999    [0.658]       [0.887]              [0.667]      [0.659]

(3.1) Case 3: Estimates of b1 and b2, t-values of the parameters, and adjusted coefficients of determination

ρ        b1             b2             R2
0.0      0.167 (7.2)    0.843 (31.1)   0.912
0.1      0.242 (8.7)    0.803 (24.3)   0.891
0.2      0.187 (6.6)    0.767 (20.6)   0.847
0.3      0.234 (7.7)    0.765 (19.8)   0.901
0.4      0.152 (5.7)    0.876 (20.9)   0.891
0.5      0.212 (7.6)    0.799 (21.2)   0.879
0.6      0.191 (7.6)    0.849 (24.1)   0.932
0.7      0.203 (5.9)    0.795 (19.4)   0.900
0.8      0.182 (4.0)    0.823 (14.8)   0.923
0.9      0.147 (2.4)    0.814 (9.5)    0.879
0.99     -0.024 (0.1)   1.110 (4.6)    0.893
0.995    0.171 (0.6)    0.814 (2.4)    0.921
0.999    -0.121 (0.2)   1.209 (1.5)    0.902

(3.2) Case 3: P-values

ρ        H0: β1+β2=1   H0: β1=0.2, β2=0.8   H0: β1=0.2   H0: β2=0.8
0.0      [0.771]       [0.096]              [0.155]      [0.108]
0.1      [0.241]       [0.277]              [0.122]      [0.966]
0.2      [0.271]       [0.543]              [0.644]      [0.378]
0.3      [0.979]       [0.500]              [0.259]      [0.364]
0.4      [0.414]       [0.120]              [0.078]      [0.067]
0.5      [0.754]       [0.892]              [0.652]      [0.986]
0.6      [0.141]       [0.294]              [0.736]      [0.156]
0.7      [0.963]       [0.992]              [0.920]      [0.904]
0.8      [0.845]       [0.912]              [0.695]      [0.672]
0.9      [0.346]       [0.257]              [0.381]      [0.862]
0.99     [0.218]       [0.431]              [0.208]      [0.196]
0.995    [0.861]       [0.750]              [0.910]      [0.965]
0.999    [0.659]       [0.722]              [0.581]      [0.599]
suggests that the t-value becomes smaller not only when the correlation coefficient between the independent variables is higher, but also when the variance of an independent variable is smaller.

3.4.2 Linear homogeneous production function

Now let us look at the results when linear homogeneity is imposed. In the Cobb–Douglas specification, the linear homogeneous form becomes: Q = AK^β1L^(1-β1)
(40)
This is then rewritten as: Q/L = A(K/L)^β1
(41)
The estimating equation of equation (41) is: qi - ni = α + β1(ki - ni) + εi
(i = 1, 2, …, n)
(42)
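A sketch of the restricted regression (42) on simulated data generated with β1 + β2 = 1 (seed and sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
n_obs = 100
k = rng.normal(15.0, 4.0, n_obs)
n = rng.normal(12.0, 3.0, n_obs)
q = 1.0 + 0.2 * k + 0.8 * n + rng.normal(0.0, 1.0, n_obs)   # beta1 + beta2 = 1

# Under linear homogeneity, q - n = alpha + beta1*(k - n) + eps.
X = np.column_stack([np.ones(n_obs), k - n])
a_hat, b1_hat = np.linalg.lstsq(X, q - n, rcond=None)[0]

assert abs(b1_hat - 0.2) < 0.15     # close to the true beta1
```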
The estimate of β1, namely b1, is obtained by the OLS method. Table 3.5 indicates the estimation results under the linear homogeneity restriction. When the restriction is imposed, the fit becomes poorer: due to a large residual variance, the confidence interval becomes broader. Therefore, there is a possibility that the null hypothesis of linear homogeneity is not rejected. As in Table 3.4, there is a possibility of multi-collinearity in case 2. Although the correlation coefficient between n and k is small, the variance of the logarithm of N, namely n, is small, and the estimate is not stable. From Table 3.5 we can see that the restriction imposed by linear homogeneity is useful.

3.4.3 Estimating factor demand functions

Table 3.2 in Section 3.3 shows the values of the constants and the variables, where logA = α and lrw = log(r/w). For the disturbance terms, two cases are considered for the variances of the random disturbances. The data sets for q, log(r/w), n, and k are obtained by the method described there. Using these data sets, the labor demand function, the capital demand function, and the production function are estimated. The results are shown in Table 3.6. The capital demand function is specified as: k = B0 + B1 log(r/w) + B2q + ε
(43)
where B0 = -α/(β1 + β2) - β2/(β1 + β2)log(β2/β1), B1 = -β2/(β1 + β2), and B2 = 1/(β1 + β2). In Table 3.6(a), the constant is -2.611, the coefficient of relative
Table 3.5 Linear homogeneous production function

Case 1:
ρ        b1            R2        H0: β1 = 0.2
0.0      0.172 (3.6)   0.112     [0.559]
0.1      0.184 (4.5)   0.169     [0.692]
0.2      0.163 (3.2)   0.089     [0.468]
0.3      0.228 (4.5)   0.167     [0.566]
0.4      0.196 (3.2)   0.085     [0.960]
0.5      0.166 (2.9)   0.073     [0.550]
0.6      0.268 (3.8)   0.112     [0.326]
0.7      0.184 (2.7)   0.060     [0.815]
0.8      0.212 (2.7)   0.063     [0.874]
0.9      0.166 (1.6)   0.016     [0.738]
0.99     0.144 (0.9)   -0.0019   [0.731]
0.995    0.266 (1.2)   0.0051    [0.758]
0.999    0.347 (1.5)   0.013     [0.518]

Case 2:
ρ        b1             R2       H0: β1 = 0.2
0.0      0.193 (4.0)    0.132    [0.897]
0.1      0.207 (3.8)    0.122    [0.885]
0.2      0.126 (2.7)    0.064    [0.107]
0.3      0.154 (2.9)    0.072    [0.387]
0.4      0.177 (3.3)    0.094    [0.666]
0.5      0.202 (3.5)    0.104    [0.969]
0.6      0.194 (3.28)   0.095    [0.926]
0.7      0.210 (3.3)    0.091    [0.863]
0.8      0.282 (6.1)    0.271    [0.071]
0.9      0.176 (3.3)    0.095    [0.644]
0.99     0.125 (2.1)    0.035    [0.196]
0.995    0.247 (4.1)    0.137    [0.431]
0.999    0.187 (3.2)    0.088    [0.832]

Case 3:
ρ        b1            R2       H0: β1 = 0.2
0.0      0.162 (9.4)   0.469    [0.031]
0.1      0.225 (9.5)   0.478    [0.276]
0.2      0.202 (8.2)   0.401    [0.932]
0.3      0.234 (8.0)   0.393    [0.236]
0.4      0.149 (5.6)   0.238    [0.058]
0.5      0.209 (8.2)   0.400    [0.717]
0.6      0.186 (7.4)   0.353    [0.592]
0.7      0.203 (6.2)   0.276    [0.907]
0.8      0.182 (4.0)   0.137    [0.702]
0.9      0.124 (2.2)   0.039    [0.175]
0.99     0.161 (1.6)   0.018    [0.686]
0.995    0.131 (1.4)   0.009    [0.458]
0.999    0.131 (1.2)   0.006    [0.497]

Note: Figures in parentheses are t-values and those in brackets are P-values.
price is -0.76, and the coefficient of production is 1.04 based on the estimates of the capital demand function. When the price of capital increases, capital demand decreases, indicating that the price term is negative, meaning the sign condition is satisfied. The coefficient of
Table 3.6 Estimation results

(a) Estimation of capital demand function

                  (1)              (2)
Intercept         -2.611 (7.0)     -2.006 (2.0)
lrw               -0.7642 (16.9)   -0.6524 (5.2)
q                 1.044 (34.7)     1.0024 (12.7)
SE                0.928            2.28
R2                0.938            0.641
D-W               1.93             2.02
LSQ: a            1.639 (4.1)      1.591 (1.6)
b1                0.225 (5.1)      0.3467 (2.6)
b2                0.731 (15.1)     0.6508 (4.6)
H0: β1 + β2 = 1   2.17 [0.140]     0.00093 [0.975]

Note: Figures in parentheses are t-values and those in brackets are P-values.

(b) Estimation of labor demand function

                  (1)              (2)
Intercept         -0.840 (2.0)     -1.378 (1.5)
lrw               0.2274 (4.5)     0.6381 (0.58)
q                 1.0066 (30.0)    1.0332 (14.9)
SE                1.03             2.00
R2                0.902            0.693
D-W               1.95             2.05
LSQ: a            1.111 (2.9)      1.500 (1.9)
b1                0.225 (4.4)      0.0617 (0.7)
b2                0.767 (13.5)     0.906 (7.6)
H0: β1 + β2 = 1   0.0394 [0.842]   0.230 [0.631]

Note: Figures in parentheses are t-values and those in brackets are P-values.

(c) Direct estimation of the Cobb–Douglas production function

                  (1)              (2)
Intercept         2.13 (7.3)       3.582 (7.9)
logK              0.291 (8.2)      0.3024 (7.2)
logL              0.622 (15.6)     0.494 (11.2)
SE                0.81             1.30
R2                0.93             0.80
D-W               1.90             1.62

Note: Figures in parentheses are t-values.
production is 1.04, which is close to unity. This is consistent with the assumption that the structural parameters of the production function satisfy linear homogeneity. Transforming the regression coefficients B0, B1, and B2 into the structural parameters b1 and b2 of the Cobb–Douglas production function, we get b1 = 0.22,
b2 = 0.73, and the P-value for the null hypothesis of β1 + β2 = 1 is 0.14, indicating that the null hypothesis for linear homogeneity is not rejected. Similarly, the estimating equation of the labor demand function is: n = C0 + C1 log(r/w) + C2q + ε
(44)
where C0 = -α/(β1 + β2) - β1/(β1 + β2) log(β1/β2), C1 = β1/(β1 + β2), and C2 = 1/(β1 + β2). The estimates of the labor demand function indicate that the intercept is -0.84, the coefficient of the relative price between capital costs and wages is 0.22, and the coefficient of production is 1.00. These results indicate that the coefficient of relative prices satisfies the theoretical sign condition, and the coefficient of production is reasonably estimated. Transforming them into the parameters of the production function, we get the estimates b1 = 0.22 and b2 = 0.76. The null hypothesis of linear homogeneity is not rejected, with a P-value of 0.84. Table 3.6(c) shows the results of estimating the production function directly.

Next, we examine the second case of Table 3.6. The difference from the first case is that the variances of labor and capital are larger, as generated by equation (31) in Section 3.3. Due to this large variance, the standard error and the coefficient of determination are affected: the coefficient of determination is smaller than in the first case for both the capital and labor demand functions.

3.4.4 Estimating cost functions

The following regression equations are estimated using a virtual data set:
Model I: C = B0 + B1w + B2r + B3wQ + B4rQ + B5Q + B6Q^2
Model II: logC = C0 + C3 logQ
Model III: logC = C0 + C1 logw + C2 logr + C3 logQ
Model I is a linear cost function that includes the quadratic term Q^2, which allows a test of constant returns to scale: if B6 is positive, marginal cost is increasing. Models II and III specify the cost function in logarithmic form. The difference between II and III is whether or not prices (wages and interest rates) are included on the right-hand side. Since the coefficient C3 on logQ is a constant, the elasticity of cost with respect to production is constant.
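As a sketch (data generation assumed, following Section 3.3 with β1 + β2 = 1), Model III can be estimated on simulated cost data; the scale coefficient C3 should then be close to unity:

```python
import numpy as np

rng = np.random.default_rng(5)
n_obs = 400
A, b1, b2 = np.e, 0.2, 0.8           # alpha = log(A) = 1, constant returns
s = b1 + b2

q   = rng.normal(12.0, 3.0, n_obs)   # logQ
lrw = rng.normal(0.0, 2.0, n_obs)    # log(r/w)
r   = rng.uniform(0.01, 0.2, n_obs)
w   = r / np.exp(lrw)

# Factor demands (31), with log(w/r) = -lrw and unit-variance disturbances.
k = -np.log(A)/s + (b2/s)*np.log(b1/b2) - (b2/s)*lrw + q/s + rng.normal(0, 1, n_obs)
n = -np.log(A)/s - (b1/s)*np.log(b1/b2) + (b1/s)*lrw + q/s + rng.normal(0, 1, n_obs)
logC = np.log(w * np.exp(n) + r * np.exp(k))

# Model III: logC = C0 + C1*logw + C2*logr + C3*logQ.
X = np.column_stack([np.ones(n_obs), np.log(w), np.log(r), q])
C0, C1, C2, C3 = np.linalg.lstsq(X, logC, rcond=None)[0]

assert abs(C3 - 1.0) < 0.15          # constant returns: scale elasticity near 1
```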
In model III, we explicitly specify the wage and interest rates, indicating that cost is affected not only by the quantity produced but also by the relative price of wages and interest rates. Table 3.7 displays the estimation results. In model I, the coefficient B6 is positive, with increasing marginal cost, in case (1), and negative, with decreasing marginal cost, in case (2); there are also symptoms of heteroskedasticity. In model II(1) the estimate of C3 is 1.0096, and the null hypothesis of constant returns to scale cannot be rejected. In model III(1), the coefficient of the scale factor is 1.018, indicating that the null hypothesis of constant returns to scale is not rejected.
Table 3.7  Estimation results

Model I:             (1)                    (2)
  B0           89763000 (2.2)       -413607000 (0.4)
  B1            -3242.4 (0.1)         -755117 (0.2)
  B2         -283947000 (0.8)       342555000 (0.4)
  B3             0.2516 (5.8)          0.6058 (1.9)
  B4             106.21 (3.9)         -6038.8 (5.8)
  B5             -21.42 (5.6)           874.8 (6.5)
  B6         0.00000009 (16.1)     -0.0000031 (6.2)
  R2              0.999                 0.480
  D-W             1.82                  2.06
  WH              64.2 [0.000]          98.9 [0.000]

Model II:            (1)                    (2)
  C0              1.980 (1.8)           1.406 (1.2)
  C3             1.0096 (11.9)          1.036 (11.5)
  SE              2.615                 2.607
  R2              0.587                 0.571
  D-W             2.13                  2.19
  H0: 1/C3 = 1   [0.908]               [0.676]

Model III:           (1)                    (2)
  C0            -0.6001 (1.1)         -0.3836 (0.3)
  C1              1.139 (22.7)          0.814 (7.5)
  C2             0.2399 (1.6)          -0.253 (0.8)
  C3             1.0181 (30.4)         0.9657 (14.1)
  SE              1.03                  1.96
  R2              0.935                 0.757
  D-W             1.97                  2.03
  H0: 1/C3 = 1   [0.581]               [0.627]

Note: Figures in parentheses are t-values and those in brackets are P-values.
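The cost-function regressions above can be reproduced on simulated data. The sketch below, with illustrative parameter values of our own choosing (intercept 1.5, true C3 = 1, disturbance standard error 2), fits Model II, logC = C0 + C3 logQ, by ordinary least squares and forms the t-statistic for the constant-returns hypothesis; it is a minimal illustration, not the book's exact data-generating process.

```python
# Hedged sketch: fit Model II, log C = C0 + C3 log Q, by OLS on a virtual
# data set and t-test H0: C3 = 1 (equivalently 1/C3 = 1 for C3 > 0).
# The data-generating parameters below are illustrative, not the book's.
import numpy as np

rng = np.random.default_rng(0)
n = 100
logQ = rng.uniform(0.0, 10.0, n)                 # log production
logC = 1.5 + 1.0 * logQ + rng.normal(0, 2.0, n)  # true C3 = 1 plus noise

X = np.column_stack([np.ones(n), logQ])
beta, *_ = np.linalg.lstsq(X, logC, rcond=None)
resid = logC - X @ beta
s2 = resid @ resid / (n - 2)                     # residual variance
cov = s2 * np.linalg.inv(X.T @ X)                # OLS covariance matrix
se_C3 = np.sqrt(cov[1, 1])

t_stat = (beta[1] - 1.0) / se_C3                 # t-statistic for H0: C3 = 1
print(beta, t_stat)
```

With the true C3 equal to one, the t-statistic should be small and the null of constant returns should not be rejected, mirroring the Model II results in Table 3.7.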
3.4.5 Economies of scale

Case (1) in Table 3.3 sets the parameters β1 and β2 to 0.3 and 0.9, respectively, so that β1 + β2 = 1.2. This is an example of economies of scale (increasing returns). In case (2), the parameters β1 and β2 are 0.2 and 0.7, respectively, so that β1 + β2 = 0.9. This is an example of decreasing returns to scale. The variance of the distributions of ε1 and ε2 is 1, corresponding to case (1) in Section 3.3. There are two estimating models:

Model I: logC = C0 + C3 logQ
Model II: logC = C0 + C1 logw + C2 logr + C3 logQ
The scatter in the case of increasing returns to scale is shown in Figure 3.2, and the estimates are displayed in Table 3.8. Looking at model I for increasing returns to scale, the scale elasticity is 0.78, and the null hypothesis of constant returns to
[Figure 3.2 here: scatter of production against cost, both on log scales.]

Figure 3.2  Production and cost for increasing returns to scale.
Table 3.8  Estimation results

Model I:             (1)                    (2)
  C0              2.454 (2.5)          1.5889 (1.6)
  C3             0.7845 (9.7)          1.1087 (13.9)
  SE              2.57                  2.62
  R2              0.489                 0.662
  D-W             2.09                  2.01
  H0: 1/C3 = 1   [0.035]               [0.129]

Model II:            (1)                    (2)
  C0             0.1157 (0.1)          0.1455 (0.2)
  C1              1.114 (19.0)          1.103 (21.4)
  C2             0.1953 (1.2)          0.3504 (2.1)
  C3             0.7965 (22.5)         1.1017 (33.1)
  SE              1.14                  1.09
  R2              0.898                 0.941
  D-W             2.27                  2.38
  H0: 1/C3 = 1   [0.000]               [0.001]

Note: Figures in parentheses are t-values and those in brackets are P-values.
scale (1/C3 = 1) is rejected at the 5 percent significance level, as the P-value is 0.035. In model II, the scale elasticity is 0.796, and the null hypothesis of constant returns to scale is again rejected at 5 percent, the P-value being 0.000.
Similarly, looking at case (2) of decreasing returns to scale, in model I the cost elasticity with respect to production is 1.10, and the null hypothesis of linear homogeneity is not rejected at the 5 percent significance level, the P-value being 0.129. In model II, however, the cost elasticity is 1.10 and the null hypothesis of linear homogeneity of the production function is rejected at the 5 percent significance level, the P-value being 0.001. The reason model II rejects the null hypothesis is the magnitude of the variance: the standard error of model I is 2.62, while it is less than half that, 1.09, for model II. After including the wage and interest rates, the standard error of model II becomes smaller.

3.4.6 The Cobb–Douglas production function as an approximation of the CES production function

Arrow et al. (1961) introduced the constant-elasticity-of-substitution (CES) production function. It is a generalization of the Cobb–Douglas production function. The specification of the CES production function is:

V = γ[δK^(-ρ) + (1 - δ)L^(-ρ)]^(-1/ρ)   (45)
where V, K, and L in equation (45) indicate value added, capital input, and labor input, respectively. The parameters γ, δ, and ρ are the efficiency parameter, distribution parameter, and substitution parameter, respectively. The elasticity of substitution between labor and capital in the Cobb–Douglas production function is unitary, while in the CES production function it is the constant σ = 1/(1 + ρ), which is not necessarily unitary. When ρ goes to zero, the CES production function converges to the Cobb–Douglas production function. Here we introduce the proof in two ways. The first is to use L'Hôpital's rule (cf. Borowski and Borwein 1991, p. 339), and the second is to apply the integration method in accordance with Arrow et al. (1961). From Theorem 3 of Hardy et al. (1952), we get the following transformation:

V/γ = exp[-(1/ρ) log{1 - ρ(δ logK + (1 - δ) logL) + O(ρ²)}] → exp(δ logK + (1 - δ) logL) = K^δ L^(1-δ)   (46)
Therefore, the original CES production function reduces to the Cobb–Douglas production function specified as:

V = γK^δ L^(1-δ)   (47)
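The limit can be checked numerically. In the sketch below, the values γ = 1.2, δ = 0.3, K = 150, and L = 80 are arbitrary illustrations; as ρ shrinks toward zero, the CES output of equation (45) approaches the Cobb–Douglas value γK^δ L^(1-δ).

```python
# Numerical check of the limit: as rho -> 0 the CES function
# V = gamma*[delta*K^(-rho) + (1-delta)*L^(-rho)]^(-1/rho)
# approaches the Cobb-Douglas form V = gamma*K^delta * L^(1-delta).
# gamma, delta, K, L below are arbitrary illustrative values.
gamma, delta = 1.2, 0.3
K, L = 150.0, 80.0

def ces(rho):
    return gamma * (delta * K**(-rho) + (1 - delta) * L**(-rho)) ** (-1 / rho)

cobb_douglas = gamma * K**delta * L**(1 - delta)
for rho in [1.0, 0.1, 0.01, 0.001]:
    print(rho, ces(rho), cobb_douglas)   # CES value climbs toward the CD value
```

For ρ = 1 the CES value lies visibly below the Cobb–Douglas value (with K ≠ L, the weighted power mean is increasing in its order), while for ρ = 0.001 the two agree to several decimal places.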
The second way to introduce the proof is to use the differential equation (9) in the original paper by Arrow et al. (1961, p. 230):

logy = loga + log(y - x(dy/dx))   (48)

where y = V/L and x = K/L. Taking the exponential of both sides of equation (48) gives:

y = a(y - x(dy/dx))

This is then transformed to:

(1/a - 1)y = -x(dy/dx)

Separating the variables and integrating both sides gives:

a/(a - 1) ∫dy/y = ∫dx/x

(noting that -1/(1/a - 1) = a/(a - 1)), and we solve this integration as:

a/(a - 1) logy = logx + C   (49)

where C is the integration constant. We rearrange equation (49) as:

y = cx^((a-1)/a)

Then, substituting y = V/L and x = K/L, it becomes:

V/L = c(K/L)^((a-1)/a)

Finally, we get a familiar specification:

V = cK^((a-1)/a) L^(1/a) = γK^δ L^(1-δ)   (50)
where c = γ and (a - 1)/a = δ. The Cobb–Douglas production function is frequently used in several types of neoclassical growth model, particularly endogenous growth theory. In this section we check the usefulness of the Cobb–Douglas production function as an approximation of the CES production function. The procedure is:

1 to assume that the CES production function is the true production function in an economy;
2 to generate series of capital stock and labor by random numbers after deciding the parameters γ, δ, and ρ of the CES production function;
3 to calculate the series of V using equation (45) with random errors; the CES production function with an error term is:

   V = γ[δK^(-ρ) + (1 - δ)L^(-ρ)]^(-1/ρ) + ε   (51)

4 to estimate the Cobb–Douglas production function using the series of V, K, and L with the specifications of equations (52) and (53) below.

Table 3.9  Parameters for the CES production function

K       N(100, 20²)
L       N(100, 20²)
σ       0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 1.05, 1.1 (8 cases)
σε      0.2, 0.5, 1, 2, 5, 10, 20 (7 cases)
ρL,K    0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.99, 0.995, 0.999 (13 cases)
Table 3.9 shows the settings of the virtual data used for estimating the CES production function. The data for capital and labor are obtained from normal random numbers. We considered eight cases for the elasticity of substitution σ, seven cases for the standard error of the disturbance term, and thirteen cases for the correlation coefficient between capital and labor, as indicated in Table 3.9. After calculating the series of V, K, and L, we conducted two types of estimation of the Cobb–Douglas production function. The first was to estimate the parameters of the Cobb–Douglas production function without imposing linear homogeneity, applying the multiple regression:

logV = a + b1 logK + b2 logL   (52)

The second was to impose linear homogeneity a priori in the estimating equation:

log(V/L) = a + b3 log(K/L)   (53)
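Steps 1 to 4 can be sketched in code. The parameter values below (γ = 1, δ = 0.3, ρ = 0.1, σε = 1, K and L uncorrelated) are one illustrative cell of the design, not the full experiment: the sketch generates CES data with an additive error as in equation (51), fits the unrestricted regression (52), and computes the t-statistic for H0: b1 + b2 = 1.

```python
# Hedged sketch of the Monte Carlo procedure for one assumed design cell.
import numpy as np

rng = np.random.default_rng(1)
n = 200
gamma, delta, rho = 1.0, 0.3, 0.1           # CES parameters; sigma = 1/(1+rho)
K = rng.normal(100, 20, n).clip(min=1.0)    # capital, roughly N(100, 20^2)
L = rng.normal(100, 20, n).clip(min=1.0)    # labor, roughly N(100, 20^2)
V = gamma * (delta * K**(-rho) + (1 - delta) * L**(-rho)) ** (-1 / rho) \
    + rng.normal(0, 1.0, n)                 # equation (51), sigma_eps = 1

# Unrestricted regression: log V = a + b1 log K + b2 log L  (equation (52))
X = np.column_stack([np.ones(n), np.log(K), np.log(L)])
y = np.log(V)
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# t-statistic for H0: b1 + b2 = 1 (linear homogeneity)
resid = y - X @ b
s2 = resid @ resid / (n - 3)
cov = s2 * np.linalg.inv(X.T @ X)
r = np.array([0.0, 1.0, 1.0])
t = (r @ b - 1.0) / np.sqrt(r @ cov @ r)
print(b[1], b[2], t)
```

With ρ close to zero, the fitted b1 lands near the distribution parameter δ = 0.3 and b1 + b2 lands near one, which is the sense in which the Cobb–Douglas form approximates the CES technology.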
We conducted two types of hypothesis testing on the first estimating equation, namely:

H0: b1 + b2 = 1
H0: b1 = 0.3

The first null hypothesis tests linear homogeneity of the Cobb–Douglas production function. The second checks the accuracy of the distribution parameter δ. For the second estimating equation, indicated in equation (53), we set the following null hypothesis:

H0: b3 = 0.3

The third null hypothesis tests linear homogeneity and the accuracy of the distribution parameter simultaneously. Table 3.10 indicates the degree of approximation of the Cobb–Douglas form to the CES production function. Table 3.10 reports the simulation results without
Table 3.10  Degree of approximation by the magnitude of the standard error of the production function and the elasticity of substitution: P-values (case of no correlation between capital and labor)

Columns: σ = 0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 1.05, 1.1

(a) H0: b1 + b2 = 1 (linear homogeneity)
SE = 20    0.78  0.11  0.11  0.64  0.78  0.49  0.80  0.81
SE = 10    0.65  0.08  0.08  0.17  0.09  0.17  0.51  0.27
SE = 5     0.52  0.85  0.85  0.83  0.06  0.47  0.16  0.65
SE = 2     0.36  0.45  0.45  0.95  0.82  0.31  0.86  0.17
SE = 1     0.00* 0.33  0.22  0.05  0.09  0.24  0.07  0.11
SE = 0.5   0.00* 0.09  0.09  0.01  0.41  0.11  0.28  0.85
SE = 0.2   0.00* 0.00* 0.00* 0.45  0.16  0.04* 0.21  0.00*

(b) H0: b1 = 0.3
SE = 20    0.99  0.31  0.17  0.20  0.78  0.54  0.49  0.80
SE = 10    0.87  0.46  0.83  0.17  0.27  0.22  0.77  0.94
SE = 5     0.56  0.33  0.99  0.22  0.33  0.09  0.50  0.03*
SE = 2     0.75  0.61  0.58  0.95  0.91  0.60  0.32  0.59
SE = 1     0.74  0.00* 0.66  0.50  0.97  0.37  0.25  0.45
SE = 0.5   0.00* 0.00* 0.00* 0.00* 0.00* 0.39  0.12  0.72
SE = 0.2   0.00* 0.10  0.03* 0.00* 0.11  0.10  0.54  0.52

(c) H0: b3 = 0.3
SE = 20    0.82  0.89  0.14  0.83  0.48  0.84  0.30  0.89
SE = 10    0.84  0.38  0.35  0.15  0.76  0.78  0.88  0.20
SE = 5     0.13  0.21  0.86  0.20  0.74  0.10  0.07  0.00*
SE = 2     0.13  0.18  0.36  0.60  0.69  0.81  0.26  0.07
SE = 1     0.00* 0.00* 0.22  0.95  0.07  0.94  0.95  0.01*
SE = 0.5   0.00* 0.00* 0.00* 0.01* 0.00* 0.79  0.27  0.46
SE = 0.2   0.00* 0.00* 0.03* 0.11  0.41  0.70  0.78  0.04*

Note: Rows give the standard error of the disturbance term; columns give the elasticity of substitution σ. An asterisk indicates rejection of the null hypothesis at the 5 percent level.
correlation between capital and labor. The cases for σ range from 0.5 to 1.1 (in the Cobb–Douglas case, σ is 1.0), and the standard error of the CES production function ranges from 0.2 to 20. The figures in Table 3.10(a) are the P-values for the null hypothesis that linear homogeneity, specified by the Cobb–Douglas production function using the data derived from the CES production function, is not rejected. Those in Table 3.10(b) are the P-values for the null hypothesis that the distribution parameter b1 of the Cobb–Douglas production function estimated from the CES-generated data equals the distribution parameter of the CES production function. Those in Table 3.10(c) are the P-values
Table 3.11  Degree of approximation by the magnitude of the standard error of the production function, the elasticity of substitution, and the correlation between capital and labor: P-values (case of multicollinearity between capital and labor)

Case: standard error = 10
Columns: ρL,K = 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.99, 0.995, 0.999

(a) H0: b1 + b2 = 1 (linear homogeneity)
σ = 0.5    0.38  0.51  0.00* 0.12  0.00* 0.97  0.35  0.05  0.74  0.37  0.76  0.33  0.14
σ = 0.6    0.90  0.35  0.49  0.05  0.35  0.00* 0.01* 0.16  0.15  0.79  0.55  0.47  0.35
σ = 0.7    0.00* 0.15  0.69  0.21  0.70  0.11  0.80  0.71  0.69  0.94  0.74  0.31  0.13
σ = 0.8    0.04* 0.23  0.97  0.61  0.10  0.67  0.64  0.83  0.29  0.96  0.82  0.36  0.06
σ = 0.9    0.14  0.14  0.15  0.08  0.34  0.97  0.68  0.12  0.69  0.04* 0.81  0.23  0.03*
σ = 0.95   0.63  0.47  0.18  0.94  0.06  0.62  0.17  0.42  0.71  0.47  0.85  0.71  0.97
σ = 1.05   0.81  0.00* 0.21  0.53  0.71  0.18  0.07  0.28  0.97  0.92  0.00* 0.11  0.16
σ = 1.1    0.63  0.45  0.97  0.26  0.96  0.65  0.44  0.03* 0.14  0.27  0.90  0.50  0.75

(b) H0: b1 = 0.3
σ = 0.5    0.77  0.89  0.29  0.47  0.20  0.07  0.20  0.20  0.07  0.50  0.13  0.57  0.42
σ = 0.6    0.83  0.70  0.55  0.21  0.48  0.62  0.06  0.03* 0.00* 0.19  0.39  0.37  0.11
σ = 0.7    0.37  0.24  0.93  0.51  0.65  0.51  0.35  0.64  0.26  0.12  0.76  0.37  0.48
σ = 0.8    0.91  0.31  0.29  0.56  0.45  0.03* 0.92  0.72  0.90  0.73  0.25  0.63  0.01*
σ = 0.9    0.06  0.75  0.08  0.26  0.67  0.27  0.65  0.15  0.82  0.30  0.37  0.40  0.06
σ = 0.95   0.97  0.50  0.09  0.12  0.18  0.72  0.35  0.24  0.16  0.43  0.02* 0.16  0.92
σ = 1.05   0.63  0.16  0.88  0.70  0.89  0.32  0.78  0.14  0.24  0.30  0.01* 0.17  0.40
σ = 1.1    0.84  0.23  0.90  0.75  0.09  0.63  0.76  0.47  0.38  0.57  0.38  0.37  0.97

(c) H0: b3 = 0.3
σ = 0.5    0.85  0.15  0.81  0.92  0.91  0.00* 0.52  0.08  0.00* 0.20  0.38  0.41  0.11
σ = 0.6    0.02* 0.73  0.59  0.91  0.76  0.17  0.26  0.74  0.29  0.11  0.76  0.20  0.84
σ = 0.7    0.02* 0.77  0.17  0.73  0.85  0.02* 0.68  0.76  0.85  0.73  0.21  0.66  0.01*
σ = 0.8    0.26  0.37  0.32  0.97  0.82  0.22  0.77  0.44  0.83  0.24  0.36  0.49  0.07
σ = 0.9    0.70  0.78  0.28  0.07  0.82  0.42  0.71  0.12  0.10  0.50  0.02* 0.15  0.92
σ = 0.95   0.67  0.60  0.57  0.38  0.69  0.61  0.28  0.22  0.20  0.27  0.08  0.27  0.43
σ = 1.05   0.45  0.05  0.88  0.62  0.04  0.45  0.99  0.84  0.72  0.91  0.36  0.35  0.94
σ = 1.1    0.58  0.38  0.16  0.64  0.89  0.03* 0.06  0.62  0.04* 0.46  0.13  0.52  0.39

Case: standard error = 0.2
Columns: ρL,K = 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.99, 0.995, 0.999

(a) H0: b1 + b2 = 1 (linear homogeneity)
σ = 0.5    0.00* 0.00* 0.00* 0.14  0.00* 0.00* 0.00* 0.00* 0.96  0.00* 0.76  0.46  0.57
σ = 0.6    0.01* 0.00* 0.02* 0.01* 0.53  0.00* 0.00* 0.50  0.01* 0.11  0.16  0.05  0.34
σ = 0.7    0.00* 0.00* 0.03* 0.00* 0.01* 0.00* 0.52  0.00* 0.00* 0.19  0.04* 0.00* 0.13
σ = 0.8    0.01* 0.00* 0.08  0.62  0.00* 0.16  0.61  0.08  0.54  0.29  0.47  0.44  0.59
σ = 0.9    0.00* 0.01* 0.08  0.12  0.21  0.00* 0.02* 0.08  0.12  0.86  0.04* 0.03* 0.09
σ = 0.95   0.66  0.67  0.85  0.35  0.38  0.83  0.06  0.89  0.01* 0.00* 0.28  0.39  0.22
σ = 1.05   0.05  0.13  0.00* 0.17  0.30  0.38  0.37  0.80  0.04* 0.65  0.73  0.62  0.56
σ = 1.1    0.00* 0.05  0.01* 0.00* 0.00* 0.75  0.47  0.06  0.75  0.01* 0.74  0.44  0.42

(b) H0: b1 = 0.3
σ = 0.5    0.00* 0.00* 0.11  0.00* 0.00* 0.02* 0.00* 0.01* 0.00* 0.40  0.23  0.69  0.72
σ = 0.6    0.00* 0.35  0.00* 0.13  0.33  0.40  0.00* 0.02* 0.73  0.15  0.23  0.70  0.36
σ = 0.7    0.83  0.14  0.01* 0.00* 0.60  0.02* 0.22  0.00* 0.05  0.07  0.63  0.82  0.68
σ = 0.8    0.16  0.94  0.10  0.03* 0.05  0.09  0.07  0.02* 0.26  0.21  0.65  0.40  0.02*
σ = 0.9    0.06  0.00* 0.01* 0.00* 0.00* 0.00* 0.15  0.53  0.79  0.19  0.12  0.91  0.46
σ = 0.95   0.71  0.76  0.69  0.79  0.51  0.58  0.00* 0.95  0.09  0.21  0.35  0.18  0.19
σ = 1.05   0.89  0.21  0.08  0.37  0.66  0.28  0.57  0.61  0.08  0.90  0.49  0.21  0.35
σ = 1.1    0.01* 0.49  0.85  0.00* 0.00* 0.49  0.47  0.59  0.49  0.01  0.81  0.98  0.05

(c) H0: b3 = 0.3
σ = 0.5    0.00* 0.44  0.00* 0.00* 0.39  0.35  0.20  0.13  0.00* 0.37  0.24  0.74  0.78
σ = 0.6    0.00* 0.00* 0.06  0.99  0.44  0.00* 0.03* 0.00* 0.69  0.07  0.22  0.80  0.44
σ = 0.7    0.00* 0.75  0.20  0.01* 0.03* 0.51  0.06  0.00* 0.32  0.12  0.80  0.56  0.46
σ = 0.8    0.65  0.00* 0.52  0.00* 0.81  0.00* 0.01* 0.07  0.19  0.29  0.66  0.34  0.02*
σ = 0.9    0.46  0.00* 0.09  0.00* 0.02* 0.10  0.56  0.98  0.43  0.17  0.09  0.91  0.40
σ = 0.95   0.38  0.35  0.70  0.40  0.77  0.41  0.00* 0.98  0.55  0.51  0.38  0.19  0.27
σ = 1.05   0.20  0.84  0.99  0.96  0.31  0.44  0.87  0.56  0.21  0.83  0.46  0.20  0.38
σ = 1.1    0.72  0.53  0.01* 0.00* 0.00* 0.53  0.22  0.77  0.54  0.00* 0.75  0.90  0.03*

Note: Rows give the elasticity of substitution σ; columns give the correlation ρL,K between labor and capital. An asterisk indicates rejection of the null hypothesis at the 5 percent level.
regarding the null hypothesis that the distribution parameter of the linear homogeneous specification of the Cobb–Douglas production function is the same as the parameter of the CES production function. When the P-value is smaller than 0.05 (the cases marked with an asterisk), it is difficult to consider the Cobb–Douglas production function as an approximation of the CES production function. The distinction is clear when the elasticity of substitution becomes smaller or the standard error of the disturbance term becomes smaller. Generally speaking, though, we find that the Cobb–Douglas specification approximates the CES production function well. Table 3.11 tests the validity of the parameters of the Cobb–Douglas production function when there is correlation between labor and capital. The findings in the previous table are similar to those in this table, indicating that the usefulness of the Cobb–Douglas production function is high. Thus it is reasonable to use this production function in growth theory and other fields of empirical analysis.
Bibliography

Arrow, K. J., H. B. Chenery, B. S. Minhas, and R. M. Solow (1961) "Capital–labor substitution and economic efficiency," Review of Economics and Statistics, 43, 225–250.
Borowski, E. J., and J. M. Borwein (1991) The HarperCollins Dictionary of Mathematics, New York: HarperCollins.
Christensen, L. R., and W. H. Greene (1976) "Economies of scale in US electric power generation," Journal of Political Economy, 84, 655–676.
Christensen, L. R., D. W. Jorgenson, and L. J. Lau (1973) "Transcendental logarithmic production frontiers," Review of Economics and Statistics, 55, 28–45.
Douglas, P. H. (1948) "Are there laws of production?," American Economic Review, 38, 1–41.
Fuss, M. A., and D. McFadden (1978) Production Economics: A Dual Approach to Theory and Applications, Amsterdam: North-Holland.
Hardy, G. H., J. E. Littlewood, and G. Pólya (1952) Inequalities, second edition, Cambridge: Cambridge University Press.
Nerlove, M. (1965) Estimation and Identification of Cobb–Douglas Production Functions, Chicago: Rand McNally.
4 Market equilibrium models
Market analysis in economics is typically informed by the idea of market equilibrium. We define market equilibrium as the point where transactions occur that are satisfactory to both consumers and producers, satisfactory in the sense that consumers are willing to pay a price at which producers are willing to supply a particular good or service. In constructing a competitive-market equilibrium model, we specify the market demand and market supply functions in order to determine the equilibrium point numerically. In the goods and services market, the market demand function is obtained as a result of utility maximization by households, and the market supply function is obtained as a result of profit maximization by firms.

We establish different market equilibrium conditions for competitive, oligopolistic, and monopolistic markets so that we can decide a priori the market characteristics that we will focus on when analyzing a particular market. In a competitive market, market equilibrium is indicated by the intersection of the market demand and market supply curves. In a monopolistic market, quantity and price are not determined by that intersection. Rather, under monopoly conditions, the equilibrium quantity is less than that in a competitive market and the equilibrium price is higher. The equilibrium condition for the supplier is not p = MC (market price = marginal cost) but MR = MC (marginal revenue = marginal cost), satisfying the profit maximization condition for the firm. In a monopolistic market, when demand is inelastic, it is possible that equilibrium does not exist.

When estimating a model of a competitive market, a simultaneous-equations model is necessary because the model includes demand, supply, and other equations.
Therefore, the identification problem in the model plays an important role in determining whether or not the parameters in the simultaneous-equation system can be estimated from the observed data on price, quantity, and other variables. The identification problem involves identifying the market demand and market supply functions separately utilizing the variables in the functions. The first and modern aspect of the identification problem focuses on the variables in the model; these are classified as exogenous or endogenous variables. After assuming the exogenous and endogenous variables in the model, the identification problem is solved. The second and classical aspect of the identification
problem focuses on both the market demand and market supply functions, which are specified as functions of quantity and price; additional information on the magnitude of the variances of the market demand and market supply functions is used for specifying the disturbance terms. This is a special topic in econometric methods for simultaneous-equations estimation. After both the market demand and market supply functions are identified in the model, we check the differences in the values of the parameters derived from different estimation methods. These include single-equation estimation by the ordinary least-squares (OLS) method and simultaneous-equations estimation by the 3SLS (three-stage least-squares) and FIML (full-information maximum-likelihood) methods.

Section 4.1 explains market theory for competitive, oligopolistic, and monopolistic markets, focusing on the goods and services market. In the goods and services market, numerous consumers constitute a group of demanders and are considered price-takers, while firms are suppliers. We classify competitive, oligopolistic, and monopolistic markets according to the number of suppliers: when the number of suppliers is large, the market is called a competitive market; when the number of suppliers is small, the market is called an oligopolistic market; and when there is only one supplier, we define that market as a monopolistic market. There is a continuum from competitive to monopolistic markets.

Section 4.2 explains the identification problem. If a model does not satisfy the identification condition, we cannot estimate the parameters included in the model. Section 4.3 describes the models for competitive, oligopolistic, and monopolistic markets. Section 4.4 explains how to construct a data set using the Monte Carlo method. In Section 4.5, we discuss some examples of estimation of market equilibrium models.

In Section 4.5.1, the simultaneous-equations system is estimated using external information on the differences in the variances of the error terms of the market demand and market supply functions. Section 4.5.2 explains simultaneous-equations estimation for market demand and market supply functions with exogenous variables in a competitive market. Section 4.5.3 looks at simultaneous-equations estimation for an oligopolistic market including conjectural variation. Section 4.5.4 estimates linear demand and supply functions in a monopolistic market. Section 4.5.5 considers the simultaneous-equations system in a monopolistic market where the demand and supply functions are specified as log-linear.
4.1 Theory of market equilibrium

Participants in a goods and services market are consumers and producers. The assumptions for a competitive market are: (1) both consumers and producers are small in scale and there are large numbers of each; (2) commodities in the market are homogeneous; (3) perfect information prevails among consumers and producers; (4) free entry and exit for consumers and producers is
Market equilibrium models╇╇ 81 permitted for both potential and existing participants. The market price is determined through competition among consumers and producers over prices. Both consumers and producers behave as price-takers in a competitive market. In the market there are numerous consumers and producers whose consumption or production is relatively small compared to the overall amount of transactions in the market. The individual demand schedule regarding the price and quantity of a commodity is derived from a consumer’s utility maximization behavior. The market demand schedule is derived from the sum of individual demands that result from each consumer’s utility maximization behavior. Therefore, a market demand curve illustrates where consumers are satisfied with the combination of the market price and quantity of a commodity in a transaction due to utility-maximizing behavior. The individual supply schedule for price and quantity is derived from a producer’s profit maximization behavior. The market supply schedule is derived from the sum of the supplies provided by individual producers, based on each producer’s profit maximization behavior. As we explained in Chapter 3, the marginal cost curve of the producer becomes the individual supply schedule from the relationship p = MC. Therefore, a supply curve in a competitive market illustrates where producers are satisfied according to their profit maximizing behavior. At the intersection of the market demand and market supply schedules, the equilibrium market price and quantity are determined, as indicated by the intersection of the point (p*, q*) in Figure 4.1. In an imperfect market, which would include both oligopolistic and monoÂ� polistic markets, the market equilibrium is different from that in a competitive market. In the extreme case of a monopolistic market, there is only one producer in the market. 
This sole producer easily notices that the market equilibrium price fluctuates according to changes in the amount of his own supply. If he reduces supply, the market price increases, and vice versa. In this situation he decides the optimal quantity to produce in order to maximize his profit. Let's now consider profit-maximizing behavior for both an oligopolistic producer and a monopolist. Profit is the difference between sales and cost:

π = pq - C(q)   (1)
where π is profit, p is market price, q is quantity produced, and C(q) is the cost of the quantity produced. The first-order condition for profit maximization is:

dπ/dq = p + q(dp/dq) - dC/dq = 0   (2)
namely MR = MC, where:

MR = p(1 + (q/p)dp/dq)
MC = dC/dq   (3)
As dp/dq is not zero in imperfect markets, the market clearing condition is not p = MC, but rather MR = MC.
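A small numerical example makes the contrast between MR = MC and p = MC concrete. The demand and cost parameters below are hypothetical, chosen only so the two equilibria can be compared; with a linear inverse demand p = a - bq, marginal revenue is a - 2bq.

```python
# Illustrative monopoly equilibrium: linear inverse demand p = a - b*q and
# cost C(q) = c*q + d*q^2, so MR = a - 2*b*q and MC = c + 2*d*q.
# Values are hypothetical, chosen only to show MR = MC vs p = MC.
a, b = 100.0, 1.0     # inverse demand: p = 100 - q
c, d = 10.0, 0.5      # cost: C(q) = 10q + 0.5q^2

# Monopoly: solve a - 2b*q = c + 2d*q
q_m = (a - c) / (2 * b + 2 * d)
p_m = a - b * q_m

# Competitive benchmark: solve p = MC, i.e. a - b*q = c + 2d*q
q_c = (a - c) / (b + 2 * d)
p_c = a - b * q_c

print(q_m, p_m, q_c, p_c)   # monopoly sells less at a higher price
```

With these numbers the monopolist produces q = 30 at a price of 70, while the competitive benchmark is q = 45 at a price of 55, illustrating the smaller quantity and higher price under monopoly described above.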
[Figure 4.1 here: individual and market demand and supply schedules in the price-quantity plane, with market equilibrium at (p*, q*).]

Figure 4.1  Individual demand and supply curves and market demand and supply curves.
In an oligopolistic market, the equilibrium condition for a producer is determined by considering the market demand function, the marginal cost (MC) function, and the market clearing condition, including conjectural variation, as:

pd = pd(qd, …)
MC = MC(qs, …)
qd = Σk qsk = q
MR = MC   (4)
where MR for a firm in the oligopolistic market is:

MR = d(pd qs)/dqs = pd + (dpd/dqd)(dqd/dqs)qs   (5)
The dqd/dqs in equation (5) is called conjectural variation and concerns the producer’s decision about the amount of his production relative to the production of other producers in an oligopolistic market. In a monopolistic market, where there is only one supplier, the firm’s equilibrium condition is MR = MC (marginal revenue = marginal cost). That is, the equilibrium condition for the firm is different from that for firms in a competitive market, and there is a gap between supply- and demand-based prices at the equilibrium quantity. The market price is determined by the market demand price. This is because after the quantity is determined by the condition of MR = MC for the producer, the corresponding price of pd is imposed on consumers who have no power to control the market price and are thus price-takers. At the price level of p*, the
[Figure 4.2 here: demand curve, marginal revenue curve, and marginal cost curve, with the equilibrium quantity q* and market price p*.]

Figure 4.2  Equilibrium in an imperfect market.
supplier's profit is maximized because he chooses the optimal quantity to produce, while price-taking consumers have to allocate their categories of consumption expenditure based on utility maximization at the given market prices. The relationship between the market price (or demand price) and the supply price is shown in Figure 4.2. The market price is denoted by p*; it differs from the price indicated on the marginal cost curve at the transaction quantity q*. Figure 4.3 introduces the equilibrium price path under the condition that the MC curve is stable while the market demand curve shifts due to, for example, increases in income. From Figure 4.3, we notice that the MR curve is below the market demand curve and that MC is below the equilibrium price path for the firm. The market data observed are the equilibrium quantity and the equilibrium demand price, pd. The marginal cost schedule, ps, on the marginal cost curve cannot be obtained from the market data, but can be obtained by estimating the MC function. The equilibrium conditions, including the demand and MC functions and the market clearing condition, in a monopolistic market are:

pd = pd(qd, …)
MC = MC(qs, …)
qd = qs = q
MR = MC   (6)
[Figure 4.3 here: with a stable MC curve, demand shifts from D1 to D2 trace the equilibrium price path from (q1*, p1*) to (q2*, p2*).]

Figure 4.3  Equilibrium price path in an imperfect market.
where MR in the monopolistic market is:

MR = d(pd qs)/dqs = pd + (dpd/dqs)qs

This is different from MR in an oligopolistic market.
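The role of conjectural variation can be illustrated with a symmetric linear example; this is a hedged sketch with hypothetical numbers, not a model estimated in the text. With inverse demand p = a - bq, constant marginal cost c, and n identical firms each conjecturing dq/dqk = v, the condition MR = MC gives qk = (a - c)/(b(n + v)); v = 0 reproduces the competitive outcome, v = 1 the Cournot outcome, and v = n the collusive (monopoly) outcome.

```python
# Hypothetical symmetric n-firm equilibrium with conjectural variation v:
# inverse demand p = a - b*q with q = n*q_k, constant marginal cost c.
# The firm's condition MR = p + (dp/dq)*v*q_k = MC gives
#   q_k = (a - c) / (b * (n + v)).
def firm_output(a, b, c, n, v):
    return (a - c) / (b * (n + v))

a, b, c, n = 100.0, 1.0, 10.0, 3
for v, label in [(0.0, "competitive"), (1.0, "Cournot"), (float(n), "collusive")]:
    qk = firm_output(a, b, c, n, v)
    p = a - b * n * qk
    print(label, qk, p)
```

With v = 0 the price collapses to marginal cost (p = 10), while v = n yields the monopoly price (p = 55), spanning the continuum from competitive to monopolistic outcomes described in Section 4.1.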
4.2 Identification problem

Before estimating the parameters of the market demand and market supply functions in a simultaneous-equations model, we have to consider the identification problem of the model. This is because we have to check whether or not the parameters in the simultaneous-equations model can be estimated through observation. The essence of the identification problem consists of whether or not the parameters in the model can be estimated from the observed data. In a competitive market, there are market demand and market supply functions. The market price and quantity are determined by the intersection of both functions. We specify the demand and supply relations as:

qd = qd(p, …)
qs = qs(p, …)
qd = qs = q   (7)
The first equation is the market demand function, the second is the market supply function, and the third is the market clearing condition, namely, the amount of
market demand and market supply being equal to the quantity of the transactions. For simplicity, each equation in the system is assumed to be linear. The simplest case of equations (7) is the following: the model includes only price and quantity. The market demand function is downward sloping because the quantity demanded decreases when the price increases, while the market supply function is upward sloping because the quantity supplied increases as the price increases. Thus the model is:

qdt = α0 + α1pt   (α1 < 0)
qst = β0 + β1pt   (β1 > 0)
qdt = qst = qt   (8)

where the suffix t indicates time, and α0, α1, β0, and β1 are constants (or parameters to be estimated using observed data). It is easy to imagine obtaining the prices and quantities used for the estimation from time-series observation. Such observation yields a series of market prices and market quantities at equilibrium, i.e., pt and qt are obtained at the intersection of the market demand and market supply functions. Given the downward-sloping demand function (α1 < 0) and the upward-sloping supply function (β1 > 0) in equations (8), we might think that we can recover the parameters α1 and β1 by fitting a demand schedule and a supply schedule to the data. However, this is not true. The data obtained are a series of realized equilibrium quantities and prices. In the competitive market model specified by equations (8), if the model is correct, we can get only one pair of price and quantity, given in terms of the parameters as:

pt = -(β0 - α0)/(β1 - α1)   (9)
qt = (α0β1 - α1β0)/(β1 - α1)   (10)
even though there are many time periods. We get that single pair of equilibrium price and quantity at the intersection of the market demand and market supply schedules. Accordingly, if the model is correct, the realized observation from equations (8) is only one point of price (p*) and quantity (q*), as indicated in Figure 4.4. In this case we cannot determine the intercepts and slope coefficients of the demand and supply functions uniquely; that is, we cannot uniquely determine the structural parameters α0, α1, β0, and β1 of the model from the realized observations. There are many possible downward-sloping demand schedules and upward-sloping supply schedules. Thus, in this model, neither the demand function nor the supply function is identifiable. This is the nature of the identification problem. The modern problem of identification in econometrics concerns not a priori information about random variables, but a priori information about the variables included in the model. A detailed discussion of identification using mathematical formulas is presented in Section 4.3.
Figure 4.4 Under-identifiable demand and supply curves. [The figure shows alternative demand curves D, D′ and supply curves S, S′ all passing through the single observed equilibrium point (q*, p*).]
We will now consider a variant of equations (8) that includes income in the market demand function. Such a model is described as:

qd = α0 + α1p + α2y
qs = β0 + β1p
qd = qs = q   (11)
Here there are two categories of variables in the equations: endogenous and exogenous. Endogenous variables are determined simultaneously within the linear equation system; in the above example there are four of them: p, qd, qs, and q. Exogenous variables are determined outside the model, and their values are not affected by changes in the endogenous variables; pre-determined (for example, lagged) endogenous variables are treated together with the exogenous variables. In the above example, the exogenous variable is income (y). In economic terms, the market demand function in equations (11) is a function of price and income. Income is assumed to be an exogenous variable obtained from outside the model, indicating that the endogenous variables, price and quantity, have no impact on it. The market supply function is a function of price alone. In this situation, the market demand function shifts with changes in the income level, while the
Figure 4.5 Identifiable supply curve and non-identifiable demand curve. [The figure shows the demand curve shifting from D1 to D2 as income changes, tracing out the stable supply curve S through the equilibria (q1, p1) and (q2, p2).]
market supply function is stable in the quantity–price plane. This is indicated in Figure 4.5. In this model, the identification problem is: can the parameters included in the demand and supply functions of the simultaneous-equations system be determined uniquely? To address the problem, we introduce two observed data sets, (p1, q1, y1) and (p2, q2, y2), where the suffixes 1 and 2 indicate two time periods. From Figure 4.5, the supply function is identifiable because the parameters β0 and β1 are determined uniquely by the shift in the demand function due to different levels of income. The parameters β0 and β1 of the supply function are therefore obtained from the observed pairs (p1, q1) and (p2, q2) as:

β1 = (q2 - q1)/(p2 - p1)
β0 = q1 - β1p1   (12)
On the other hand, we cannot obtain the parameters included in the demand function: in Figure 4.5 we drew two different demand schedules whose equilibrium points for price and quantity are the same. Similarly, when the demand function includes price and quantity, and the supply function includes not only price and quantity but also the amount of rain (R) as an exogenous variable, the model is formulated as:

qd = α0 + α1p
qs = β0 + β1p + β2R
qd = qs = q   (13)
Figure 4.6 Identifiable demand and supply curves. [The figure shows two demand curves, D1 and D2 (with yA = yB and yC = yD), and two supply curves, S1 and S2 (with RA = RD and RB = RC), generating the four equilibrium points A, B, C, and D.]
In this case, the demand function is identifiable while the supply function is not. Finally, when the demand function includes income as an exogenous variable and the supply function includes rain as an exogenous variable, the model is:

qd = α0 + α1p + α2y
qs = β0 + β1p + β2R
qd = qs = q   (14)
We need four data points to consider the identification problem of the demand and supply functions, namely (qA, pA, yA, RA), (qB, pB, yB, RB), (qC, pC, yC, RC), and (qD, pD, yD, RD). Figure 4.6 shows the four curves. As indicated in the figure, yA = yB, yC = yD, RB = RC and RA = RD. From the intersections at A and B, the slope of the demand curve, α1 (or its estimate, a1), is determined in the following manner:

qA = α0 + α1pA + α2yA
qB = α0 + α1pB + α2yB   (15)
Then we take the difference between the two equations:

qA - qB = a1(pA - pB)   (16)

because yA = yB. Finally we get a1 as:

a1 = (qA - qB)/(pA - pB)   (17)
From the intersections at A and D:

qA - a1pA = α0 + α2yA
qD - a1pD = α0 + α2yD   (18)
Solving equations (18), a0 (the estimate of α0) and a2 (the estimate of α2) are determined as:

a0 = (yD(qA - a1pA) - yA(qD - a1pD))/(yD - yA)
a2 = (-(qA - a1pA) + (qD - a1pD))/(yD - yA)   (19)
Also, using the points B and C, which lie on the same supply curve because RB = RC, the slope of the supply function, β1, is estimated as:

b1 = (qC - qB)/(pC - pB)   (20)
where the estimate of β1 is written as b1. From the two points A and B, after fixing the value b1, we get the following equations:

qA - b1pA = β0 + β2RA
qB - b1pB = β0 + β2RB   (21)
Solving equations (21), b0 (the estimate of β0) and b2 (the estimate of β2) are determined as:

b0 = (RB(qA - b1pA) - RA(qB - b1pB))/(RB - RA)
b2 = (-(qA - b1pA) + (qB - b1pB))/(RB - RA)   (22)
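As a numerical check of formulas (15)–(22), the following sketch constructs four noise-free equilibrium points from the structural values used later in Table 4.2, with illustrative (hypothetical) income and rainfall levels yA = yB = 5, yC = yD = 15, RB = RC = 10, RA = RD = 20, and recovers all six parameters exactly:

```python
import numpy as np

# True structural parameters (values from Table 4.2); the y and R levels
# below are hypothetical, chosen only to satisfy yA=yB, yC=yD, RB=RC, RA=RD.
a0t, a1t, a2t = 100.0, -0.6, 2.0   # demand: q = a0t + a1t*p + a2t*y
b0t, b1t, b2t = 10.0, 1.5, 1.5     # supply: q = b0t + b1t*p + b2t*R

def intersection(y, R):
    """Exact (noise-free) equilibrium price and quantity for given y and R."""
    p = (a0t + a2t * y - b0t - b2t * R) / (b1t - a1t)
    return p, b0t + b1t * p + b2t * R

(pA, qA), (pB, qB) = intersection(5, 20), intersection(5, 10)
(pC, qC), (pD, qD) = intersection(15, 10), intersection(15, 20)

a1 = (qA - qB) / (pA - pB)                             # equation (17), yA = yB
a2 = ((qD - a1 * pD) - (qA - a1 * pA)) / (15 - 5)      # equation (19)
a0 = qA - a1 * pA - a2 * 5
b1 = (qC - qB) / (pC - pB)                             # equation (20), RB = RC
b2 = ((qA - b1 * pA) - (qB - b1 * pB)) / (20 - 10)     # equation (22)
b0 = qA - b1 * pA - b2 * 20

print([round(v, 6) for v in (a0, a1, a2, b0, b1, b2)])
# → [100.0, -0.6, 2.0, 10.0, 1.5, 1.5]
```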
This allows us to confirm that all the parameters in the equation system (14) are determined uniquely.

The classical problem for identification in econometrics concerns a priori information about random variables. Error terms are now included in equations (8) as:

qd = α0 + α1p + ε1
qs = β0 + β1p + ε2
qd = qs = q   (23)
where ε1 and ε2 are random variables. It is possible to observe data sets for quantity and price in which the two random variables are bounded by the 95 percent confidence interval, as shown in panels (a), (b), and (c) of Figure 4.7. When we solve for p and q simultaneously, p is written as:

p = -(β0 - α0)/(β1 - α1) + (ε1 - ε2)/(β1 - α1)   (24)
and q is written as:

q = α0 + α1[-(β0 - α0)/(β1 - α1) + (ε1 - ε2)/(β1 - α1)] + ε1   (25)
The effect of the random variables ε1 and ε2 disperses the scatter as indicated in Figure 4.7(a)–(c). In model (23), when there is no a priori information on the disturbances, it is not possible to identify the
Figure 4.7 Demand and supply curves with disturbance terms. (a) Same variance for market demand and market supply curves; (b) variance of market supply curve is larger than that of market demand curve; (c) variance of market demand curve is larger than that of market supply curve.
parameters from the observed data. However, when a priori information on the disturbances is available, the parameters can be identified from the observed data. We explain this situation later, in Section 4.5.1.
4.3 Models: competitive, oligopolistic, and monopolistic markets

We will now introduce several kinds of estimable models that we will use in Section 4.5 below. The first is a competitive market model with endogenous variables and stochastic terms but no exogenous variables:

qdi = α0 + α1pi + ε1i
qsi = β0 + β1pi + ε2i
qdi = qsi = qi  (i = 1, 2,…, n)   (26)
The first equation is the market demand function: as a fundamental characteristic, the quantity demanded decreases as the price increases. For the market supply function, on the other hand, the quantity supplied increases as the price increases. Accordingly, in the first equation of (26) α1 is negative, and in the second equation β1 is positive. The ε1i and ε2i are the stochastic disturbances for the market demand and market supply functions, respectively. The third equation of (26) is the market clearing condition. To solve the identification problem, we check whether the parameters of the market demand function and those of the market supply function can be recovered (see Maddala (1977)
and Maddala and Lahiri (2009) for more detailed discussion). We also check the rank condition. The rank condition gives the necessary and sufficient condition for solving the identification problem: if the relevant matrix has full rank, the corresponding equation is identifiable; otherwise it is not. The most useful and simplest method for checking the rank condition is explained by Maddala and Lahiri (2009, p. 363). Their explanation is: suppose that the equation system is the following:

Equation   y1   y2   y3   z1   z2   z3
1           ×    0    ×    ×    0    ×
2           ×    0    0    ×    0    ×
3           0    ×    ×    ×    ×    0
The rules for identification of any equation are as follows:

1. Delete the particular row.
2. Pick up the columns corresponding to the elements that have zeros in that row.
3. If from this array of columns we can find (g - 1) rows and columns that are not all zeros, where g is the number of endogenous variables, and no column (or row) is proportional to another column (or row) for all parameter values, then the equation is identifiable.

This condition is called the rank condition for identification and is a necessary and sufficient condition. Let us check the identification problem using the equation system (26). Equations (26) are extended as:

qdi = α0 + α1pi + ε1i
qsi = β0 + β1pi + ε2i
qdi = qi
qsi = qi  (i = 1, 2,…, n)   (27)
The relationship between the equations and the variables, excluding disturbance terms, is indicated as follows:

Equation   qd   qs   q   p    Intercept
1          -1    0   0   α1   α0
2           0   -1   0   β1   β0
3          -1    0   1   0    0
4           0   -1   1   0    0
To identify the market demand function (the first equation in system (27)), the rank of the related matrix is:

ρ [ -1  0
     0  1
    -1  1 ]  = 2 ≠ 3 (No. of endogenous variables - 1)
To identify the market supply function (the second equation in system (27)), the rank of the related matrix is:

ρ [ -1  0
    -1  1
     0  1 ]  = 2 ≠ 3 (No. of endogenous variables - 1)
After checking the rank condition for the market demand and supply functions, we find that neither matrix has full rank, so neither function is identifiable.

The second model that we'll be using in Section 4.5 is a competitive market model with endogenous variables, exogenous variables, and stochastic terms, specified as:

qdi = α0 + α1pi + α2Yi + ε1i
qsi = β0 + β1pi + β2Zi + ε2i
qdi = qsi = qi  (i = 1, 2,…, n)   (28)
where we have no a priori information on the variance of the disturbance terms for the market demand and supply functions. We will now explain the identification problem of this model. There are four endogenous variables, qdi, qsi, qi, and pi, and two exogenous variables, Yi and Zi. Thus, equations (28) are extended as:

qdi = α0 + α1pi + α2Yi + ε1i
qsi = β0 + β1pi + β2Zi + ε2i
qdi = qi
qsi = qi  (i = 1, 2,…, n)   (29)
The whole system is based on two behavioral equations and two identities. To consider the identification problem for equation system (29), the relationship between the equations and the variables, excluding disturbance terms, is indicated below:

Equation   qd   qs   q   p    Y    Z    Intercept
1          -1    0   0   α1   α2   0    α0
2           0   -1   0   β1   0    β2   β0
3          -1    0   1   0    0    0    0
4           0   -1   1   0    0    0    0
To identify the market demand function (the first equation in system (29)), the rank of the related matrix is:

ρ [ -1  0  β2
     0  1  0
    -1  1  0 ]  = 3 = 3 (No. of endogenous variables - 1)
The value of the determinant of this matrix is β2. As β2 ≠ 0 in the model's specification, the matrix has full rank and the market demand function is proven to be identifiable. Note that this identification argument rests on the logical structure of the model, i.e., on which variables appear in which equations, not on the stochastic properties of the disturbance terms; in particular, it is unrelated to a hypothesis test of H0: β2 = 0. To identify the market supply function (the second equation in system (29)), the related matrix is:
ρ [ -1  0  α2
    -1  1  0
     0  1  0 ]  = 3 = 3 (No. of endogenous variables - 1)
The value of the determinant of this matrix is -α2. As α2 ≠ 0 in the model, the market supply function is identifiable. In equation system (29), then, the market demand and supply functions are both identifiable.

In his analysis of oligopolistic markets, Iwata (1974) introduced the concept of conjectural variation empirically. The model for an oligopolistic market including conjectural variation is:

pdi = α0 + α1(qi + qqi) + α2Yi + ε1i
MCi = β0 + β1qi + β2Zi + ε2i
MR = MC: α0 + α1(2 + γ)qi + α1qqi + α2Yi = β0 + β1qi + β2Zi + ε3i  (i = 1, 2,…, n)   (30)
where qqi is the amount produced by the other firms in the oligopolistic market and is assumed to be an exogenous variable, and the parameter γ = dqqi/dqi is assumed to be constant. The endogenous variables in (30) are pdi, qi, and MCi, while the exogenous variables are qqi, Yi, and Zi. The MR = MC condition includes an error term because both the demand and supply functions include disturbance terms as a shock. Let's now examine the identification problem for this system of three equations. The system is:

Equation   pd   q               MC   qq   Y    Z     Intercept
1          -1   α1               0   α1   α2   0     α0
2           0   β1              -1   0    0    β2    β0
3           0   (2 + γ)α1 - β1   0   α1   α2   -β2   α0 - β0
Here the rank condition for the market demand function (the first equation in the system) is:

ρ [ -1   β2
     0  -β2 ]  = 2 = 2 (No. of endogenous variables - 1)
For the marginal cost function (the second equation in the system):

ρ [ -1  α1  α2
     0  α1  α2 ]  = 2 = 2 (No. of endogenous variables - 1)
And the matrix for checking the MR = MC condition (the third equation) is:

ρ [ -1   0
     0  -1 ]  = 2 = 2 (No. of endogenous variables - 1)
From the above findings, we can see that all the equations are identifiable.

The theoretical model of a monopolistic market is:

pdi = α0 + α1qi + α2Yi + ε1i
MCi = β0 + β1qi + β2Zi + ε2i
Ri = pdiqi
MR = MC: α0 + 2α1qi + α2Yi = β0 + β1qi + β2Zi + ε3i  (i = 1, 2,…, n)   (31)

The MR = MC condition includes an error term because both the demand and supply functions include disturbance terms as a shock. The endogenous variables in equations (31) are pdi, qi, Ri, and MCi, while the exogenous variables are Yi and Zi. Because of the nonlinear equation Ri = pdiqi, the identification conditions for a linear equation system cannot be applied directly. Therefore, by excluding Ri = pdiqi from the equation system (31), we reduce the model to three endogenous variables, pdi, qi, and MCi, without changing the model itself:

pdi = α0 + α1qi + α2Yi + ε1i
MCi = β0 + β1qi + β2Zi + ε2i
MR = MC: α0 + 2α1qi + α2Yi = β0 + β1qi + β2Zi + ε3i   (32)
Let's examine the identification problem for these three equations. The system of equations is:

Equation   pd   q          MC   Y    Z     Intercept
1          -1   α1          0   α2   0     α0
2           0   β1         -1   0    β2    β0
3           0   2α1 - β1    0   α2   -β2   α0 - β0
The rank of the first equation (the market demand function) is:

ρ [ -1   β2
     0  -β2 ]  = 2 = 2 (No. of endogenous variables - 1)
The rank of the second equation (the marginal cost function) is:

ρ [ -1  α2
     0  α2 ]  = 2 = 2 (No. of endogenous variables - 1)
And the rank of the third equation (the MR = MC condition) is:

ρ [ -1   0
     0  -1 ]  = 2 = 2 (No. of endogenous variables - 1)
These findings indicate that all the equations are identifiable.

As a variant of the previous model, the market demand and supply functions can be specified in logarithmic instead of linear form. The fifth model is:

pdi = α0 qi^α1 Yi^α2 ε1i
MCi = β0 qi^β1 Zi^β2 ε2i
MR = MC: α0(α1 + 1) qi^α1 Yi^α2 = β0 qi^β1 Zi^β2 ε3i  (i = 1, 2,…, n)   (33)

The first equation is the market demand function, the second is the MC curve, and the third is the MR = MC condition. When we take logarithms of these three equations, the identification problem is handled in the same way as in the previous model.
4.4 How to generate a data set by the Monte Carlo method

We will now explain the virtual-data-generating process for two of the systems of equations and estimate the parameters of the models using the virtual data sets. The first example that we'll be discussing in Section 4.5 is:

qdi = α0 + α1pi + ε1i
qsi = β0 + β1pi + ε2i
qdi = qsi = qi  (i = 1, 2,…, n)   (34)

The structural parameters in the model are determined first. Here the parameter set is α0 = 100, α1 = -0.6, β0 = 10, and β1 = 1.5. Though there are four endogenous variables in the model, namely pi, qdi, qsi, and qi, these are reduced to the two variables of price, pi, and quantity, qi, using the relationship qdi = qsi = qi. That is:

qi = α0 + α1pi + ε1i  and  qi = β0 + β1pi + ε2i  (i = 1, 2,…, n)   (35)
Table 4.1 Virtual data set for competitive market (unidentifiable case)

       (a)          (b)           (c)
α0     100          100           100
α1     -0.6         -0.6          -0.6
β0     10           10            10
β1     1.5          1.5           1.5
ε1     N(0, 20²)    N(0, 0.5²)    N(0, 20²)
ε2     N(0, 20²)    N(0, 20²)     N(0, 0.5²)
ρ      0            0             0
at the equilibrium point. The endogenous variables pi and qi are then solved as:

pi = (β0 - α0)/(α1 - β1) + (ε2i - ε1i)/(α1 - β1)
qi = β0 + β1pi + ε2i = β0 + β1((β0 - α0) + (ε2i - ε1i))/(α1 - β1) + ε2i   (36)

As α0, α1, β0, and β1 are determined a priori, pi and qi are calculated from equations (36) after giving concrete values to ε1i and ε2i. Now, ε1i and ε2i are assumed to have a bivariate normal distribution with mean vector µ and covariance matrix Σ. In the present case, the means of ε1i and ε2i are zero, the variance of ε1i is σ1², that of ε2i is σ2², and the covariance between ε1i and ε2i is zero. After determining σ1² and σ2², the random numbers for ε1i and ε2i are drawn from a normal random number generator, yielding the realized values e1i and e2i. The data set of pi and qi is then calculated by equations (36). The set of virtual data is presented in Table 4.1. In case (a) of Table 4.1, the standard errors of the disturbances for the market demand and market supply functions are both 20. In case (b), the standard error for ε1i is 0.5 while that for ε2i is 20, indicating that the variance of the market demand function is small compared to that of the market supply function. Case (c) is the opposite of case (b): the variance of the market demand function is large compared to that of the market supply function. In each of cases (a) through (c), after determining e1i and e2i by generating normal random numbers, we obtain 100 pairs of pi and qi as a set of virtual data on price and quantity for estimating the parameters of the market demand and market supply functions. In every case we calculated pi and qi using the following equations:

pi = (β0 - α0)/(α1 - β1) + (e2i - e1i)/(α1 - β1)
qi = β0 + β1pi + e2i = β0 + β1((β0 - α0) + (e2i - e1i))/(α1 - β1) + e2i   (37)
Now the 100 pairs of prices and quantities are determined numerically. Figure 4.7 shows the scatter for cases (a), (b), and (c).
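Equations (37) translate directly into code; a minimal sketch for case (a) of Table 4.1 (the seed is arbitrary):

```python
import numpy as np

# Monte Carlo generation of 100 (p_i, q_i) pairs via equations (37),
# for case (a) of Table 4.1: both disturbances N(0, 20^2), rho = 0.
rng = np.random.default_rng(12345)  # arbitrary seed
n = 100
a0, a1, b0, b1 = 100.0, -0.6, 10.0, 1.5
e1 = rng.normal(0.0, 20.0, n)   # realized demand disturbances
e2 = rng.normal(0.0, 20.0, n)   # realized supply disturbances

p = (b0 - a0) / (a1 - b1) + (e2 - e1) / (a1 - b1)
q = b0 + b1 * p + e2            # equivalently a0 + a1*p + e1

# Both structural equations hold exactly at every generated point:
assert np.allclose(q, a0 + a1 * p + e1)
```

Because each (p, q) pair satisfies demand and supply simultaneously, the scatter carries no information that separates the two curves, which is exactly the under-identification shown in Figure 4.7(a).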
Table 4.2 Competitive model (identifiable case): parameters

       (a)          (b)         (c)          (d)
α0     100          100         100          100
α1     -0.6         -0.6        -0.6         -0.6
α2     2            2           2            2
β0     10           10          10           10
β1     1.5          1.5         1.5          1.5
β2     1.5          1.5         1.5          1.5
Y      [5, 15]      [5, 15]     [40, 80]     [40, 80]
Z      [5, 15]      [5, 15]     [5, 45]      [5, 45]
ε1     N(0, 10²)    N(0, 5²)    N(0, 10²)    N(0, 5²)
ε2     N(0, 10²)    N(0, 5²)    N(0, 10²)    N(0, 5²)
As expected from Figure 4.7(a), both the market demand and market supply functions are under-identifiable in case (a): neither can be determined from the data. The market demand function is identifiable in case (b), and the market supply function is identifiable in case (c). In Section 4.5.1 we explain these results using numerical examples.

Next, we explain the data-generating mechanism of the second example that we discuss in Section 4.5. The exogenous variables Yi and Zi are obtained from uniform random numbers. The realized value of ε1i, namely e1i, and that of ε2i, namely e2i, are determined by generating normal random numbers for ε1i and ε2i. Then, the endogenous variable pi is obtained by the following equation:

pi = ((β0 - α0) + β2Zi - α2Yi + (e2i - e1i))/(α1 - β1)   (38)

The value of pi is plugged into the market demand function as:

qi = qdi = α0 + α1pi + α2Yi + e1i   (39)

If pi is plugged into the market supply function as:

qi = qsi = β0 + β1pi + β2Zi + e2i   (40)
then both qdi and qsi are mathematically identical to qi. By this procedure, the data set of pi, qi, Yi, and Zi is obtained for the present model. To construct the four cases of the virtual data set, we assume that the structural parameters α0, α1, α2, β0, β1, and β2 are the same in all four cases: α0 = 100, α1 = -0.6, α2 = 2, β0 = 10, β1 = 1.5, and β2 = 1.5. For Yi, Zi, ε1i, and ε2i there are some variations: the ranges of the exogenous variables Yi and Zi and the standard errors of the disturbances differ across the four cases. The correlation coefficient between ε1i and ε2i is assumed to be zero. The structural parameters and the disturbance terms ε1i and ε2i are indicated in Table 4.2.
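Equations (38)–(40) can be coded directly; a sketch for case (a) of Table 4.2 (arbitrary seed):

```python
import numpy as np

# Data generation for the identifiable model, equations (38)-(40),
# case (a) of Table 4.2: Y, Z uniform on [5, 15], disturbances N(0, 10^2).
rng = np.random.default_rng(2024)  # arbitrary seed
n = 100
a0, a1, a2 = 100.0, -0.6, 2.0
b0, b1, b2 = 10.0, 1.5, 1.5

Y = rng.uniform(5, 15, n)
Z = rng.uniform(5, 15, n)
e1 = rng.normal(0, 10, n)
e2 = rng.normal(0, 10, n)

p = ((b0 - a0) + b2 * Z - a2 * Y + (e2 - e1)) / (a1 - b1)   # equation (38)
qd = a0 + a1 * p + a2 * Y + e1                              # equation (39)
qs = b0 + b1 * p + b2 * Z + e2                              # equation (40)
assert np.allclose(qd, qs)  # market clearing holds by construction
```

The exogenous shifters Y and Z are what move the two curves independently, so this data set, unlike the one from equations (37), can identify both functions.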
Table 4.3 Virtual data set for conjectural variation

       (a)            (b)            (c)
α0     100            100            100
α1     -0.1           -0.1           -0.1
α2     3              3              3
β0     10             10             10
β1     0.2            0.2            0.2
β2     2              2              2
γ      -0.3           -0.5           -0.5
Y      N(50, 15²)     N(50, 15²)     N(50, 15²)
Z      N(40, 12²)     N(40, 12²)     N(40, 12²)
qq     N(100, 10²)    N(100, 10²)    N(300, 50²)
ε1     N(0, 10²)      N(0, 10²)      N(0, 10²)
ε2     N(0, 10²)      N(0, 10²)      N(0, 10²)
ε3     N(0, 1²)       N(0, 1²)       N(0, 1²)
Looking at the four cases of Table 4.2, we see that cases (a) and (b) share one interval for the variables Yi and Zi, while cases (c) and (d) share another. The ranges of Yi and Zi in cases (a) and (b) are 10, while in cases (c) and (d) they are 40, so the dispersion of Yi and Zi is larger in cases (c) and (d). The characteristics of the random variables ε1i and ε2i are the same in cases (a) and (c), and likewise in cases (b) and (d); the variance in (a) and (c) is larger than that in (b) and (d).

Now we'll explain the data-generating mechanism for the model for oligopolistic markets that includes conjectural variations. We need observations on the five variables pdi, qi, qqi, Yi, and Zi in order to estimate the parameters of the model. First we define the values of the parameters of the demand and supply functions. The exogenous variables qqi, Yi, and Zi are obtained by generating normal random numbers. We get the realized values e1i, e2i, and e3i by generating normal random numbers for ε1i, ε2i, and ε3i. The variables qi and pdi are obtained from the following equations:

qi = (β0 - α0 + β2Zi - α1qqi - α2Yi)/((2 + γ)α1 - β1) + e3i
pdi = α0 + α1(qi + qqi) + α2Yi + e1i   (41)
Now the data set of qi, pdi, qqi, Yi, and Zi has been obtained. Table 4.3 presents the sets of virtual data for an oligopolistic market.

We now explain the data-generating mechanism for the linear-equations monopolistic market model that we use in Section 4.5. We need observations on the four variables pdi, qi, Yi, and Zi in order to estimate the parameters of the model. To generate a data set, we first define the values of the parameters of the demand and supply functions.

Table 4.4 Virtual data (monopoly, linear model)

α0    100
α1    -0.1
α2    3
β0    10
β1    0.2
β2    2
ε1    N(0, 10²)
ε2    N(0, 10²)
ε3    N(0, 1²)
Y     N(50, 15²)
Z     N(40, 12²)
Table 4.5 Virtual data (monopoly, log-linear model)

α0    1
α1    -0.5
α2    0.8
β0    2
β1    1.5
β2    2.5
ε1    N(0, 0.1²)
ε2    N(0, 0.1²)
ε3    N(0, 0.01²)
Y     [0.3, 1]
Z     [0.3, 5]
The exogenous variables Yi and Zi are obtained by generating normal random numbers. Then we get the realized values e1i and e3i by generating normal random numbers for ε1i and ε3i. The variables qi and pdi are obtained from the following equations (solving MR = MC for qi places -α2Yi in the numerator):

qi = (β0 - α0 + β2Zi - α2Yi)/(2α1 - β1) + e3i
pdi = α0 + α1qi + α2Yi + e1i   (42)
Now the data set of qi, pdi, Yi, and Zi has been obtained. Table 4.4 presents the sets of virtual data. Finally, we explain the log-linear monopolistic market model used in Section 4.5. The data-generating system is the same as outlined above; however, because the functional form is log-linear, constructing a suitable data set differs from the linear case. The set of virtual data is shown in Table 4.5.
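A sketch of this generation step for the linear monopoly model of Table 4.4 (arbitrary seed); the final assertion checks that the MR = MC condition is violated only through the shock e3, which follows from the way qi is constructed:

```python
import numpy as np

# Data generation for the linear monopoly model, parameter values from Table 4.4.
rng = np.random.default_rng(7)  # arbitrary seed
n = 100
a0, a1, a2 = 100.0, -0.1, 3.0
b0, b1, b2 = 10.0, 0.2, 2.0

Y = rng.normal(50, 15, n)
Z = rng.normal(40, 12, n)
e1 = rng.normal(0, 10, n)
e3 = rng.normal(0, 1, n)

q = (b0 - a0 + b2 * Z - a2 * Y) / (2 * a1 - b1) + e3   # equation (42)
pd = a0 + a1 * q + a2 * Y + e1                         # demand equation

# MR and MC differ only through the shock e3, scaled by (2*a1 - b1):
mr, mc = a0 + 2 * a1 * q + a2 * Y, b0 + b1 * q + b2 * Z
assert np.allclose(mr - mc, (2 * a1 - b1) * e3)
```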
4.5 Examples

4.5.1 Simultaneous estimation utilizing external information on the variances of a distribution

Let us assume that we have a series of price and quantity levels at hand and know a priori information on the disturbances, particularly the relative magnitudes of the variances of the market demand and market supply functions. The model of a competitive market including only endogenous variables and stochastic terms is:

qdi = α0 + α1pi + ε1i
qsi = β0 + β1pi + ε2i
qdi = qsi = qi  (i = 1, 2,…, n)   (43)
The first equation is the market demand function, the second equation is the market supply function, and the third equation is the market clearing condition. In the first equation of (43), α1 is negative, and in the second equation β1 is positive. The ε1i and ε2i are the stochastic disturbances for the market demand and market supply functions, respectively. The results of regression analysis using these virtual data sets are shown in Table 4.6. The estimating equation is:

qi = γ0 + γ1pi + εi  (i = 1, 2,…, 100)   (44)
The regression results in Table 4.6(a) show that qi = 55.3 + 0.41pi and that the coefficient of determination adjusted for degrees of freedom is 0.134, indicating a problem with the goodness of fit. The hypothesis testing for this case examined four null hypotheses: (1) H0: γ0 = 100, (2) H0: γ1 = -0.6, (3) H0: γ0 = 10, and (4) H0: γ1 = 1.5. The P-values are all zero, so all four null hypotheses are rejected. We thus confirm that neither the market demand function nor the market supply function is identifiable from this data set of pi and qi. On the other hand, in Table 4.6(b) the estimated equation is qi = 99.7 - 0.59pi and the coefficient of determination is 0.99. When we tested the three hypotheses in (b), namely H0: γ0 = 100, H0: γ1 = -0.6, and the joint hypothesis H0: γ0 = 100 and γ1 = -0.6, we obtained P-values of 0.237, 0.225, and 0.478, respectively, so none of them is rejected at the 5 percent significance level. Table 4.6(c) similarly shows that the structural parameters of the market supply function are identifiable, so we can derive estimates of β0 and β1. In Chapter 1 we stressed that the introduction of the stochastic concept is important in applied econometric analysis and econometric methodology: it enables us to handle random variables while conducting estimation and hypothesis testing. We now explain the role of the random disturbance in estimation and hypothesis testing. The model used for the following estimation is the same as model (43) except that ε2i = 0; that is, the market supply function includes no disturbance term and therefore does not shift. The model is:
Table 4.6 Estimates and hypothesis testing (competitive market, unidentifiable model)

(a)
γ0     55.34 (12.1)
γ1     0.4179 (4.0)
R²     0.134
SE     13.39
D-W    2.13
Hypothesis testing:
H0: γ0 = 100              [0.000]
H0: γ1 = -0.6             [0.000]
H0: γ0 = 10               [0.000]
H0: γ1 = 1.5              [0.000]

(b)
γ0     99.71 (1414.4)
γ1     -0.5932 (106.8)
R²     0.9914
SE     0.49
D-W    2.01
Hypothesis testing:
H0: γ0 = 100              [0.237]
H0: γ1 = -0.6             [0.225]
H0: γ0 = 100, γ1 = -0.6   [0.478]

(c)
γ0     10.21 (48.2)
γ1     1.4956 (323.4)
R²     0.9990
SE     0.424
D-W    2.15
Hypothesis testing:
H0: γ0 = 10               [0.315]
H0: γ1 = 1.5              [0.350]
H0: γ0 = 10, γ1 = 1.5     [0.585]

Note: t-values in parentheses; P-values in brackets.
qdi = α0 + α1pi + ε1i
qsi = β0 + β1pi
qdi = qsi = qi  (i = 1, 2,…, n)   (45)

Solving for pi and qi:

pi = (β0 - α0)/(α1 - β1) - ε1i/(α1 - β1)
qi = β0 + β1pi = β0 + β1((β0 - α0) - ε1i)/(α1 - β1) = (α1β0 - α0β1 - β1ε1i)/(α1 - β1)   (46)
Figure 4.8 The influence of no disturbance term on the supply curve. [The figure shows the observations lying exactly on the supply curve S.]
From equations (46), we know that the price, pi, and the quantity, qi, vary only through the common disturbance term ε1i. We obtain the data set of pi and qi in the following manner. After determining α0, α1, β0, and β1, and generating e1i from normal random numbers for ε1i, the data set of pi and qi is obtained from equations (46). To get the series of pi and qi, we used the set of parameters and the disturbance term ε1i of case (a) of Table 4.1, with e2i = 0. The price series is:

pi = (β0 - α0 - e1i)/(α1 - β1)   (47)

We derive the quantity as:

qi = β0 + β1pi = (α1β0 - α0β1 - β1e1i)/(α1 - β1)   (48)
The realized values of pi and qi lie on the market supply schedule, as indicated in Figure 4.8. Applying the ordinary least-squares (OLS) method to this data set and estimating the parameters, we get the following regression equation:

qi = 10.0 + 1.5pi            R² = 1.0   (49)
     (6.5 × 10⁶) (4.1 × 10⁷)
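This degenerate fit is easy to reproduce; a sketch using the case (a) parameters with e2i = 0 (arbitrary seed):

```python
import numpy as np

# With epsilon_2i = 0 the observations lie exactly on the supply curve,
# so OLS recovers the true supply parameters with a zero residual.
rng = np.random.default_rng(3)  # arbitrary seed
n = 100
a0, a1, b0, b1 = 100.0, -0.6, 10.0, 1.5
e1 = rng.normal(0, 20, n)

p = (b0 - a0 - e1) / (a1 - b1)   # equation (47)
q = b0 + b1 * p                  # equation (48): exact, no supply shock

X = np.column_stack([np.ones(n), p])
coef = np.linalg.lstsq(X, q, rcond=None)[0]
print(round(coef[0], 6), round(coef[1], 6))  # → 10.0 1.5
```

The residual variance is zero (up to floating-point error), so any estimated standard errors are numerically meaningless, which is the point made below about the t-values.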
There is an exact relationship between price and quantity. The t-values in the parentheses were obtained from the computer output sheet, but theoretically the t-values are infinite because there is no room for fluctuation in the disturbance term of the market supply function. In this case, although the estimated values coincide with the true values, hypothesis testing using econometric methods is impossible: the shock is zero, so the standard error is also zero, and the t-value, being the ratio of the regression coefficient to its standard error, diverges, making the test meaningless. This example demonstrates that disturbance terms play an important role in hypothesis testing in applied econometric analysis. If the disturbance term is not included, hypothesis testing for the simultaneous estimation is meaningless.

4.5.2 Competitive market: example of an identifiable case

After constructing an identifiable model for the market demand and market supply functions, the simultaneous-estimation method is applied to the data set. The model is specified as:

qdi = α0 + α1pi + α2Yi + ε1i
qsi = β0 + β1pi + β2Zi + ε2i
qdi = qsi = qi  (i = 1, 2,…, n)   (50)
where we have no a priori information on the variance of the disturbance terms for the demand and supply functions. We estimate the parameters of equations (50) by regression analysis. First, we estimate the following equations directly for cases (a)–(d) by the OLS method:

qi = γ0 + γ1pi + γ2Yi + ε1i   (51)
qi = δ0 + δ1pi + δ2Zi + ε2i   (52)

As the covariances of pi with ε1i and of pi with ε2i are not zero, the OLS estimates are biased estimates of the true values. The results are indicated in Table 4.7. In particular, in case (a) of equation (51) the estimate of γ1, namely c1, is positive at 0.31, which is theoretically unreasonable. When we estimate the parameters equation by equation in a simultaneous-equations system, the estimates are biased in this way.

We show the results obtained by simultaneous estimation in Table 4.8. The results are quite interesting. First, we compare results that share the same range of exogenous variables but have different characteristics of the random variables; that is, we compare (a) with (b), and (c) with (d). For the hypothesis testing, we fixed the significance level a priori at 5 percent. In case (a), a1 and b0 are not statistically significant judging from the t-values; in case (b), b0 is insignificant; in case (c), b0 is insignificant; and in case (d), every parameter is statistically significant. These results indicate that the smaller the variance of the disturbance terms, the stronger the significance of the parameters.
Table 4.7 Estimation results (competitive market, identifiable model)

Equation (51):
       (a)             (b)             (c)             (d)
c0     69.30 (16.2)    79.56 (21.6)    97.01 (15.3)    97.80 (36.5)
c1     0.317 (3.1)     -0.048 (0.5)    -0.246 (2.6)    -0.512 (9.5)
c2     0.942 (3.7)     1.64 (12.0)     1.57 (13.9)     1.90 (30.3)
R²     0.279           0.664           0.725           0.940
SE     6.66            3.44            10.0            4.09
D-W    1.97            2.03            2.00            1.74

Equation (52):
       (a)             (b)             (c)             (d)
d0     62.09 (9.7)     39.78 (6.5)     40.01 (4.9)     20.50 (4.9)
d1     0.565 (5.3)     0.908 (8.1)     1.19 (14.7)     1.40 (34.3)
d2     0.523 (2.1)     1.22 (7.0)      1.33 (14.4)     1.39 (27.5)
R²     0.212           0.446           0.739           0.929
SE     6.96            4.42            9.74            4.46
D-W    1.78            1.78            2.07            2.02
Table 4.8  Estimates and hypothesis testing (competitive market)

                  (a)             (b)             (c)             (d)
a0                78.81 (9.6)     102.5 (13.5)    119.6 (14.0)    101.5 (35.8)
a1                0.048 (0.2)     -0.667 (3.3)    -0.861 (5.6)    -0.657 (10.9)
a2                1.219 (3.6)     2.13 (10.2)     2.05 (12.9)     2.04 (29.7)
b0                25.06 (1.5)     -9.47 (0.7)     -1.07 (0.09)    12.57 (2.8)
b1                1.22 (4.4)      1.86 (8.0)      1.61 (14.0)     1.48 (34.3)
b2                1.18 (3.0)      1.84 (7.2)      1.58 (14.1)     1.44 (27.7)
Hypothesis testing (P-values):
H0: α0 = 100      [0.009]         [0.733]         [0.019]         [0.567]
H0: α1 = -0.6     [0.003]         [0.733]         [0.084]         [0.338]
H0: α2 = 2        [0.016]         [0.510]         [0.727]         [0.525]
H0: β0 = 10       [0.330]         [0.103]         [0.326]         [0.552]
H0: β1 = 1.5      [0.316]         [0.112]         [0.325]         [0.759]
H0: β2 = 1.5      [0.404]         [0.167]         [0.442]         [0.295]
χ2(6)             [0.010]         [0.162]         [0.347]         [0.645]

Note: The null hypothesis of χ2(6) is H0: α0 = 100, α1 = -0.6, α2 = 2, β0 = 10, β1 = 1.5, and β2 = 1.5.
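The bracketed P-values can be reproduced from the reported estimates and t-values. The helper below is a hypothetical Python sketch; it uses the normal approximation, so it matches the table only up to rounding and the exact reference distribution:

```python
import math

def p_value(estimate, t_ratio, null):
    """Two-sided P-value for H0: parameter = null.
    t_ratio is the t-value reported in parentheses (for H0: parameter = 0),
    so the standard error is |estimate / t_ratio|."""
    se = abs(estimate / t_ratio)
    z = (estimate - null) / se
    # Normal CDF via the error function
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Case (a) of Table 4.8: a0 = 78.81 with t-value 9.6, tested against α0 = 100
print(round(p_value(78.81, 9.6, 100.0), 3))  # roughly the 0.009 in the table
```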
Next, we compare the cases with the same random disturbances but different ranges of the exogenous variables; in other words, cases (a) and (c), and cases (b) and (d). In cases (a) and (b) the range of Yi and Zi is 10, while in cases (c) and (d) it is 40.
Thus, the range of Yi and Zi is greater in (c) and (d). This result is indicated in Table 4.8, where the numbers in brackets are P-values. In case (a), the null hypothesis that a regression coefficient equals its true value is rejected for the estimates a0, a1, and a2. For example, the P-value is 0.009 for the null hypothesis H0: α0 = 100; the joint null hypothesis H0: α0 = 100, α1 = -0.6, α2 = 2, β0 = 10, β1 = 1.5, and β2 = 1.5 is rejected because its P-value is 0.01 in case (a). When we compare (c) with (a), the disturbances ε1i and ε2i in case (c) are the same as in case (a), but the range of the variables Yi and Zi is larger. As the range of Yi and Zi becomes larger, all the estimates except a0 become statistically indistinguishable from the true values. The P-value for the joint hypothesis that the estimated parameters equal the true values is 0.347, indicating that the null hypothesis is not rejected at the 5 percent level. In case (b), the P-value for every individual hypothesis test exceeds 0.1, indicating that the null hypothesis (every estimate equals its true value) is not rejected at the 5 percent level of significance. Finally, in case (d), where the exogenous variables are widely dispersed and the variance of the disturbance terms is small, all the estimates are near the true values. This indicates that when a researcher wants to obtain good and stable estimates, it is important that the variance of the exogenous variables is large and/or that the variance of the disturbance terms is small.

4.5.3 Oligopolistic market: conjectural variation

Based on Iwata (1974), we now consider the estimation of conjectural variation in an oligopolistic market. There are several approaches to analyzing oligopolistic markets in economics.
Recently, in the empirical study of industrial organization, scholars have analyzed oligopolistic markets using calibration or game theory. The conjectural variation approach, however, remains one of the most common ways to analyze oligopolistic markets empirically. Again, the model for an oligopolistic market including conjectural variation is:

pdi = α0 + α1(qi + qqi) + α2Yi + ε1i
MCi = β0 + β1qi + β2Zi + ε2i
MR = MC: α0 + α1(2 + γ)qi + α1qqi + α2Yi = β0 + β1qi + β2Zi + ε3i   (i = 1, 2, …, n)
(53)
where qqi is the amount of production of other firms in the oligopolistic market and is assumed to be an exogenous variable, and the parameter γ is dqqi/dqi and is assumed to be constant. Table 4.9 shows the estimation results including conjectural variation. As far as the accuracy of the model goes, both the estimation results and the hypothesis testing look good. We can see from Table 4.9 that the estimation of the conjectural variation model was successful. We took two values of the conjectural variation,
Table 4.9  Estimates and hypothesis testing (oligopoly)

                  (a)              (b)              (c)
a0                97.55 (35.8)     98.51 (31.2)     98.10 (35.7)
a1                -0.0964 (16.7)   -0.1027 (18.6)   -0.1021 (35.3)
a2                2.996 (38.1)     3.071 (33.5)     3.048 (35.3)
b0                7.884 (1.6)      6.283 (1.2)      6.240 (1.2)
b1                0.2061 (28.8)    0.2030 (25.9)    0.2092 (28.2)
b2                1.999 (37.1)     2.046 (33.5)     2.032 (34.9)
γ                 -0.3064 (5.2)    -0.4873 (11.3)   -0.5608 (16.1)
Hypothesis testing (P-values):
H0: α0 = 100      [0.368]          [0.638]          [0.490]
H0: α1 = -0.1     [0.542]          [0.613]          [0.467]
H0: α2 = 3        [0.962]          [0.438]          [0.575]
H0: β0 = 10       [0.649]          [0.490]          [0.469]
H0: β1 = 0.2      [0.389]          [0.697]          [0.212]
H0: β2 = 2        [0.998]          [0.443]          [0.571]
H0: γ = -0.3      [0.912]          –                –
H0: γ = -0.5      –                [0.768]          [0.081]
χ2(7)             [0.920]          [0.836]          [0.189]

Note: The null hypothesis of χ2(7) in case (a) is H0: α0 = 100, α1 = -0.1, α2 = 3, β0 = 10, β1 = 0.2, β2 = 2, and γ = -0.3. The null hypothesis of χ2(7) in cases (b) and (c) is H0: α0 = 100, α1 = -0.1, α2 = 3, β0 = 10, β1 = 0.2, β2 = 2, and γ = -0.5.
-0.3 and -0.5, and two ranges of the amount of production by the other firms. The model produced reasonable parameter estimates, even across the different parameter settings.

4.5.4 Monopoly: linear market demand function

Now we consider the case where the market demand function is linear. As we explained in Chapter 3, the condition MR = MC is satisfied only when MR is positive. This means that demand must be price-elastic; there is no equilibrium point where the price elasticity of demand is inelastic. The theoretical model is:

pdi = α0 + α1qi + α2Yi + ε1i
MCi = β0 + β1qi + β2Zi + ε2i
Ri = pdiqi
MR = MC: α0 + 2α1qi + α2Yi = β0 + β1qi + β2Zi + ε3i   (i = 1, 2, …, n)
(54)
The MR = MC condition includes an error term because both the demand and supply functions include disturbance terms as shocks. After estimating b0, b1, and b2, the supply function psi (or MC curve) is calculated ex post. The estimation results obtained by the three-stage least-squares (3SLS) method and by the full-information maximum-likelihood (FIML) method are
Table 4.10  Monopoly (linear model)

          3SLS                          FIML
          Estimate        Testing       Estimate        Testing
a0        95.38 (27.2)    [0.187]       95.38 (27.8)    [0.178]
a1        -0.141 (6.1)    [0.427]       -0.115 (6.3)    [0.416]
a2        3.225 (21.0)    [0.141]       3.225 (24.6)    [0.085]
b0        -1.675 (2.0)    [0.066]       -1.675 (0.3)    [0.024]
b1        0.2069 (36.9)   [0.976]       0.2006 (9.1)    [0.974]
b2        2.156 (21.1)    [0.125]       2.156 (25.0)    [0.070]
χ2(6)                     [0.290]                       [0.173]

Note: The null hypothesis of χ2(6) is H0: α0 = 100, α1 = -0.1, α2 = 3, β0 = 10, β1 = 0.2, and β2 = 2.
indicated in Table 4.10. We can see in the table that the estimated values track the true values.

4.5.5 Monopoly: log-linear market demand function

In the previous section, the market demand function was specified in linear form. In this section we specify the log-linear market demand function as: pdi = α0qi^α1.

Macroeconomic models

The Keynesian consumption function is specified as:

C = α + βY   (α > 0, 0 < β < 1)

(2)
where β is called the marginal propensity to consume (MPC). Using equation (2) to calculate Y in Y = C + I, we get: Y = α/(1 - β) + I/(1 - β)
(3)
The investment multiplier effect is calculated from equation (3): an increase in investment of 1 unit produces more than 1 unit of national income. The investment multiplier is denoted by dY/dI: dY/dI = 1/(1 - β)
(4)
The β in equation (4) has a positive value between 0 and 1. If β = 0.5, the investment multiplier is 2; e.g., an investment of 1 billion dollars produces 2 billion dollars of national income. The specification of the consumption function is very important in considering the multiplier effect. Mathematically, the multiplier effect requires that the consumption function be specified as a function of current income. Imagine that consumption is not a function of current income, Y, but of some other factor that has no relationship to current income. As an example, consider consumption determined by a random walk (cf. Hall 1978). Then there is no relation between current consumption and current income, and consumption is determined as C0, independent of the level of income. The identity and consumption function of the previous model become:

Y = C + I
C = C0
(5)
As the consumption function is a constant of C0: Y = C0 + I
(6)
In this case, the investment multiplier becomes unity, namely dY/dI = 1: when the consumption function is not specified as a function of current income, increases in investment and national income are the same. Now, we extend the identities to include the government and foreign sectors and explain the IS-LM model, the AD-AS model, and international macroeconomic models. The variables are Y (national income), C (consumption), I (investment), G (government expenditure), X (net exports, i.e., exports minus imports), A (government transfer income), N (government debt), T (tax), Sp (private savings), Sg (government savings), and Sr (savings abroad). The identity between national income on production and that on expenditure is: Y = C + I + G + X
(7)
The identity between national income on production and that on distribution is: Y = C + Sp + T - A - N
(8)
And government savings is: Sg = (T - A - N) - G
(9)
If Sg is positive it indicates a fiscal surplus, while if Sg is negative it indicates a fiscal deficit. Finally, savings abroad is: Sr = -X
(10)
where capital outflow exists when Sr is positive and capital inflow exists when Sr is negative. The saving and investment balance is: Sp + Sg + Sr = (Y + A + N - T - C) + (T - A - N - G) - X = Y - C - G - X = I
(11)
From now on, for simplicity, we assume that A and N are zero, and Sp is rewritten as S. Now, the fundamental macroeconomic identities are:

Y = C + I + G + X
Y = C + S + T
(12)
Let's next consider the IS-LM and AD-AS models while ignoring the international sector. The IS-LM model examines the simultaneous determination of income and the interest rate in the goods and services market and the asset market. The IS-LM model is:

Y = C + I + G
Y = C + S + T
C = C(Y - T)
I = I(r)
i = r + πe
M/P = L(Y, i)
(13)
There are six equations and six endogenous variables (Y, C, I, S, r, and i) in the model, where r is the real rate of interest and i is the nominal rate of interest. The exogenous variables are G, T, M/P, and πe, where G is government expenditure, T is tax, M is the money supply, P is the price index, and πe is the expected inflation rate. The first equation in (13) is the identity between national income on production and that on expenditure; the second is the identity between national income on production and that on distribution; the third is the consumption function; the fourth is the investment function; and the fifth is the Fisher equation, which decomposes the nominal interest rate into the real interest rate and the expected inflation rate. The sixth equation is the equilibrium condition between the real money supply and the liquidity preference (or money demand) function in the monetary
market. The variables related to the government sector and the central bank are treated as policy variables. A balanced fiscal budget is assumed under the condition T = G. T and G are exogenous variables, and therefore system (13) does not include the identity G = T; the number of exogenous variables is reduced by one in the case of a balanced fiscal budget. The IS curve in the goods and services market is derived from the first four equations as: IS: I(r) = Y - C(Y - T) - G
(14)
The LM curve in the asset market is: LM: M/P = L(Y, i)
(15)
When the expected inflation rate is assumed to be zero, the nominal interest rate and the real interest rate are equivalent, and we can draw the IS and LM curves in the i (or r)-Y plane. In the AD-AS model, the aggregate demand and aggregate supply functions are included, and the labor market is implicitly considered. The model is:

Y = C + I + G
Y = C + S + T
C = C(Y - T)
I = I(r)
i = r + πe
M/P = L(Y, i)
Y = Y(Pe, P)
(16)
There are seven equations and seven endogenous variables: Y, C, I, S, r, i, and P. The exogenous variables are G, T, M, πe, and Pe, where Pe is the expected price level. The relation between Pe and the expected inflation rate πe is πe = (Pe - P-1)/P-1, where P-1 is the price level of the previous period. Changing the aggregate supply function Y = Y(Pe, P) into the inflation supply function, it becomes: π = π(πe, Y)
(17)
When we consider the relation between inflation and the unemployment rate through the Phillips curve, the inflation supply function (17) is transformed into the unemployment function. The unemployment rate is a function of π and πe: u = u(π, πe)
(18)
With the Mundell–Fleming model, we consider the foreign sector in cases of both a small economy and a large economy. The definition of a small economy
is that the interest rate is exogenous and is determined by international capital markets, while in a large economy the interest rate is an endogenous variable determined through both the domestic and international capital markets. Under a fixed foreign exchange rate regime the money supply, M, becomes an endogenous variable, but under a flexible foreign exchange rate regime the exchange rate rather than the money supply becomes endogenous. Let's now look at the Mundell–Fleming model of a small economy. In the case of a flexible foreign exchange rate regime, the fundamental eight equations are:

Y = C + I + G + X
Y = C + S + T
C = C(Y - T)
I = I(r)
X = X(E)
i = r + πe
M/P = L(Y, i)
r = r*
(19)
There are eight endogenous variables (Y, C, I, S, r, i, E, and X), where E denotes the foreign exchange rate and X is net exports. The exogenous variables are G, T, πe, M/P, and r*. In the case of a fixed foreign exchange rate regime the eight equations are unchanged, but the endogenous variable E becomes exogenous, while the exogenous variable M/P becomes endogenous. Now consider the Mundell–Fleming model of a large economy. The system includes the following nine equations:

Y = C + I + G + X
Y = C + S + T
C = C(Y - T)
I = I(r)
X = X(E)
i = r + πe
M/P = L(Y, i)
F = F(r)
F = -X
(20)
There are nine endogenous variables, namely Y, C, I, S, r, i, E, X, and F, where F is foreign assets and/or liabilities. The exogenous variables are G, T, πe, and M/P. We next examine a neoclassical growth model in macroeconomics. This model considers production technology, the utility function for households and several types of constraints, such as the budget constraint for households, the law of motion for capital stock, and identities regarding national income accounts. According to Hayashi and Prescott (2002), a neoclassical growth system is specified as:
Aggregate production function: Y = f(A, K, h, E)
Household utility function: u = g(C, h, E, N)
Budget constraint: Ct + Xt ≤ wthtEt + rtKt - τ(rt - δ)Kt + πt
Capital accumulation: Kt+1 = (1 - δ)Kt + Xt
Identity on national income: Ct + Xt + Gt = Yt

Here Y is aggregate output, A is TFP (total factor productivity), K is aggregate capital, E is aggregate employment, h is hours per employee, N is the working-age population, C is consumption, X is investment, w is the real wage, π is lump-sum taxes, r is the real rate of return on capital, δ is the depreciation rate of capital, and τ is the tax rate on capital income. In the simplest model of the above five equations, the endogenous variables are Y, K, E, X, and C, and the exogenous variables are h, A, N, and G. When we want to change the exogenous variable h into an endogenous variable, we formulate the relationship for h explicitly and apply cost-minimization, profit-maximization or utility-maximization conditions so that the number of endogenous variables corresponds to the number of equations in the system.
5.2 Empirical models

We now examine Klein's two-equation system of a macroeconomic model, including the consumption function and the national income identity, specified as:

C = α0 + α1Y + ε
Y = C + I
(21)
where C is consumption, Y is national income, and I is investment. The first equation is the consumption function; the relationship between Y and C is assumed to be linear. To reflect real-world conditions, a random disturbance term, ε, is introduced as a shock. It is assumed to have a mean of zero (E(ε) = 0) and a variance of σ2 (V(ε) = σ2). The second equation is the national income identity: income is the sum of consumption and autonomous investment. This model excludes the government and international sectors for the sake of simplicity. The endogenous variables of the model are consumption and national income, while the exogenous variable is investment. As for the identification problem of the model, the matrix of variables is:

Variables               Y     C     I     Intercept
Consumption function    α1   -1     0     α0
Identity               -1     1     1     0

The identification condition (rank condition) for the consumption function is:
ρ([1]) = 1 = g - 1

Therefore, the parameters of the consumption function are identifiable, where g is the number of endogenous variables. For the second equation the identification condition need not be considered, because it is an identity whose coefficients are determined a priori as 1 or -1. Next, consider the relationship between the endogenous variable national income, Y, and the disturbance term, ε, which appears on the right-hand side of the consumption function. If Y and ε were mutually independent, then by the Gauss–Markov theorem the estimates of the consumption function obtained by regression analysis would be BLUE (best linear unbiased estimators); in other words, the OLS estimates would have minimum variance among linear unbiased estimators. (For a detailed discussion of the Gauss–Markov theorem, see Theil 1971 or Greene 2008.) Assuming a normal distribution for ε, Y and ε are mutually independent exactly when the covariance between Y and ε is 0. Solving for Y using the identity Y = C + I, we get: Y = α0/(1 - α1) + I/(1 - α1) + ε/(1 - α1)
(22)
Therefore: Cov(Y, ε) = E{[Y - E(Y)][ε - E(ε)]} = E[ε2/(1 - α1)] = σ2/(1 - α1) ≠ 0,
(23)
indicating that Y and ε are not independent of each other. When national income, Y, and the disturbance term, ε, are not mutually independent, the estimates obtained by OLS are biased. Figure 5.1 illustrates the relationships among Y, C, and I. In the figure, the consumption function lies in the Y-C plane. The scatter of observations does not fall on the true line C-C, owing to the random disturbance with mean zero and constant variance. The shaded gray area is bounded by two sets of parallel lines: one set delimits the range of the random disturbance in the consumption function, C = α + βY, within ±2σ (a 95 percent band); the other set represents the identity among national income, consumption and investment, as C = Y - I0 and C = Y - I1. Autonomous investment (I) is indicated on the negative side of the vertical axis. Here we show the relationship when I = I0: the observations on Y and C lie on the 45-degree line extending from point I0, because of the identity C = Y - I0. When the amount of investment varies continuously, the admissible observations form a diamond shape in the Y-C plane. That is, under the restriction Y = C + I, the relevant region in the C-Y plane is aBCd, not ABCa. If the region were ABCa, the direction between Y and ε would be orthogonal, and Y and ε would be statistically independent of each other. However, when the region is aBCd, the direction between
Figure 5.1  Haavelmo bias. (In the Y-C plane: the true line C-C for C = α + βY, the fitted line Z-Z, and the 45-degree identity lines running from I0 and I1 on the negative vertical axis.)
Y and ε is not orthogonal, and they are not mutually independent. As indicated in the graph, the estimated line Z-Z is obtained when the OLS method is applied to the scatter aBCd. The Z-Z line differs from C-C: the slope of Z-Z is greater than that of C-C (i.e., the estimate a1 is biased upward), and the intercept of Z-Z is smaller than that of C-C (i.e., the estimate a0 is biased downward). This is called the Haavelmo bias. We now extend the previous model to the IS-LM model. The fundamental model of IS-LM analysis is:

I = α0 + α1r + α2Z + ε1
C = β0 + β1(Y - T) + ε2
Y = C + I + G
Y = C + S + T
M/P = γ0 + γ1r + γ2Y + ε3
(24)
Here there are five endogenous variables, namely I (investment), r (the interest rate), C (consumption), Y (national income), and S (savings), while the four exogenous variables are Z (expectations about future business conditions), M/P (the real money supply), G (government expenditure) and T (tax). The sign conditions are α1 < 0, α2 > 0, 0 < β1 < 1, γ1 < 0, and γ2 > 0. Let us now check the identification problem of the system. The matrix of variables is:

Variables    I     r     C     Y     S     Z     M/P   G     T     Intercept
(1)         -1     α1    0     0     0     α2    0     0     0     α0
(2)          0     0    -1     β1    0     0     0     0    -β1    β0
(3)          1     0     1    -1     0     0     0     1     0     0
(4)          0     0     1    -1     1     0     0     0     1     0
(5)          0     γ1    0     γ2    0     0    -1     0     0     γ0
For the first equation (the investment function) of the system, the rank of the corresponding submatrix (rows: equations (2)-(5); columns: the variables excluded from equation (1), namely C, Y, S, M/P, G, and T) is:

ρ [ -1   β1    0    0    0   -β1 ]
  [  1   -1    0    0    1    0  ]
  [  1   -1    1    0    0    1  ]
  [  0   γ2    0   -1    0    0  ]  = 4 = number of endogenous variables - 1
For the second equation (the consumption function) of the system, the rank of the corresponding submatrix (rows: equations (1), (3), (4), (5); columns: I, r, S, Z, M/P, and G) is:

ρ [ -1   α1    0    α2    0    0 ]
  [  1    0    0    0     0    1 ]
  [  0    0    1    0     0    0 ]
  [  0   γ1    0    0    -1    0 ]  = 4 = number of endogenous variables - 1
For the fifth equation (the money demand function) of the system, the rank of the corresponding submatrix (rows: equations (1)-(4); columns: I, C, S, Z, G, and T) is:

ρ [ -1    0    0    α2    0    0  ]
  [  0   -1    0    0     0   -β1 ]
  [  1    1    0    0     1    0  ]
  [  0    1    1    0     0    1  ]  = 4 = number of endogenous variables - 1
Accordingly, all the behavioral equations are identifiable. (For a detailed discussion of the identification problem, see Maddala and Lahiri 2009.) This model assumes that the expected inflation rate is 0, and therefore the real interest rate included in the investment equation and the nominal interest rate in
Figure 5.2  IS-LM curves. (The IS and LM curves are drawn in the Y-r plane; their intersection determines the equilibrium Y* and r*.)
the money demand function are the same. The IS-LM curves are indicated in the Y-r plane in Figure 5.2. The IS curve is obtained as follows:

Y = C + I + G = β0 + β1(Y - T) + ε2 + α0 + α1r + α2Z + ε1 + G
(25)
Rearranging equation (25) and solving for r: r = (-β0 - α0 + (1 - β1)Y + β1T - G - α2Z - ε1 - ε2)/α1
(26)
The coefficient of Y is (1 - β1)/α1. Because α1 < 0 and 1 - β1 > 0, the value of (1 - β1)/α1 is negative, indicating that the IS curve is downward sloping in the Y-r plane. Macroeconomic principles tell us that shifts of the IS curve are due to fluctuations in T, Z, and G. The LM curve is: M/P = γ0 + γ1r + γ2Y + ε3
(27)
The LM curve in the Y-r plane is specified as: r = (M/P - γ0 - γ2Y - ε3)/γ1
(28)
The shift of the LM curve is caused by movements in the money supply. As γ1 < 0 and γ2 > 0, the value of -γ2/γ1 is positive. Hence, the LM curve is upward sloping in the Y-r plane.
It is important to bear in mind that endogenous variables are determined simultaneously within the model, while exogenous variables are predetermined outside it. Because the endogenous variables must be determined, the number of equations in the model equals the number of endogenous variables. Even when the number of endogenous variables is large (in the present case there are five), by substituting out variables and eliminating the corresponding equations we are left with two variables and two equations: national income and the interest rate, and the IS and LM curves. That is, the five endogenous variables are reduced to the two variables r and Y, which are determined by the intersection of the IS and LM curves drawn in the Y-r plane, as indicated in the IS-LM diagram. The number of exogenous variables remains four: Z, M/P, T, and G. Conversely, after determining the two variables r and Y, it is not difficult to extend the model to recover investment, consumption and savings: each new endogenous variable is added along with the same number of equations. The starting point lies in the IS and LM equations and the two endogenous variables Y and r, with exogenous variables Z, M/P, T, and G. The new endogenous variable, investment (I), is determined by r and Z; consumption, C, is determined by the already determined Y and T; and S is determined from the predetermined C and Y through the identity S = Y - C - T. In IS-LM analysis, we can thus determine the interest rate and national income simultaneously, taking the price level P as given.
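As a concrete illustration of this reduction, the linear IS-LM system can be solved for Y and r directly. The sketch below (Python) uses the structural parameter values introduced in Section 5.3; the settings of the exogenous variables are hypothetical, and the disturbances are set to zero:

```python
import numpy as np

# Structural parameters (the values used in Section 5.3)
a0, a1, a2 = 400.0, -100.0, 300.0   # investment function
b0, b1 = 100.0, 0.6                 # consumption function
g0, g1, g2 = 180.0, -50.0, 0.6      # money demand function

# Hypothetical settings of the exogenous variables
Z, T, G, MP = 1.0, 45.0, 45.0, 407.0

# IS: (1 - b1)*Y - a1*r = a0 + b0 - b1*T + G + a2*Z
# LM: g2*Y + g1*r = MP - g0
A = np.array([[1.0 - b1, -a1],
              [g2,        g1]])
c = np.array([a0 + b0 - b1 * T + G + a2 * Z,
              MP - g0])
Y, r = np.linalg.solve(A, c)
print(Y, r)  # the intersection of the IS and LM curves
```

With these numbers the solution is Y = 795 and r = 5; changing G, T, Z, or M/P shifts the respective curve and moves the intersection.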
In this section, the relationship between the price level and national income is explicitly considered through AD-AS analysis, in which the relationship between the inflation rate and national income is one of the main topics. The fundamental model of AD-AS analysis is:

I = α0 + α1r + α2Z + ε1
C = β0 + β1(Y - T) + ε2
Y = C + I + G
Y = C + S + T
M/P = γ0 + γ1r + γ2Y + ε3
i = r + πe
P = P0 + δ1(Y - Y0) + ε4
(29)
Y0 is the natural level of output and is constant; the endogenous variables are I, S, Y, C, r, i, and P, and the exogenous variables are Z, M, πe, T, and G. As for the identification problem, though the matrix of variables becomes larger than that of the IS-LM model, the procedure for checking the rank of the equations in the system is the same, and we can easily verify that the behavioral equations of the model are all identifiable.
We now extend the scope of analysis to include both the domestic and international markets. In the Mundell–Fleming model of a small open economy, the interest rate, r, in the domestic capital market equals the interest rate, r*, in international capital markets, which is exogenously determined by prevailing supply and demand conditions. In contrast to the previous models focusing only on the domestic market, the net export function is included and the foreign exchange rate is introduced as a new variable. The model is specified as:

I = α0 + α1r + α2Z + ε1
C = β0 + β1(Y - T) + ε2
NX = γ0 + γ1E + γ2W + ε3
Y = C + I + G + NX
Y = C + S + T
M/P = δ0 + δ1r + δ2Y + ε4
r = r*
(30)
There are seven endogenous variables, namely I, r, C, NX, E, S, and Y, and six exogenous variables, namely Z, W, r*, M/P, T, and G. NX is net exports, E is the exchange rate, and W is the amount of world trade. The above model is based on a flexible exchange rate regime. In the case of a fixed exchange rate regime, E becomes an exogenous variable and M/P becomes an endogenous variable. This affects the estimated values when applying simultaneous-equation estimation: the estimates under the fixed and flexible exchange rate regimes are not necessarily equal. In the Mundell–Fleming model of a large open economy, as opposed to that of a small open economy, the interest rate becomes an endogenous variable. The model is:

I = α0 + α1r + α2Z + ε1
C = β0 + β1(Y - T) + ε2
NX = γ0 + γ1E + γ2W + ε3
Y = C + I + G + NX
Y = C + S + T
M/P = δ0 + δ1r + δ2Y + ε4
NFI = φ0 + φ1r + ε5
NX = NFI
(31)
There are eight endogenous variables (I, r, C, NX, E, S, Y, and NFI), where NFI is foreign assets and/or liabilities. We now introduce a neoclassical growth model that is a modified version of the original model proposed by Hayashi and Prescott (2002). It includes a production function, a utility-maximizing condition, and three identities. The production function is of the Cobb–Douglas type: Y = AK^θ(hE)^(1-θ)
(32)
where Y is aggregate output, A is TFP, K is aggregate capital, E is aggregate employment, and h is hours per employee. The inter-temporal equilibrium condition for households is: Uct/Uct+1 = ct+1/ct = β[1 + (1 - τ)(rt+1 - δ)]
(33)
where the utility function is specified as: ∑t β^t Nt U(ct, ht, et)
(34)
and the temporal utility function is: U(ct, ht, et) = log ct - g(ht)et,
(35)
Nt is the working-age population, Ct is aggregate consumption, ct = Ct/Nt (per-member consumption), and et = Et/Nt (the fraction of household members that work). The three constraints are the budget constraint, the capital accumulation path, and the national income identity, respectively:
(36)
Kt+1 = (1 - δ)Kt + Xt
(37)
Ct + Xt + Gt = Yt
(38)
X is investment and G is government expenditure. The endogenous variables are Y, K, E, C, and X, while the exogenous variables are A, h, N, w, r, i, and π. The constants are θ, δ, β, and τ.
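The production function (32) and the capital accumulation identity (37) can be iterated directly. The sketch below (Python) is a deliberately simplified illustration: a fixed investment share s of output stands in for the household's optimization, and all numerical values are hypothetical:

```python
# Iterate equations (32) and (37) with a fixed investment share of output.
# All values are illustrative; theta and delta are constants of the model,
# while s replaces the household's optimal saving decision.
theta, delta, s = 0.36, 0.08, 0.2
A, h, E = 1.0, 1.0, 1.0
K = 10.0
for t in range(300):
    Y = A * K**theta * (h * E) ** (1.0 - theta)  # equation (32)
    X = s * Y                                    # investment
    K = (1.0 - delta) * K + X                    # equation (37)
print(round(K, 3))  # settles at the steady state where X = delta*K
```

Under these assumptions the capital stock converges to (sA/δ)^(1/(1-θ)), the level at which investment exactly offsets depreciation.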
5.3 How to generate a data set by the Monte Carlo method

The data sets necessary for explaining the Haavelmo bias are those of consumption (C), income (Y), and investment (I). The endogenous variables are C and Y, while the exogenous variable is I. The realized value of ε, namely the residual e, is obtained by generating normal random numbers. The process of making the data set involves, first, determining the constants α0 and α1. Then, a series of the exogenous variable investment (I) and a series of realized values e of the disturbance term are generated by random numbers. In the next step, the series of consumption, C, is obtained from the equation C = (α0 + α1I + e)/(1 - α1). Finally, Y is calculated by using the identity Y = C + I. The coefficients are fixed at α0 = 100 and α1 = 0.6, and the range of investment is from 10 to 100, generated by uniform random numbers. The disturbance term ε has a mean of 0; for its standard error, two cases are considered, 20 and 40. Table 5.1 presents the conditions for making the data set.
Table 5.1  Haavelmo bias

        (a)           (b)
α0      100           100
α1      0.6           0.6
ε       N(0, 20²)     N(0, 40²)
I       [10, 100]     [10, 100]
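The data-generating recipe for Table 5.1 can be written out directly. A sketch for case (a) (Python; the seed is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
a0, a1 = 100.0, 0.6               # true consumption function C = a0 + a1*Y + e

I = rng.uniform(10, 100, n)       # exogenous investment, case (a) range
e = rng.normal(0, 20, n)          # disturbance, standard error 20 in case (a)

C = (a0 + a1 * I + e) / (1 - a1)  # from C = a0 + a1*(C + I) + e
Y = C + I                         # national income identity

# OLS of C on Y: Y and e covary through the identity, so a1 is overestimated
X = np.column_stack([np.ones(n), Y])
b = np.linalg.lstsq(X, C, rcond=None)[0]
print("OLS estimate of a1:", round(b[1], 3))  # above the true value 0.6
```

Because Y and e are correlated through the identity, the fitted slope tends to exceed 0.6, which is the Haavelmo bias illustrated in Figure 5.1.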
The data sets for consumption, national income and investment each total 100 sample points. We next explain how to generate the data set for I, r, C, Y, S, Z, M/P, T, and G in the IS-LM model. Following the usual process, the first step is to determine the values of the structural parameters α0, α1, α2, β0, β1, γ0, γ1, and γ2. The second step is to generate the series of the exogenous variables Z, T, and G and the series of random disturbances ε1, ε2, and ε3. The third step is to calculate the values of the endogenous variables Y, C, I, and S utilizing the information obtained in the first two steps. In the present analysis, the endogenous variable r is determined before generating the exogenous variable M/P. The reason is that when M/P is determined a priori, the variation of r is larger and r is sometimes negative; imagine if the nominal interest rate were negative: who would want to save? Using r, T, G, Z, e1, and e2, Y is determined by applying equation (26) in the previous section. Using r, Y, and e3, M/P is calculated from equation (27). Investment, I, is calculated from the first equation of (24) using r, Z, and e1; consumption, C, is calculated from the second equation of (24) using Y and T. Finally, S is calculated from the identity S = Y - C - T. The constants, the ranges of the exogenous variables, and the standard errors of the random disturbances for the IS-LM model are as follows: the parameters of the investment function are α0 = 400, α1 = -100, and α2 = 300; those of the consumption function are β0 = 100 and β1 = 0.6; and those of the money demand function are γ0 = 180, γ1 = -50, and γ2 = 0.6. A data set for I is generated using r, Z, and e1 as I = α0 + α1r + α2Z + e1. A data set for C uses Y, T, and e2 as C = β0 + β1(Y - T) + e2.
A data set for Y is generated by using relationship (26) in the previous section: Y = (α0 + β0 + α1r + α2Z - β1T + G + ε1 + ε2)/(1 - β1)
(39)
The data for M/P are determined by using r, Y, and e3. Since we assume that the expected inflation rate is zero, the real interest rate in the investment function and the nominal interest rate in the money demand function are the same by the Fisher equation. Finally, S is obtained by using Y, C, and T.
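Putting the steps above together, a sketch of the generator for case (a) of Table 5.2 (Python; the seed is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
# Structural parameters (Table 5.2)
a0, a1, a2 = 400.0, -100.0, 300.0   # investment function
b0, b1 = 100.0, 0.6                 # consumption function
g0, g1, g2 = 180.0, -50.0, 0.6      # money demand function

# Case (a): r is generated first, then the other exogenous series
r = rng.uniform(4.5, 5.5, n)
Z = rng.uniform(0.8, 1.2, n)
T = rng.uniform(40.0, 50.0, n)
G = rng.uniform(40.0, 50.0, n)
e1 = rng.normal(0, 20, n)
e2 = rng.normal(0, 20, n)
e3 = rng.normal(0, 20, n)

# Endogenous variables, following equations (26)/(39), (24), and (27)
Y = (a0 + b0 + a1 * r + a2 * Z - b1 * T + G + e1 + e2) / (1 - b1)
I = a0 + a1 * r + a2 * Z + e1
C = b0 + b1 * (Y - T) + e2
MP = g0 + g1 * r + g2 * Y + e3
S = Y - C - T

# Both national income identities hold by construction
assert np.allclose(Y, C + I + G)
assert np.allclose(Y, C + S + T)
```

Cases (b) and (c) differ only in the range of r and the standard errors of the disturbances, so the same code applies with those settings changed.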
Table 5.2  IS-LM model: virtual data

     α0   α1    α2   β0   β1   γ0   γ1   γ2   r           Z           T         G         ε1         ε2         ρ  ε3
(a)  400  -100  300  100  0.6  180  -50  0.6  [4.5, 5.5]  [0.8, 1.2]  [40, 50]  [40, 50]  N(0, 20²)  N(0, 20²)  0  N(0, 20²)
(b)  400  -100  300  100  0.6  180  -50  0.6  [4, 6]      [0.8, 1.2]  [40, 50]  [40, 50]  N(0, 20²)  N(0, 20²)  0  N(0, 20²)
(c)  400  -100  300  100  0.6  180  -50  0.6  [4, 6]      [0.8, 1.2]  [40, 50]  [40, 50]  N(0, 10²)  N(0, 10²)  0  N(0, 10²)
Consider the range of r for two cases: one in which r is generated from a uniform random number between 4.5 and 5.5, and the other in which r is generated from a uniform random number between 4 and 6. The figures for the exogenous variable Z are uniform random numbers between 0.8 and 1.2. Regarding the standard errors of ε1, ε2, and ε3, three cases are considered. In cases (a) and (b), the disturbances are generated from a normal random number with a mean of zero and a standard error of 20; in case (c), the standard error is 10. The virtual data set is shown in Table 5.2. For cases (a) through (c), the structural parameters of the model are the same; the cases differ only in the range of r and in the standard errors of the disturbance terms. This allows us to check the accuracy of the estimated values.

The data-generating mechanism for the AD-AS model is similar to that for the IS-LM model, but in the AD-AS model we include the expected rate of inflation, πe. The series of πe is obtained by generating random numbers from a uniform distribution, and the natural level of output, Y0, is constant in the AD-AS model. The set of structural parameters and the ranges of the variables are shown in Table 5.3. In the AD-AS model, there are two alternatives for the parameter δ1: its value is either 0.01 or 0.001.

In the Mundell–Fleming model of a small open economy, new variables are introduced: the foreign exchange rate, the amount of world trade, and net exports. The data-generating mechanism is similar to those for the previous models. The set of coefficients and variables is indicated in Table 5.4. For the Mundell–Fleming model of a large open economy, the data set is shown in Table 5.5.
Table 5.3  AD-AS analysis: virtual data

     α0   α1    α2   β0   β1   γ0   γ1   γ2   δ1     Y0   r       Z           T         G         ε1         ε2         ρ  ε3         ε4        πe
(a)  400  -100  300  100  0.6  180  -30  0.6  0.01   200  [4, 6]  [0.8, 1.2]  [40, 50]  [40, 50]  N(0, 20²)  N(0, 20²)  0  N(0, 20²)  N(0, 2²)  [-1, 2]
(b)  400  -100  300  100  0.6  180  -30  0.6  0.001  200  [4, 6]  [0.8, 1.2]  [40, 50]  [40, 50]  N(0, 20²)  N(0, 20²)  0  N(0, 20²)  N(0, 2²)  [-1, 2]
Table 5.4  Mundell–Fleming model (small open economy): virtual data

α0   α1    α2   β0   β1   γ0   γ1    γ2   δ0   δ1   δ2
400  -100  300  100  0.6  150  -400  0.2  180  -30  0.6

Z           r*      E         W             ε1         ε2         ρε1ε2  ε3         ε4         G         T
[0.8, 1.2]  [4, 6]  [0.5, 1]  [1000, 1500]  N(0, 20²)  N(0, 20²)  0      N(0, 30²)  N(0, 20²)  [40, 50]  [40, 50]
Table 5.5  Mundell–Fleming model (large open economy): data sets

α0   α1    α2   β0   β1   γ0   γ1    γ2   δ0   δ1   δ2   φ0   φ1
400  -100  300  100  0.6  150  -400  0.2  180  -30  0.6  200  -35

Z           r       E         W             ε1         ε2         ρε1ε2  ε3         ε4         G         T         ε5
[0.8, 1.2]  [4, 6]  [0.5, 1]  [1000, 1500]  N(0, 20²)  N(0, 20²)  0      N(0, 30²)  N(0, 20²)  [40, 50]  [40, 50]  N(0, 20²)
Finally, we explain the data-generating mechanism for the neoclassical growth model. The model includes five equations:

Yt = AKt^θ(hEt)^(1-θ) + ε1
(40)
Uc,t/Uc,t+1 = Ct+1/Ct = β[1 + (1 - τ)(rt+1 - δ)] + ε2
(41)
Ct + Xt ≤ wthtEt + rtKt - τ(rt - δ)Kt + πt
(42)
Kt+1 = (1 - δ)Kt + Xt
(43)
Ct + Xt + Gt = Yt
(44)
For purposes of simplification, A, Nt, and h are assumed to be constant, and we exclude one equation from the original system shown in the previous section. To generate the data set, we first set the parameters of the model to A = 1, δ = 0.06, θ = 0.35, β = 0.98, and τ = 0.4. Next, the series of the exogenous variables r and w are generated by random numbers, and then the series of investment (X), ε1, and ε2 are generated by random numbers. We fix the initial value of K0 at 3,000 and that of C0 at 500. Having generated these series, the series of capital is obtained from equation (43), the series of consumption is calculated from equation (41), the series of employment is calculated from equation (42), and the series of Y is obtained from equation (40) using K and E. This data set is displayed in Table 5.6.
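The recursive structure of the data generation (capital from (43), consumption from the Euler equation (41), employment from the budget constraint (42), and output from (40)) can be sketched as follows. This is an illustrative reconstruction, not the book's program: h = 1 and the small standard deviation of the Euler-equation noise are my own simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100

# Parameters and initial conditions from the text.
A, delta, theta, beta, tau, h = 1.0, 0.06, 0.35, 0.98, 0.4, 1.0
X = rng.uniform(150.0, 250.0, n)       # investment
r = rng.uniform(0.07, 0.13, n)         # rental rate of capital
w = rng.uniform(1.0, 1.4, n)           # wage
profit = rng.uniform(200.0, 300.0, n)  # pi
e1 = rng.normal(0.0, 10.0, n)
e2 = rng.normal(0.0, 0.01, n)          # Euler noise; small sd is an assumption

K = np.empty(n)
C = np.empty(n)
K[0], C[0] = 3000.0, 500.0
for t in range(n - 1):
    K[t + 1] = (1.0 - delta) * K[t] + X[t]                  # equation (43)
    growth = beta * (1.0 + (1.0 - tau) * (r[t + 1] - delta))
    C[t + 1] = C[t] * (growth + e2[t + 1])                  # equation (41)

# Employment from the budget constraint (42), taken with equality.
E = (C + X - r * K + tau * (r - delta) * K - profit) / (w * h)
Y = A * K**theta * (h * E)**(1.0 - theta) + e1              # equation (40)
```

Note that equation (42) only pins down E when the implied employment series stays positive; with this calibration that is what typically happens.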
5.4 Examples

5.4.1 The Haavelmo bias

Now consider a two-equation system for a macroeconomic model including the consumption function and the national income identity, specified as:

C = α0 + α1Y + ε
Y = C + I
(45)

where C is consumption, Y is national income, and I is investment. Here we apply three methods to estimate the structural parameters. The first is direct estimation of the consumption function by the ordinary least-squares (OLS) method:

C = α0 + α1Y + ε
(46)
Table 5.6  Neoclassical growth model: data set

     β     θ     δ     τ    A    K0    C0   X           r             w           π           ε1         ε2
(1)  0.98  0.35  0.06  0.4  1.0  3000  500  [150, 250]  [0.07, 0.13]  [1.0, 1.4]  [200, 300]  N(0, 10²)  N(0, 10²)
(2)  0.98  0.35  0.06  0.4  1.0  3000  500  [150, 250]  [0.07, 0.13]  [1.0, 1.4]  [200, 300]  N(0, 10²)  N(0, 50²)

The second method of estimating the parameters is the indirect least-squares (ILS) method. That is, the endogenous variables C and Y are regressed on the exogenous variable, I, as:

C = (α0 + α1I + ε)/(1 - α1) = α0/(1 - α1) + α1I/(1 - α1) + ε/(1 - α1) = γ0 + γ1I + u
Y = (α0 + I + ε)/(1 - α1) = α0/(1 - α1) + I/(1 - α1) + ε/(1 - α1) = δ0 + δ1I + v
(47)
where u = ε/(1 - α1) and v = ε/(1 - α1). Also, α0 and α1 are recalculated from the regression coefficients γ0, γ1, δ0, and δ1.

The third method of estimating the parameters is the two-stage least-squares (2SLS) method:

C = α0 + α1Y + ε
Y = C + I
(48)
where α0 and α1 are parameters to be estimated. The present model is just identified, and therefore the ILS and 2SLS estimates have the same values.

Table 5.7 shows the estimation results for two cases with different standard errors of the random disturbance, namely 20 and 40. In Table 5.7, we can see that the estimates produced by direct OLS estimation of the consumption function, which suffer from the Haavelmo bias, differ from those produced by simultaneous-equation estimation. Better estimates of the true values were obtained using simultaneous-equation estimation because those estimates are free of the bias.

When the consumption function C = α0 + α1Y is estimated directly by the OLS method, the intercept is 66.97 and the tangency is 0.684 in case (a). As shown in Figure 5.1, the estimate of the tangency, 0.684, is an overestimate of the true value of 0.6. The degree of overestimation is larger in case (b), in which the standard error is larger than in case (a), with the estimate of the intercept being 28.45 and that of the tangency 0.785.

The tendency to overestimate the marginal propensity to consume (α1) vanishes when using the ILS method. The estimates a0 (the intercept) and a1 (the tangency), whose structural parameters are α0 and α1, respectively, are 89.70 in case (a) and 103.45 in case (b) for a0, and 0.625 in case (a) and 0.591 in case (b) for a1. This is because with the ILS method, investment (I) and the disturbance (ε) are mutually independent, and therefore there is no bias in the estimates. As u = ε/(1 - α1) and 1/(1 - α1) = 2.5, the standard error of estimation of u by the ILS method is 2.5 times the standard error of ε in both cases. With the 2SLS method, by using the estimated value of Y (a linear function of the instrument I) instead of the observed value of Y, independence between the regressor and ε is maintained, and the structural parameters are estimated without bias.

Table 5.7  Haavelmo bias: estimation

                                      (a)             (b)
Direct estimation of C = α0 + α1Y
  a0                                  38.84 (5.7)     -9.75 (1.2)
  a1                                  0.7565 (44.6)   0.8844 (47.1)
  R²                                  0.953           0.957
  Se                                  15.0            22.6
  D-W                                 1.85            2.08
ILS (C = γ0 + γ1I)
  c0                                  232.8 (19.5)    255.5 (10.7)
  c1                                  1.786 (9.21)    1.414 (3.6)
  R²                                  0.464           0.117
  Se                                  50.9            103.4
  D-W                                 1.65            1.84
  Estimated structural parameters:
  a0                                  83.54           105.8
  a1                                  0.6411          0.5858
ILS (Y = δ0 + δ1I)
  d0                                  232.8 (19.5)    255.5 (10.7)
  d1                                  2.786           2.414 (6.15)
  R²                                  0.678           0.279
  Se                                  50.9            103.4
  D-W                                 1.65            1.84
  Estimated structural parameters:
  a0                                  83.54           105.8
  a1                                  0.6411          0.5858
2SLS
  a0                                  83.54 (8.49)    105.8 (4.0)
  a1                                  0.6411 (25.6)   0.5858 (8.71)

Table 5.8 shows the results of hypothesis testing. The P-value for the null hypothesis H0: α0 = 100 and α1 = 0.6 is 0.276 in (a) and 0.973 in (b), indicating that the null hypothesis is not rejected in either case at the significance level of 5 percent. This shows that when the simultaneous-equation system is estimated by 2SLS, the estimates are near the true values.
Table 5.8  Haavelmo bias: hypothesis testing (2SLS)

                             (a)      (b)
H0: α0 = 100                 [0.094]  [0.825]
H0: α1 = 0.6                 [0.099]  [0.833]
H0: α0 = 100 and α1 = 0.6    [0.246]  [0.974]
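The Haavelmo bias and its removal by ILS or 2SLS are easy to reproduce by simulation. The sketch below does not use the book's data set; the seed, sample size, and true parameters α0 = 100 and α1 = 0.6 are illustrative choices, with investment drawn uniformly and ε ~ N(0, 20²).

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000                        # large n so the bias stands out

a0_true, a1_true = 100.0, 0.6     # true consumption function (illustrative)
I = rng.uniform(150.0, 250.0, n)  # exogenous investment
eps = rng.normal(0.0, 20.0, n)

# Structure: C = a0 + a1*Y + eps and Y = C + I, so the reduced form is:
Y = (a0_true + I + eps) / (1.0 - a1_true)
C = Y - I

def ols(x, y):
    """Return (slope, intercept) of an OLS regression of y on x."""
    b = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    return b, y.mean() - b * x.mean()

# 1) Direct OLS of C on Y: biased upward because Y is correlated with eps.
a1_ols, _ = ols(Y, C)

# 2) ILS: regress C on the exogenous I and invert gamma1 = a1/(1 - a1).
g1, _ = ols(I, C)
a1_ils = g1 / (1.0 + g1)

# 3) 2SLS: replace Y by its projection on the instrument I.
bY, cY = ols(I, Y)
a1_2sls, _ = ols(cY + bY * I, C)
```

In the just-identified case the ILS and 2SLS slopes coincide exactly, while the direct OLS slope converges not to 0.6 but to a larger value, mirroring the pattern in Table 5.7.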
We have estimated the parameters of the consumption function by the OLS and ILS methods. Even with a common data set of observations, we see that different estimation methods generate different results, which in turn have different policy implications.

Let us explore this point further by considering a set of observations on C (consumption), I (investment), and Y (income). Imagine that there are two researchers, A and B, who have the same data set C, I, and Y. A ignores the national income identity Y = C + I and estimates only the consumption function C = α0 + α1Y + ε, using the OLS method, while B estimates the same equation by the ILS or 2SLS method. Although their data sets are the same, the models that A and B consider are different. A considers one equation as the structure in his model:

C = α0 + α1Y + ε
(49)

B, on the other hand, considers a two-equation system as the structure:

C = α0 + α1Y + ε
Y = C + I
(50)
Therefore, if A calculates the investment multiplier dY/dI = 1/(1 - α1) by using his estimate a1, this is logically inconsistent with his model, because A did not describe explicitly the relationship between Y and I in the model. Thus, even if two researchers use the same data, their methods influence their results. It is therefore important to ascertain how models can affect the empirical results.

5.4.2 IS-LM analysis

In the previous section we considered the fundamental two-equation system. This section extends the previous model to the IS-LM model. The fundamental model of IS-LM analysis is:

I = α0 + α1r + α2Z + ε1
C = β0 + β1(Y - T) + ε2
Y = C + I + G
Y = C + S + T
M/P = γ0 + γ1r + γ2Y + ε3
(51)
Here there are five endogenous variables, namely I (investment), r (interest rate), C (consumption), Y (national income), and S (savings). The four exogenous variables are Z (expectations about future business conditions), M/P (the real money supply), G (government expenditure), and T (tax). The sign conditions are α1 < 0, α2 > 0, 0 < β1 < 1, γ1 < 0, and γ2 > 0. For this estimation, the three-stage least-squares (3SLS) and full-information maximum-likelihood (FIML) methods are used. The results are shown in Table 5.9.

Looking at cases (a)–(c), we see that the 3SLS and FIML methods obtained similar results in every case; there is no difference between the two estimation methods in the signs or the magnitudes of the parameters. In comparing (a) and (b), or (b) and (c), however, we can observe distinct differences in the estimation results. The range of interest rates differs between (a) and (b): in case (a) the range is between 4.5 and 5.5, while in case (b) it is between 4 and 6. The standard errors in cases (b) and (c) also differ: in case (b) the standard error is 20, while in case (c) it is 10.

In comparing (a) and (b), we can see that the difference in the range of the interest rates affects both the investment function and the money demand function. The coefficient of the interest rate in the investment function is -153 in (a) and -106 in (b), while the coefficient in the money demand function is -39 in (a) and -56 in (b). Comparing (b) and (c), we see little difference in the magnitudes of the estimated values, but the t-values are higher in (c) than in (b).

Table 5.10 shows the results of hypothesis testing. When we compare (a) and (b), we notice a difference in the estimation results due to the change in the range of interest rates.
The effect of this difference in the range of interest rates diffuses throughout the model, since the interest rate is included in both the investment and money demand functions. In case (a), for both the 3SLS and the FIML methods, the P-value of a1, the coefficient of the interest rate in the investment function, is 0.000. This indicates that the null hypothesis that the coefficient of r is equal to -100 is rejected. On the other hand, when we evaluate the estimation results in case (a), we see that the P-value of c1, the coefficient of the interest rate in the money demand function, is 0.373 using 3SLS and 0.575 using FIML. This indicates that there is no significant gap between the results and the true value of -50. In case (b), there is no gap between the true values and the estimates, except for one parameter, α1, in the investment function, where the P-value for H0: α1 = -100 using the 3SLS method is 0.046. This indicates that the null hypothesis of α1 being equal to -100 is rejected at the significance level of 5 percent. In case (c), where the standard errors in the model are smaller, it is easier to recover the true values.

Now let us conduct joint-hypothesis testing on each equation. For the investment function, the hypotheses are H0: α0 = 400, α1 = -100, and α2 = 300; for the consumption function, they are H0: β0 = 100 and β1 = 0.6; and for the money demand function, they are H0: γ0 = 180, γ1 = -50, and γ2 = 0.6. In case (a) the P-values of the joint nulls for the investment function and the money demand function are 0.000, indicating that these joint null hypotheses are rejected. But in cases (b) and (c), none of the null hypotheses is rejected. This finding indicates that it is easier for us to recover the true values from the observations when they are distributed widely and when the shock of the equation (as indicated by the standard error) is small.

Table 5.9  IS-LM model: estimation

           3SLS              FIML
Case (a)
  a0       661.8 (10.9)      671.8 (7.9)
  a1       -153.75 (14.1)    -155.70 (9.7)
  a2       307.17 (12.9)     306.91 (13.6)
  b0       95.45 (20.1)      95.53 (16.0)
  b1       0.6049 (96.0)     0.6048 (76.3)
  c0       105.87 (1.3)      119.66 (1.2)
  c1       -39.04 (3.1)      -41.35 (2.6)
  c2       1.026 (45.3)      1.0233 (36.8)
Case (b)
  a0       434.3 (21.1)      435.7 (18.1)
  a1       -106.7 (24.2)     -107.0 (24.2)
  a2       299.86 (19.3)     300.15 (16.5)
  b0       93.40 (26.0)      93.66 (24.7)
  b1       0.6087 (129.2)    0.6084 (128.0)
  c0       216.74 (4.7)      211.52 (3.6)
  c1       -56.38 (9.0)      -55.71 (22.6)
Case (c)
  a0       395.53 (28.8)     395.51 (30.2)
  a1       -99.33 (53.9)     -99.323 (48.3)
  a2       299.08 (37.3)     299.09 (35.4)
  b0       99.03 (55.0)      99.04 (47.6)
  b1       0.6009 (256.3)    0.6009 (234.4)
  c0       200.31 (7.6)      0.6009 (6.6)
  c1       -53.49 (14.5)     -53.42 (12.6)
  c2       0.5979 (54.4)     0.5982 (48.1)

Next, rather than considering the linear equation system of (51), we consider a nonlinear form of the money demand function, because we want to capture both the liquidity trap at low interest rates and the full-employment level of national income. Thus, although the money demand function in equation (51) is specified as:

M/P = γ0 + γ1r + γ2Y + ε3
(52)
the present equation is modified to:

(r - r0)(Y - (Y0 + γ1(M/P))) = -k
(53)
Table 5.10  IS-LM model: hypothesis testing

                 Case (a)           Case (b)           Case (c)
                 3SLS     FIML      3SLS     FIML      3SLS     FIML
H0: α0 = 400     [0.000]  [0.001]   [0.095]  [0.137]   [0.744]  [0.732]
H0: α1 = -100    [0.000]  [0.000]   [0.046]  [0.110]   [0.716]  [0.743]
H0: α2 = 300     [0.762]  [0.759]   [0.993]  [0.993]   [0.909]  [0.914]
χ²(3)I           [0.000]  [0.003]   [0.237]  [0.409]   [0.184]  [0.264]
H0: β0 = 100     [0.338]  [0.453]   [0.066]  [0.095]   [0.591]  [0.644]
H0: β1 = 0.6     [0.433]  [0.542]   [0.063]  [0.077]   [0.697]  [0.725]
χ²(2)            [0.424]  [0.510]   [0.176]  [0.203]   [0.730]  [0.829]
H0: γ0 = 180     [0.327]  [0.522]   [0.421]  [0.589]   [0.440]  [0.509]
H0: γ1 = -50     [0.373]  [0.575]   [0.303]  [0.476]   [0.341]  [0.418]
H0: γ2 = 0.6     [0.000]  [0.000]   [0.724]  [0.846]   [0.852]  [0.885]
χ²(3)L           [0.000]  [0.000]   [0.563]  [0.688]   [0.269]  [0.474]
χ²(8)            [0.000]  [0.000]   [0.209]  [0.259]   [0.297]  [0.291]

Note: The null hypothesis of χ²(3)I is H0: α0 = 400, α1 = -100, and α2 = 300. The null hypothesis of χ²(2) is H0: β0 = 100 and β1 = 0.6. The null hypothesis of χ²(3)L is H0: γ0 = 180, γ1 = -50, and γ2 = 0.6. The null hypothesis of χ²(8) is H0: α0 = 400, α1 = -100, α2 = 300, β0 = 100, β1 = 0.6, γ0 = 180, γ1 = -50, and γ2 = 0.6.
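The joint tests reported in these tables are Wald-type chi-square tests, which can be sketched generically as below. The numbers fed into the function are illustrative: the estimates come from Table 5.9, case (b), the standard errors are implied by the t-values, and the diagonal covariance matrix (which ignores parameter covariances) is a simplifying assumption, so the resulting P-value only roughly matches the table.

```python
import numpy as np
from scipy import stats

def wald_joint_test(b, V, R, q):
    """Wald test of H0: R @ beta = q, given estimates b with covariance V.

    Returns the chi-square statistic and its P-value with rank(R)
    degrees of freedom.
    """
    d = R @ b - q
    W = float(d @ np.linalg.solve(R @ V @ R.T, d))
    return W, float(stats.chi2.sf(W, df=R.shape[0]))

# H0: alpha0 = 400, alpha1 = -100, alpha2 = 300 jointly, tested against
# illustrative 3SLS estimates with an assumed diagonal covariance.
b = np.array([434.3, -106.7, 299.86])
V = np.diag([20.6, 4.4, 15.5]) ** 2
R = np.eye(3)
q = np.array([400.0, -100.0, 300.0])
W, p = wald_joint_test(b, V, R, q)
```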
Looking at Figure 5.3, we can see that the LM curve has two asymptotes, with asymptotic values r0 on the r-axis and Y0 + γ1(M/P) on the Y-axis; the LM curve is a rectangular hyperbola. Here, r0 lies in the lower range of interest rates, indicating the liquidity trap, and Y0 + γ1(M/P) is national income at the full-employment level. Equation (53) is solved for M/P, and the random disturbance ε3 is appended, as:

M/P = k/(γ1(r - r0)) + (Y - Y0)/γ1 + ε3
(54)
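The two asymptotes of the nonlinear LM curve can be checked numerically by solving (53) for Y at a given r. The calibration r0 = 1, Y0 = 400, γ1 = 2 follows the text; the values of k and M/P below are illustrative choices of my own.

```python
# Nonlinear LM curve (53): (r - r0) * (Y - (Y0 + g1 * M_P)) = -k.
r0, Y0, g1 = 1.0, 400.0, 2.0
k, M_P = 50.0, 300.0               # illustrative values

def Y_on_LM(r):
    """National income on the LM curve at interest rate r (r > r0)."""
    return Y0 + g1 * M_P - k / (r - r0)

full_employment_Y = Y0 + g1 * M_P  # horizontal asymptote
# As r falls toward r0, Y falls without bound (the liquidity trap);
# as r grows large, Y approaches the full-employment level from below.
```

For instance, Y_on_LM(2.0) lies well below the full-employment asymptote, while at very high interest rates the curve is within a fraction of a unit of it.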
The ranges of the exogenous variables and the standard errors of the true equations are the same as in case (b); the consumption and investment functions are also the same. The values r0 = 1, Y0 = 400, and γ1 = 2 are fixed a priori in the money demand function. Table 5.11 presents the estimation results, which look plausible. Regarding the hypothesis testing, the P-values are low in the consumption function, and the null hypothesis for testing the model as a whole is rejected at the 5 percent level.

5.4.3 AD-AS analysis

The fundamental model of AD-AS analysis is:

I = α0 + α1r + α2Z + ε1
C = β0 + β1(Y - T) + ε2
Y = C + I + G
Y = C + S + T
M/P = γ0 + γ1r + γ2Y + ε3
i = r + πe
P = P0 + δ1(Y - Y*) + ε4
(55)
Figure 5.3  The liquidity trap and full-employment level (LM curve with asymptotes r = r0 and Y = Y0 + γ1(M/P), together with the IS curve)

Table 5.11  IS-LM model: liquidity trap and full employment

Estimates          3SLS              FIML
  a0               417.93 (16.9)     420.99 (16.8)
  a1               -107.9 (28.1)     -108.67 (27.9)
  a2               321.38 (17.9)     321.70 (14.6)
  b0               89.77 (24.7)      89.76 (24.6)
  b1               0.6147 (128.1)    0.6147 (127.8)
  c1               1.9875 (121.6)    1.9866 (103.2)
Hypothesis testing
  H0: α0 = 400     [0.468]           [0.402]
  H0: α1 = -100    [0.037]           [0.026]
  H0: α2 = 300     [0.233]           [0.322]
  χ²(3)            [0.135]           [0.142]
  H0: β0 = 100     [0.005]           [0.005]
  H0: β1 = 0.6     [0.002]           [0.002]
  χ²(2)            [0.006]           [0.007]
  H0: γ1 = 2       [0.445]           [0.489]
  χ²(6)            [0.010]           [0.015]
Note: The null hypothesis of χ²(3) is H0: α0 = 400, α1 = -100, and α2 = 300. The null hypothesis of χ²(2) is H0: β0 = 100 and β1 = 0.6. The null hypothesis of χ²(6) is H0: α0 = 400, α1 = -100, α2 = 300, β0 = 100, β1 = 0.6, and γ1 = 2.
Table 5.12  AD-AS analysis: estimation and hypothesis testing

Case (a)
  3SLS estimates          Hypothesis testing (P-values)
  a0  296.0 (2.3)         H0: α0 = 400     [0.403]
  a1  -84.2 (3.7)         H0: α1 = -100    [0.480]
  a2  328.3 (16.8)        H0: α2 = 300     [0.227]
  b0  110.2 (16.8)        H0: β0 = 100     [0.117]
  b1  0.5871 (70.7)       H0: β1 = 0.6     [0.112]
  c0  179.95 (2732.2)     H0: γ0 = 180     [0.521]
  c1  -29.99 (5066.9)     H0: γ1 = -30     [0.780]
  c2  0.60001 (10819.6)   H0: γ2 = 0.6     [0.780]
  d1  0.00996 (32.4)      H0: δ1 = 0.01    [0.902]
                          χ²(3)I           [0.534]
                          χ²(2)            [0.290]
                          χ²(3)L           [0.793]
                          χ²(9)            [0.710]

Case (b)
  3SLS estimates          Hypothesis testing (P-values)
  a0  461.01 (7.8)        H0: α0 = 400     [0.299]
  a1  -119.05 (9.5)       H0: α1 = -100    [0.125]
  a2  334.44 (16.4)       H0: α2 = 300     [0.090]
  b0  99.77 (16.7)        H0: β0 = 100     [0.970]
  b1  0.6008 (78.1)       H0: β1 = 0.6     [0.908]
  c0  180.00 (3311.2)     H0: γ0 = 180     [0.913]
  c1  -29.99 (4970.6)     H0: γ1 = -30     [0.729]
  c2  0.5997 (11789.1)    H0: γ2 = 0.6     [0.590]
  d1  0.0109 (33.9)       H0: δ1 = 0.01    [0.521]
                          χ²(3)I           [0.215]
                          χ²(2)            [0.825]
                          χ²(3)L           [0.671]
                          χ²(9)            [0.689]

Note: The null hypothesis of χ²(3)I is H0: α0 = 400, α1 = -100, and α2 = 300. The null hypothesis of χ²(2) is H0: β0 = 100 and β1 = 0.6. The null hypothesis of χ²(3)L is H0: γ0 = 180, γ1 = -30, and γ2 = 0.6. The null hypothesis of χ²(9) is H0: α0 = 400, α1 = -100, α2 = 300, β0 = 100, β1 = 0.6, γ0 = 180, γ1 = -30, γ2 = 0.6, and δ1 = 0.01.
where the endogenous variables are I, S, Y, C, r, i, and P, and the exogenous variables are Z, M, πe, T, and G. The results of the estimation and hypothesis testing are presented in Table 5.12. In Table 5.12, the hypothesis tests of several null hypotheses imposing parameter restrictions are judged by their P-values. Looking at the P-values for the model, we
Table 5.13  AD-AS analysis: estimation and hypothesis testing (balanced budget)

Case (a)
  3SLS estimates          Hypothesis testing (P-values)
  a0  420.2 (12.3)        H0: α0 = 400     [0.552]
  a1  -107.7 (13.8)       H0: α1 = -100    [0.319]
  a2  318.1 (17.0)        H0: α2 = 300     [0.332]
  b0  92.5 (11.8)         H0: β0 = 100     [0.337]
  b1  0.6104 (60.2)       H0: β1 = 0.6     [0.302]
  c0  179.8 (2644.2)      H0: γ0 = 180     [0.129]
  c1  -29.99 (4602.9)     H0: γ1 = -30     [0.267]
  c2  0.60007 (8262.7)    H0: γ2 = 0.6     [0.308]
  d1  0.00970 (26.9)      H0: δ1 = 0.01    [0.416]
                          χ²(3)I           [0.725]
                          χ²(2)            [0.491]
                          χ²(3)L           [0.417]
                          χ²(9)            [0.715]

Case (b)
  3SLS estimates          Hypothesis testing (P-values)
  a0  429.0 (7.2)         H0: α0 = 400     [0.624]
  a1  -101.4 (9.5)        H0: α1 = -100    [0.892]
  a2  281.2 (13.9)        H0: α2 = 300     [0.352]
  b0  103.1 (13.9)        H0: β0 = 100     [0.673]
  b1  0.5978 (65.9)       H0: β1 = 0.6     [0.816]
  c0  179.8 (3569.9)      H0: γ0 = 180     [0.015]
  c1  -29.98 (6063.0)     H0: γ1 = -30     [0.001]
  c2  0.60004 (15558.8)   H0: γ2 = 0.6     [0.238]
  d1  0.00987 (29.9)      H0: δ1 = 0.01    [0.698]
                          χ²(3)I           [0.393]
                          χ²(2)            [0.248]
                          χ²(3)L           [0.014]
                          χ²(9)            [0.053]

Note: The null hypothesis of χ²(3)I is H0: α0 = 400, α1 = -100, and α2 = 300. The null hypothesis of χ²(2) is H0: β0 = 100 and β1 = 0.6. The null hypothesis of χ²(3)L is H0: γ0 = 180, γ1 = -30, and γ2 = 0.6. The null hypothesis of χ²(9) is H0: α0 = 400, α1 = -100, α2 = 300, β0 = 100, β1 = 0.6, γ0 = 180, γ1 = -30, γ2 = 0.6, and δ1 = 0.01.
see that neither the individual estimates, nor the equations, nor the model as a whole differ significantly from the true relationships. The results of the AD-AS analysis are robust.

The estimates and the results of the hypothesis testing under a balanced fiscal budget, with government expenditure and tax revenue balanced (G = T), are indicated in Table 5.13. Regarding the money demand function in (b), the P-value is 0.014, indicating that the null hypothesis H0: γ0 = 180, γ1 = -30, and γ2 = 0.6 is rejected at the 5 percent level, while the model as a whole is not rejected. Readers can thus see that even if the system as a whole does not reject the joint null hypothesis, a part of the system may.

5.4.4 Mundell–Fleming model for a small open economy

The Mundell–Fleming model, which is a typical open-economy macroeconomic model, is specified as:

I = α0 + α1r + α2Z + ε1
C = β0 + β1(Y - T) + ε2
NX = γ0 + γ1E + γ2W + ε3
Y = C + I + G + NX
Y = C + S + T
M/P = δ0 + δ1r + δ2Y + ε4
r = r*
(56)
There are seven endogenous variables (I, r, C, NX, E, S, and Y) and six exogenous variables (Z, W, r*, M/P, T, and G). NX is net exports, E is the exchange rate, and W is the amount of world trade. This model is based on a flexible exchange-rate regime. In the case of a fixed exchange-rate regime, E becomes an exogenous variable and M/P becomes an endogenous variable. The estimation results are shown in Table 5.14.

Comparing the flexible and fixed exchange-rate regimes, we notice that the estimates for the net export function and the money demand function are different. However, in the hypothesis testing, the joint P-value for the net export function in the flexible exchange-rate regime is 0.083, indicating that the null hypothesis is not rejected at the 5 percent significance level. The null hypothesis that the structural parameters equal the true values is also not rejected.

5.4.5 Mundell–Fleming model for a large open economy

This model is:

I = α0 + α1r + α2Z + ε1
C = β0 + β1(Y - T) + ε2
NX = γ0 + γ1E + γ2W + ε3
Y = C + I + G + NX
Y = C + S + T
M/P = δ0 + δ1r + δ2Y + ε4
NFI = φ0 + φ1r + ε5
NX = NFI
(57)
Table 5.14  Mundell–Fleming model (small open economy): estimation

Case: floating exchange-rate regime
  3SLS estimates           Hypothesis testing (P-values)
  a0  406.2 (14.7)         H0: α0 = 400     [0.820]
  a1  -102.91 (27.1)       H0: α1 = -100    [0.563]
  a2  304.44 (17.6)        H0: α2 = 300     [0.796]
  b0  102.43 (29.2)        H0: β0 = 100     [0.486]
  b1  0.5974 (60.2)        H0: β1 = 0.6     [0.447]
  c0  176.18 (5.3)         H0: γ0 = 150     [0.430]
  c1  -449.2 (21.1)        H0: γ1 = -400    [0.020]
  c2  0.2055 (9.0)         H0: γ2 = 0.2     [0.805]
  d0  171.06 (5.6)         H0: δ0 = 180     [0.768]
  d1  -31.03 (6.7)         H0: δ1 = -30     [0.822]
  d2  0.6114 (59.8)        H0: δ2 = 0.6     [0.262]
                           χ²(3)I           [0.924]
                           χ²(2)            [0.736]
                           χ²(3)X           [0.083]
                           χ²(3)L           [0.375]
                           χ²(11)           [0.485]

Case: fixed exchange-rate regime
  3SLS estimates           Hypothesis testing (P-values)
  a0  402.7 (14.6)         H0: α0 = 400     [0.919]
  a1  -101.97 (27.0)       H0: α1 = -100    [0.600]
  a2  306.92 (17.8)        H0: α2 = 300     [0.687]
  b0  105.87 (28.4)        H0: β0 = 100     [0.114]
  b1  0.5941 (167.4)       H0: β1 = 0.6     [0.097]
  c0  120.38 (4.1)         H0: γ0 = 150     [0.303]
  c1  -376.2 (23.0)        H0: γ1 = -400    [0.147]
  c2  0.2070 (7.1)         H0: γ2 = 0.2     [0.728]
  d0  225.71 (7.1)         H0: δ0 = 180     [0.148]
  d1  -37.02 (7.9)         H0: δ1 = -30     [0.134]
  d2  0.5878 (53.2)        H0: δ2 = 0.6     [0.269]
                           χ²(3)I           [0.910]
                           χ²(2)            [0.247]
                           χ²(3)X           [0.307]
                           χ²(3)L           [0.380]
                           χ²(11)           [0.611]

Note: The null hypothesis of χ²(3)I is H0: α0 = 400, α1 = -100, and α2 = 300. The null hypothesis of χ²(2) is H0: β0 = 100 and β1 = 0.6. The null hypothesis of χ²(3)X is H0: γ0 = 150, γ1 = -400, and γ2 = 0.2. The null hypothesis of χ²(3)L is H0: δ0 = 180, δ1 = -30, and δ2 = 0.6. The null hypothesis of χ²(11) is H0: α0 = 400, α1 = -100, α2 = 300, β0 = 100, β1 = 0.6, γ0 = 150, γ1 = -400, γ2 = 0.2, δ0 = 180, δ1 = -30, and δ2 = 0.6.
Table 5.15  Mundell–Fleming model (large open economy): estimation and hypothesis testing

Case: floating exchange-rate regime
  3SLS estimates           Hypothesis testing (P-values)
  a0  441.27 (17.0)        H0: α0 = 400     [0.111]
  a1  -110.03 (27.1)       H0: α1 = -100    [0.008]
  a2  308.59 (17.6)        H0: α2 = 300     [0.632]
  b0  94.86 (30.3)         H0: β0 = 100     [0.100]
  b1  0.6067 (60.2)        H0: β1 = 0.6     [0.076]
  c0  191.42 (5.3)         H0: γ0 = 150     [0.351]
  c1  -482.70 (21.1)       H0: γ1 = -400    [0.263]
  c2  0.2276 (9.0)         H0: γ2 = 0.2     [0.472]
  d0  186.71 (5.6)         H0: δ0 = 180     [0.918]
  d1  -30.61 (6.7)         H0: δ1 = -30     [0.948]
  d2  0.5941 (59.8)        H0: δ2 = 0.6     [0.795]
  f0  238.51 (12.0)        H0: φ0 = 200     [0.052]
  f1  -42.49 (10.9)        H0: φ1 = -35     [0.054]
                           χ²(13)           [0.160]

Note: The null hypothesis of χ²(13) is H0: α0 = 400, α1 = -100, α2 = 300, β0 = 100, β1 = 0.6, γ0 = 150, γ1 = -400, γ2 = 0.2, δ0 = 180, δ1 = -30, δ2 = 0.6, φ0 = 200, and φ1 = -35.
The estimation results are presented in Table 5.15. The sign conditions are reasonable, but the P-value is low for the coefficient of the interest rate, a1, in the investment function. The P-value for the model as a whole is 0.160, indicating that the joint null hypothesis is not rejected at the 5 percent level of significance.

5.4.6 Neoclassical growth model

The model is:

Yt = AKt^θ(hEt)^(1-θ) + ε1
Uc,t/Uc,t+1 = Ct+1/Ct = β[1 + (1 - τ)(rt+1 - δ)] + ε2
Ct + Xt ≤ wthtEt + rtKt - τ(rt - δ)Kt + πt
Kt+1 = (1 - δ)Kt + Xt
Ct + Xt + Gt = Yt
(58)
The estimation results for this model are presented in Table 5.16. The hypothesis tests show that the estimates are close to the true values when the variance of the disturbance terms is small, as in case (1). When the variance of the disturbance term is large, as in case (2), the estimated time discount factor exceeds unity, and the depreciation rate is estimated to be larger than the true value.

In order to conduct empirical analysis using macro-econometric models, researchers must be careful about the characteristics of the model they choose,
Table 5.16  Estimation results of a neoclassical growth model

               (1)              (2)
A              0.9987 (93.7)    1.067 (37.8)
β              0.9810 (80.2)    1.051 (49.3)
θ              0.3510 (52.3)    0.3042 (16.7)
δ              0.0588 (3.0)     0.1857 (5.5)
τ              0.4311 (6.4)     0.3042 (16.7)
Hypothesis testing:
H0: β = 0.98   [0.929]          [0.001]
H0: θ = 0.35   [0.877]          [0.012]
H0: δ = 0.06   [0.952]          [0.000]
H0: τ = 0.4    [0.641]          [0.185]
particularly the endogenous and exogenous variables and the estimation method. The differences in results generated by different models and estimation methods can have profound policy implications.
6 Microeconomic analysis using micro-data: qualitative-response models
Econometric methodology has developed rapidly in the field of qualitative-response models. The development of these models reflects the availability of micro-data sets, which provide details on individual households and firms.

Imagine that we are interested in household automobile demand, and that we want to estimate automobile demand by income class using a published household expenditure survey such as the Consumer Expenditure Survey (CES) in the United States. When a household purchases an automobile, the expenditure is reported in the micro-data of the CES, but the published CES includes only the average amount of expenditure on automobiles in each income class; there is no information in the published CES on the characteristics of the individual households in an income class. Thus, if an income class includes 500 households, and one household purchased an automobile for $15,000 while no other household purchased one, then the average automobile expenditure indicated for this income class is $30. Obviously, this creates a misleading impression. In contrast, when we utilize a micro-data set, we can identify the 499 zero-expenditure households and the one positive-expenditure household that purchased an automobile for $15,000. We can also get information on the locations of the households, the ages of family members, family sizes, the amounts of their assets, and other characteristics for all the households. With such information, it is possible to estimate automobile demand, or demand for other products, for individual households. The methodology for analyzing such micro-data employs qualitative-response models.

Here we consider the following qualitative-response models: probit models, logit models, tobit models, Heckman's two-step method, and double-hurdle models. Qualitative-response models are much more complex than usual regression analysis.
When we use these models, data analysis is related directly to stochastic distribution, and we introduce latent variables. The objective of using a probit model is to analyze a discrete choice, such as whether or not a household member works, or whether or not a household purchases an automobile. The dependent variable in the second example is the decision whether or not to purchase an automobile, and the independent variables include household and socio-economic characteristics. The tobit model, on the
other hand, analyzes not only the discrete choice of, say, whether or not a household wants to purchase consumer durables, but also how much the household decides to spend on them. Section 6.1 explains the structure of the probit, logit, and tobit qualitative-response models, as well as variants of the tobit model such as Heckman's two-step model and the double-hurdle model. Section 6.2 explains how to generate a data set by the Monte Carlo method. In this section, the relationship between a latent variable and an observed dependent variable becomes clear. Section 6.3 discusses some examples of estimating qualitative-response models. Section 6.3.1 considers the probit and logit models as examples of discrete-choice models. Section 6.3.2 explains the tobit model, which treats discrete and continuous choice simultaneously, e.g., whether or not a household purchases an automobile and, if it decides to purchase one, how much it pays. Section 6.3.3 explains the truncated tobit model. Section 6.3.4 discusses Heckman's two-step method to avoid sample selection bias. Finally, Section 6.3.5 explains the double-hurdle model as a variant of the tobit model.
6.1 Qualitative-response models

The difference between the usual regression models and qualitative-response models can be illustrated through an analysis of household labor supply. Say we have a stable relationship between the income of the head of the household (the husband) and the housewife's labor supply: when the income of the head of the household increases, the housewife's labor supply decreases. This is called the income effect of the husband's income on the housewife's labor supply. Thus, when the husband's income increases, the probability of the housewife not working also increases. If we put the probability of non-labor supply by the housewife on the y-axis and the income of the head of the household on the x-axis, the relationship traces an upward-sloping line in the x–y plane. Regression analysis can be conducted with the probability of non-labor supply as the dependent variable and the husband's income as the independent variable. This is shown in Figure 6.1. The crosses in the figure indicate the probability of non-labor supply in each income class (I, II, and III). The circles indicate whether or not an individual housewife works: if the housewife is not working (non-labor), the circle is at 1, and if she is working (labor), the circle is at 0. In income class I, there are five households; two out of the five are non-labor, so the probability of non-labor is 0.4. In income class III, four out of five housewives are non-labor, so the aggregate probability of non-labor is 0.8. In the figure there are two regression lines: one is linear and the other is S-shaped. The linear line is called the linear probability line and is derived from the usual linear regression Pi = a + bxi, where Pi and xi are the non-labor probability and the income of the head of the household in the i-th income class, respectively. The fitted non-labor probability is negative in some regions and above unity in others.
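The failure of the linear probability line to stay inside the unit interval is easy to verify numerically. A minimal sketch (the intercept and slope below are illustrative, not estimates discussed in the text):

```python
import math

# Illustrative linear probability line P = a + b*x (hypothetical coefficients).
a, b = -0.2, 0.03

def linear_prob(x):
    # Fitted "probability" from the linear probability model.
    return a + b * x

def s_shaped_prob(x):
    # An S-shaped alternative: the logistic distribution function of the same index.
    return 1.0 / (1.0 + math.exp(-(a + b * x)))

p_lin_low = linear_prob(0.0)    # negative: an impossible probability
p_lin_high = linear_prob(45.0)  # above unity: also impossible
p_s_low = s_shaped_prob(0.0)    # strictly inside (0, 1)
p_s_high = s_shaped_prob(45.0)  # strictly inside (0, 1)
```

The linear line escapes [0, 1] at both ends of the income range, while the S-shaped function is bounded by construction.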
[Figure 6.1 Linear probability and S-shaped probability curves. The vertical axis is the non-labor supply probability p for housewives (0 to 1); the horizontal axis is income x, divided into classes I, II, and III. The figure plots both the linear probability function and the S-shaped probability function.]
The S-shaped regression line, on the other hand, indicates a probability between 0 and 1 that is derived from a qualitative-response model utilizing the normal distribution. Thus, the qualitative-response model imposes the theoretical restriction that the probability lies between 0 and 1. It accommodates this restriction by introducing a probability distribution function with a range between 0 and 1. This function is represented explicitly as:

P = F(x)  (1)
where F is a stochastic distribution function. In the qualitative-response model, the concept of a latent variable is introduced, which is distinct from theoretical variables and observational variables. Generally, the latent variable, yi*, is assumed to be a linear function of the independent variable, xi, as:

yi* = β0 + β1xi + εi  (i = 1, 2, …, n)  (2)
where εi is an i.i.d. (independently and identically distributed) random variable with a normal distribution, mean 0, and variance σ2. The observational value of y relative to y* is:

yi = 1 if yi* > 0
yi = 0 if yi* ≤ 0  (3)
From the equation yi* = β0 + β1xi + εi, we know that the feasible value of y* extends from minus infinity to plus infinity. In order to transform the continuous variable yi* into the discrete-choice variable yi, it is important to delineate the region restricted by the inequality constraint. The variable yi* is related to the parameters β0 and β1, the independent variable xi, and the random variable εi. The case of yi = 1 means:

yi = 1 → yi* > 0 → β0 + β1xi + εi > 0 → εi > -(β0 + β1xi)  (4)
Accordingly, the probability that yi = 1 is:

P(yi = 1) = P(εi > -(β0 + β1xi))  (5)
That is, the probability that yi = 1 corresponds to the probability that εi is greater than -(β0 + β1xi). Now εi is assumed to be i.i.d. (independently and identically distributed) normal with a mean of 0 and a variance of σ2. The meaning of P(εi > -(β0 + β1xi)) is explained graphically in Figure 6.2. When we define zi = β0 + β1xi for simplicity, equation (5) becomes:

P(εi > -zi) = P(εi < zi) = ∫_{-∞}^{zi/σ} f(t)dt = F(zi/σ)  (6)
where f(t) is the standard normal density function and F is the cumulative standard normal distribution function. When we specify that εi is normal and symmetric around 0, the first term of equation (6) is indicated by the hatched area in Figure 6.2(a), and the equivalent probability is indicated in (b). When (b) is transformed into a standard normal distribution, this becomes F(zi/σ), as indicated in (c). Then the probability that yi = 1 is:

P(yi = 1) = F(zi/σ) = F(β0/σ + (β1/σ)xi)  (7)
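Equation (7) needs nothing beyond the standard normal distribution function, which can be built from math.erf. A sketch, with illustrative parameter values borrowed from the virtual-data design of Section 6.2 (β0 = 100, β1 = -4, σ = 30):

```python
import math

def std_normal_cdf(z):
    # Φ(z) expressed through the error function: Φ(z) = (1 + erf(z/√2)) / 2.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

beta0, beta1, sigma = 100.0, -4.0, 30.0  # illustrative values

def prob_y_is_one(x):
    # Equation (7): P(yi = 1) = F(β0/σ + (β1/σ)xi), F the standard normal cdf.
    return std_normal_cdf(beta0 / sigma + (beta1 / sigma) * x)

p_at_10 = prob_y_is_one(10.0)  # high: the index β0/σ + (β1/σ)x is still positive
p_at_35 = prob_y_is_one(35.0)  # low: β1 < 0 has pushed the index negative
```

Because β1 is negative, the probability of yi = 1 declines as x grows, which is exactly the pattern the virtual data of Section 6.2 will display.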
Accordingly, in probit analysis we can estimate the parameters β0/σ and β1/σ, but we cannot estimate β0, β1, and σ separately. The difference between the probit and logit models lies in the form of the distribution function. Probit assumes a normal distribution, while logit assumes the following distribution function:

F(zi/σ) = exp(zi/σ)/(1 + exp(zi/σ))  (8)
Solving equation (8) for zi/σ, we obtain:

zi/σ = log(F(zi/σ)/(1 - F(zi/σ)))  (9)
[Figure 6.2 Graphical illustration of the probit model. Panel (a) shows the density of εi with the area to the right of -zi hatched; panel (b) shows the equivalent area to the left of zi; panel (c) shows the standard normal distribution function F evaluated at zi/σ.]
This is because 1 - F(zi/σ) = 1 - exp(zi/σ)/(1 + exp(zi/σ)) = 1/(1 + exp(zi/σ)), and therefore F(zi/σ)/(1 - F(zi/σ)) = exp(zi/σ). When we take the log of both sides, we get equation (9). Denoting F(zi/σ) as Pi, we get:

log(Pi/(1 - Pi)) = (β0/σ) + (β1/σ)xi  (10)
This is the estimating equation of the logit model, with the parameters for estimation β0/σ and β1/σ, but not β0, β1, and σ separately, which is the same as for the probit model. The tobit model also introduces the latent variable y*. The latent variable y* is specified in linear form:

yi* = β0 + β1xi + εi  (i = 1, 2, …, n)  (11)
where εi is IIN(0, σ2). The correspondence between the observed variable and the latent variable is:

yi = yi* if yi* > 0
yi = 0 if yi* ≤ 0  (12)
The relationship between the observed variable and the latent variable in the tobit model is as follows:

yi = 0 → yi* ≤ 0 → β0 + β1xi + εi ≤ 0 → εi ≤ -(β0 + β1xi)
yi = yi* → yi* = yi → β0 + β1xi + εi = yi → εi = yi - (β0 + β1xi)  (13)
We now consider the probability that yi = 0 and the probability that yi = yi*. The probability that yi = 0 is:

P(yi = 0) = P(εi ≤ -zi)  (14)
where εi is assumed to be IIN(0, σ2). Then equation (14) becomes:

P(εi ≤ -zi) = ∫_{-∞}^{-zi/σ} f(t)dt = F(-zi/σ)  (15)
We will also consider the probability that yi = yi*:

P(yi = yi*) = P(εi = yi - zi) = f(εi)  (16)
where εi is IIN(0, σ2). The probability density function of P(yi = yi*) is denoted as:

f(yi) = f(t) |J|  (J = 1/σ)  (17)
where J is the Jacobian. Therefore, the probability density function at yi = yi* is:

f((yi - zi)/σ)(1/σ)  (18)
Next let’s look at Heckman’s two-step method and the double-hurdle model as variants of tobit models. Heckman’s two-step method is used for studying labor
supply behavior. Usually we estimate the supply schedule using market wages, hours of work, household characteristics, and socio-economic variables. The market wage and hours of work of a household member are obtained from survey data. When we look at such data, we notice that there are many households that report zero hours worked and no wage rates. For households reporting zero hours worked, we consider two explanations: one is that a household member does not want to work at any market wage rate; the other is that he or she does not work because the reservation wage (y* in equation (11)) of the household member exceeds the market wage. In the latter case, when the market wage increases, the household member will probably work in the labor market. If we estimate the labor supply function using the data on market wage or hours worked, however, this does not yield a realistic supply schedule. This is because we would only be estimating the labor supply behavior of households whose reservation wage is less than the market wage and would ignore information on the household distribution of the reservation wage. Let us assume that the reservation wage is a function of the number of children under the age of six in the nuclear family, the family's income, and the number of elderly people that the household has to take care of. The reservation wage of the spouse is high if he or she is engaged in significant child or elderly care or if the breadwinner earns a high income. But when the breadwinner becomes unemployed, or household care responsibilities decline, the reservation wage of the spouse decreases and the spouse is more likely to work. If we use only the market wage and ignore the relationship between the market wage and the reservation wage, we a priori exclude the sub-population whose reservation wage is more than the market wage. This is a source of sample selection bias in the model.
For this reason, we cannot get consistent estimates from the model. To address such selection bias, Heckman (1979) suggests the following model:

y1i = x1iβ1 + u1i
y2i = x2iβ2 + u2i  (19)
where E(uji) = 0 and E(uji umi) = σjm, with the disturbances uncorrelated across observations. Then:

E(y1i | x1i, y2i ≥ 0) = x1iβ1 + E(u1i | u2i ≥ -x2iβ2) = x1iβ1 + (σ12/σ22^(1/2))λi  (20)
where λi = φ(x2iβ2/σ22^(1/2))/Φ(x2iβ2/σ22^(1/2)) is the inverse Mills ratio. As the population regression for the first equation of (19) is:

E(y1i | x1i) = x1iβ1,  (21)
and the regression equation for the sub-sample of available data is:

E(y1i | x1i, y2i ≥ 0) = x1iβ1 + (σ12/σ22^(1/2))λi  (22)
we can determine the gap between these two regressions. To get a consistent estimator of β1, we have to specify the term (σ12/σ22^(1/2))λi explicitly. Heckman proposed the two-step method in order to estimate β1 consistently. Now we explain the double-hurdle model, which includes two latent variables. As proposed by Deaton and Irish (1984), a model of this type can be used to deal with misreporting. There are many zero-expenditure households in the micro-data sets of the Family Expenditure Survey in the United Kingdom, particularly households that report the purchase of no consumer durables. Deaton and Irish argued that there are two types of households reporting zero expenditure on consumer durables. One purchased no consumer durables in the sample survey period and correctly reported zero expenditure in the survey. The other purchased consumer durables, but reported zero expenditure in the survey for some unspecified reason. As a result, Deaton and Irish argued, the survey underreports the aggregate amount of consumer durables purchased. When developing a model to deal with this problem in mathematical form, we have to specify both the possibility of misreporting in the survey and the treatment of zero-expenditure households. Deaton and Irish proposed the p-tobit econometric model. Cragg (1971) had earlier developed a related specification, the double-hurdle model:

yi* = α0 + α1xi + ui,  ui ~ N(0, σ2)
zi* = β0 + β1wi + vi,  vi ~ N(0, 1)
zi = 1 if zi* > 0; zi = 0 if zi* ≤ 0
yi = yi* if yi* > 0 and zi = 1; yi = 0 otherwise  (23)
The first equation in the model is a tobit-type demand function for consumer durables. The latent variable yi* is a function of socio-economic factors. The second equation reflects the consumer's decision whether or not to report the expenditure in the survey questionnaire; it is basically a probit function that represents the probability that a household reported or misreported an entry of expenditure for a consumer durable. The latent variable zi* is a function of household characteristics such as the type of household, the ages of its members, and their educational levels. The variable zi takes the value 1 with probability pi and 0 with probability (1 - pi), where pi and (1 - pi) are given by the distribution function as Φ(β0 + β1wi) and Φ(-(β0 + β1wi)), respectively. The case where zi = 1 and yi* > 0 indicates that a positive expenditure was reported. The case where zi = 0 and yi* > 0 indicates that a household purchased some items but reported zero expenditure; this is misreporting. When yi* ≤ 0, zero expenditure is reported.
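The observation rule of model (23) can be sketched as a small simulation. The parameter values follow case (a) of Table 6.5 below; the seed and sample size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Table 6.5, case (a): α0 = 5, α1 = 0.5, x ~ U[10, 100], u ~ N(0, 30²);
# β0 = 0, β1 = 0.025, w ~ U[10, 50], v ~ N(0, 1).
a0, a1, b0, b1 = 5.0, 0.5, 0.0, 0.025
x = rng.uniform(10, 100, n)
w = rng.uniform(10, 50, n)
y_star = a0 + a1 * x + rng.normal(0, 30, n)   # tobit-type demand equation
z_star = b0 + b1 * w + rng.normal(0, 1, n)    # reporting equation
z = (z_star > 0).astype(int)

# A positive amount is observed only when BOTH hurdles are passed.
y = np.where((y_star > 0) & (z == 1), y_star, 0.0)

# Misreporting: purchased (y* > 0) but recorded as zero because z = 0.
misreported = int(np.sum((y_star > 0) & (z == 0)))
```

In any sizable sample, some households with positive latent demand fail the reporting hurdle, which is exactly the underreporting Deaton and Irish described.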
[Figure 6.3 Graphical illustration of the double-hurdle model. The axes are the latent variables y* (vertical) and z* (horizontal). Quadrant I (y* > 0, z* > 0): yi > 0. Quadrant II (y* > 0, z* ≤ 0): misreporting, yi = 0. Quadrants III and IV (y* ≤ 0): yi = 0.]
The probability of reporting zero expenditure is:

Φ(-(α0 + α1xi)/σ) + Φ(-(β0 + β1wi))Φ((α0 + α1xi)/σ)  (24)
The first term of equation (24) corresponds to the probability that a household did not purchase consumer durables, and the second term corresponds to the probability that the household purchased consumer durables but did not report the purchase in the survey. This second term represents the source of misreporting. The probability density of reporting positive expenditure is:

Φ(β0 + β1wi) σ^(-1) φ((yi - (α0 + α1xi))/σ)  (25)
In Figure 6.3, quadrant I is the area for households that purchased consumer durables and reported the amount correctly. Quadrant II is the area for misreporting. Quadrants III and IV are the areas for zero expenditure on consumer durables.
6.2 How to generate a data set by the Monte Carlo method

We now consider a numerical example of the parameters of the probit and logit models and specify the data-generating mechanism for the independent variables xi and wi and the disturbance term in Table 6.1. To construct a virtual data set, we
Table 6.1 Probit and logit models: virtual data

Model I: yi* = β0 + β1xi + εi
  (a) β0 = 100, β1 = -4, x ~ U[1, 40], ε ~ N(0, 30²), n = 500
  (b) β0 = 100, β1 = -4, x ~ U[1, 40], ε ~ N(0, 10²), n = 500

Model II: yi* = β0 + β1xi + β2wi + εi
  (c) β0 = 100, β1 = -4, β2 = 1, x ~ U[1, 40], w ~ U[-10, 10], ε ~ N(0, 30²), n = 500
  (d) β0 = 100, β1 = -4, β2 = 1, x ~ U[1, 40], w ~ U[-10, 10], ε ~ N(0, 10²), n = 500
determine the values of the parameters β0, β1, and β2, then generate random numbers in order to determine the values of the independent variables xi and wi and the realized value of εi. The independent variables xi and wi are obtained from uniform random numbers, while ei, the realization of εi, is derived from normal random numbers. The latent variable is determined by the equation yi* = β0 + β1xi + ei in Model I of Table 6.1, or yi* = β0 + β1xi + β2wi + ei in Model II of the same table. We specify the value of yi in the following manner: if yi* > 0, then yi = 1, and if yi* ≤ 0, then yi = 0. Micro-data sets have a large number of observations. To analyze qualitative-response models, it is necessary to include a large number of observations in order to get stable parameter estimates. In this experiment, we decide a priori that the sample size is 500. Table 6.1 shows the data sets for the four cases (a), (b), (c), and (d). Cases (a) and (b) correspond to Model I, and cases (c) and (d) correspond to Model II. The values of the structural parameters are the same for all four cases. For the disturbance terms, there are two variances, namely 100 (= 10²) and 900 (= 30²). Figures 6.4 and 6.5 depict the relationship between the latent variables and observations when the standard error is 10. When the value of the latent variable is positive, the observation is yi = 1, and when the value of the latent variable is non-positive, the observation is yi = 0. Looking at the scatter of the latent variable, we see that it is positive when the independent variable xi is between 0 and 20, mixed when xi is between 20 and 30, and negative when xi is over 30.
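The steps above can be sketched in a few lines. This uses case (a) of Table 6.1 (β0 = 100, β1 = -4, x from U[1, 40], ε from N(0, 30²), n = 500); the seed is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(123)
n = 500
beta0, beta1 = 100.0, -4.0

x = rng.uniform(1, 40, n)        # independent variable from uniform random numbers
e = rng.normal(0.0, 30.0, n)     # realization of εi from normal random numbers
y_star = beta0 + beta1 * x + e   # latent variable (Model I)
y = (y_star > 0).astype(int)     # observation rule: yi = 1 iff yi* > 0

share_ones = y.mean()            # roughly comparable to n+/n = 321/500 in Table 6.6
```

The latent index crosses zero near x = 25, so a bit over half of the draws end up with yi = 1, in line with the n+ counts reported later.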
As for the observations, when the independent variable is between 0 and 20, yi = 1; when it is between 20 and 30, observations of yi = 1 and yi = 0 are mixed; and when it exceeds 30, yi = 0. Figures 6.6 and 6.7 depict the scatter when the standard error is 30. Comparing Figures 6.4 and 6.6, we see that the dispersion is larger in Figure 6.6. In Figure 6.7, the observations yi = 1 and yi = 0 overlap over the wider 10-40 range of the independent variable, rather than the 20-30 range seen in Figure 6.5. Now we will explain how to construct virtual data for the tobit model. There are two fundamental data sets, as shown in Table 6.2. After determining the
[Figure 6.4 Probit (latent variable), standard error = 10: scatter of the latent variable against income.]
[Figure 6.5 Probit (observation), standard error = 10: scatter of the observed value (0 or 1) against income.]
structural parameters β0 and β1, generating the independent variable xi from a uniform distribution, and generating the realized value of the random disturbance εi from normal random numbers, the series of latent variables is derived from the following equation:

yi* = β0 + β1xi + ei  (26)
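A sketch of the same construction for the censored tobit case, using the values of case (b) of Table 6.2 (β0 = -100, β1 = 5, σ = 10, n = 500); the seed is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
beta0, beta1 = -100.0, 5.0

x = rng.uniform(1, 40, n)
e = rng.normal(0.0, 10.0, n)
y_star = beta0 + beta1 * x + e      # equation (26)

# Censoring rule: yi = yi* when yi* > 0, and yi = 0 otherwise.
y = np.where(y_star > 0, y_star, 0.0)

n_positive = int((y > 0).sum())     # comparable to n+ in Table 6.7 (254 and 260)
```

Unlike the probit case, the positive observations retain the full magnitude of the latent variable; only the non-positive draws are collapsed to zero.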
[Figure 6.6 Probit (latent variable), standard error = 30: scatter of the latent variable against income.]
[Figure 6.7 Probit (observation), standard error = 30: scatter of the observed value (0 or 1) against income.]
When the latent variable is yi* ≤ 0, the value of yi is specified as yi = 0. When the latent variable is yi* > 0, we specify the value of yi as yi = yi*. In case (a) the standard error of the random disturbance is 30, while in case (b) it is 10. The sample size is 500. Figures 6.8 and 6.9 indicate the scatter in the case of a standard error of 10, and Figures 6.10 and 6.11 indicate the scatter when the standard error is 30. From Figures 6.9 and 6.11, we can identify the difference between the
Table 6.2 Censored tobit model: virtual data

  (a) β0 = -100, β1 = 5, x ~ U[1, 40], ε ~ N(0, 30²), n = 500
  (b) β0 = -100, β1 = 5, x ~ U[1, 40], ε ~ N(0, 10²), n = 500
[Figure 6.8 Tobit (latent variable), standard error = 10: scatter of the latent variable against income.]
[Figure 6.9 Tobit (observation), standard error = 10: scatter of the observed value against income.]
probit and tobit models. When the latent variable is positive, the observation is unity in the probit model, while the observation equals the latent variable in the tobit model. In the tobit model, the value of the standard error is estimated independently, while in the probit model only the ratios β0/σ and β1/σ are estimated.
[Figure 6.10 Tobit (latent variable), standard error = 30: scatter of the latent variable against income.]
[Figure 6.11 Tobit (observation), standard error = 30: scatter of the observed value against income.]
This is because yi* in the probit model affects the observation only through its sign, while in the tobit model a positive latent value corresponds exactly to the observation. The magnitude of the variance is important for the scatter of the observations. Table 6.3 shows the virtual data set for the truncated tobit model. We constructed the data set the same way as for the tobit model above, but the link between the latent variable and the observation is different. In the present model, the lower limit is assumed to be 20; that is, L = 20. Therefore, the latent variable is observed as yi = yi* when the price of an automobile is greater than or equal to 20. When yi* is less than 20, the observation is dropped. Under these conditions, out of 1,200 draws, the sample size is 517 in case (a), 508 in case (b), and 514 in case (c).
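The truncation step can be sketched as follows, using case (b) of Table 6.3 (σ = 10, 1,200 draws, lower limit L = 20); pairs with yi* below L are dropped entirely, so the retained sample size is random:

```python
import numpy as np

rng = np.random.default_rng(42)
N, L = 1200, 20.0
beta0, beta1 = -100.0, 5.0

x = rng.uniform(1, 40, N)
y_star = beta0 + beta1 * x + rng.normal(0.0, 10.0, N)

# Truncated sampling: the pair (xi, yi) is recorded only when yi* >= L.
keep = y_star >= L
x_obs, y_obs = x[keep], y_star[keep]

n_plus = int(keep.sum())   # Table 6.3 reports about 508 retained out of 1,200
```

In contrast to censoring, truncation removes both the dependent variable and the regressor for the dropped draws, so the researcher never sees them at all.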
Table 6.3 Truncated tobit model: virtual data

  (a) β0 = -100, β1 = 5, x ~ U[1, 40], ε ~ N(0, 30²), n = 1200, n+ = 517
  (b) β0 = -100, β1 = 5, x ~ U[1, 40], ε ~ N(0, 10²), n = 1200, n+ = 508
  (c) β0 = -100, β1 = 5, x ~ U[1, 40], ε ~ N(0, 5²), n = 1200, n+ = 514
Table 6.4 Heckman's two-step method: virtual data

  (a) α0 = -100, α1 = 5, x ~ U[1, 40], ε1 ~ N(0, 30²); β0 = 20, β1 = 0.5, z ~ U[1, 10], ε2 ~ N(0, 5²); ρ = 0
  (b) α0 = -100, α1 = 5, x ~ U[1, 40], ε1 ~ N(0, 30²); β0 = 20, β1 = 0.5, z ~ U[1, 10], ε2 ~ N(0, 5²); ρ = 0.7
Table 6.5 Double-hurdle model: virtual data

  (a) α0 = 5, α1 = 0.5, x ~ U[10, 100], ε1 ~ N(0, 30²); β0 = 0, β1 = 0.025, w ~ U[10, 50]
  (b) α0 = 5, α1 = 0.5, x ~ U[10, 100], ε1 ~ N(0, 50²); β0 = 0, β1 = 0.025, w ~ U[10, 50]
The virtual data set used for Heckman's two-step method is presented in Table 6.4. The difference between cases (a) and (b) is the correlation between ε1 and ε2: in case (a) it is 0, and in case (b) it is 0.7. The sample size is 500 in each case. The virtual data set used for the double-hurdle model is shown in Table 6.5. The parameters and the ranges of the independent variables are the same in the two cases, but the standard error of the tobit-type demand function differs: it is 30 in one case and 50 in the other.
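Drawing the disturbance pair for case (b) of Table 6.4 – standard errors 30 and 5 with correlation ρ = 0.7 – can be sketched with a bivariate normal draw:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
s1, s2, rho = 30.0, 5.0, 0.7

# Covariance matrix implied by the standard errors and the correlation ρ.
cov = np.array([[s1 * s1, rho * s1 * s2],
                [rho * s1 * s2, s2 * s2]])
eps = rng.multivariate_normal([0.0, 0.0], cov, size=n)
e1, e2 = eps[:, 0], eps[:, 1]

# The sample correlation should sit near the target ρ = 0.7.
r_hat = float(np.corrcoef(e1, e2)[0, 1])
```

It is this nonzero correlation that makes OLS on the selected sub-sample inconsistent and motivates the correction term λi.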
6.3 Examples

6.3.1 The probit and logit models

The probit and logit models describe a discrete choice. In both models, we introduce the concept of a latent variable. We assume that the latent variable, yi*, has the following relationship to the independent variable, xi:

yi* = β0 + β1xi + εi  (i = 1, 2, …, n)  (27)
where εi is a stochastic variable with an i.i.d. (independently and identically distributed) normal distribution with mean zero and variance σ2. The observed variable, yi, takes the value of either 0 or 1, as follows:

yi = 1 if yi* > 0
yi = 0 if yi* ≤ 0  (28)
In the equation yi* = β0 + β1xi + εi, the range of yi* is between minus infinity and plus infinity. In order to link the latent variable to the discrete choice, two choices – work or non-work – are possible, as indicated in equation (28). When yi = 1, we can derive the following relationship:

yi = 1 ⇒ yi* > 0 ⇒ β0 + β1xi + εi > 0 ⇒ εi > -(β0 + β1xi)  (29)
Therefore, the probability that yi = 1 is derived as:

P(yi = 1) = P(εi > -(β0 + β1xi))  (30)
That is, the probability that yi = 1 is equal to the probability that εi is greater than -(β0 + β1xi). Moreover:

P(εi > -(β0 + β1xi)) = 1 - F(-(β0 + β1xi)) = 1 - Ф(-((β0/σ) + (β1/σ)xi)) = Ф((β0/σ) + (β1/σ)xi)  (31)
where Ф is the distribution function of the standard normal distribution. On the other hand, when yi = 0, we can derive the following relationship:

yi = 0 ⇒ yi* ≤ 0 ⇒ β0 + β1xi + εi ≤ 0 ⇒ εi ≤ -(β0 + β1xi)  (32)
Therefore, the probability that yi = 0 is derived as:

P(yi = 0) = P(εi ≤ -(β0 + β1xi))  (33)
That is, the probability that yi = 0 is equal to the probability that εi is less than or equal to -(β0 + β1xi). Further:

P(εi ≤ -(β0 + β1xi)) = F(-(β0 + β1xi)) = Ф(-((β0/σ) + (β1/σ)xi)) = 1 - Ф((β0/σ) + (β1/σ)xi)  (34)
To estimate β0/σ and β1/σ, we specify the likelihood function as:

L = ∏_{yi=0} (1 - Ф((β0/σ) + (β1/σ)xi)) ∏_{yi=1} Ф((β0/σ) + (β1/σ)xi)  (35)
Then, for the probit model, the likelihood function is:

L = ∏ (1 - Ф((β0/σ) + (β1/σ)xi))^(1-yi) (Ф((β0/σ) + (β1/σ)xi))^yi  (36)
and the log-likelihood is:

log L = ∑ (1 - yi) log(1 - Ф((β0/σ) + (β1/σ)xi)) + yi log Ф((β0/σ) + (β1/σ)xi)  (37)

To maximize log L, we differentiate log L with respect to β0/σ and β1/σ.
The maximum likelihood (ML) normal equations are:

∂log L/∂(β0/σ) = ∑ [(yi - Ф((β0/σ) + (β1/σ)xi)) / (Ф((β0/σ) + (β1/σ)xi)(1 - Ф((β0/σ) + (β1/σ)xi)))] φ((β0/σ) + (β1/σ)xi)
∂log L/∂(β1/σ) = ∑ [(yi - Ф((β0/σ) + (β1/σ)xi)) / (Ф((β0/σ) + (β1/σ)xi)(1 - Ф((β0/σ) + (β1/σ)xi)))] φ((β0/σ) + (β1/σ)xi) xi  (38)
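Rather than solving the normal equations analytically, the log-likelihood (37) can be maximized with a generic numerical optimizer (here SciPy's, an assumption about the available tooling). The data are generated as in case (a) of Table 6.1, so the recovered coefficients should approximate β0/σ = 3.33 and β1/σ = -0.133:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2024)
n, beta0, beta1, sigma = 500, 100.0, -4.0, 30.0
x = rng.uniform(1, 40, n)
y = (beta0 + beta1 * x + rng.normal(0, sigma, n) > 0).astype(float)

def neg_log_like(theta):
    # Negative of equation (37); theta = (β0/σ, β1/σ).
    c0, c1 = theta
    p = norm.cdf(c0 + c1 * x)
    p = np.clip(p, 1e-10, 1 - 1e-10)  # guard against log(0)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

res = minimize(neg_log_like, x0=np.array([0.0, 0.0]), method="BFGS")
c0_hat, c1_hat = res.x                # estimates of β0/σ and β1/σ
```

Only the ratios β0/σ and β1/σ are identified, which is why the estimates are compared with 100/30 and -4/30 rather than with β0 and β1 themselves.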
The ML estimators of (β0/σ) and (β1/σ) are obtained by solving ∂log L/∂(β0/σ) = 0 and ∂log L/∂(β1/σ) = 0. The difference between the probit and logit models lies in the specification of the distribution function. The distribution function of the probit model is derived from the normal distribution, while that of the logit model is specified as:

F(zi/σ) = exp(zi/σ)/(1 + exp(zi/σ))  (39)
where zi = β0 + β1xi. When equation (39) is solved for zi/σ, we obtain:

zi/σ = log(F(zi/σ)/(1 - F(zi/σ)))  (40)
We next write Pi = F(zi/σ). Then equation (40) becomes:

log(Pi/(1 - Pi)) = (β0/σ) + (β1/σ)xi + εi  (41)
From equation (41), the estimates of (β0/σ) and (β1/σ) are obtained by using the data set of Pi and xi. However, we cannot recover the structural parameters β0, β1, and σ separately from the estimates of (β0/σ) and (β1/σ): the number of estimates is two, while the number of structural parameters is three. Thus, we usually set σ to unity; the disturbance term εi is assumed to be independent normal with a mean of 0 and a variance of 1, such that εi ~ IN(0, 1), not εi ~ IN(0, σ2). Under the normalization σ = 1, the coefficients β0 and β1 are obtained from the two estimates. We next consider two models: one with only one independent variable and the other with two independent variables. The model with one independent variable was already introduced in equation (27). The second is shown below:

yi* = β0 + β1xi + β2wi + εi  (42)
The link between latent variables and observed variables is the same as indicated in equation (28). We call the model of equation (27) Model I, and the model of equation (42) Model II. Table 6.6 presents estimates by the probit, logit, and OLS methods. In case (a), 321 out of 500 observations have yi* > 0. By coincidence, in case (b), again 321 out of 500 observations have yi* > 0. In case (a), the estimate of the intercept for the probit model is 3.266 and the slope coefficient b1 is -0.126.
Table 6.6 Probit and logit models: estimation results

                 (a)            (b)             (c)            (d)
(1) Probit model
n+               321            321             304            286
b0               3.266 (12.5)   8.666 (8.7)     3.092 (12.6)   10.33 (8.0)
b1               -0.126 (12.8)  -0.344 (8.7)    -0.125 (12.9)  -0.429 (8.0)
b2               –              –               0.0315 (2.4)   0.0859 (3.4)
R2               0.528          0.823           0.506          0.849

(2) Logit model
b0               5.761 (11.2)   15.439 (8.1)    5.415 (11.3)   18.382 (7.4)
b1               -0.222 (11.4)  -0.613 (8.1)    -0.218 (11.5)  -0.767 (7.4)
b2               –              –               0.056 (2.5)    0.157 (3.4)
R2               0.528          0.822           0.506          0.849

(3) OLS (dependent variable: y)
b0               1.229 (40.0)   1.332 (54.5)    1.220 (37.4)   1.311 (51.4)
b1               -0.029 (22.0)  -0.035 (32.5)   -0.030 (21.4)  -0.035 (33.0)
b2               –              –               0.0066 (2.4)   0.0055 (2.5)
R2               0.493          0.679           0.480          0.690

(4) OLS (dependent variable: y*)
b0               100.95 (36.1)  98.90 (105.7)   98.64 (35.5)   101.93 (109.4)
b1               -4.013 (33.1)  -3.947 (96.1)   -3.918 (32.9)  -4.125 (105.6)
b2               –              –               1.141 (4.9)    1.0287 (13.0)
R2               0.687          0.948           0.686          0.958
To compare these estimates with the true values: β0 = 100 and the standard error is 30, so β0/σ = 100/30 = 3.33; β1 = -4, so β1/σ = -0.133. Comparing 3.33 with 3.27 and -0.133 with -0.126, it is clear that the estimates are reasonable. In case (b), the intercept b0 is 8.666 and the slope coefficient b1 is -0.344. With a standard error of 10, the true values of β0/σ and β1/σ are 10 and -0.4, respectively. In the probit model, β0, β1, and σ are not estimated separately; only the ratios β0/σ and β1/σ are. Now consider the results of the logit model. In case (a), the estimated intercept b0 is 5.761 and the slope b1 is -0.222. In case (b), the estimated intercept b0 is 15.439 and the slope b1 is -0.613.
According to Amemiya (1985), the coefficients in the probit and logit models have the following approximate relationships:

bL0 = 1.6 bP0
bL1 = 1.6 bP1

This indicates that the parameters of the logit model are roughly 1.6 times larger than those of the probit model. Let us confirm these relationships. In case (a), the ratio of the intercepts is 5.761/3.266 = 1.76, and the ratio of the slopes is 0.222/0.126 = 1.76. In case (b), the ratio of the intercepts is 15.439/8.666 = 1.78, and that of the slopes is 0.613/0.344 = 1.78. These ratios indicate that the transformation formula is reasonable.
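The arithmetic behind these ratios for case (a) is trivial to check:

```python
# Case (a) estimates from Table 6.6: probit vs. logit coefficients.
probit_b0, probit_b1 = 3.266, -0.126
logit_b0, logit_b1 = 5.761, -0.222

ratio_intercept = logit_b0 / probit_b0  # close to 1.76
ratio_slope = logit_b1 / probit_b1      # close to 1.76

# Both ratios are near Amemiya's approximate conversion factor of 1.6.
```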
6.3.2 Censored tobit model

In the tobit model, the latent variable, yi*, is included:

yi* = β0 + β1xi + εi  (i = 1, 2, …, n)  (43)
where εi ~ IN(0, σ2). The link between the latent variable and the observed variable is as follows:

yi = 0 if yi* ≤ 0
yi = yi* if yi* > 0  (44)
The relationship indicated in (44) is then transformed into a relationship for εi as:

yi = 0 ⇒ yi* ≤ 0 ⇒ β0 + β1xi + εi ≤ 0 ⇒ εi ≤ -(β0 + β1xi)
yi = yi* ⇒ yi* = yi ⇒ β0 + β1xi + εi = yi ⇒ εi = yi - (β0 + β1xi)  (45)
Next we consider the probability that yi = 0 and the probability density at yi = yi*. In the case of yi = 0, the probability is specified as:

P(yi = 0) = P(εi ≤ -(β0 + β1xi))  (46)
and in the case of yi = yi*, the probability density is specified as:

P(yi = yi*) = P(εi = yi - (β0 + β1xi))  (47)
The probability density function of εi is written in the form of the standard normal density as:

f((yi - (β0 + β1xi))/σ)(1/σ)  (48)
Therefore, the likelihood function of the tobit model is:

L = ∏_{yi=0} Φ(-zi/σ) ∏_{yi>0} f((yi - zi)/σ)(1/σ)
  = ∏_{yi=0} (1 - Φ(zi/σ)) ∏_{yi>0} (1/(σ√(2π))) exp(-(yi - (β0 + β1xi))²/(2σ²))  (49)
where zi = β0 + β1xi and Φ is the distribution function of the standard normal distribution. The log-likelihood function is:

log L = ∑_{yi=0} log(1 - Φ(zi/σ)) + ∑_{yi>0} log(1/(σ√(2π))) - ∑_{yi>0} (yi - (β0 + β1xi))²/(2σ²)  (50)
The ML estimator of the model is obtained by differentiating the log-likelihood function with respect to the parameters β0 and β1 and the variance σ²:

∂log L/∂β0 = -(1/σ) ∑_{yi=0} φi/(1 - Φi) + (1/σ²) ∑_{yi>0} (yi - (β0 + β1xi)) = 0
∂log L/∂β1 = -(1/σ) ∑_{yi=0} φi xi/(1 - Φi) + (1/σ²) ∑_{yi>0} (yi - (β0 + β1xi)) xi = 0
∂log L/∂σ² = (1/(2σ³)) ∑_{yi=0} (β0 + β1xi) φi/(1 - Φi) - N1/(2σ²) + (1/(2σ⁴)) ∑_{yi>0} (yi - (β0 + β1xi))² = 0  (51)

where φi = φ((β0 + β1xi)/σ), Φi = Ф((β0 + β1xi)/σ), and N1 is the number of observations with yi > 0.

Table 6.7 shows the estimates. In cases (a) and (b), the structural parameters are the same except for the variance. The estimate of the standard error is close to the value used for the virtual data set.

6.3.3 Truncated tobit model

In the previous section on the censored tobit model, when the latent variable, yi*, is negative, an observation of zero exists. In the censored tobit model, we can thus get observations for both zero and positive expenditures. However, if yi* is less than some amount L, there is no observation at all. Let's consider automobile sales by dealers as an example. Dealers have sales information on their customers regarding the type of car purchased, the price, household characteristics, and other issues. But dealers have no information on households that did not purchase automobiles from them. There are thus positive latent variables for households that did purchase an automobile from dealers, but no information on households that did not. In cases where dealers and researchers have no information on non-purchase households, we employ a truncated tobit model. In the truncated model we assume that the lowest price for an automobile is L. Then:

yi* = β0 + β1xi + εi,  εi ~ IN(0, σ2)  (52)
Table 6.7 Censored tobit model: estimation results

                              (a)              (b)
(1) Tobit model
n+                            254              260
b0                            -97.7 (16.0)     -103.8 (36.5)
b1                            4.846 (23.1)     5.121 (55.2)
s                             30.44 (22.2)     9.3536 (23.0)
(2) OLS (dependent variable: y)
b0                            -22.05 (10.5)    -27.66 (19.4)
b1                            2.45 (27.7)      2.65 (44.7)
R2                            0.606            0.800
(3) OLS (dependent variable: y*)
b0                            -101.4 (37.6)    -100.58 (113.9)
b1                            4.95 (43.4)      5.019 (136.5)
R2                            0.790            0.973
and a sample exists only for yi* ≥ L:

yi = yi*        if yi* ≥ L
no sample       if yi* < L
(53)
In this model, the density of an observation is written as g(yi), which is:

g(yi) = (f((yi - zi)/σ)(1/σ))/(1 - F((L - zi)/σ))  if yi ≥ L

Otherwise: g(yi) = 0
(54)
where zi = β0 + β1xi. In the truncated tobit model, if the probability that yi* ≥ L is 0.4, the density of each observed yi is weighted by 2.5 (= 1/0.4) so that the total probability becomes 1. Therefore, the likelihood function is:

L = ∏ (f((yi - zi)/σ)(1/σ))/(1 - F((L - zi)/σ)) = ∏ (f((yi - zi)/σ)(1/σ))/(1 - Φ((L - zi)/σ))
(55)
The log-likelihood function for the model is:
Table 6.8 Truncated tobit model: estimation results

        (a)              (b)               (c)
n+      517              508               514
b0      -47.7 (8.5)      -102.28 (34.7)    -116.3 (78.7)
b1      3.007 (16.9)     4.476 (49.3)      4.904 (106.4)
s       30.44 (32.1)     9.928 (31.8)      4.805 (32.0)
log L = -N log(σ√(2π)) - ½ ∑(yi - (β0 + β1xi))²/σ² - ∑ log(1 - Φ((L - zi)/σ))
(56)
Then, maximizing the likelihood of the observations, the parameters are estimated. The first-order conditions are:

∂log L/∂β0 = (1/σ²) ∑(yi - (β0 + β1xi)) - ∑ φ(di)/(σ(1 - Φ(di))) = 0
∂log L/∂β1 = (1/σ²) ∑(yi - (β0 + β1xi))xi - ∑ xiφ(di)/(σ(1 - Φ(di))) = 0
∂log L/∂σ² = -N/(2σ²) + (1/(2σ⁴)) ∑(yi - (β0 + β1xi))² - (1/(2σ²)) ∑ diφ(di)/(1 - Φ(di)) = 0
(57)
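As with the censored model, the truncated log-likelihood (56) can be maximized numerically instead of solving (57) directly. A sketch under the same kind of assumptions (simulated data, illustrative parameter values, truncation point L = 0):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

# Simulate, then keep only observations with y* >= L: the researcher
# never sees the truncated part of the sample
n, b0, b1, sigma, L = 5000, -100.0, 5.0, 10.0, 0.0
x_all = rng.uniform(10, 60, n)
y_all = b0 + b1 * x_all + rng.normal(0, sigma, n)
keep = y_all >= L
x, y = x_all[keep], y_all[keep]

def neg_loglik(theta):
    """Negative truncated-tobit log-likelihood, equation (56)."""
    beta0, beta1, log_sigma = theta
    s = np.exp(log_sigma)
    z = beta0 + beta1 * x
    ll = norm.logpdf(y, z, s).sum()
    ll -= norm.logsf((L - z) / s).sum()    # log(1 - Phi((L - z)/sigma))
    return -ll

res = minimize(neg_loglik, x0=[0.0, 1.0, np.log(y.std())], method="BFGS")
b0_hat, b1_hat, s_hat = res.x[0], res.x[1], np.exp(res.x[2])
print(b0_hat, b1_hat, s_hat)
```

The `norm.logsf` call evaluates the log survival function, which is a numerically stable way to compute the log of the normalizing probability in (56).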
where di = (L - zi)/σ. The estimation results are shown in Table 6.8. The difference between the three cases is their standard errors: in case (a) the standard error is 30, in case (b) it is 10, and in case (c) it is 5. In cases (b) and (c), the true values are β0 = -100 and β1 = 5; the estimates b0 are -102 in (b) and -116 in (c), and b1 is 4.476 in (b) and 4.904 in (c). On the other hand, in case (a) b0 = -47 and b1 = 3.0, indicating that due to the large variance the estimates we obtained are relatively far from the true values.

6.3.4 Heckman's two-step method

With Heckman's method, we first estimate the probit function of the y2i equation in equations (19) in Section 6.2 – namely, whether a household member chooses to work or not – and obtain the inverse Mills ratio, λi. Then, in the second step, using the sub-sample of household members who choose employment, we estimate the labor supply function based on the market wage. This is a mixture of discrete and continuous choices. The model is specified as:

yi* = a0 + a1xi + ε1i
yi = 1 if yi* > 0; yi = 0 if yi* ≤ 0

For observations with yi = 1:

wi = β0 + β1zi + γλi + ε2i
(58)
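The two steps can be sketched as follows: a probit for the participation equation estimated by ML, then OLS of the wage on zi and the inverse Mills ratio over the working sub-sample only. The simulated parameter values are illustrative, and the second-step standard errors are left uncorrected for the generated regressor:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)

# Simulated selection model (all parameter values illustrative)
n, a0, a1, b0, b1, rho, s2 = 4000, 0.5, -0.8, 20.0, 0.5, 0.6, 4.0
x = rng.normal(0, 1, n)
z = rng.normal(0, 1, n)
e1 = rng.normal(0, 1, n)
e2 = rho * s2 * e1 + s2 * np.sqrt(1 - rho ** 2) * rng.normal(0, 1, n)
d = (a0 + a1 * x + e1 > 0).astype(float)   # 1 = works, 0 = does not
w = b0 + b1 * z + e2                       # wage, observed only when d == 1

# Step 1: probit ML for the participation equation
def probit_nll(t):
    idx = t[0] + t[1] * x
    return -(d * norm.logcdf(idx) + (1 - d) * norm.logcdf(-idx)).sum()

a_hat = minimize(probit_nll, x0=[0.0, 0.0], method="BFGS").x
idx = a_hat[0] + a_hat[1] * x
lam = norm.pdf(idx) / norm.cdf(idx)        # inverse Mills ratio

# Step 2: OLS of w on z and lambda over the selected sub-sample
sel = d == 1
X = np.column_stack([np.ones(sel.sum()), z[sel], lam[sel]])
coef, *_ = np.linalg.lstsq(X, w[sel], rcond=None)
print(coef)      # [b0, b1, gamma]; gamma estimates rho * sigma2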
164╇╇ Microeconomic analysis using micro-data Table 6.9╇ Heckman’s two-step method: estimation results (a)
(b)
288
330
(1) Two-step estimation n+ a0
3.545 (12.6)
3.864 (12.5)
a1
-0.1477 (13.2)
-0.1482 (12.6)
R2
0.613 19.78 (27.0)
b0 b1 @mills 2
0.585 19.95 (31.9)
0.5670 (5.0)
0.5181 (5.3)
-0.5653 (0.8)
4.076 (6.0)
0.0785
0.151
a0
3.544 (14.3)
3.870 (13.8)
a1
-0.1476 (14.8)
-0.1490 (13.2)
b0
19.74 (26.7)
19.98 (33.1)
R
(2) Sample selection
b1
0.5673 (5.2)
0.5329 (23.6)
s
4.94 (24.1)
4.96 (23.6)
ρ
-0.0944 (0.7)
0.744 (9.9)
where λi is the inverse of Mill’s ratio. The estimated results are shown in Table 6.9. There is no difference between the true values and the estimates. 6.3.5 Double-hurdle model The double-hurdle model is one of the variants of the tobit model. The doublehurdle model includes binary choice and continuous choice simultaneously. There is a well-known tobit model that is a special case of the double-hurdle model. This double-hurdle model is specified as: y*i = a0 + a1xi + ui z*i = b0 + b1wi + vi zi = 1 if z*i > 0, 0 if z*i ≤ 0 yi = y*i if y*i >0 and zi = 1 0 otherwise
ui ~ N(0,σ2) vi ~ N(0,1)
(59)
The estimation results are shown in Table 6.10. From Table 6.10 we can see that the estimates of σ are reasonable. When the true value σ is 30, the estimated value is 29.8, and when the true value is 50, the estimated value is 51.3. On the other hand, the estimates of the tobit-type
Microeconomic analysis using micro-data╇╇ 165 Table 6.10╇ Estimates and hypothesis testing for the double-hurdle model (a) a0
1.872 (0.3)
(b) 10.09 (1.0)
a1
0.591 (8.1)
s
29.80 (16.5)
51.30 (12.0)
b0
-0.198 (0.9)
-0.00961 (0.0)
b1
0.0308(4.0)
0.3778 (3.1)
0.0294 (2.1)
Hypothesis testing: H0: a0 = 5
[0.527]
[0.883]
H0: a1 = 0.5
[0.234]
[0.000]
H0: β0 = 0
[0.352]
[0.972]
H0: β1 = 0.025
[0.445]
[0.738]
equation are relatively unstable: though the true value is 0.5, the estimated values are 0.59 and 0.37, and the value of the intercept fluctuates from 1.8 to 10.0. Compared to the tobit function, the probit function is relatively stable. The P-value of the slope parameter for the probit function is high, indicating that the estimated value is not different from the true value of 0.025. 6.4 The merits of micro-data sets revisited Finally, let us consider the differences between observations using aggregated data and micro-data sets in consumer-demand analysis of households. Here the merits of using micro-data sets are clear. We will again focus on automobile purchases of households because it is a good example for identifying the difference between aggregate data and individual data, and it enables us to better understand the importance of qualitative-response models. As we noted earlier, a micro-data set reports whether or not a given household purchased an automobile, and if the household purchased an automobile, how much the household reported spending for the automobile in the survey. It also reports various household characteristics, including socio-economic information, for each household. In contrast, aggregate data provides aggregate household data based on microdata broken down by particular categories, such as income class or the number of household members. With aggregate data we cannot know what any individual household reported spending on an automobile, or its household characteristics, including socio-economic information. If only one household in a given income class purchased an automobile worth $10,000 and the number of households within the same income class is 1,000, the average purchase of an automobile in that income class is $10. This tells us nothing useful. Without information on
166╇╇ Microeconomic analysis using micro-data household characteristics and socio-economic factors, it is not possible to conduct sophisticated and accurate empirical analysis. Using micro-data sets and complex empirical models, we can better understand the economic behavior of households and analyze economic activity more accurately.
Bibliography Amemiya, T. (1985) Advanced Econometrics, Cambridge, MA: Harvard University Press. Crag, J. G. (1971) “Some statistical models for limited dependent variables with application to the demand for durable goods,” Econometrica, 39, 829–844. Deaton, A., and M. Irish (1984) “Statistical models for zero expenditures in household budgets,” Journal of Public Economics, 23, 59–80. Green, W. H. (2008) Econometric Analysis, Upper Saddle River, NJ: Prentice Hall. Heckman, J. J. (1974) “Shadow prices, market wages, and labor supply,” Econometrica, 42, 679–693. Heckman, J. J. (1979) “Sample selection bias as a specification error,” Econometrica, 47, 153–161. Maddala, G. S. (1983) Limited-Dependent and Qualitative Variables in Econometrics, Cambridge: Cambridge University Press. Maddala, G. S. and K. Lahiri (2009) Introduction to Econometrics, Chichester, West Sussex: John Wiley. Maki, A. and Shigeru Nishiyma (1996) “An analysis of under-reporting for micro-data sets: the misreporting or double hurdle model,” Economics Letters, 52, 211–220.
7 Microeconomic analysis using panel data
Panel data analysis has a long tradition. It was first used in the analysis of variance (ANOVA) by Ronald Fisher in the early twentieth century. Panel data analysis involves pooling cross-section and time-series data. Though panel data analysis is a recent topic in econometrics, the original idea first appeared through Fisher’s analysis of variance. In Section 7.1 we explain typical panel data in economic statistics. This section discusses the development through regression analysis of the idea of panel data. After introducing a test of the equality of two means in panel data, we introduce a test of the equality of more than two means. This is the method of Fisher’s ANOVA in statistics. Then we introduce the method of ANOVA in econometrics based on regression analysis. Here we introduce concomitant variables. Finally, we explain the fixed-effects model and the random-effects model that are familiar in econometrics literature, and the test procedure, including the Hausman specification test, which tests the viability between the fixed-effects and randomeffects models. Section 7.2 explains how to generate micro-data sets by the Monte Carlo method. Section 7.3 introduces some examples of the analysis of panel data in economic statistics.
7.1 Models Some panel data in economic statistics is shown in Table 7.1. In the following equation, yij is an observed variable, β0 is an intercept, βi• is a particular coefficient for the i-th agent, β•j is a particular coefficient for the j-th period, and εij is a random variable: yij = β0 + βi• + β•j + εij
(i = 1, 2,…, n; j = 1, 2,…, m)
(1)
The random variable, εij, has an independently, identically distributed normal distribution with a mean of zero and a variance of σ2. For the coefficients βi• and β•j, the conditions ∑i βi• = 0 and ∑j β•j =0 are a priori imposed in order to identify β0.
168╇╇ Microeconomic analysis using panel data Table 7.1╇ Pooling time-series and cross-section data Agent
1 2 . . . n Average
Time 1
2
y11 y21 . . . y.1 z.1
y12 y22 . . . yn2 z.2 ↑(between)
....
.... ....
m
Average
y1m y2m . . . ynm z.m
z1. z2. . . . zn z
← (within)
Let’s now consider an ANOVA test in statistics on the equality of two means. We arbitrarily take two rows from Table 7.1 and calculate the mean of the two rows, respectively, as: z1• = (1/m) ∑j y1j = (1/m) ∑j (β0 + βi• + β•j + ε1j) = β0 + β1• + (1/m) ∑j ε1j ~ N(β0 + β1•, σ2) z2• = (1/m) ∑j y2j= (1/m) ∑j (β0 + βi• + β•j + ε2j) = β0 + β2• + (1/m) ∑j ε2j ~ N(β0 + β2•, σ2)
(2)
As an a priori restriction: β1• + β2• = 0
(3)
The statistic z1• - z2• is normally distributed with a mean of β 1• - β2• and a variance of σ2/(2/m). Under the null hypothesis that the two means are equal, the mean will be 0, i.e., β1• - β2• = 0. Combining an a priori restriction of β1• + β2• = 0 with β1• - β2• = 0, the null hypothesis is mathematically equivalent to β1• = β2• = 0. Under the null hypothesis, the quantity: Z = (z1• - z2•)√m2/(2m)/√(∑j(y1j - z1•)2 + ∑ j(y2j - z2•)2)/(2m - 2) has a t-distribution with 2m - 2 degrees of freedom. When comparing two means as the null hypothesis in regression analysis in statistics, we use a t-distribution. Next, let’s consider a test for a whole observation derived from the regression of: yij = β0 + βi• + β•j + εij
(i = 1, 2,…, n; j = 1, 2,…, m)
(4)
Here the analysis of variance (ANOVA) method is used to test the null hypothesis. Table 7.1 above presents what we call a two-way cross-classification model. The quadratic form for εij is divided in the following manner:
Microeconomic analysis using panel data╇╇ 169 Q = ∑i∑j εij2 = ∑i∑j (yij - β0 - βi• - β•j)2 = ∑i∑j (yij - zi• - z•j + z••)2 + m∑i (zi• - βi• - z••)2 + n∑j (zj• - βj• - z••)2+ mn(z•• - β0)2 = Q1 + Q2 + Q3 + Q4
(5)
The quantities Q1/σ2, Q2/σ2, Q3/σ2, and Q4/σ2 have a chi-squared distribution with degrees of freedom of mn - n - m + 1, n - 1, m - 1, and 1, respectively. Accordingly: F1 = (Q2/(n - 1))/(Q1/(mn - n - m + 1))
(6)
is followed by an F-distribution with degrees of freedom of n - 1 and mn - n - m + 1. And: F2 = (Q3/(m - 1))/(Q1/(mn - n - m + 1))
(7)
is followed by an F-distribution with degrees of freedom of m - 1 and mn - n m + 1. Now, we consider the sum of squares for yij - z•• as: S = ∑i∑j (yij - z••)2 = ∑i∑j (yij - zi• - z•j + z••)2 + m∑i(zi• - z••)2 + n∑j(zj• - z••)2 = S1 + S2 + S3
(8)
where: S1 (= Q1) = ∑i∑j (yij - zi• - z•j + z••)2 S2 = m∑i(zi• - z••)2 S3 = n∑j(zj• - z••)2 S2 is the variation between rows, and S3 is the variation between columns. Table 7.2 shows the analysis of variance for the complete two-way crossclassification model. Utilizing equation (8) and Table 7.2, we have two types of hypothesis testing: 1
Equality of means between rows: H0: βi• = 0 (i = 1, 2,…, n) HA: βi• ≠ 0 at least two βi•’s When the null hypothesis is true: F1 = (S2/(n - 1))/(S1/(mn - n - m + 1)) is followed by an F-distribution with degrees of freedom of n - 1 and mn - n - m + 1.
170╇╇ Microeconomic analysis using panel data Table 7.2╇ Analysis of variance for complete two-way cross-classification model (model I) Source of variation
Degrees of freedom
Sum of squares
Mean sum of squares (MSS)
Expectation of MSS
Residuals Rows Columns Total
mn - n - m + 1 n-1 m-1 mn - 1
S1 S2 S3 S
S1/(mn - n - m + 1) S2/(n - 1) S3/(m - 1) S/(mn - 1)
σ2 σ2 + (m/n -1)∑i βi•2 σ2 + (n/m - 1)∑j βj•2
2
Equality of means between columns: H0: β•j = 0 (j=1,2,…, m) HA: β•j ≠ 0 at least two β•j’s When the null hypothesis is true: F2 = (S3/(m - 1))/(S1/(mn - n - m + 1)) is followed by an F-distribution with degrees of freedom of m - 1 and mn - n - m + 1.
The other type of analysis of the variance model is called the variance components model (cf. Wilks 1962). The model specification is: yij = β0 + εi• + ε•j + εij
(i = 1, 2,…, n; j = 1, 2,…, m)
(9)
where β0 is the intercept, and εi•, ε•j, and εij are random variables having means of zero, positive variances, and zero covariances in the disturbance terms. Then the variance is divided into three factors as: σ2 = σ2i• + σ2•j + σ2••
(10)
Table 7.3 shows the analysis of variance for the complete two-way crossclassification model that Wilks (1962) named Model II. The difficulty one encounters with Model II lies in estimating σ2i•, σ2•j, and σ2•• from the sample {yij}: S1/(mn - n - m + 1) = σ2 S2/(n - 1) = σ2 + mσ2i• S3/(m - 1) = σ2 + nσ2•j
(11)
Using the above three equations, we get: σ2 = S1/(mn - n - m + 1) σ2i• = (1/m)(S2/(n - 1) - S1/(mn - n - m + 1)) σ2•j = (1/n)(S3/(m - 1) - S1/(mn - n - m + 1))
(12)
Microeconomic analysis using panel data╇╇ 171 Table 7.3╇ Analysis of variance for complete two-way cross-classification model (model II) Source of variation
Degrees of freedom
Sum of squares
Mean sum of squares (MSS)
Expectation of MSS
Residuals Rows Columns Total
mn - n - m + 1 n-1 m-1 mn - 1
S1 S2 S3 S
S1/(mn - n - m+1) S2/(n - 1) S3/(m - 1) S/(mn - 1)
σ2 σ2 + mσ2i• σ2 + nσ2•j
Now let’s consider the analysis of variance model in econometrics. Here we introduce the concomitant variable into the previous specification: yij = β0 + βi• + β•j + γxij + εij
(13)
The variable xij on the right-hand side has a systematic influence on the dependent variable, yij. Introducing concomitant variables reflects the difference between the data-collecting mechanisms of the social sciences and the natural sciences. In the natural sciences, it is easy to construct an isolated environment. Imagine two examples: in the first, we are interested in the relationship between the amount of wheat production and the amount of fertilizers used in wheat production; in the second, we are interested in the relationship between demand for wheat and the price of wheat. In the natural sciences, it is easy to control the amount of fertilizers used through a researcher’s experimental design. On the other hand, in the social sciences, including economics, it is difficult to impose ceteris paribus conditions in a data-generating mechanism. We sometimes observe that the demand for wheat in a society increases even if the price of wheat increases. This is because we cannot control income levels and the price of wheat relative to the prices of other goods and services. Social scientists cannot control household behavior – they can only observe it. We have to introduce concomitant variables in a model in order to introduce systematic factors that are not controllable by researchers. After excluding the systematic part of a dependent variable a priori – thereby making the dependent variable above yij - γxij – we test for the influence of time effects and/or individual effects through regression analysis. All of this is important for understanding panel data analysis. Following Fisher, we use the terms within and between in panel data analysis in econometrics. The term within is a time-series concept, while between corresponds to cross-section data as indicated in Table 7.1. In panel data analysis, we also introduce fixed effects and random effects. The fixed-effects model with two-way error components is specified as: yit = αi + λt + βxit + uit
(14)
172╇╇ Microeconomic analysis using panel data where αi denotes the unobservable individual effect, λt the unobservable time effect, and uit the remaining stochastic disturbances. For simplicity, we drop the time effect hereafter. The fixed-effects model then becomes: yit = αi + βxit + uit
(15)
where αi’s are parameters to be estimated. On the other hand, the random-effects model is specified as: yit = αi + βxit + uit
(16)
where αi is a stochastic variable formulated by: αi ~ IID(0, σα2) uit ~ IID(0, σu2)
(17)
When specifying vit = αi + uit: cov(vit vis) = σα2 + σu2 for t = s = σα2 for t ≠ s cov(vit vjs) = 0 for all t, s and i ≠ j
(18)
The difference between the fixed-effects model and the random-effects model consists in the characteristics of αi; in one it is a parameter, in the other it is a stochastic variable. This affects the number of parameters to be estimated. In the fixed-effects model, dummy variables are utilized to identify the individual effects. This means the model includes (n - 1) dummy variables in order to identify the individual effects, in addition to the concomitant variables. The randomeffects model does not need such dummy variables. Next we explain the Hausman test to determine whether the fixed-effects model or the random-effects model is appropriate for a particular analysis. As Maddala and Lahiri (2009) and others have noted, the Hausman test is not a test for fixed versus random-effects models. The essence of the Hausman test is: H0: αi is not correlated with xit H1: αi is correlated with xit Under H0, the GLS estimator is consistent and efficient (Maddala and Lahiri 2009). On the other hand, the within-group estimator, bw, is consistent whether the null hypothesis is valid or not. Greene (2008) suggests that “the term ‘fixed’ signifies the correlation of αi and xit,” and that “the crucial distinction between fixed and random effects is whether the unobserved individual effect embodies elements that correlated with the regressors in the model” (fixed-effects model) or not (random-effects model). If the null hypothesis of the random-effects model is true, the Hausman test statistics: H = (bw - bGLS)T(Vw - VGLS)-1(bw - bGLS)
Microeconomic analysis using panel data╇╇ 173 Table 7.4╇ Virtual data used for ANOVA Case
β0 βi• (i = 1, 100) (i = 101, 200) (i = 201, 300) (i = 301, 400) (i = 401, 500) β•j (j = 1) (j = 2) (j = 3) (j = 4) (j = 5) εij
Base (1)
(2)
(3)
╇ 1.0
╇ 1.0
╇ 1.0
-2.0 -1.0 ╇ 0.0 ╇ 1.0 ╇ 2.0
╇ 0.0 ╇ 0.0 ╇ 0.0 ╇ 0.0 ╇ 0.0
-2.0 -1.0 ╇ 0.0 ╇ 1.0 ╇ 2.0
-1.0 ╇ 0.0 ╇ 0.0 ╇ 0.0 ╇ 1.0 N(0, 12)
-1.0 ╇ 0.0 ╇ 0.0 ╇ 0.0 ╇ 1.0 N(0, 12)
╇ 0.0 ╇ 0.0 ╇ 0.0 ╇ 0.0 ╇ 0.0 N(0, 12)
are followed by χ2 distribution with k degree of freedom, where k is the number of independent variables.
7.2 How to generate a data set by the Monte Carlo method We now explain how to make ANOVA data sets. The model is: yij = β0 + βi• + β•j + εij
(i = 1, 2,…, n; j = 1, 2,…, m)
(19)
where β0, βi•, and β•j are parameters and εij is a random disturbance term with a normal distribution, a mean of zero and a variance of σ2. The first step for making a virtual data set is to determine the values of β0, βi•, and β•j. Then we get the realized value of the stochastic variable εij by generating normal random numbers with a mean of zero and a variance of σ2. Our observations in the series of yij is calculated by equation (19). Table 7.4 shows the set of parameters and the disturbance term for the three cases. Next, we explain how to generate the data sets for the fixed-effects model and the random-effects model. The data-generating mechanism for the fixed-effects model is similar to that of ANOVA. The fundamental equation is: yit = αi + βxit + uit
(i = 1, 2,…, N; j = 1, 2,…, T)
(20)
where N is fixed as 50 and T is fixed as 10. The data set includes 50 agents and 10 time periods. This is called balanced panel data. The intercept of αi’s is a parameter we a priori determined, and β is a parameter common to the model. The
174╇╇ Microeconomic analysis using panel data Table 7.5╇ Virtual data set for the fixed-effects model (a)╇ Same region for the independent variable xit (a.1)
(a.2)
(a.3)
(a.4)h
(a.5)
αi
0 ~ 5.0 (0.01)
0 ~ 5.0 (0.01)
0 ~ 5.0 (0.1)
β xit uit
2.0 N(20, 102) N(0, 52)
2.0 N(20, 102) N(0, 502)
2.0 N(20, 102) N(0, 52)
0 ~ 4.5 (0.1) (for i = 1, 450) 104.51 ~ 105.0 (for i = 451, 500) 2.0 N(20, 102) N(0, 52)
0 ~ 4.9 (0.1) (for i = 1, 490) 104.91 ~ 105.0 (for i = 491, 500) 2.0 N(20, 102) N(0, 52)
(b)╇ Different region for the independent variable xit
αi
β xit
uit
(b.1)
(b.2)
(b.3)
(b.4)
(b.5)
10 (for i = 1, 25) 60 (for i = 26, 50) 2.0 [10, 50] (i = 1, 25) [50, 90] (i = 26, 50) N(0, 202)
20 (for i = 1, 25) 50 (for i = 26, 50) 2.0 [10, 50] (i=1, 25) [50, 90] (i = 26, 50) N(0, 202)
30 (for i = 1, 25) 40 (for i = 26, 50) 2.0 [10, 50] (i = 1, 25) [50, 90] (i = 26, 50) N(0, 202)
34 (for i = 1, 25) 36 (for i = 26, 50) 2.0 [10, 50] (i = 1, 25) [50, 90] (i = 26, 50) N(0, 202)
35 (for i = 1, 25) 35 (for i = 26, 50) 2.0 [10, 50] (i = 1, 25) [50, 90] (i = 26, 50) N(0, 202)
realized values of the disturbance term uit are generated by normal random numbers with a mean of zero and a variance of σ2. We consider several alternatives for the range of the independent variable xit. For example, there is a case in which the interval of xit is common to all agents, while in another case it differs between agents. As for generating random numbers, xit is generated by uniform random numbers in one case and by normal random numbers in another case. We present in Table 7.5 the conditions for the Monte Carlo experiment. We utilized the idea of Nerlove (1971) in making the random-effects model. The fundamental equation is: yit = αi + βxit + uit
(i = 1, 2,…, N; j = 1, 2,…, T)
(21)
This is the same as for the fixed-effects model. However, the intercept of αi’s in the random-effects model is not constant, but rather is a stochastic variable. Therefore, in the random-effects model, there are two kinds of random variables, αi and uit. The αi’s are unobservable and systematically affect the dependent variable, yit, while the uit’s are unobservable and the accumulation of other unobservable factors not specified in the model are referred to as shock.
Microeconomic analysis using panel data╇╇ 175 Table 7.6╇ Virtual data set for the random-effects model (a)╇ Same region for the independent variable xit
αi β xit uit
(a.1)
(a.2)
N(0, 102) 2.0 N(20, 102) N(0, 102)
N(0, 12) 2.0 N(20, 102) N(0, 12)
b)╇ Different region for the independent variable xit
αi β xit
uit
(b.1)
(b.2)
N(0,102) 2.0 [10, 50] (i = 1, 25) [50, 90] (i = 26, 50) N(0, 102)
N(0, 12) 2.0 [10,50] (i = 1, 25) [50, 90] (i = 26, 50) N(0, 12)
In order to make a set of virtual data for the random-effects model using the Monte Carlo method, first the unobservable variables αi and uit are formulated as: αi ~ IIN(0, σα2) uit ~ IIN(0, σu2) We then introduce the variable vit as: vit = αi + uit where vit has the following conditions: cov(vit vis) = σα2+ σu2 for t = s = σα2 for t ≠ s cov(vit vjs) = 0 for all t, s and i ≠ j To get the series of uit, there are three steps: 1 2 3
Obtain 50 αi’s from αi ~ IIN(0, ρσ2) where σ2 = σα2+ σu2 and ρ = σα2/ σ2. Obtain 500 uit’s from uit ~ IIN(0, (1 - ρ)σ2). Add αi’s and uit’s as vit = αi +uit.
Table 7.6 shows the virtual data used for the estimation of the random-effects model.
176╇╇ Microeconomic analysis using panel data
7.3 Examples 7.3.1 ANOVA For ANOVA the estimating equation is: yij = β0 + βi• + β•j + εij
(i = 1, 2,…, n; j = 1, 2,…, m)
(22)
As indicated in Table 7.4, in the benchmark set of case (1) the means of rows and columns have different values. Thus the equality test among rows or columns is rejected by the F-test statistics. Case (2) in Table 7.4 is made under the restriction that βi•’s are equal to zero, i.e., the means of rows are equal. On the other hand, in case (3) the condition that β•j’s are equal to zero indicates that there is no time specific trend in the model. To test the null hypothesis that there is no difference between rows: H0: βi• = 0 (i = 1, 2,…, n) HA: βi• ≠ 0 at least two βi•’s When the null hypothesis is true: F1 = (S2/(n - 1))/(S1/(mn - n - m + 1)) is followed by an F-distribution with degrees of freedom of n - 1 and mn - n - m + 1. The results of the F-test are shown in Table 7.7. Case (1) rejects the null hypothesis, while case (3) does not reject it. Using the same procedure, we test the equality of means between columns. The null hypothesis is: H0: β•j = 0 (j = 1, 2,…, m) HA: β•j ≠ 0 at least two β•j’s When the null hypothesis is true: F2 = (S3/(m - 1))/(S1/(mn - n - m + 1)) is followed by an F-distribution with degrees of freedom of m -1 and mn - n - m + 1. In case (1) the null hypothesis is rejected, while in case (2) it is not rejected. 7.3.2 Fixed-effects model We now explain the fixed-effects model, including the different ranges of the intercept αi’s and the independent variable xit. The fundamental fixed-effects model is: yit = αi + βxit + uit
(i = 1, 2,…, N; j = 1, 2,…, T)
(23)
Microeconomic analysis using panel data╇╇ 177 Table 7.7╇ Results by ANOVA
F1-value (499, 1996) F2-value (4, 1996)
Case (1)
Case (2)
Case (3)
╇ 10.77
10.77
╇╇ 1.06
300.56
╇ 0.332
300.56
Note: F0.05 = 1.30.
where the values of αi differ among agents, but the slope parameter β is common to agents. There are many examples of unobservable effects specified by the fixed-effects model. One is the difference in management ability among firms as reflected in the productivity of the firms. To measure the managerial ability of the executives in a firm is difficult, and so it is difficult to gather such data. However, using financial data we can easily observe that there are differences in the management abilities of excellent companies and ordinary companies within the same industry. Due to such casual observation, we can specify the differences in management ability as the differences in the intercept of the model. We conduct two types of Monte Carlo method. One is on the relative magnitude of the distance of the intercepts and the variance of the disturbance term. The other is on the interval of the independent variable xit. In one case the interval of xit is the same for all agents in the model; in the other case, the interval differed among agents. The estimation results are shown in Table 7.8. When the range of the independent variable is common to all agents, there is no difference in the value of the slope parameter when using plain OLS, OLS on means, and OLS within estimation. This indicates that the value of the slope parameter is around 2.0, though the values of the intercept are different in each case. The P-value for the F-test of α = αi is small, indicating that through the F-test we can verify whether the fixed-effects model or the random-effects model works. The exception is case (a.2), where the value of the disturbance term is large compared to the unobservable fixed-effects. On the other hand, in Table 7.8(b), when the range of the independent variable differs among agents, the estimates of the slope parameter fluctuate when using plain OLS, OLS on means, and OLS within estimation. 
The F-test told us that when the gap of the intercept is large, it is easy to extract the fixed effects. But when the gap is narrow, it is difficult to extract the effects. 7.3.3 Random-effects model We next explain the random-effects model, including the different ranges of the independent variable xit. The fundamental random-effects model is: yit = αi + βxit + uit
(i = 1, 2,…, N; j = 1, 2,…, T)
(24)
where αi and uit are stochastic variables but the slope parameter β is common to agents.
178╇╇ Microeconomic analysis using panel data Table 7.8╇ Estimation results of the fixed-effects model (a)╇ Same region for the independent variable xit (a.1)
(a.2)
(a.3)
(a.4)
(a.5)
Total (plain OLS): 2.173 (13.4) 2.013 Slope (β) (280.6) 0.940 Adjusted R2 Between (OLS on means):
3.219 (2.0) 2.039 (29.1) 0.144
25.06 (51.6) 1.994 (92.3) 0.630
35.24 (28.0) 1.985 (35.3) 0.200
26.29 (42.73) 1.990 (71.8) 0.508
2.312 (3.5) 2.006 Slope (β) (62.1) Adjusted R2 0.885 Within (fixed effects):
5.648 (1.2) 1.918 (8.5) 0.127
26.43 (6.3) 1.926 (9.4) 0.149
36.96 (3.2) 1.900 (3.3) 0.020
31.12 (5.8) 1.747 (6.5) 0.078
2.053 (27.8) [0.207]
2.001 (267.2) [0.000]
1.995 (274.8) [0.000]
2.018 (276.6) [0.000]
Intercept (α)
Intercept (α)
Slope (β) F-test of α = αi
2.014 (280.9) [0.000]
b)╇ Different regions for the independent variable xit (b.1)
(b.2)
(b.3)
(b.4)
(b.5)
-9.708 (4.3) 2.883 Slope (β) (70.2) Adjusted R2 0.892 Between (OLS on means):
4.941 (2.3) 2.601 (68.0) 0.892
25.35 (12.7) 2.210 (58.8) 0.862
35.62 (16.6) 2.022 (53.1) 0.846
34.05 (15.9) 2.004 (52.0) 0.846
-23.09 (7.5) 3.150 (55.6) 0.984
-3.563 (1.4) 2.768 (61.0) 0.987
22.00 (10.8) 2.278 (59.8) 0.986
34.49 (12.5) 2.045 (39.7) 0.969
34.49 (17.0) 1.995 (52.9) 0.982
1.922 (24.7) [0.000]
1.985 (24.8) [0.000]
1.984 (25.7) [0.685]
1.943 (26.3) [0.032]
2.032 (25.9) [0.931]
Total (plain OLS): Intercept (α)
Intercept (α) Slope (β) Adjusted R2 Within (fixed effects): Slope (β) F-test of α = αi
The results of our analysis are presented in Table 7.9. When the range of the independent variable is the same for all agents, there is no difference when using plain OLS, OLS on means, and OLS within estimations. And even when the range of the independent variable differs, the estimates of the slope parameter are relatively stable.
Table 7.9╇ Estimation results of the random-effects model (a)╇ Same region for the independent variable xit (a.1)
(a.2)
Total (plain OLS): Intercept (α) Slope (β) Adjusted R2 Between (OLS on means): Intercept (α) Slope (β) Adjusted R2 Within (fixed effects): Slope (β) F-test of α = αi Variance components (random effects): Intercept (α) Slope (β) Hausman test
-1.285 (0.0) 1.973 (29.0) 0.612
-0.0803 (0.5) 1.997 (327.3) 0.994
-7.721 (0.0) 2.290 (4.3) 0.271
0.5100 (0.5) 1.967 (45.1) 0.976
1.933 (40.6) [0.000]
2.000 (391.0) [0.000]
-0.5505 (0.0) 1.937 (38.9) [1.000]
-0.1449 (0.0) 2.000 (399.6) [0.673]
a)╇ Different regions for the independent variable xit (b.1)
(b.2)
7.238 (4.7) 1.925 (66.0) 0.901
0.2252 (1.6) 1.998 (784.6) 0.999
7.843 (1.8) 1.913 (24.3) 0.923
0.2611 (0.7) 1.997 (313.2) 0.999
1.965 (45.5) [0.000]
2.001 (514.3) [0.000]
5.794 (2.4) 1.954 (53.8) [0.623]
0.1483 (0.7) 2.000 (599.0) [0.653]
Total (plain OLS): Intercept (α) Slope (β) Adjusted R2 Between (OLS on means) Intercept (α) Slope (β) Adjusted R2 Within (fixed effects) Slope (β) F-test of α = αi Variance components (random effects) Intercept (α) Slope (β) Hausman test
180╇╇ Microeconomic analysis using panel data We compare the estimation results for this analysis in Tables 7.8 and 7.9, focusing on the values of the slope parameter. When the region of the independent variable is similar among agents, the fluctuation of the estimates of the slope parameter is small, indicating that the estimation results from pooling crosssection and time-series data are similar. However, the range of the independent variable differs, and in case (b) in Table 7.8 the estimates of the slope parameters are diverse. After generating a data set by pooling cross-section and time-series data, it is important to check the characteristics of the data set before conducting your estimation. 7.3.4 Fixed-effects versus random-effects models This section explains the process for choosing between the fixed-effects and random-effects models through hypothesis testing. In Table 7.8 we examine the plausibility of an F-test to check the fixed-effects model. When the null hypothesis that α = αi is rejected, we choose the fixed-effects model. Looking at Table 7.9, the model is rejected by the F-test statistics. However, when applying the Hausman test, the random-effects model is reasonable, and thus we select this model. When choosing between the fixed-effects and random-effects models, the following process may be practical. First, compare the pooling estimator (OLS) with the fixed-effects estimator (LSDV). When the null hypothesis that all the intercepts are equal is rejected by the F-test, the fixed-effects model is proven valid. Second, test the random-effects estimation against pooling estimation under the null hypothesis that the average of residuals by pooling methods is zero by using the Lagrange multiplier method. When the null hypothesis is rejected, the random-effects model is not rejected. Third, use the Hausman test to choose between the fixed-effects model and the random-effects model. 
When the null hypothesis is rejected, the fixed-effects model is verified.
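The first step of this selection process, an F-test comparing pooled OLS against the fixed-effects (LSDV) estimator, can be sketched in code. This is an illustrative simulation, not the book's virtual data set; the sample sizes, parameter values, and seed below are mine:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 10, 20                            # 10 agents observed over 20 periods
alpha_i = rng.normal(5.0, 2.0, N)        # agent-specific intercepts
x = rng.normal(0.0, 1.0, (N, T))
y = alpha_i[:, None] + 0.8 * x + rng.normal(0.0, 1.0, (N, T))

def rss(X, yv):
    """Residual sum of squares of an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, yv, rcond=None)
    r = yv - X @ beta
    return r @ r

yv, xv = y.ravel(), x.ravel()
X_pool = np.column_stack([np.ones(N * T), xv])   # restricted: common intercept
D = np.kron(np.eye(N), np.ones((T, 1)))          # one dummy per agent
X_lsdv = np.column_stack([D, xv])                # unrestricted: LSDV

rss_r, rss_u = rss(X_pool, yv), rss(X_lsdv, yv)
F = ((rss_r - rss_u) / (N - 1)) / (rss_u / (N * T - N - 1))
# A large F rejects the null that alpha = alpha_i for all i,
# favouring the fixed-effects model
```

The statistic would then be compared with the F(N − 1, NT − N − 1) critical value; with intercepts as dispersed as in this simulation, the rejection is decisive.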
Bibliography

Arellano, M. (2003) Panel Data Econometrics, Oxford: Oxford University Press.
Baltagi, B. H. (1995) Econometric Analysis of Panel Data, New York: John Wiley.
Greene, W. H. (2008) Econometric Analysis, Upper Saddle River, NJ: Prentice Hall.
Hsiao, C. (1986) Analysis of Panel Data, Cambridge: Cambridge University Press.
Maddala, G. S. and K. Lahiri (2009) Introduction to Econometrics, Chichester, West Sussex: John Wiley.
Nerlove, M. (1971) "Further evidence on the estimation of dynamic economic relations from a time series of cross-sections," Econometrica, 39, 359–387.
Wilks, S. S. (1962) Mathematical Statistics, New York: John Wiley.
Wooldridge, J. M. (2002) Econometric Analysis of Cross-Section and Panel Data, Cambridge, MA: MIT Press.
8 Macroeconomic time-series analysis
Time-series analysis is one of the exciting frontiers of econometric methods and a dynamic, evolving field of macroeconomic research. It provides tools for gaining a better and more dynamic understanding of economic activity, which can contribute greatly to the policy-making process.

To conduct empirical time-series research, we normally make a seasonal adjustment in the original data series to correct for possible bias, for example by using seasonal or monthly dummy variables. How to make this seasonal adjustment is one of the main issues in time-series analysis. The Bureau of Labor Statistics (BLS) and the Bureau of Economic Analysis (BEA) in the United States publish seasonally adjusted economic statistics in order to better understand economic activity and craft appropriate policies. Why is it important to make seasonal adjustments in the data? Because economic activity changes systematically at different times of the year. When we look at the monthly survey on consumer expenditure, for example, we find systematic changes in expenditure over the year: expenditure suddenly increases in December with the coming of Christmas. Seasonal adjustment is thus necessary to control for such fluctuations. A primitive method of seasonal adjustment is calculating a moving average; more recently, researchers have developed many new and more sophisticated techniques, such as X-11 and X-12-ARIMA, for making seasonal adjustments in economic statistics.

Time-series analysis has been a key topic in applied econometrics since 1973 as a result of the first oil price shock. The sudden sharp increase in oil prices initiated by the Organization of Petroleum Exporting Countries (OPEC) sent shockwaves throughout the global economy, dramatically increasing the cost of production of many goods and thus product prices. In those days, there was widespread skepticism about forecasting based on macroeconomic models.
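The moving-average approach to seasonal adjustment mentioned above can be sketched as a centered 12-month filter. The data below are artificial (a linear trend plus a December spike), and the function name is mine:

```python
import numpy as np

def centered_ma12(y):
    """Centered 12-month moving average, a primitive seasonal filter.

    The two end months of the 13-month window get half weight, so each of
    the 12 calendar months receives the same total weight in the average.
    """
    w = np.ones(13)
    w[0] = w[12] = 0.5
    w /= 12.0
    return np.convolve(y, w, mode="valid")   # loses 6 observations at each end

# Artificial monthly series: linear trend plus a December spike
t = np.arange(120)
y = 100.0 + 0.5 * t + np.where(t % 12 == 11, 30.0, 0.0)
trend = centered_ma12(y)
# The filter absorbs the period-12 seasonal spike exactly,
# leaving a linear trend (shifted up by the seasonal mean)
```

Because the window spans exactly one year with symmetric weights, a deterministic seasonal pattern of period 12 is averaged out completely, while a linear trend passes through unchanged.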
Because of the tremendous increase in oil prices, the cost structure on the supply side changed dramatically, and macroeconomic relationships became extremely volatile. To overcome the shortcomings of macroeconomic models in analyzing such dramatic changes, monetarists developed time-series analysis, applying existing knowledge obtained by statisticians.
Monetarists base their forecasts on estimated vector autoregressive (VAR) models, which utilize time-series analysis. Macro-econometric models, on the other hand, are estimated and used for policy evaluation by Keynesian economists. VAR models are easier to estimate than Keynesian macro-econometric models, and the greater reliability of the monetarist approach led to heightened interest in time-series analysis. Scholars who believe in the structural approach to economic analysis, however, claim that the monetarist approach merely provides measurement without any theoretical basis, which in their view is problematic. Monetarists reply that the most important element of forecasting is the accuracy of the predicted values, that VAR implicitly formulates the structure, and that it is actually more accurate than the structural approach.

In the 1980s, spurious regression was discovered in time-series data. This finding was quite important: if a time series is non-stationary and has a unit root, we can have no confidence in the estimation results, whereas if macro time-series data are stationary, the problem of spurious regression is avoided. When we estimate a macroeconomic model with linear equations using non-stationary time-series data, we appear to get a good fit. This is spurious regression: even if there is no relationship between the dependent and independent variables, the fit of the model looks satisfactory, and the model is sometimes "verified" by observations from non-stationary time-series data. Economists thus came to question the credibility of previous econometric results using macroeconomic statistics. In applied econometric analysis, spurious regression makes the fundamental characteristics of time-series data particularly important. The unit root test checks whether a time series is a DSP (difference stationary process) or a TSP (trend stationary process).
To avoid spurious regression with non-stationary time-series data, it is crucial to adopt the co-integration method of analysis.

In Section 8.1 we explain the characteristics of time series and the VAR and error correction model (ECM) approaches to time-series analysis. Section 8.2 describes how to generate a data set by the Monte Carlo method. Section 8.3 considers some examples of time-series models: Section 8.3.1 explains VAR models, Section 8.3.2 examines spurious regression using numerical examples, Section 8.3.3 discusses the unit root test, and Section 8.3.4 explains co-integration analysis.
8.1 Characteristics of time series and time-series models

The term yt denotes a time series. It is stationary when (1) E(yt) = μ < ∞, (2) V(yt) = γ(0) < ∞, and (3) cov(yt, yt-s) = γ(s), indicating that the mean, variance, and autocovariances are finite and do not depend on t. Now let's consider the following equation:

yt = αyt-1 + εt
(1)
where εt is a random variable with mean 0 and variance σ². In this equation, the value of α plays an important role in determining the character of yt. When the absolute value of α is greater than unity, the series diverges (it is explosive). When α = 1, the movement of yt is called a random walk. Combining these two cases, i.e. when the absolute value of α is greater than or equal to unity, the series is called non-stationary. On the other hand, when the absolute value of α is less than unity, the series is called stationary. The character of the series thus changes drastically depending on whether α = 1 or |α| < 1, and identifying whether or not α = 1 is a critically important issue in time-series analysis. When the series is a stationary process, we can utilize existing estimation methods. When the series is a non-stationary process, we have to apply other estimation methods to obtain reliable results. When α = 1, equation (1) becomes:

yt = yt-1 + εt = y0 + ε1 + ε2 + … + εt
(2)
where y0 is the initial value of yt. The mean of yt is y0, its variance is tσ², and the covariance between yt and yt-s is (t − s)σ². That is, the variance of yt increases as t increases. On the other hand, if the absolute value of α is less than 1, equation (1) becomes:

yt = αyt-1 + εt = α^t y0 + α^(t-1)ε1 + α^(t-2)ε2 + … + εt
(3)
The mean and variance of yt are 0 (ignoring the vanishing effect of the initial value) and σ²/(1 − α²), respectively. In the non-stationary process the variance increases as t increases, while in the stationary process the variance is stable even as t increases. As an extension of equation (1), an intercept is now included:

yt = β + αyt-1 + εt
(4)
When the parameter α = 1, the series yt can be written as:

yt = y0 + βt + ε1 + ε2 + … + εt
(5)
The expected value and variance of yt are y0 + βt and tσ², respectively. As before, the variance increases as t increases, and the series yt is a non-stationary time series. On the other hand, if the absolute value of α is less than 1, the mean and variance of yt are β/(1 − α) and σ²/(1 − α²), respectively, indicating that the series is stationary. In short, whether or not α = 1 is crucial in determining the character of the series. Figure 8.1 displays the time series:

xt = 1.02xt-1 + εt
yt = yt-1 + εt
wt = wt-1 + εt
zt = 0.5zt-1 + εt
Figure 8.1 Several time series: X(t), a divergent process; Y(t) and W(t), random walks (Y(0) = 100, W(0) = 300); Z(t), a stationary process.
where the initial values of xt, yt, and zt are 100, that of wt is 300, and εt is IIN(0, 5²). In Figure 8.1 we can see that the paths of yt and wt depend on the initial value, and that the effect of the initial value persists for a long period.

When we define Δyt = yt − yt-1 and Δyt = εt, and assume that Δyt is stationary, Δyt is called I(0) (integrated of order zero), and the original series yt is called I(1) (integrated of order one).

The VAR model is used for time-series analysis utilizing macroeconomic models. We propose two VAR models. The first has endogenous variables on the left-hand side and predetermined (lagged) endogenous variables on the right-hand side. The second also has endogenous variables on the left-hand side, but both predetermined endogenous and exogenous variables on the right-hand side. The first example is:

xt = α0xt-1 + α1yt-1 + ε1t
yt = β0xt-1 + β1yt-1 + ε2t
(6)
The second is:

xt = α0xt-1 + α1yt-1 + α2zt + ε1t
yt = β0xt-1 + β1yt-1 + β2zt + ε2t
(7)
where the endogenous variables are xt and yt, the predetermined endogenous variables are xt-1 and yt-1, and the exogenous variable is zt.

Now we consider the unit root test. Whether or not a unit root exists in an economic time series is important for identifying whether the series is TSP or DSP. In time-series analysis, whether a series is stationary matters because a shock to a stationary series dies out over the long run, while a shock to a non-stationary series persists for a long period, as depicted in Figure 8.1. Let us next consider the following equation:

yt = α + ρyt-1 + βt + εt
(8)
The random disturbance εt here is called white noise and follows εt ~ IID(0, σ²). If ρ = 1 and β = 0, then yt is a random walk and yt is DSP. The null hypothesis for the unit root test is ρ = 1 and β = 0; in other words, the null hypothesis is that the time series is non-stationary. The alternative hypothesis is |ρ| < 1, namely that the time series is stationary. Now we transform equation (8) into:

Δyt = α + φyt-1 + βt + εt
(9)
This equation, in which φ = ρ − 1, yields the same results as equation (8); the null hypothesis becomes φ = 0 and β = 0, and the alternative hypothesis is φ < 0. The unit root test was proposed initially by Dickey and Fuller (1979) in a seminal paper on the topic. Another test for the existence of a unit root is the ADF (augmented Dickey–Fuller) test. As its name suggests, this test extends the original Dickey–Fuller test by adding lagged differences:

yt = α + ρyt-1 + βt + ∑k γkΔyt-k + εt
(10)
The null hypothesis for this unit root test is again ρ = 1 and β = 0.

Even when the corresponding time series are DSP, we can estimate the long-run relationship between the non-stationary time series xt and yt using the co-integration analysis proposed by Engle and Granger (1987) and Johansen (1988). Suppose there are two non-stationary series xt and yt that become stationary after taking the first difference; that is, xt and yt are non-stationary while Δxt and Δyt are stationary. Then xt and yt are said to be integrated of order one, written I(1). If yt − β0 − β1xt is I(0) for some β0 and β1, then xt and yt have a co-integrated relationship.

The Engle–Granger type estimation procedure for co-integration analysis is as follows. First we check the characteristics of the time series by the unit root test. Then, if the characteristics are suitable for co-integration, we estimate the long-run relationship by the ECM.
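The stationarity concepts of this section are easy to explore by simulation. The sketch below regenerates processes like those of Figure 8.1 across many Monte Carlo replications and confirms that the cross-replication variance of the random walk grows with t, while that of the stationary series stays roughly constant (the seed and replication count are mine):

```python
import numpy as np

rng = np.random.default_rng(1)
T, R = 80, 500                       # 80 periods, 500 Monte Carlo replications
eps = rng.normal(0.0, 5.0, (R, T))   # eps_t ~ IIN(0, 5^2), as in Figure 8.1

def ar1(alpha, y0):
    """Simulate y_t = alpha * y_{t-1} + eps_t for every replication at once."""
    y = np.empty((R, T))
    prev = np.full(R, float(y0))
    for t in range(T):
        prev = alpha * prev + eps[:, t]
        y[:, t] = prev
    return y

x = ar1(1.02, 100)   # divergent process
y = ar1(1.00, 100)   # random walk
z = ar1(0.50, 100)   # stationary process

# Random walk: cross-replication variance grows roughly like t * sigma^2 ...
print(np.var(y[:, 79]) > np.var(y[:, 19]))
# ... while the stationary series settles near sigma^2 / (1 - alpha^2)
print(round(np.var(z[:, 79]) / np.var(z[:, 19]), 1))
```

The first check reflects Var(yt) = tσ² for the random walk; the second reflects the constant variance σ²/(1 − α²) of the stationary AR(1) process.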
8.2 How to generate a data set by the Monte Carlo method

Table 8.1 displays the sets of parameters for the VAR models, including the initial values of xt and yt and the distributions of the disturbance terms ε1t and ε2t.
Table 8.1 VAR models: virtual data

Parameter   (1)          (2)
α0          0.7          0.7
α1          0.3          0.3
β0          0.4          0.4
β1          0.6          0.6
x1          400          400
y1          600          600
ε1          N(0, 20²)    N(0, 20²)
ε2          N(0, 20²)    N(0, 20²)
ρε1ε2       0            0
n           301          301
γ0          –            10
γ1          –            20
α2          –            0.1
β2          –            0.25
z           –            [50, 100]
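A virtual data set matching column (1) of Table 8.1 can be generated recursively; the sketch below also re-estimates the first equation by OLS as a check (the seed is mine, the variable names are illustrative, and no constant is included, as in equation (6)):

```python
import numpy as np

rng = np.random.default_rng(2)
a0, a1, b0, b1 = 0.7, 0.3, 0.4, 0.6    # Table 8.1, column (1)
n = 301
x = np.empty(n)
y = np.empty(n)
x[0], y[0] = 400.0, 600.0              # initial values
e1 = rng.normal(0.0, 20.0, n)          # N(0, 20^2), uncorrelated (rho = 0)
e2 = rng.normal(0.0, 20.0, n)

for t in range(1, n):
    x[t] = a0 * x[t - 1] + a1 * y[t - 1] + e1[t]
    y[t] = b0 * x[t - 1] + b1 * y[t - 1] + e2[t]

# Check: OLS on the x equation should recover estimates near (0.7, 0.3)
X = np.column_stack([x[:-1], y[:-1]])
ahat, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
```

With n = 301 the OLS estimates fall close to the true values, mirroring the book's finding in Table 8.3 that the VAR estimates are statistically indistinguishable from the truth.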
To make a virtual data set for spurious regression, we construct data for a random walk and a random walk with drift and then conduct a regression. The fundamental equations for xt and yt are:

xt = α0 + xt-1 + ε1t
yt = β0 + yt-1 + ε2t
(11)
Table 8.2 shows the menu of parameters α0, β0, ε1, and ε2; these series also serve as a numerical example for the unit root test. In addition, we include two stationary time series:

xt = 100 + 0.5t + εt
(12)
where εt ~ IIN(0, 10²), and:

yt = 1 + 0.2yt-1 + 0.5t + εt
(13)
The initial value of yt is 200 and εt ~ IIN(0, 10²).
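Generating the virtual series is a matter of cumulating the drift and disturbances of equation (11) and iterating equations (12) and (13). A sketch using the case (h) parameters of Table 8.2 (the seed and sample length are mine):

```python
import numpy as np

rng = np.random.default_rng(3)
T = 500
a0, b0 = 1.0, 1.0                          # drifts: Table 8.2, case (h)
e1 = rng.normal(0.0, 3.0, T)               # eps1 ~ N(0, 3^2)
e2 = rng.normal(0.0, 5.0, T)               # eps2 ~ N(0, 5^2)

# Random walks with drift, equation (11), with x(0) = 100 and y(0) = 200
x = 100.0 + np.cumsum(a0 + e1)
y = 200.0 + np.cumsum(b0 + e2)

# Stationary series, equations (12) and (13)
t = np.arange(1, T + 1)
x_st = 100.0 + 0.5 * t + rng.normal(0.0, 10.0, T)
y_st = np.empty(T)
prev = 200.0                               # initial value of y_t
for k in range(T):
    prev = 1.0 + 0.2 * prev + 0.5 * t[k] + rng.normal(0.0, 10.0)
    y_st[k] = prev
```

The drifting random walks trend upward at roughly α0 and β0 per period, while the stationary series fluctuate around deterministic trends, exactly the contrast the figures in Section 8.3 illustrate.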
8.3 Examples

8.3.1 VAR models

We earlier specified two types of VAR models:

xt = α0xt-1 + α1yt-1 + ε1t
yt = β0xt-1 + β1yt-1 + ε2t
(14)
Table 8.2 Spurious regression: virtual data

Case   α0    β0    ε1          ε2          x(0)   y(0)
(a)    0     0     N(0, 3²)    N(0, 5²)    100    200
(b)    0.1   0.1   N(0, 3²)    N(0, 5²)    100    200
(c)    0.2   0.2   N(0, 3²)    N(0, 5²)    100    200
(d)    0.3   0.3   N(0, 3²)    N(0, 5²)    100    200
(e)    0.4   0.4   N(0, 3²)    N(0, 5²)    100    200
(f)    0.5   0.5   N(0, 3²)    N(0, 5²)    100    200
(g)    0     1     N(0, 3²)    N(0, 5²)    100    200
(h)    1     1     N(0, 3²)    N(0, 5²)    100    200
and:

xt = α0xt-1 + α1yt-1 + α2zt + ε1t
yt = β0xt-1 + β1yt-1 + β2zt + ε2t
(15)
where the endogenous variables are xt and yt, the predetermined endogenous variables are xt-1 and yt-1, and the exogenous variable is zt. Table 8.3 shows the estimation results for these VAR models. The estimates are unbiased: comparing them with the true values in Table 8.1, there is no statistically significant difference between the two. In Table 8.3(a), for the xt equation, the coefficient a0 is 0.682 and 0.714 in the two models, respectively, and a1, the coefficient of yt-1, is 0.301 and 0.286. For the yt equation, b0, the coefficient of xt-1, is 0.412 and 0.408, and b1, the coefficient of yt-1, is 0.579 and 0.591. In Table 8.3(b) the variables xt-2 and yt-2 are included, but their estimates are not statistically significant because the t-values are small; we cannot reject the null hypothesis that each coefficient is zero.

8.3.2 Spurious regression

When conducting regression analysis on time-series data, we implicitly assume that the series is a stationary process, specifically TSP. Suppose two time series xt and yt are TSP and yt is related to xt according to economic theory as yt = α + βxt. Usually we estimate the parameters α and β by applying the regression model yt = α + βxt + ut to the observed data, where ut is a disturbance term with a specified stochastic distribution.

We now explain spurious regression in the case of the Keynesian macro-consumption function. The macro-consumption function is assumed to be linear:

Ct = α + βYt
(16)
Table 8.3 VAR models: estimation results (figures in parentheses are t-values)

(a)
x equation      (1)             (2)
x(-1)           0.682 (16.4)    0.714 (18.7)
y(-1)           0.301 (7.4)     0.286 (7.4)
Constant        5.79 (1.6)      1.65 (0.2)
y equation
x(-1)           0.412 (9.0)     0.408 (10.1)
y(-1)           0.579 (12.9)    0.591 (14.7)
Constant        0.500 (0.1)     18.9 (2.5)
z(x)            –               0.215 (2.4)
z(y)            –               0.306 (3.3)

(b)
x equation      (1)             (2)
x(-1)           0.710 (12.0)    0.675 (11.6)
x(-2)           -0.0228 (0.3)   0.0766 (1.2)
y(-1)           0.3266 (6.0)    0.294 (5.3)
y(-2)           -0.0310 (0.5)   -0.045 (0.8)
Constant        5.941 (1.6)     3.421 (0.5)
y equation
x(-1)           0.4168 (6.4)    0.410 (6.8)
x(-2)           -0.0540 (0.8)   -0.0172 (0.2)
y(-1)           0.4168 (6.4)    0.410 (6.8)
y(-2)           -0.0540 (0.8)   -0.0172 (0.2)
Constant        0.739 (0.1)     16.97 (2.1)
z(x)            –               0.224 (2.4)
z(y)            –               0.323 (3.5)
When we gather data on total consumption and disposable income from the National Income and Product Accounts and estimate the parameters with a regression model, the degree of fit is usually high: the correlation coefficient between the estimated and observed values of consumption is nearly unity. A high degree of fit is commonly observed for the Keynesian consumption function. However, time-series analysis shows that when two economic time series are DSP, they appear to be strongly correlated even if they are independent of each other and have no actual economic relationship.

Returning to the macro-consumption function: even if there were no theoretical relationship between present consumption and present disposable income, and the time series of income and consumption were DSP, statistical estimation would still appear to show a strong relationship between the two. This is a spurious regression of consumption on income, and we need to be careful about such misleading correlations. To reiterate: when a time series is DSP, as opposed to TSP, a strong statistical relationship can appear between two series even though there is actually no relationship between them in economic theory.
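The phenomenon is easy to reproduce by Monte Carlo: regressing one pure random walk on another, independently generated one typically yields a sizable R², whereas the same regression on independent white-noise series does not. A minimal sketch (the seed, sample size, and replication count are mine):

```python
import numpy as np

def r2(y, x):
    """Coefficient of determination from the OLS fit of y on a constant and x."""
    X = np.column_stack([np.ones(len(x)), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    return 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(4)
R, T = 200, 500
r2_walk = np.empty(R)
r2_iid = np.empty(R)
for i in range(R):
    e1, e2 = rng.normal(size=(2, T))
    r2_walk[i] = r2(np.cumsum(e2), np.cumsum(e1))  # two independent random walks
    r2_iid[i] = r2(e2, e1)                         # two independent white noises

# Although x and y are unrelated by construction, the random-walk regressions
# show a substantial average fit, while the white-noise regressions do not
print(round(r2_walk.mean(), 2), round(r2_iid.mean(), 3))
```

The average R² across the random-walk replications is many times that of the white-noise case, which is the spurious-regression effect in miniature.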
Let us now consider spurious regression in general. Say there are two time series that are DSP and have no relationship in economic theory. Applying regression analysis to this pair of series, it appears as if there is a strong correlation between the two: theoretically there is no relationship, but statistically there appears to be a strong one, due to the DSP character of the series.

We next express spurious regression in mathematical form. Say there are two time series, xt and yt. We assume that both are I(1) and that each is a random walk, so the first differences are white noise:

Δxt = ε1t
Δyt = ε2t
(17)
where ε1t and ε2t are called white noise and have means of zero and variances σ1² and σ2². Even if the series xt and yt have no relationship in economic theory, i.e., xt and yt move independently of each other, the regression equation in linear form:

yt = α + βxt + ut
(18)
appears to show a high correlation between xt and yt. This is called a spurious regression. Why is a spurious regression probable when the time series are random walks? In the driftless case, the series xt and yt are formulated as:

Δxt = ε1t → xt = xt-1 + ε1t
Δyt = ε2t → yt = yt-1 + ε2t
(19)
Then xt is transformed as:

xt = xt-1 + ε1t = x0 + ∑ε1k, summing over k = 1, …, t

and the series yt becomes:

yt = y0 + ∑ε2k
(20)
The expectation of xt is E(xt) = x0 and its variance is Var(xt) = tσ1²; similarly, E(yt) = y0 and Var(yt) = tσ2². When t goes to infinity, the variances of xt and yt become infinite. Applying regression analysis to such time series, xt and yt appear to be strongly correlated through the effect of these ever-broader variances. Next, we consider the case of random walk with drift, in which the series Δxt and Δyt are written as:
Table 8.4 Spurious regression: adjusted coefficient of determination

(a)      (b)      (c)      (d)      (e)      (f)      (g)      (h)
0.115    0.055    0.296    -0.001   0.858    0.946    0.007    0.979
Δxt = α0 + ε1t
Δyt = β0 + ε2t
(21)
and the series xt and yt are rewritten as:

xt = α0 + xt-1 + ε1t
yt = β0 + yt-1 + ε2t
(22)
Then these two series are transformed into:

xt = α0 + xt-1 + ε1t = x0 + α0t + ∑ε1k, summing over k = 1, …, t

and similarly yt becomes:

yt = y0 + β0t + ∑ε2k
(23)
Therefore, the series xt and yt each exhibit a linear trend, their variances grow as tσ1² and tσ2², and they are non-stationary. Applying regression analysis to such time series, xt and yt appear to be strongly correlated through the combined effect of the broadening variances and the trends α0t in xt and β0t in yt.

The OLS estimation results are shown in Table 8.4, which reports the coefficient of determination adjusted for degrees of freedom for the eight cases. The adjusted coefficient of determination ranges from no correlation in case (d) to 0.979 in case (h). The scatter of xt and yt for cases (c), (e), and (h) is shown in Figures 8.2, 8.3, and 8.4, respectively. Though xt and yt move independently, the series suggest some correlation between them. The regression result for case (h) is as follows:

yt = 40.0 + 1.10xt
     (15.2) (154.8)

adjusted coefficient of determination: 0.979; D–W: 0.079
(24)
This example is thus a case of spurious regression. Even if the estimates look good, it is best not to accept them uncritically. In fact, the better the results look,
Figure 8.2 Time series (case (c)): X(t) and Y(t) plotted against time.
Figure 8.3 Time series (case (e)): X(t) and Y(t) plotted against time.
Figure 8.4 Time series (case (h)): X(t) and Y(t) plotted against time.
the more cautious we should be. Figure 8.5 shows the scatter diagrams for cases (c), (e), and (h), and Figure 8.6 plots the stationary time series of case (i). Whether a time series is TSP or DSP is thus important in applied econometric work for avoiding spurious correlation. The unit root test discussed below is conducted to check the characteristics of each time series.

8.3.3 Unit root test

In addition to the time series in Table 8.2, we include the two stationary time series xt and yt:

xt = 100 + 0.5t + εt
(25)
yt = 1 + 0.2yt-1 + 0.5t + εt
(26)
where εt ~ IIN(0, 10²) and the initial value of yt is 200. The estimation results are shown in column (i) of Table 8.5. The dependent variables are Δxt and Δyt, and we used equation (9) to obtain the estimates. Table 8.5 covers the series (c), (e), and (h) from Table 8.2 and the stationary series (i). For the series xt in (c), the P-value is 0.289; the P-value for yt is 0.830. Thus these time series are non-stationary. In case (i), by contrast, the P-values for xt and yt are both 0.000, indicating that the null hypothesis of φ = 0 and β = 0 is rejected. Thus the time-series data are stationary.
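The Dickey–Fuller regression of equation (9) can be run directly with OLS; the sketch below computes the t-statistic on φ for simulated random walks and for the trend-stationary series (25). This statistic must be compared with Dickey–Fuller critical values, not the normal distribution; the seed and sample sizes below are mine:

```python
import numpy as np

def df_tstat(y):
    """t-statistic on phi in  dy_t = a + phi*y_{t-1} + b*t + e_t  (equation 9)."""
    dy = np.diff(y)
    T = len(dy)
    X = np.column_stack([np.ones(T), y[:-1], np.arange(1, T + 1)])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (T - 3)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

rng = np.random.default_rng(5)
# 50 random walks (DSP): on average, phi = 0 should not be rejected
walk_stats = [df_tstat(200.0 + np.cumsum(rng.normal(0.0, 10.0, 500)))
              for _ in range(50)]
# Trend-stationary series as in equation (25): x_t = 100 + 0.5t + eps_t
t_ax = np.arange(1, 501)
stat_series = 100.0 + 0.5 * t_ax + rng.normal(0.0, 10.0, 500)

print(np.mean(walk_stats) > -3.0)    # small average |t|: unit root not rejected
print(df_tstat(stat_series) < -10.0) # strongly negative t: stationary around trend
```

This mirrors the pattern in Table 8.5: the DSP cases (c), (e), and (h) produce small t-statistics on the lagged level, while the stationary case (i) produces a t-statistic around 21 in absolute value.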
Figure 8.5 Spurious regression (scatter diagram): Y(t) against X(t) for cases (c), (e), and (h).
Figure 8.6 Time series (case (i)): X(t) and Y(t) plotted against time.
Table 8.5 Unit root test (figures in parentheses are t-values; D–F P-values in brackets)

(1) Unit root test of xt (dependent variable: Δx)

             (c)            (e)            (h)            (i)
Intercept    2.80 (2.6)     1.11 (2.2)     2.28 (2.8)     94.51 (20.8)
t            0.002 (1.9)    0.007 (2.0)    0.010 (1.6)    0.4743 (20.9)
x(-1)        -0.027 (2.6)   -0.011 (1.8)   -0.011 (1.7)   -0.947 (21.1)
D–F          [0.289]        [0.741]        [0.817]        [0.000]

(2) Unit root test of yt (dependent variable: Δy)

             (c)            (e)            (h)            (i)
Intercept    2.04 (1.6)     1.60 (1.2)     2.69 (1.9)     8.192 (8.0)
t            0.005 (1.7)    0.006 (1.3)    0.015 (1.9)    0.4961 (18.4)
y(-1)        -0.010 (1.6)   -0.006 (1.0)   -0.013 (1.6)   -0.787 (18.5)
D–F          [0.830]        [0.962]        [0.824]        [0.000]
8.3.4 Co-integration

Now we explain the Engle–Granger type of co-integration. First, the unit root test is conducted for xt and yt. When a unit root exists, we determine the order of integration of the series. For real-world economic statistics, almost all time series that have a unit root turn out to be I(1); therefore, the virtual data for xt and yt are assumed to be I(1) in the present Monte Carlo simulation. Of course, yt and xt are non-stationary. Then regressions of xt on yt and of yt on xt are conducted, as proposed by Davidson and MacKinnon (1993). From these two regressions we obtain the series of residuals. The difference of the residuals, Δet, is then regressed on et-1, and we conduct a significance test of whether the coefficient is zero, using Dickey–Fuller test statistics. If the null hypothesis that the coefficient is zero is rejected, the residuals are stationary and xt and yt have a co-integrating relationship.

The results of an example using the virtual data set of Table 8.2 are shown in Table 8.6. The columns (c), (e), and (h) in Table 8.6 indicate that the P-values are high and thus the null hypothesis is not rejected, meaning that the residual series are non-stationary and the pairs are not co-integrated. Finally, we estimate the ECM in order to find the short-run and long-run relationships between yt and xt. The short-run relationship is obtained from the following regression equation:

Δyt = β0 + β1Δxt + ut
(27)
and the long-run relationship between yt and xt is obtained from the following regression:

Δyt = β0 + β1Δxt + β2zt-1 + ut
(28)
Table 8.6 Co-integration (figures in parentheses are t-values; D–F P-values in brackets)

Dependent variable: y
             (c)            (e)            (h)
Intercept    238.6 (21.0)   176.3 (33.0)   59.2 (10.0)
t            0.447 (28.7)   0.578 (14.4)   0.164 (3.6)
x            -0.641 (5.8)   0.135 (2.1)    0.929 (19.0)

Dependent variable: Δe
e(-1)        -0.013 (1.8)   -0.007 (1.0)   -0.036 (2.9)
D–F          [0.888]        [0.984]        [0.310]

Dependent variable: x
             (c)            (e)            (h)
Intercept    116.0 (36.1)   57.5 (9.4)     38.9 (9.3)
t            0.142 (18.7)   0.560 (25.8)   0.453 (18.4)
y            -0.098 (5.8)   0.065 (2.1)    0.453 (18.4)

Dependent variable: Δe
e(-1)        -0.029 (2.7)   -0.011 (1.8)   -0.033 (2.9)
D–F          [0.439]        [0.879]        [0.304]
Table 8.7 ECM (figures in parentheses are t-values)

Case (c): dependent variable: Δy
             Short-run        Long-run
Intercept    0.3745 (1.6)     0.3739 (1.6)
Δx           -0.0989 (1.3)    -0.0966 (1.3)
ECM(-1)      –                -0.0022 (0.5)
R²           0.001            0.0004

Case (h): dependent variable: Δy
             Short-run        Long-run
Intercept    1.146 (4.9)      1.128 (4.8)
Δx           -0.1002 (1.3)    -0.083 (1.0)
ECM(-1)      –                -0.0179 (1.7)
R²           0.001            0.0005
where zt-1 is the residual obtained from the co-integrating relation between xt and yt. The estimation results for cases (c) and (h) are shown in Table 8.7.

The present chapter on macroeconomic time-series analysis and the previous chapter on microeconomic analysis using panel data cover fields that are still developing. Here we have examined the basic concepts and techniques currently used in empirical analysis in economics, but this is a dynamic area. Students can keep abreast of ongoing developments at the frontier of these fields by consulting journal articles as they are published.
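The Engle–Granger two-step procedure described in this section can be sketched end to end on simulated co-integrated data. The data-generating process, seed, and variable names below are mine, and for brevity the co-integrating regression omits the trend term used in Table 8.6:

```python
import numpy as np

rng = np.random.default_rng(6)
T = 500
w = np.cumsum(rng.normal(0.0, 1.0, T))        # common I(1) component
x = w + rng.normal(0.0, 0.5, T)               # x_t is I(1)
y = 2.0 + 1.5 * w + rng.normal(0.0, 0.5, T)   # y_t shares the I(1) component

# Step 1: co-integrating regression  y_t = b0 + b1*x_t + e_t
X = np.column_stack([np.ones(T), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ b                                  # should be (close to) I(0)

# Step 2: Dickey-Fuller regression on the residuals:  de_t = rho*e_{t-1} + u_t
de, elag = np.diff(e), e[:-1]
rho = (elag @ de) / (elag @ elag)
u = de - rho * elag
tstat = rho / np.sqrt((u @ u / (len(de) - 1)) / (elag @ elag))
# tstat is compared with Engle-Granger critical values (roughly -3.4 at 5%);
# a large negative value rejects the unit root, supporting co-integration

# Step 3: ECM with the lagged residual as the error-correction term:
#   dy_t = c0 + c1*dx_t + c2*e_{t-1} + u_t
Z = np.column_stack([np.ones(T - 1), np.diff(x), e[:-1]])
c, *_ = np.linalg.lstsq(Z, np.diff(y), rcond=None)
# c2 should be negative: deviations from the long-run relation are corrected
```

Because the two series here share a common stochastic trend by construction, step 2 rejects the unit root in the residuals, and the error-correction coefficient in step 3 comes out negative, unlike the non-co-integrated cases (c), (e), and (h) of Table 8.6.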
Bibliography

Davidson, R. and J. G. MacKinnon (1993) Estimation and Inference in Econometrics, New York: Oxford University Press.
Dickey, D. A. and W. A. Fuller (1979) "Distribution of the estimators for autoregressive time-series with a unit root," Journal of the American Statistical Association, 74, 427–431.
Engle, R. F. and C. W. J. Granger (1987) "Co-integration and error correction: representation, estimation and testing," Econometrica, 55, 251–276.
Hamilton, J. D. (1994) Time Series Analysis, Princeton, NJ: Princeton University Press.
Johansen, S. (1988) "Statistical analysis of co-integrating vectors," Journal of Economic Dynamics and Control, 12, 231–254.
9 Summary and conclusion
This text covers the application of econometric methods to empirical analysis of a range of economic issues. It provides a missing link between textbooks on economic theory and econometrics by emphasizing the connection between economic theory and empirical analysis. We have examined consumer behavior, producer behavior, market equilibrium models, macroeconomic models, micro-econometric models using micro-data and panel data, and macroeconomic time-series models. This book demonstrates that if a model is correctly specified, including the specification of the distribution of random variables, and is estimated by a suitable method, then we can recover the parameters of the true relationship from the sample observations. Students now understand why it is important to consider rigorous experimental design that links theoretical models with stochastic concepts and observations when conducting empirical analysis using real-world data sets. They also understand how to estimate econometric models, how to confirm the accuracy of the estimates against the true values in the model, and how to evaluate the policy implications. Extensive reliance on the Monte Carlo method in this text provides an ideal experiment in the econometric laboratory, one that illustrates the importance of economic theory and of experimental design connecting model and observation. Now, drawing on economic theory as a guideline for empirical analysis and using real-world data sets, students can conduct empirical research and uncover the true relationships in the real-world economy.
Index
AD-AS model 112, 115, 122, 126–7, 134–8 ADF test 185 aggregate data: disadvantages 165 aggregate demand function 115 aggregate production function 117 aggregate supply function 115 aggregate variables 111 analysis of variance see ANOVA Annual Report on Consumer Prices 4 “Another Paris” 54, 55 ANOVA: data set generation 173; in econometrics 171; estimation results 176, 177; two-way cross-classification model 168–70; variance components model 170–1 asset market 111, 114; LM curve in 115 augmented Dickey and Fuller test 185 auto-correlation 30–5 auto-correlation coefficient 31, 33 automobile market 109, 165 autonomous investment 117, 118 best linear unbiased estimator (BLUE) 20, 118 between 171 budget constraint 10, 36, 117, 124 Bureau of Economic Analysis (BEA) 181 Bureau of Labor Statistics (BLS) 39, 181 capital accumulation 117, 124 capital demand function 56, 66; estimation 66–9 capital inflow 114 capital outflow 114 Cauchy distribution 35, 36
Census of Industry 59
cereals 11–12
CES production function 72; Cobb–Douglas production function as approximation of 73–8
chi-square (χ2) test 21, 23
Cochrane and Orcutt method 31, 33, 34
Chow test 20, 21, 23, 24, 25
Cobb–Douglas production function 55, 57, 68, 123–4; as approximation of CES production function 73–8
coefficient of determination 62; adjusted 190
co-integration analysis 182, 185, 194–5
committed expenditure 37
committed income 37
competitive market 79, 80; assumptions 80–1; data-generating mechanism 96–9; model with exogenous variables 93–4, 98–9, 104–6; model without exogenous variables 91–3, 96–8, 101–4
concomitant variables 171
conjectural variation 82, 94, 99, 106–7
constant-elasticity-of-substitution production function see CES production function
constant returns to scale 55
constant-utility price index 41, 42, 43, 44
consumer behavior 10–50; autocorrelation example 30–5; Consumer Price Index measurement 39–43; cross-equation restriction 35–9; data generation method 17–18; elasticity of demand 46–50; forecasting 45–6; heteroskedasticity example 25–30; mis-specified model 23, 43–5, 46, 48–50; model 15–17; normality 35; structural change example 18–25; theory 10–14
consumer-demand function: estimation 2
consumer-demand theory: dual approach 13, 14
consumer durables expenditure 149–50
Consumer Expenditure Survey (CES) 142
Consumer Price Index (CPI) 10; measurement 39–43
consumption function: IS-LM model 114, 120, 122, 132; Klein two-equation system 112, 113, 117–18, 128
cost equation: definition 52
cost functions 13, 14, 52, 53, 57; estimation 58–9, 69–70; LES 15–16, 17
cost minimization: first-order conditions 16, 52, 56; principle 16, 55, 56; under constant levels of production 56
CPI see Consumer Price Index
cross-equation restrictions 35–9
cumulative standard normal density function 145
decreasing returns to scale 55, 59
demand curves see market demand curves
demand functions 12, 43; LES 8–9, 17, 18, 43; Marshallian 11, 14; see also capital demand function; consumer-demand function; factor demand function; labor demand function; market demand function; production function
demand schedule 85
depreciation rate 117, 140
Dickey–Fuller test see unit root test
difference stationary process (DSP) 182, 185, 188–9, 192
direct utility function 13, 14; LES 15, 17
double-hurdle model 147, 149–50, 156, 164–5
DSP 182, 185, 188–9, 192
Durbin–Watson statistics 8, 22, 23, 31
ECM 185, 194–5
economies of scale: in production function 59, 70–2
efficiency parameter 55
elasticity of demand 46–50; cross-price 11; own-price 11; see also income elasticity of demand; price elasticity of demand
empirical analysis: difficulties 109; orthodox procedure for 6–7
endogenous growth theory 73
endogenous variables 79, 86
Engel's coefficient 20
equations: errors in 6, 89
equilibrium condition of competitive market 53
error correction models (ECM) 185, 194–5
errors: in equations 6, 89; in variables 6
exchange rate see foreign exchange rate
exogenous variables 79, 86
expected inflation rate 115
expected price level 115
expenditure functions: LES 15, 17, 36–7
experimental design 1–5; aspects 2; purpose 2; unsuitable 4–5
exponential distribution 35, 36
extended model 36
F-test 176, 177, 180
factor demand functions 52; estimating 66–9; see also capital demand function; labor demand function; production function
Family Expenditure Survey 149
Family Income and Expenditure Survey (FIES) 3–4
feasible generalized least-squares (FGLS) method 30
fiscal budget: balanced 115
fiscal deficit 114
fiscal surplus 114
Fisher, Ronald 167
Fisher equation 114
Fisher price index 43
fixed-effects model 171–2; data set generation 173–4; estimation results 176–7; random-effects model vs. 172, 180
foreign exchange rate 116, 123; fixed regime 116, 138–9; flexible regime 116, 123, 138–40
Friedman, Milton 6
full-employment level 133–4, 135
full-information maximum-likelihood (FIML) method 107–8, 109, 132–3
Gauss–Markov theorem 20, 118
generalized least-squares (GLS) method 30
goods and services market 111, 114; IS curve in 115
government expenditure multiplier 111
government savings (Sg) 114
Haavelmo bias 111–12, 119, 124–5, 128–31
Hausman test 172–3, 180
Heckman's two-step method 147–9, 156, 163–4
heteroskedasticity 25–30; in aggregated cross-section data 26; methods to exclude 30; in time-series data 26
Hicksian demands 13, 14, 16, 17
Hildreth and Lu method 31, 33, 34
household labor supply 143; analysis 143–9
household utility function 117
hypothesis testing: viability 8
identification problem: macroeconomic models 117–18; market equilibrium models 79–80, 84–91; rank condition for 92
identities: three-sided 111
imperfect markets 81; equilibrium in 83; equilibrium price path 83, 84; see also monopolistic market; oligopolistic market
income effect 143
income elasticity of demand 11, 47–50; definition 12
increasing returns to scale 55, 70–1
indirect least-squares (ILS) method 128–30, 131
indirect utility function 13, 14; LES 15, 17
inflation rate: and national income 122
inflation supply function 115
inter-temporal equilibrium condition for households 124
inverse of Mill's ratio 148, 163
investment: autonomous 117, 118
investment function 114, 120, 132
investment multiplier 111, 113; as unity 113
IS curve 115, 121–2
IS-LM model 112, 114, 119–22, 125–6, 131–5
Jarque–Bera test 21, 35, 36, 45
k-th homogeneity 55
Klein's rule 63
Klein's two-equation system 112, 117–19, 128–31; data set generation 124–5
labor demand function 56; estimation 68, 69
labor market 111, 115
labor supply function 163
Lagrange multiplier method 3, 11, 52, 180
large economy: definition 116
Laspeyres price index 39, 41, 42, 43, 44
latent variables 142, 144; double-hurdle model 149; logit model 156; and observed variable 147; probit model 151, 152, 153, 154, 156; tobit model 147, 152–4, 155, 160
law of equal marginal products per unit cost of input 52
law of equal marginal utilities per dollar 11
least-cost rule 52
linear expenditure system (LES) 15; cost function 15–16, 17; demand functions 8–9, 17, 18, 43; direct utility function 15, 17; expenditure functions 15, 17, 36–7; indirect utility function 15, 17; utility function 2, 20, 36
linear homogeneity: production function 54–5, 66, 67
linear probability line 143
liquidity preference: and real money supply 114–15
liquidity trap 133–4, 135
LM curve 115, 121–2, 134
logit model 145, 147, 156–60; data generation 150–1; distribution function 158; estimation results 158–60
long-run: definition 54; short-run vs. 54
luxury goods 11, 12
macro-consumption function 187, 188
macroeconomic models 111–41; data generation method 124–8; general models 112–17; identification problem 117–18; variables 113; see also Haavelmo bias; IS-LM model; Klein's two-equation system; Mundell–Fleming model; neoclassical growth model
macroeconomic time-series analysis see time-series analysis
macroeconomics 111
management ability 177
marginal cost (MC) curve 83, 84, 108
marginal propensity to consume (MPC) 112
marginal revenue (MR) curve 83, 84
marginal revenue = marginal cost condition see MR = MC condition
market basket 40
market borders 109
market clearing condition 84–5, 101, 108
market demand curves 82, 83; with disturbance terms 91; identifiable 88
market demand function 79, 80, 84–5, 86–8; competitive market models 91–4, 101, 104; monopolistic market models 95–6, 107–8; oligopolistic market model 94
market equilibrium 79–109; competitive market examples 101–4, 104–6; data generation method 96–100; definition 79; identification problem 79–80, 84–91; models 91–6; monopolistic market examples 107–9; oligopolistic market example 106–7; theory 80–4
market supply curves 82; with disturbance terms 91; identifiable 88
market supply function 79, 80, 84–5, 86–8; competitive market models 91–4, 101, 104; monopolistic market models 95–6
market wages 148
markets: classification 80; see also asset market; goods and services market; labor market
Marshall, Alfred 11
Marshallian demand functions 11, 14
Marshallian demands 13, 14, 15, 17
maximum likelihood (ML) method 31, 33, 34
maximum likelihood (ML) with grid method 31, 33, 34
MC curve 83, 84, 108
micro-data sets: merits 165–6
microeconomic analysis using micro-data see qualitative-response models
microeconomic analysis using panel data see panel data analysis
microeconomics 111
Mill's ratio: inverse 148, 163
minimum expenditure function see cost function
misreporting 149–50
mis-specified model 7; consumer behavior 23, 43–5, 46, 48–50
monetarist approach 181–2
money demand see liquidity preference
money demand function 120, 132–4, 138
money supply 116
monopolistic market 79, 80; data-generating mechanism 99–100; linear market demand function model 95–6, 107–8; log-linear market demand function model 96, 108–9; profit maximizing behavior 81, 82–3
Monte Carlo method data set generation 8; consumer behavior 17–18; macroeconomic models 124–8; market equilibrium models 96–100; panel data models 173–5; producer behavior 57–61; qualitative-response models 150–6; VAR models 185–6
moving average 181
MR curve 83, 84
MR = MC condition 53, 79, 81, 82; monopolistic models 95–6, 107; oligopolistic model 94–5
multi-collinearity 58, 61–6
multiplier effect 111; see also government expenditure multiplier; investment multiplier
Mundell–Fleming model 115–16; large open economy 116, 123, 126–7, 138, 140; small open economy 116, 123, 126, 127, 138–9
national income: definition 112; on distribution 111, 113, 114; and disturbance term 118; on expenditure 111, 113, 114; identities on 113, 114, 117, 124; and inflation rate 122; on production 111, 113, 114
National Income and Product Accounts 188
natural sciences: social sciences vs. 171
necessary goods 11–12
neoclassical growth model 116–17, 123–4, 128, 129, 140–1
net export function 123, 138
non-stationary process 183, 185
normality 35
observation: and theory 6
observational period: determining 2
oil price shock 181
oligopolistic market 79, 80; conjectural variation model 94–5, 106–7; data-generating mechanism 99; profit maximizing behavior 81–2
ordinary least-squares (OLS) estimation method 20, 33, 38; consumption function estimation 128, 129, 131; on means 177, 178, 179; unbiased and efficient estimates 30; unbiased but not efficient estimates 30, 31; within estimation 177, 178, 179
Organization of Petroleum Exporting Countries (OPEC) 181
p = MC condition 79
Paasche price index 41, 42, 43
panel data analysis 167–80; data generation method 173–5; models 167–73; see also fixed-effects model; random-effects model
Pareto, Vilfredo 54
permanent income hypothesis 6
Phillips curve 115
price-elastic goods 11
price elasticity of demand 46–50, 107; definition 12
price-inelastic goods 11–12
price-takers 2, 80, 81
probability distribution function 144
probit model 142, 145, 156–60; data generation 150–5; distribution function 158; estimation results 158–60; graphical illustration 146; likelihood function 157
producer behavior 51–78; cost function estimation 58–9, 69–70; data generation method 57–61; economies of scale 59, 70–2; factor demand function estimation 66–9; linear homogeneous production function 54–5, 66, 67; models 55–7; multi-collinearity 58, 61–6; theory 52–5
production function 52, 57; economies of scale in 59, 70–2; estimation 69; linear homogeneity 54–5, 66, 67; see also Cobb–Douglas production function
profit function 52, 53; definition 52
profit maximization 81; as common goal 54; first-order conditions 81; principle 52, 53
qualitative-response models 142–66; data generation method 150–6; micro-data set merits 165–6; see also double-hurdle model; Heckman's two-step method; logit model; probit model; tobit model
random disturbance: role in estimation and hypothesis testing 101–4
random-effects model 172; data set generation 174–5; estimation results 177–80; fixed-effects model vs. 172, 180
random shock 9
random walk 183, 186, 189; with drift 186, 189–90
rank condition 92
real money supply: and liquidity preference 114–15
real-world data sets 197
regression: spurious 182, 186, 187–92, 193
regression analysis 5, 6
reproducibility: of empirical results 2
reservation wage 148
residuals 18
returns to scale: constant 55; decreasing 55, 59; increasing 55, 70–1
revenue: definition 108
Roy's identity 14, 15
S-shaped regression line 144
sample selection bias 148
saving and investment balance 114
savings abroad (Sr) 114
seasonal adjustment 181
seemingly unrelated regression (SUR) method 39
Shephard's lemma 14, 16, 17
shock 21, 94, 95, 107
short-run: definition 54; long-run vs. 54
simultaneous estimation method 36, 39
small economy: definition 115–16
SNA 2, 3–4
social sciences: natural sciences vs. 171
space 55
spurious regression 182, 186, 187–92, 193
stable lag structure 34
standard normal density function 145
stationary process 183, 185, 187
Statistics Bureau (Japan) 39
stochastic concept 5–6, 101
stochastic distribution 6, 142, 144
structural change 18–25; definition 18
subsistence level 37
substitutes 109
substitution effect 42
supernumerary income 37
supply curves see market supply curves
supply schedule 85
System of National Accounts (SNA) 2, 3–4
t-distribution 35, 36
target variables 111
technological progress 55
temporal utility function 124
theory: observation and 6
three-stage least-squares (3SLS) method: consumer behavior models 39; macroeconomic models 132–3, 136–7, 139–40; market equilibrium models 107–8, 109
time discount rate 140
time series 182; non-stationary 183, 185; stationary 183, 185, 187
time-series analysis 181–95; co-integration analysis 182, 185, 194–5; data set generation method 185–6; spurious regression 182, 186, 187–92, 193; unit root test 182, 184–5, 192, 194; see also VAR models
tobit model 142–3, 147, 160–3; censored 154, 160–1, 162; data generation 150–6; estimation results 161, 162, 163; likelihood function 161, 162; truncated 155–6, 161–3; see also double-hurdle model; Heckman's two-step method
Törnqvist price index 43
trend stationary process (TSP) 182, 184, 187, 192
two-stage least-squares (2SLS) method 129, 130, 131
two-way cross-classification model 168–70
unemployment function 115
unemployment rate 115
unit: determining 2
unit root test 182, 184–5, 192, 194
utility function 10; LES 2, 20, 36; neoclassical growth model 124; parameters 18; see also direct utility function; indirect utility function
utility indicator 16
utility maximization 10, 81; first-order conditions 11; principle 11
VAR models 182, 184; data set generation method 185–6; estimation results 186–7, 188
variables: errors in 6; in macroeconomic models 113
variance components model 170–1
vector autoregressive models see VAR models
Wald test 21, 22, 23
Walras law 35
white noise 185, 189
White’s (heteroskedasticity) test 21, 27–30 within 171 X-11 181 X12ARIMA 181