Markets, Information and Communication
The Internet bubble, which peaked in size in 2000, has now well and truly burst. As with all bubbles, there are varying explanations for its occurrence, but the hype that surrounded the Internet has shouldered a lot of the blame. There is, however, no doubt that the Internet has significantly changed the way people live, think and do business. This impressive volume presents the Austrian school of thought and their considered response to the "Internet economy" with clarity and insight, and counts amongst its contributors several brilliant young academics and important figures such as Peter Boettke, Richard Aréna and the late Don Lavoie (to whose memory this book is dedicated). Topics include:

• "Austrian" theories of the firm and the Internet economy
• entrepreneurship and e-commerce
• private lawmaking on the Internet
• Hayek and the IT entrepreneurs
Markets, Information and Communication presents an accurate picture of the extent to which Austrian economics can help us to understand the Internet economy and the extent to which the Internet economy vindicates Austrian economics. It will be welcomed by researchers in economics, Austrian economists and all those wanting a better understanding of the economic and legal ramifications of the Internet.

Jack Birner is Research Professor at University College Maastricht, The Netherlands, Professor of Economics at the University of Trento, Italy, and permanent Visiting Professor at BETA, Université Louis Pasteur, Strasbourg. He is the author of The Cambridge Controversies in Capital Theory, also published by Routledge. Pierre Garrouste is Professor at the University of Lyon II, France.
Foundations of the market economy Edited by Mario J. Rizzo, New York University and Lawrence H. White, University of Missouri at St Louis
A central theme in this series is the importance of understanding and assessing the market economy from a perspective broader than the static economics of perfect competition and Pareto optimality. Such a perspective sees markets as causal processes generated by the preferences, expectations and beliefs of economic agents. The creative acts of entrepreneurship that uncover new information about preferences, prices and technology are central to these processes with respect to their ability to promote the discovery and use of knowledge in society. The market economy consists of a set of institutions that facilitate voluntary cooperation and exchange among individuals. These institutions include the legal and ethical framework as well as more narrowly "economic" patterns of social interaction. Thus the law, legal institutions and cultural and ethical norms, as well as ordinary business practices and monetary phenomena, fall within the analytical domain of the economist.

Other titles in the series

The meaning of market process
Essays in the development of modern Austrian economics
Israel M. Kirzner

Prices and knowledge
A market-process perspective
Esteban F. Thomsen

Keynes' general theory of interest
A reconsideration
Fiona C. Maclachlan

Laissez-faire banking
Kevin Dowd

Expectations and the meaning of institutions
Essays in economics by Ludwig Lachmann
Edited by Don Lavoie

Perfect competition and the transformation of economics
Frank M. Machovec

Entrepreneurship and the market process
An enquiry into the growth of knowledge
David Harper

Economics of time and ignorance
Gerald O'Driscoll and Mario J. Rizzo

Dynamics of the mixed economy
Toward a theory of interventionism
Sanford Ikeda

Neoclassical microeconomic theory
The founding Austrian version
A.M. Endres

The cultural foundations of economic development
Urban female entrepreneurship in Ghana
Emily Chamlee-Wright

Risk and business cycles
New and old Austrian perspectives
Tyler Cowen

Capital in disequilibrium
The role of capital in a changing world
Peter Lewin

The driving force of the market
Essays in Austrian economics
Israel Kirzner

An entrepreneurial theory of the firm
Frédéric Sautet

Time and money
The macroeconomics of capital structure
Roger Garrison

Microfoundations and macroeconomics
An Austrian perspective
Steven Horwitz

Money and the market
Essays on free banking
Kevin Dowd

Calculation and coordination
Essays on socialism and transitional political economy
Peter Boettke

Keynes and Hayek
The money economy
G.R. Steele

The constitution of markets
Essays in political economy
Viktor J. Vanberg

Foundations of entrepreneurship and economic development
David A. Harper

Markets, information and communication
Austrian perspectives on the Internet economy
Edited by Jack Birner and Pierre Garrouste
Markets, Information and Communication Austrian perspectives on the Internet economy Edited by Jack Birner and Pierre Garrouste
First published 2004 by Routledge 11 New Fetter Lane, London EC4P 4EE Simultaneously published in the USA and Canada by Routledge 29 West 35th Street, New York, NY 10001 Routledge is an imprint of the Taylor & Francis Group
This edition published in the Taylor and Francis e-Library, 2005. “To purchase your own copy of this or any of Taylor & Francis or Routledge’s collection of thousands of eBooks please go to www.eBookstore.tandf.co.uk.” © 2004 Editorial matter and selection, Jack Birner and Pierre Garrouste; individual chapters, the contributors All rights reserved. No part of this book may be reprinted or reproduced or utilized in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers. British Library Cataloguing in Publication Data A catalogue record for this book is available from the British Library Library of Congress Cataloging in Publication Data A catalog record has been requested
ISBN 0-203-18041-0 Master e-book ISBN
ISBN 0–415–30893–3 (Print Edition)
This volume is dedicated to the memory of Don Lavoie
Contents

List of figures and tables xii
List of contributors xiii
Preface xv
Acknowledgments xvi

Introduction 1
JACK BIRNER

PART I
Digitally connected networks 19

1 Subjectivism, entrepreneurship and the convergence of groupware and hypertext 21
DON LAVOIE

2 Open source software and the economics of organization 47
GIAMPAOLO GARZARELLI

3 The digital path: smart contracts and the Third World 63
MARK S. MILLER AND MARC STIEGLER

PART II
Some history 89

4 High-tech Hayekians 91
DON LAVOIE, HOWARD BAETJER, AND WILLIAM TULLOH, WITH COMMENTS BY HOWARD BAETJER, MARC STIEGLER, AND PIETRO TERNA

5 The new economy as a co-ordinating device: some Mengerian foundations 126
ELISE TOSI AND DOMINIQUE TORRE

PART III
The organization of the firm 141

6 "Austrian" determinants of economic organization in the knowledge economy 143
NICOLAI J. FOSS

7 The new economy and the Austrian theory of the firm 169
PHILIPPE DULBECCO AND PIERRE GARROUSTE

PART IV
Networks and communication 187

8 The small world of business relationships 189
GUIDO FIORETTI

PART V
Markets and market failure 199

9 Some specific Austrian insights on markets and the "new economy" 201
RICHARD ARÉNA AND AGNÈS FESTRÉ

10 Turning lemons into lemonade: entrepreneurial solutions to adverse selection problems in e-commerce 218
MARK STECKBECK AND PETER BOETTKE

11 Big players in the 'new economy' 231
ROGER KOPPL AND IVO SARJANOVIC

PART VI
The monetary sector in the Internet 247

12 Bubble or new era? Monetary aspects of the new economy 249
ANTONY P. MUELLER

13 Possible economic consequences of electronic money 262
JEAN-PIERRE CENTI AND GILBERT BOUGI

PART VII
The legal framework 287

14 The emergence of private lawmaking on the Internet: implications for the economic analysis of law 289
ELISABETH KRECKÉ

Index 308
Figures and tables

Figures
3.1 Hubs of high trust 66
3.2 Low-trust tragedy 68
3.3 Remote-trust bootstrapping 71
3.4 Simple negotiation game 73
3.5 Separation of duties 74
3.6 Covered call option 76
3.7 Layered games 78
8.1 A graph connected according to a small-world topology 191
8.2 Dunbar's finding of a correlation between neocortex ratio and mean group size 194
11.1 Wheat prices from 5 January 1970 to 22 June 2000 240
11.2 Rescaled range analysis 243

Tables
2.1 Top developers 48
2.2 Active sites 48
2.3 Top Linux server vendors 48
11.1 Estimated values of the Hurst coefficient 244
12.1 US economic expansion 1991–9 in historical perspective 252
13.1 Features of electronic money, currency, checks, and debit cards 279
Contributors

Richard Aréna, LATAPSES, IODE, CNRS, University of Nice, Sophia Antipolis, France (arena@idefi.cnrs.fr)
Howard Baetjer, Department of Economics, Towson University, USA (hbaetjer@erols.com)
Jack Birner, Department of Sociology, University of Trento, Italy (jbirner@gelso.unitn.it)
Peter Boettke, James M. Buchanan Center for Political Economy, Department of Economics, George Mason University, USA ([email protected])
Gilbert Bougi, Centre d'Analyse Economique, Université d'Aix-Marseilles III, France ([email protected])
Jean-Pierre Centi, Centre d'Analyse Economique, Université d'Aix-Marseilles III, France ([email protected]_3mrs.fr)
Philippe Dulbecco, CERDI, CNRS, University of Auvergne, France ([email protected])
Agnès Festré, LATAPSES, IODE, CNRS, University of Nice, Sophia Antipolis, France (festre@idefi.cnrs.fr)
Guido Fioretti, International Centre for Economic Research, Villa Gualino, Turin, Italy (fi[email protected])
Nicolai J. Foss, LINK, Department of Management, Philosophy and Politics, Copenhagen Business School, Denmark ([email protected])
Pierre Garrouste, ATOM, Paris 1 and University of Lyon II, France ([email protected])
Giampaolo Garzarelli, Dipartimento di Teoria Economica e Metodi Quantitativi per le Scelte Politiche, Università degli Studi di Roma "La Sapienza", Rome, Italy ([email protected])
Roger Koppl, Department of Economics and Finance, Fairleigh Dickinson University, USA ([email protected])
Elisabeth Krecké, Université d'Aix-Marseilles III, France ([email protected])
Don Lavoie, former David H. and Charles G. Koch Professor of Economics, School of Public Policy, George Mason University, USA
Mark S. Miller, CTO and COO ([email protected])
Antony P. Mueller, University of Caxias do Sul, Brazil ([email protected])
Ivo Sarjanovic, 32, Avenue des Vergys, CH-1225 Chene Bourg, Switzerland ([email protected])
Mark Steckbeck, James M. Buchanan Center for Political Economy, Department of Economics, George Mason University, USA ([email protected])
Marc Stiegler, Combex, Inc. ([email protected])
Pietro Terna, Dipartimento di Scienze economiche e finanziarie G. Prato, Università di Torino, Italy ([email protected])
Dominique Torre, LATAPSES-IDEFI, University of Nice, Sophia Antipolis, France (torre@idefi.cnrs.fr)
Elise Tosi, LATAPSES-IDEFI, EAItech-CERAM, University of Nice, Sophia Antipolis, France (tosi@idefi.cnrs.fr)
William Tulloh, Erights.org ([email protected])
Preface
From 24 to 26 May 2001 the Association des Historiens de la Tradition Economique Autrichienne held its third annual conference in the Tuscan cities of Pisa and Lucca. Its title, "Austrian perspectives on the new economy," indicates that the Association does not limit its activities to historical research but is also forward looking. Indeed, the main reason for its foundation, in 1998, was the conviction that Austrian economics is a largely unexplored treasure trove of resources that may be used to shed light on current economic phenomena.

The location of the conference illustrates that the Association has international cooperation written large in its charter. We were the guests of the University of Pisa and the Fondazione Dino Terra of the City of Lucca, to both of whom we are very grateful for their financial and physical support. Special thanks are due to Professor Raimondo Cubeddu of the Department of Political Science of the University of Pisa and Professor Pierluigi Barotta, president of the Fondazione Dino Terra. Raimondo Cubeddu deserves our special gratitude for making the generous offer to host the conference in the Association's first official venture outside France and for actively involving himself in the local organization.

We should further like to thank Dr Flavia Monceri for managing the daily chores of the organization with flair and good humor, and Antonio Masala and Christian Swan for assisting her. We also want to thank the participants from the United States of America, South America, the United Arab Emirates, South-East Asia and Europe for making the conference a lively global market place of ideas. A last word of thanks is due to the referees, who provided valuable advice for the selection of the conference papers for this volume.

Pierre Garrouste–Jack Birner
Paris–Venice
8 March 2003
Acknowledgments
The editors are grateful to the Mercatus Center (formerly Center for the Study of Market Processes) for permission to reproduce “Prefatory note: the origins of ‘The Agorics Project’” by Don Lavoie and “High-tech Hayekians: some possible research topics in the economics of computation” by Don Lavoie, Howard Baetjer and William Tulloh, pp. 116–46 of Market Process, Vol. 8, Spring 1990.
Introduction Jack Birner
The first hype of what was commonly referred to as the "new economy" now seems to be over. Initially, expectations ran very high and investors in dotcom companies made loads of money while most of the companies themselves kept accumulating losses. As is the case with all bubbly booms, an important part of the explanation lies in the disappointment of investors with alternative opportunities (which for a short while were disdainfully referred to as belonging to the "old economy"). In addition, the financial sector has now come under suspicion for having generated the exaggerated optimism that characterized the new economy sector. In the meantime, public opinion seems to have swung to the opposite extreme and many commentators and disillusioned investors are now sounding the death knell for the new economy. This cyclical change of views hides, however, phenomena which are assuming an increasingly important role in our daily lives. Even to the uninitiated it is clear that computer-stored databases and the Internet have changed the speed with which information travels. This has increased the opportunities for consumers and producers to acquire information. It may also have increased their search costs. I say "may," because the matter is far from clear. If we project the great number of sources of information that are now available on the Internet back into a recent past without them, then we would have to conclude that search costs in the past were very much higher. This conclusion, however, would overlook the fact that without modern information technology most of these sources would not have been created in the first place. Now that they are available, information seekers face a problem of selection and its costs. The matter is not limited to a market environment.
Researchers who were used to spending many days in dusty library archives now have the possibility to access (to use a neologism that is one of the consequences of the Web) many databases, at least for a first screening. The issues addressed by the chapters in this volume are not narrowly linked to the first flirt between the "traditional" economy and the Internet, the "new economy" of the newspapers and glossy business magazines. This may be inferred from the fact that, while at the time of the conference, in May 2001, scepticism about the new economy was on the increase, now, two years later, none of the chapters has lost any of its topicality. This may also be taken as one of the indications
that we are only on the verge of the Internet revolution.1 In order to take that into account and to dissociate ourselves from the excesses surrounding the concept of the new economy, we now prefer to speak of the Internet economy, the term we adopted for the title of this volume.

The Internet is often described as an institution that has spontaneously arisen and evolved. Even though the spontaneous character of its initial emergence is highly questionable, it cannot be denied that much of its expansion shows the features of a self-organizing process. This raises the closely related and very complex issue of the need for regulation, including the need for protecting privacy and intellectual property rights and maintaining an anti-trust market environment. The most important recent example is the legal battle between the United States government and Microsoft.2 Another question concerns safety, notably of the payments made through the Internet. Do these matters require regulation by a central (government) organization or can private commercial and noncommercial initiatives deal with them?

In a world where the next source of information is only a mouse-click away, information spreads at very high speed and the availability of many different sources of information makes it easier than before – at least in principle – to obtain accurate information. The combination of fast-traveling and undistorted information is one of the most important predictions of sociological network theory; it is the type of information that is generated by a network characterized by weak links (the seminal text is Mark Granovetter's "The strength of weak ties," 1973). The Internet is such a network: in the Web's impersonal environment a relation sealed by a click is one of the weakest links one can possibly imagine.
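Granovetter's point about weak ties has a simple quantitative face: a few long-range links dramatically shorten the distances in an otherwise locally clustered network. The following toy sketch (an illustration of the small-world effect in the spirit of the discussion above, not code from this volume; all function names are my own) builds a ring of nodes with only local "strong" ties and measures how the average shortest-path distance falls once a handful of random "weak" links are added:

```python
import random
from collections import deque

def ring_lattice(n, k):
    """n nodes on a ring, each tied to its k nearest neighbours on either
    side: strong, purely local ties."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    return adj

def add_shortcuts(adj, m, seed=0):
    """Add m random long-range links -- the 'weak ties'."""
    rng = random.Random(seed)
    nodes = list(adj)
    added = 0
    while added < m:
        a, b = rng.sample(nodes, 2)
        if b not in adj[a]:
            adj[a].add(b)
            adj[b].add(a)
            added += 1
    return adj

def avg_path_length(adj):
    """Mean shortest-path distance over all ordered node pairs,
    computed by breadth-first search from every node."""
    n = len(adj)
    total = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

before = avg_path_length(ring_lattice(200, 2))                    # local ties only
after = avg_path_length(add_shortcuts(ring_lattice(200, 2), 20))  # plus 20 weak ties
print(before, after)  # a handful of weak ties sharply shortens average distance
```

With 200 nodes and only local ties, messages must travel around the ring; twenty random shortcuts (5 per cent of the edges) cut the average distance to a fraction of its former value. This is the mechanism behind the "small world of business relationships" studied in Chapter 8.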
The potential of sociological network theory for the analysis of organizations and markets has by no means been exhausted, and the analysis of the Internet economy is another occasion to put the theory to work (and put it to the test). That only two of the chapters in this volume explicitly go into the network-theoretical background of the Internet economy testifies to the fact that the international research community itself is an unconnected network. The lack of communication among economists and between economists and sociologists has long been a source of wonder to me.3 However, it is not only the lack of communication that is to blame for the large white spots that are left on the map of the social sciences. Their terrae incognitae are also the result of social scientists' own lack of imagination and of knowledge about the history of their disciplines. For instance, to talk about the network(ed) society is a very fashionable thing to do nowadays. But the way in which the expression is used is a misnomer: societies have always been networks. The Internet just makes people more aware of this. The fact that not even hypertexts have brought researchers from various disciplines with an interest in the same problems closer together shows that the availability of a weakly linked structure is at most a necessary condition for efficient communication. One of the goals of this volume is to point to some promising links that have as yet been little explored. Many such links exist between the Internet economy and the Austrian school of economics. Characteristic features of the Internet economy, such as the role of knowledge, information and
communication, the coordination of individual activities and the spontaneous emergence of ordered structures, the links between the economy and the legal and political framework, the network structure of social interaction and the dynamics of competition have been studied by economists and other social theoreticians. But Austrian economics is unique in that it analyzes all these elements as interdependent phenomena.4 It has also, from its very beginning, studied the interaction between the monetary or financial and real sectors of the economy, which in the Internet economy seems to be assuming new features. That is why the Association des Historiens de la Tradition Economique Autrichienne (Association of Historians of the Austrian Tradition in Economic Thought) chose this particular theme for its third annual conference; we thought that the Austrian approach would be eminently suitable for understanding the Internet economy. We also thought that, conversely, as the phenomena that are identified with the Internet economy seem to correspond rather closely to the world described by Austrian theories, they might be used to put those theories to the test. We leave it to the reader to judge to what extent our expectations are borne out by the contributions to this volume.

One of the first academics to combine several of the conference's themes in his work is Don Lavoie. The organizers invited him to give the keynote address and he accepted enthusiastically. He wanted to combine the conference with a holiday with his family in Florence. Unfortunately, that was not to be. Just before the conference was due to begin Don let us know that he would not be able to attend because he had to undergo medical tests. Even though few participants knew how serious the situation was, Don's cancellation threw a shadow over the venue. (His paper was read by Mark Miller.)
Soon afterwards it turned out that he was suffering from a disease with desperately low prospects of survival, and less than six months later, on 6 November, he died, only 50 years old. Despite his deteriorating health, he graciously agreed to cut and revise his long original paper, a job which he finished with the help of Bill Tulloh at the end of July. The editors are proud to include in this volume what is probably Don Lavoie's last contribution to economics. We owe a debt of gratitude to this great pioneer for the prominent, courageous and creative role he played in continuing to remind economists that the Austrian tradition has much of value to offer. His death leaves a deep and sad void. We dedicate this volume to his memory.

Don Lavoie's pioneering role in the field that constitutes the theme of the conference also made us decide to include an article on "High-tech Hayekians," which he published together with Howard Baetjer and Bill Tulloh in Market Process in 1990. We also include Don's prefatory note, as it puts the article in perspective. The article reports on a series of interviews with young computer scientists who were interested in what the article calls the economics of computation, or the interface between computer science and economics. All of them took their inspiration from the work of F.A. Hayek. The article is so rich in content that I will not even try to give the briefest of summaries. Let me just say that it discusses many lines of research as possibilities for the future that in the intervening years have been taken up and developed – also by some of the contributors to this volume.
One of the authors and one of the interviewees contributed brief retrospectives, 12 years after the article appeared. Another interviewee, Mark Miller, presented a paper at the conference which he wrote together with Marc Stiegler. Since one of the offshoots of the "economics of computation" is the growing body of computer simulations of economic processes (shortly before this volume went to press, CESSBA, the Italian Center for Agent Based Social Simulations, was founded5), we also asked Pietro Terna, an expert in this field, for a comment. The existence of these "high-tech Hayekians" proves the topicality of the conference theme. Most participants in the conference draw an income from teaching and writing about the ideas of the Austrian school in the relatively protected environment of the university. The people interviewed in the article, on the other hand, earn their living by creatively applying these and similar ideas. They have patented and sold the results in the rough (even rougher?) environment of the market. For example, Eric Drexler and Mark Miller made a highly original and creative use of network theory, using it to design, patent and sell a security-code system that could make the Internet a safer place (they took their inspiration from Hayek, discovered only later the similarity of their design to Granovetter's network theory, and now gratefully refer to their network pictures as Granovetter diagrams). The presence of both types of participants, academics and people working in the Internet economy, stimulated many lively discussions and was an important factor in making the conference a success.6 A major cause of its success was the skill, diplomacy, enthusiasm and good taste of Raimondo Cubeddu of the Department of Political Science of the University of Pisa, who graciously offered to take care of the practical and a large part of the financial aspects of the conference.
Raimondo hosted us in Pisa, which offers a unique cultural and artistic environment. He found some splendid locations for the various sessions, the most impressive of which was the Palazzo del Consiglio dei Dodici dell'Ordine di S. Stefano. He let us stay in the beautiful Royal Victoria Hotel, opened in 1839 and still owned and managed by the same family. It once was the holiday address of members of many European royal families, who preceded us in enjoying the view of the Arno river from its Jugendstil-decorated apartments. He also took us to various carefully chosen restaurants with such regularity that more than one participant regretted having left his jogging shoes at home. On the last day we were taken by bus to the nearby wonderfully preserved fortified city of Lucca (the birthplace of Giacomo Puccini) as guests of the Fondazione Dino Terra del Comune di Lucca and its president, Professor Pierluigi Barotta. The sessions took place, between one wedding and the next, in the spectacular Sala degli Specchi of the town hall, Palazzo Orsetti. Between Pisa and Lucca, the entire setting was highly stimulating and conducive to informal get-togethers between researchers in Austrian economics from all over the world, and particularly from both sides of the Atlantic. We have high hopes that the conference has contributed to a cross-fertilization between these two (very loosely defined) groups, who are both represented in this volume.
About this volume

Digitally connected networks

The first three contributions land us right in the middle of the Internet economy and its "Austrian" features. All three discuss aspects of computer software engineering and its potential consequences, but they range from the highly speculative and philosophical (the creation of value, Lavoie) through a detailed organization-theoretical analysis of open source software (Garzarelli) to a concrete proposal for applying smart contracts to unblock potential wealth in third-world countries (Miller and Stiegler).

Don Lavoie's "Subjectivism, entrepreneurship and the convergence of groupware and hypertext" highlights the role of entrepreneurial imagination in the creation of wealth. An important tenet of Austrian economics is that economics in general and the market in particular is about the coordination of individual activities. Activities are guided by perceptions and expectations – a thesis known as subjectivism – and more particularly by perceptions of other individuals and expectations about their future activities. This intentional aspect of subjectivism is particularly present in the work of Ludwig Lachmann, which is a major source of inspiration for Lavoie. In the tradition of Lachmann, and of Israel Kirzner, what entrepreneurs do is coordinate individual plans. Their concept of entrepreneurship is thus more encompassing than the traditional one, since everybody who coordinates plans is an entrepreneur. In the process of coordinating plans, individuals discover new opportunities. If they succeed in realizing them, they increase their satisfaction level. The picture that emerges from these elements is that economic growth is the creation of value through a learning process that involves a better coordination of individual plans. By combining coordination and subjectivism, we arrive at an image of entrepreneurship as furthering the mutual orientation of individuals' expectations of how other individuals perceive the world.
Lavoie observes that hypertext software and groupware both serve to coordinate individual plans, and in this chapter he examines what impact they may have on economic growth. Not surprisingly to readers of Lavoie's work – which has a hermeneutical orientation – the role of written communication and its meaning is at the center of his analysis.7 In this perspective the entrepreneurial function of seeing new opportunities becomes a form of "reading of a meaningful situation in a language-constituted world." This certainly captures its creative aspects. Lavoie further concentrates on the consequences for markets of the introduction of writing on paper as a preliminary to a tentative analysis of the effects of hypertexts and groupware on economic growth; it is too early to assess the effects that computers have on the economy.8 The two types of software discussed in this chapter enable us to transcend the limits of traditional texts written on paper by combining the advantages of keeping the original text while adding annotations: reading and writing will no longer be separated. In this way, more integrated networks of meanings are created which, according to Lavoie, will facilitate the process of communication in which wealth is created: "Entrepreneurship
is fundamentally dependent on the media in which human communication takes place."

The objective of Giampaolo Garzarelli's chapter on "Open source software and the economics of organization" is very similar to that of the previous one: to examine some consequences of information technology for the economy. He sees in open source software (or OSS; the best-known example is probably Linux) a new model for economic organization that needs a new analytical tool kit. The "Austrian" aspect is not difficult to identify: it is the idea that a spontaneous self-organizing process is more efficient than a centrally planned structure. Garzarelli's main argument may be formulated as follows. The modularity, and its dual, "information hiding," that characterize software in general constitute exact analogues to the way in which Hayek describes the virtues of the market and competition. For the market and the price system to act as a "system of telecommunications" it is necessary that pairs of individuals have (even a very small) part of their knowledge or perceptual fields in common. This allows them both to benefit from the much larger part of the knowledge of other individuals that is hidden from them and to adjust and improve their own knowledge (that is, to learn). Garzarelli takes the reader through the following variations. He uses the work of Deborah Savage to argue that professions as characterized there are very similar to the spontaneous and decentralized way in which open-source software organizes itself. This, however, covers only the supply side. In order to incorporate demand, Garzarelli uses the theory of clubs, which is about the supply and demand of shared goods. But contrary to what the theory of clubs implies, software as a shared good does not necessarily cause congestion.
On the contrary, in the interesting cases such as OSS there are increasing returns to increased use: the more diffused a particular type of OSS becomes, the more positive network effects it will have.9 What further distinguishes the OSS story from the more traditional economic approaches is that what is maximized is not profit but reputation. In an environment where everybody is motivated by trying to live up to or do better than the average standard, there is no monitoring problem in the common sense of the word. Individuals monitor themselves.10 This chapter critically and creatively examines a number of existing economic theories for their applicability to OSS without delivering a finished analytical framework. But in the course of his discussion the author is generous with indications of where to look.11 That the direction is "Austrian" is not only argued explicitly; it also shows up, unintentionally, where the author observes that OSS has its origins in the activities of hackers. This, of course, is a variation on Mandeville's theme – gladly adopted later by Austrian economists – of private vices and public benefits.

The title of Mark Miller's and Marc Stiegler's chapter, "The digital path," is a deliberate variation on the title of the book with which the Peruvian economist Hernando de Soto is finally attracting attention.12 The Other Path, published in 1989, argues that the poor of the world would not be so poor if only they could legalize the assets over which they de facto but not de iure dispose. This would be the case if generally enforceable property rights could be established and transferred. Miller and Stiegler propose an electronic solution to this problem. They want to avoid
involving governments, which seems like a good idea, particularly in countries where the checks and balances of a developed democracy are not available (and these are not only countries of the so-called Third World!). In network terms, the problem described by de Soto is how to turn low-trust networks into high-trust ones. In the latter, reliable relationships are more easily formed. For this to be possible, it is necessary to have a structure where local networks (such as villages) are connected by hubs that are part of a larger network (which is also the structure of the Internet). The economic success of the “first world” can be explained by the existence of many of these trust hubs, which range from banks and credit rating institutions to courts of justice. They are the nodes that connect local and otherwise unconnected networks in which trust relations (strong ties) may already exist. However, the lack of more numerous and longer-distance connections (with weaker ties) prevents these local relationships from being turned into more stable enforceable rights. These social-network aspects are not explicitly discussed by the authors but their analysis lends itself perfectly to this approach. Notice that there is a strong parallel between De Soto’s and Miller and Stiegler’s analysis and the phenomenon of Italian industrial districts. Many of these owe their success to their location at the intersection of local networks with strong ties and the global weak-tie network that constitutes the world market. According to Hayek, this modern global market system has evolved together with the social and legal framework that makes it work out of the pursuit of the interests of individuals themselves and of their close relatives (compare, for instance, Hayek 1979: 173). The phenomenon that De Soto describes, however, is a counterexample to this thesis.
Apparently, particular conditions in social networks and the history of their development may obstruct a spontaneous process that Hayek says will lead towards a fully integrated market structure.13 Miller and Stiegler’s proposed solution relies on two instruments that, according to them, have come within reach with the widespread use of the Internet and the availability of relatively low-cost computers. One consists of so-called smart contracts. They may enable local communities to take advantage of existing institutions that are specialized in producing trust in more advanced countries, notably rating agencies. Smart contracts are software programs that embody cooperative arrangements without requiring coercive recourse (the general legal features of this type of arrangement are discussed by Elisabeth Krecké in the last chapter). These programs may rely on encryption techniques (for instance, to dodge government interference) and the creation of disconnections in networks between agents. For example, instead of dealing directly with each other, the parties to an arrangement use an escrow until both parties have fulfilled their obligations. The chapter gives some examples of more complicated arrangements of this kind. Their discussion of contracts as games is very interesting. There may still be a problem, though. Universally enforceable contracts are necessarily formulated in very general terms. This risks destroying the basis of the potential wealth of the poor in the Third World: their detailed knowledge of very specific local conditions. This is why the second instrument is introduced: video contracts. Instead of writing down the terms of an agreement or the conditions of a particular property right,
parties may digitally record the process of negotiation. In case of later disagreement, this would, according to Miller and Stiegler, facilitate the solution of conflicts. The authors admit that their proposal is a venture into largely uncharted territory and that a piecemeal introduction should help to assess its feasibility. I for one wonder how feasible their solution would be in a country like communist China, where the government effectively censors the Internet. Their general message, however, is a variation on the theme that links all three contributions in this section: trust hubs serve as the link between local strong-tie networks and global weak-tie ones. The final paragraphs of the chapter sound not so much an Austrian as an optimistic classical liberal note: smart contracts with their reliance on software-based rules will bring the ideal of “the rule of law but not of men” closer than ever before. Some history The chapters in this section deal both with the more recent history of the Internet economy and with some more remote connections. The recent history is represented in the form of a reprint of “High-tech Hayekians”, an article originally published in Market Process in 1990. It summarizes a series of conversations with young entrepreneurs in the information technology sector. What they all have in common is that they somehow or other were inspired by Hayek’s ideas or ideas very similar to Hayek’s. The article makes for fascinating reading and many of the features that are discussed there as possibilities have in the meantime been introduced. This is also confirmed by the brief retrospectives by one interviewer and one interviewee, and by Pietro Terna.
More backward-looking is Elise Tosi’s and Dominique Torre’s “The new economy as a co-ordinating device: some Mengerian foundations.” Taking as their point of departure Menger’s discussion of the particular type of goods that we would now call trust-related, the authors observe that the Internet performs the same function as the Mengerian entrepreneurs: it bridges gaps in the communication network, with the consequence that information is more widely available. According to the authors, the Internet is similar to other information carriers in that it has increasing returns, an effect which is strengthened by its high degree of connectivity. Its cost structure and the externalities it creates, however, lead to inefficient price setting for its services and to an underutilization of its potential. To cure this particular type of coordination failure, the authors recommend active outside intervention (a conclusion similar to the contribution by Centi and Bougi but quite different from the chapters by Garzarelli, Miller and Stiegler, and Krecké). The organization of the firm Nicolai Foss in his “ ‘Austrian’ determinants of economic organization in the knowledge economy” addresses an argument that is not only related to the Internet economy. His main thesis is that where diverse and specialized and hence more
dispersed knowledge accounts for a substantial part of the value added in production (a situation that he calls a Hayekian setting), the entrepreneur’s authority is weakened. This is because the workers whose activities he is supposed to coordinate are the owners of specialized knowledge that can only be kept up to date by continued interaction with other specialists. This implies the existence of influential networks that are not contained within or do not coincide with the boundaries of the firm. Where these are located is a problem that was addressed by Coase. The blurring of these boundaries poses a problem for the organization of the firm, but it would be a mistake to think that one can arbitrarily mix a central organization form with a market-type coordination. This is because ownership and delegated rights respond to different incentives, as Mises argued. Foss illustrates this in a case study of Oticon, which had to learn this lesson the hard way. In their “The new economy and the Austrian theory of the firm,” Philippe Dulbecco and Pierre Garrouste, too, analyze the consequences of the Internet economy for the organization of the firm, but they do so by discussing its impact on the type of a firm’s economic activity. With the faster diffusion of information in more highly connected networks between enterprises, firms are forced to adapt by innovating continuously. To put this differently: their capital consists more than ever in a stock of knowledge that requires continuous investment in learning to keep it from becoming obsolete. Whereas competition continues to play an important role (think only of the battles for making certain software or telecommunications standards generally accepted), knowledge-intensive production processes rely on complementary knowledge and information.
The point about the importance of the mix between competition and complementarity was made as early as 1960 by George Richardson in his Information and Investment (quoted by the authors) but it is only now, with the increased interest in the Internet economy, that it will, perhaps, begin to be recognized more generally. The more prominent role of complementarities requires new organization forms. The authors argue that information and communication technologies and organization forms coevolve. The more a firm’s assets are intangible and knowledge based, the more the firm becomes an information processor. This has consequences for its organization, a point also made by Foss. In a world where information becomes more readily available, less specific, less ambiguous and non-rival, investing in firm-specific capabilities and their maintenance becomes more important as the basis for making a profit. This is easier said than done, however. The decision to invest in the wrong type of knowledge is usually difficult to reverse and taking the right decisions has become more difficult in the fast-changing environment of the Internet economy (its brief history is littered with corpses). Let me add that we have here a knowledge problem twice over: not only do entrepreneurs have to decide on the correct type of knowledge and its internal organization as capital, they also have to foresee what new external knowledge will be discovered in the future. As Popper has observed, this is impossible by the very definition of what constitutes discovery. There is a close link with the paradox of information, which says that in order to decide rationally whether an unknown bit of information is worth its cost of acquisition one would have to know its value in advance. Here lies a vast field of research that
remains largely unexplored; one of the ways of coming to grips with this set of problems is to elaborate the idea of Richardson that firms actively try to control their environment in order to keep problems of knowledge and information within manageable bounds. To return to Dulbecco and Garrouste, they argue that one such way is vertical integration and they refer to the increasing number of mergers as a confirmation that their first steps towards an Austrian theory of the industry and of the firm lead in the right direction. Networks and communication What explicit attention to networks may contribute to the progress of economic analysis is shown in Guido Fioretti’s brief but very interesting “The small world of business relationships.” As far as the Internet economy is concerned, its main hypothesis goes against the grain of the majority of the other contributions. According to Fioretti, we may expect there to be a structural invariant across the boundary of the Internet economy. As economists have begun to recognize, after Herbert Simon drew their attention to the boundaries of rationality, the human brain has only a limited capacity for handling complexity. One of the implications is that we can handle only a limited number of social connections. That we nevertheless succeed in communicating and interacting with many others, often at great distances, is due to the structure of the networks of which we are part. What Miller and Stiegler call hubs may link many short-range networks that are thus integrated into what Hayek in 1945 called a system of telecommunications. Hayek used this expression to characterize the price system, but Granovetter and his followers have convincingly shown that all human relationships may be described in these terms. Fioretti gives a number of examples of what disconnections in such networks may lead to.
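Fioretti’s two network parameters – characteristic path length and the degree of clustering – and the effect of hub-like shortcuts can be illustrated with a toy computation. The sketch below is mine, not Fioretti’s; the network size, the number of shortcuts and the random seed are arbitrary illustrative assumptions. It builds a ring lattice of purely local (strong-tie) links, then adds a few random long-range (weak-tie) links and recomputes both parameters:

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Ring lattice: each node linked to its k nearest neighbours on each side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    return adj

def add_shortcuts(adj, m, rng):
    """Add m random long-range links (weak-tie bridges between distant clusters)."""
    nodes = list(adj)
    added = 0
    while added < m:
        a, b = rng.sample(nodes, 2)
        if b not in adj[a]:
            adj[a].add(b)
            adj[b].add(a)
            added += 1

def avg_path_length(adj):
    """Characteristic path length: mean distance over all pairs, via BFS."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def clustering(adj):
    """Average clustering coefficient: share of a node's neighbours that are linked."""
    acc = 0.0
    for u, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for v in nbrs for w in nbrs if v < w and w in adj[v])
        acc += 2.0 * links / (k * (k - 1))
    return acc / len(adj)

rng = random.Random(0)
g = ring_lattice(100, 3)            # purely local strong-tie network
before = (avg_path_length(g), clustering(g))
add_shortcuts(g, 10, rng)           # a handful of weak-tie bridges
after = (avg_path_length(g), clustering(g))
print(before, after)
```

With only ten shortcuts the average distance between nodes falls substantially while local clustering stays high – the small-world signature that, in the terms used above, is what trust hubs contribute to otherwise local networks.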
To many economists the most surprising one will be Keynesian underemployment equilibrium: it is the consequence of a loop in the information network that causes economic agents to keep repeating the same behavior. That this link has not been noticed before is due to an almost general blindness of economists to the fact that in order to model interactive behavior one must specify the interaction structure.14 What they implicitly assume instead is that every economic agent can interact with any other agent. The worst manifestation of this neglect is analyses based on representative individuals. As the author observes, this assumption implies the absence of institutions. Does this mean that Fioretti thinks the Internet has not changed anything? No! It has liberated us from the interaction constraints imposed by our physical location. Information and communication technologies make distance, or characteristic path length, which, together with the degree of clustering, is the crucial parameter of networks, less relevant. This is the feature on which Miller and Stiegler base their proposal. Fioretti sketches a research agenda to put his hypothesis to the test. He ends by hinting at the possibility that our cognitive constraints put an upper limit on the number of technologies that may be combined, which in turn regulates the number of links that may evolve between firms. There is a parallel with biological evolution, where the positive or negative contribution of a particular
gene to the overall fitness of the organism is dependent on the number of other genes it affects. The idea of an optimal level of interaction between genes merits a critical elaboration in the direction of economic systems. Markets and market failure In “Some specific Austrian insights on markets and the ‘new economy,’” Richard Aréna and Agnès Festré criticize the idea (which they call competitive economic equilibrium or CEE) that the Internet economy signals a return to the type of free market economy that prevailed in the West in the nineteenth and early twentieth century, the context in which the foundations of neoclassical economic theory were laid. That paradigm has been criticized for its lack of applicability to the economies of the twentieth century, where coordination by the state partly replaced coordination by the market. With the Internet economy, so this view goes, the real world has come to look more like the economy of the neoclassical microeconomics textbooks: initial endowments, preferences and techniques have become more easily identifiable and may thus be considered to be given; easy access to the Internet has lowered the cost of information and almost eliminated information asymmetries; location in physical space has become well-nigh irrelevant; and the general use of auctions has (re-?)introduced tâtonnement as a process of coordination. The authors’ criticism of CEE takes two forms. The first is theoretical and based on the analysis of some prominent Austrian economists. The second criticism is more empirically orientated and uses several types of actual Internet market relationships. The first theoretician discussed is Hayek. His emphasis on the heterogeneity of agents and the importance of their perceptions and expectations in the coordination process that also takes place through the Internet is very different from CEE’s emphasis on factors that are “given” right from the beginning.
What is also not given are the definitions of goods and markets, particularly in the context of the fast-changing Internet economy. Hayek’s vision of the functioning of the market economy as a dynamic process of discovery seems to suit the Internet economy better. What the authors criticize, however, is Hayek’s neglect of the existence of different types of markets and his recourse to the “empirical fact” that market economies show a tendency towards equilibrium. For an analysis of the variety of markets and a more satisfactory treatment of what markets do, Aréna and Festré go back to Carl Menger. Menger bases his analysis on the idea that goods may have various degrees of exchangeability and he recognizes that markets are social institutions whose differences derive from differences in social rules and habits. Friedrich von Wieser elaborated this into an analysis of the relationships between markets and property rights and different types of contracts. Ludwig Lachmann provided a further generalization in his analysis of the general institutional embeddedness of the variety of market forms. What all these authors, including Hayek, have in common is that they presuppose that agents operate in a social interaction structure. This is perhaps the most important difference from CEE, one which has been addressed earlier in this Introduction.
The authors see in their description of the several types of Internet markets a confirmation that the line of analysis that runs from Menger to Wieser to Lachmann is to be preferred over CEE and also over Hayek. But a lot of theoretical work remains to be done. Whereas Foss and Dulbecco and Garrouste are groping for an Austrian theory of the organization of the firm, Aréna and Festré are trying to construct an Austrian theory of the organization of markets that is also applicable to the Internet economy. The pun in the title of the next contribution, “Turning lemons into lemonade: entrepreneurial solutions to adverse selection problems in e-commerce” by Mark Steckbeck and Peter Boettke, will not be lost on an audience of economists. Its argument can be summarized as follows. According to Akerlof, asymmetric information between buyers and sellers sets in motion a process of adverse selection of the quality of goods that will make markets function worse and worse, even to the point where they may cease to exist. This is the theoretical premise. The empirical premise is that commercial relationships through the Internet are rife with such asymmetries. Conclusion: Internet markets are short-lived or show other signs of failure. Not so, say the authors. The counterexample they give is the story of three Internet firms that live off the trade in second-hand and out-of-print books. Their success and that of the market they help to maintain is then explained in terms that are borrowed from Hayek: markets are self-regulating processes that are in continuous evolution. Trust, which is fundamental for their efficient functioning, is established by means that do not need intervention from a nonmarket agency; it is the outcome of the creativity of entrepreneurs who earn money by acting as intermediaries. The last chapter of this volume, by Elisabeth Krecké, gives an analysis of this process in more general terms.
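The unraveling logic of Akerlof’s model can be made concrete with a few lines of arithmetic. The numbers below are my own illustrative assumptions following the textbook version of the model, not anything in the chapter: qualities are uniform on [0, 2000], sellers value a unit at its quality, and buyers value it at 1.5 times its quality but cannot observe it.

```python
# Fixed-point iteration of the lemons logic: at any price p only sellers with
# quality <= p are willing to sell, so buyers rationally bid 1.5 times the
# average quality of what is actually on offer, which is p/2 for uniform quality.
def next_price(p):
    """Buyers' best response to the pool of goods offered at price p."""
    if p <= 0:
        return 0.0
    avg_quality = min(p, 2000) / 2.0   # mean quality of units offered at price p
    return 1.5 * avg_quality

p = 2000.0
history = [p]
for _ in range(40):
    p = next_price(p)
    history.append(p)

print(round(history[-1], 4))  # the price ratchets down toward zero
```

Each round the buyers’ bid prices the best remaining units out of the market, so trade unravels; the reputation-building intermediaries described in the chapter break this loop by making quality observable.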
The authors reject the hypothesis that individuals generally behave decently, even in the face of the temporary opportunities created by asymmetric information, out of the type of well-understood self-interest that is equivalent to the recognition that they are all stakeholders in a trust-based market system.15 This is very Hayekian: individuals are not supposed to have an opinion about the system as a whole and if they do, they must not act upon it – unless they recognize the merits of the free-market economy.16 They are likely to wreak havoc in all other cases, such as central planning or in the presence of one or a few other very influential agents. That is the subject of the next chapter, by Roger Koppl and Ivo Sarjanovic, “Big players in the ‘new economy.’ ” Here we enter the domain of market failures as traditionally defined. First of all: what are big players (BPs)? They are agents that have the power to influence the market in which they operate; they do not depend on profits or losses,17 and their behavior does not follow rules.18 Because of their power and unpredictability, BPs create uncertainty in the markets where they operate. In financial markets their presence may cause herding. One of the objectives of this chapter is to test whether BPs encourage herding in commodities markets. Its second, more speculative goal is to indicate what consequences BPs may have for the Internet economy. To start with the latter, in the same vein as the chapter by Aréna and Festré, Koppl and Sarjanovic argue that the Internet economy has lower transaction costs and is very much like the economy as described by Carl
Menger in that it relies on knowledge even more than the non-digital economy. That would imply that BP intervention can be expected to be even more costly in terms of higher prices or lost opportunities for innovation than it would in the pre-electronic age economy. Since prices that have not been established and innovations that have not materialized are unobservable, the test of the BP hypothesis in the international wheat market is used as a speculative proxy for the BP–Internet economy hypothesis (the authors conclude that the outcome corroborates their theory). The test itself involves the application of a model for measuring herding and bubbles in terms of error duration that is of interest in its own right. The monetary sector in the internet economy Antony Mueller’s “Bubble or new era? Monetary aspects of the new economy” applies the lessons of Mises’ business cycle theory to the Internet economy. Even though the New Economy failed to manifest itself in productivity statistics, it affected the global economy in a more indirect and perhaps more profound way. Mueller analyzes the shift in the stance of Alan Greenspan, the chairman of the central bank of the world’s most powerful economy. His high expectations that the Internet economy would reinforce the United States’ leading position in the world economy led him to allow the increase in the stock of money and the level of indebtedness to continue undisturbed. This is the second way in which this chapter’s analysis is very “Austrian”: expectations have a crucial role to play.
Implicitly, the author endorses the BP theory of Koppl and Sarjanovic by emphasizing that Greenspan’s measures do not fit into any of the accepted economic cubbyholes and have thus become more unpredictable.19 The power of the Federal Reserve Board’s chairman in the United States and that country’s powerful position as the “consumer of last resort” as Mueller calls it, but also as the most important financial market and member of international financial institutions have had far-reaching consequences for the world economy as a whole. For one thing, it lowered the perceived risk of investing in the Internet economy, and this led to a loosening of investment morals. Second, it created a climate in which undisciplined monetary policies in other countries were rewarded instead of being punished. The reward consists in what practically amounts to a guarantee that any country that makes a mess of its economic policies will be bailed out.20 This has created a climate of expectations that nurtures moral hazard and has set off a chain of international financial crises which according to the author is likely to continue. This is further enhanced by the computer- and Internet-aided globalization of financial markets, which are characterized by greater volumes, volatility, velocity and virtuality, as he so nicely puts it. All this does not invalidate the message of Austrian business cycle theory: the huge global mechanism of money creation gives rise to malinvestments, which remain hidden as long as the boom continues. The longer this lasts, the more painful it will become to reverse these misdirections of investments. Borrowing the terminology of the previous chapter, the argument of the current chapter may be formulated thus: what started as an
instance of BP interference, led by positive expectations about the future of the Internet economy, in one country’s monetary sector, is reinforced (or perhaps endogenized) to the extent that it creates effects in the world economy as a whole that risk getting out of control. It takes ever more strenuous efforts to keep the ever more rapidly accumulating debt from forcing the real economy to a crash landing. The end of the boom, however, will not necessarily signal the end of the Internet economy. This is because the real economy is in continuous evolution and capable of adopting digital innovations – provided the monetary sector does not produce excessive noise. How this may be achieved is one of the issues addressed in the next chapter on “Possible economic consequences of electronic money,” by Jean-Pierre Centi and Gilbert Bougi. Whereas in the previous chapter the monetary sector remained something exogenous to the Internet economy, the question this chapter poses is what an Internet-endogenous monetary system might look like. Electronic money looks like the logical next step in the evolution of money from a means of exchange and store of value with a real value based on alternative uses to a purely trust-based medium. With the introduction of electronic money, banking will not be limited to officially recognized financial institutions, including central banks, which will thus lose their monopoly in the field of monetary policy. Free banking and competition between currencies might very well become the rule. The authors indicate that this would have far-reaching consequences for monetary policy (including the maintenance of price stability) and law enforcement (as electronic money makes tracking illegally earned money almost impossible).
Whereas most of the contributions to this volume concentrate on the spontaneous mechanisms of the Internet economy, the conclusion of this chapter points to the possible need for intervention: “The challenge is to develop an institutional framework that provides transparent rules for the electronic payments system, safeguards the value of money, and protects individual freedom.” The legal framework Almost the exact opposite is advocated by Elisabeth Krecké in the final chapter, “The emergence of private lawmaking on the Internet: implications for the economic analysis of law.” Her point of departure is that the Internet has created such a fast-evolving, complex and unprecedented system of transactions that the traditional types of centralized regulation and legislation will not be able to keep up with it. The author looks for an answer to this problem in the tradition of law and economics, and then more particularly in the half of that tradition that applies economic tools to legal contexts. In this perspective, the question is modified into the problem of whether a system of legal rules that regulate trust and protect reputation can emerge spontaneously. This leads to two further questions: what constitutes law, and what are the features of such a rule-creating process. As to the latter, Krecké examines and rejects the positivist conception of law. It has been very influential in shaping the idea, now almost generally accepted, that identifies laws with commands issued by a national government or a
supra-national authority that have to be obeyed in order to avoid sanctions. This view is contrasted with one that the author finds in the work of some economists and which she calls polycentric. It regards legal rules as the result – at least in part – of a discovery process in which entrepreneurs try out new types of contracts. This is more like what happens in the Internet economy. Instead of selling a product and relying on an existing legal framework, Internet entrepreneurs often sell a package that consists of a product and a set of promises that include the safety of the payment made, the guarantee to return the product if it does not live up to the buyer’s expectations, etc. A concrete example of this is given by Steckbeck and Boettke in Chapter 10. The underlying chain of reasoning is as follows. Entrepreneurs need to survive in order to make a profit (and the other way around). In order to achieve that, they need previous customers to return and new ones to become clients: entrepreneurs need to inspire trust. That will only happen if they keep a good reputation. If someone invents a system of rules of behaviour that is particularly conducive to this, he or she may let others make use of it against the payment of a fee (or a lump sum if the system of rules is sold). Unless, of course, the system is copied without compensation. This opens up the largely unexplored field of property rights on the Internet. The author might have drawn the parallel with the invisible-hand mechanism through which protection agencies are established in Robert Nozick’s Anarchy, State, and Utopia.21 Another mechanism she might have used as an example is the way in which Carl Menger (she only refers to him for his analysis of law) and almost every economics textbook after him rationally reconstructs the emergence of the modern banking system.
That the evolution of the monetary system and that of the legal system have much in common becomes particularly clear where Krecké writes that an economic interpretation of rule formation on the Internet shows that it is difficult to distinguish between what is and is not law. This is very similar to the invention by private banks (and not only these; see the chapter by Centi and Bougi) of new types of money, a process that has made the control over the stock of money by central banks an increasingly difficult task. Here we find ourselves in the second domain the author discusses, which is the question of what law is. The top-down type of law that forms the crux of legal positivism has to yield more and more to a type of law that is created from the bottom upwards – largely because the first type is not satisfactory for commercial transactions on the Internet. This chapter is complementary to the ones by Miller and Stiegler and Steckbeck and Boettke in that it gives a more theoretical and philosophical analysis of the type of concrete mechanism that is described there. The implication for the economic analysis of law is that it will become a discipline of increasing importance in the age of the Internet economy.
Notes 1 As The Economist (23 January 2003) rightly argues, many technological revolutions started with an initial boom and bust only to become very influential later. One of the examples it gives is the railways.
2 Microsoft’s hold over the world of the Internet is nicely illustrated by the example of a French participant in the conference who unintentionally referred to portals as “gates.” 3 Compare Birner 1999, where Hayek’s 1937 “Economics and knowledge” is singled out as one of the earliest network analyses of markets and competition. The fact that this article is discussed by several contributors to this volume without going into its network features strengthens my amazement. 4 My fellow-editor Pierre Garrouste commented that what I write here is in apparent contradiction with my earlier remarks on the neglect of network models. He is right in the sense that the network type of analysis by Austrian economists has remained largely implicit, as I intimated in the previous note. However, the Austrians’ emphasis on the importance of knowledge and its communication logically presupposes the existence of a communication structure, for the description of which network analysis is an excellent instrument. Hayek has already been mentioned. Ludwig von Mises’ analysis of entrepreneurship is the other example. Apart from my previously mentioned article I refer the reader to the first section of this volume, where networks have a very prominent role, and to Guido Fioretti’s contribution. 5 http://eco83.econ.unito.it/mailman/listinfo/cessba. 6 As one of the organizers I am hardly an independent witness. Let me therefore hasten to add that we received unsolicited compliments from many participants. 7 For a non-hermeneutical attempt to deal with the role of language in an economic context, cp. Birner 1999. It is tempting to elaborate Lavoie’s analysis in terms that are more familiar to philosophers of language. The two pieces of software that he discusses can be seen as means of connecting private-language arguments (à la Wittgenstein) or local networks of meaning (in the sense of Ryle) to more encompassing structures of meaning.
In the process, a more widespread agreement on meaning may be established, which facilitates communication. In a Popperian perspective, we could perhaps speak of groupware and hypertexts as world-1 objects that link world-2 psychological states and in the process contribute to the discovery of world-3 content.
8 Apparently, he agrees with Robert Solow's well-known quip that "you can see the computer age everywhere but in the productivity statistics" (Solow 1987: 36).
9 If this were not an introduction, I would be tempted to elaborate the parallel with group selection in a neo-Darwinian context. Positive network effects are very similar to ecological niches.
10 Here lies an interesting potential link between economics and the sociological literature on the internalization of norms, from which both disciplines would stand to benefit.
11 And the notes that I added indicate that I at least find this chapter very inspiring.
12 After risking his life for it. Few social scientists enjoy the dubious satisfaction that their ideas are thought to be so important that someone or some group – in this case the terrorists of Sendero Luminoso – wants to eliminate their author.
13 That Hayek's purely economic approach is too poor to describe modern market societies has been argued, amongst others, by Birner and Ege 1999.
14 Alan Kirman is a conspicuous exception. Compare, for example, Kirman 1983.
15 Yet another explanation would be that humans are basically decent social beings who by nature show a tendency to behave cooperatively. This potential confrontation with a more typically sociological approach, however, is not discussed in this chapter. It merits, as the saying goes, further research.
16 This ideological bias in Hayek's work is discussed by some of the contributors to the first conference organized by AHTEA. Compare Birner, Garrouste and Aimar 2001.
17 "They are funded by a pool of fiscal resources," the authors add.
That seems unduly restrictive; see the brief discussion in the next note.
18 It is "discretionary," which would more accurately be described as not following rules that are known or knowable to other market parties. But this would not fit the behavior of central banks, one of the examples of BPs the authors give. For instance, in order to
predict the decisions of directors of central banks, it is sufficient to have a look at their c.v.s: where and when they took their degree in economics is essential information for knowing the sort of rules they will follow. The characterization of a BP and the discussion in the chapter strongly suggest that BPs coincide with government agencies and central banks. The scope is wider, though. Very wealthy charities, for instance, fit the description (if we delete the limitation to fiscal resources). The J. Paul Getty Trust has the means to influence the art market decisively. But then, the art market is not a typical market, despite the fact that it is one of the few instances in reality of a market where auctioneers make prices. I asked Neil De Marchi, who writes about the art market and its history, if there was any literature on BPs in the art market, but to his knowledge there is not. Extending the domain of application of BP theory would be very much in the Popperian spirit to which the authors subscribe; it would make the theory more falsifiable by increasing its content.
19 This is also an implicit criticism of what I observed in note 18.
20 The fact that the explanation that the United States backs this up in order to ward off communism has lost its footing might lead one to give more weight to endogenous economic factors, which would be a very Marxian conclusion.
21 The difference is that Nozick argues that the monopoly of violence is established by the strongest protection agency; it takes more imagination to envisage the same taking place on the Internet. What sort of violence could be exerted there? Spam attacks perhaps? The more relevant problem is how property rights can be established, a problem that is addressed by the author.
Bibliography

Birner, J. (1999) "Making markets," in S.C. Dow and P.E. Earl (eds) Economic Organisation and Economic Knowledge: Essays in Honour of Brian Loasby, Cheltenham: Edward Elgar.
Birner, J. and Ege, R. (1999) "Two views on social stability: An unsettled question," American Journal of Economics and Sociology.
Birner, J., Garrouste, P. and Aimar, T. (2001) F.A. Hayek as a Political Economist: Economic Analysis and Values, London: Routledge.
Granovetter, M. (1973) "The strength of weak ties," American Journal of Sociology 78: 1360–80.
Kirman, A. (1983) "Communication in markets: A suggested approach," Economics Letters 12: 101–8.
Nozick, R. (1974) Anarchy, State, and Utopia, Oxford: Blackwell.
Richardson, G.B. (1960) Information and Investment, Oxford: Oxford University Press.
Solow, R.M. (1987) "We'd better watch out," New York Times Book Review, 12 July.
Part I
Digitally connected networks
1
Subjectivism, entrepreneurship and the convergence of groupware and hypertext Don Lavoie
Entrepreneurship as mutual orientation

    There are at least two possible types of social process. (There may be more.) We may describe the first as 'mechanical', the second (for want of a better term) as 'orientative'. In the first, whatever men do within a period depends on the position they have reached. A 'feedback' mechanism in which each subsequent step depends on 'distance from equilibrium' is a special instance of it . . . The other kind of social process, by contrast, leaves ample scope for divergent expectations. In it men's actions are neither determined by what happened in a past period nor by the distance of their present position from an imaginary equilibrium. No doubt, in making and revising their plans they will take account of these facts. But the latter serve them as points of orientation, and not as determining forces.
    (Ludwig M. Lachmann 1994: 204–5)
What are the chief causes of the extraordinary economic prosperity in the so-called knowledge economy of the past couple of decades? Our models of economic growth have emphasized objective, material productivity. The production of larger amounts of physical stuff seems to be the very definition of wealth. But the Austrian school,1 on the basis of what it calls its "subjectivist" approach, contends that it is mainly our ability to coordinate with one another that enables us to increase the objective standards of living of people around the globe. And our ability to coordinate with one another may in turn depend fundamentally on our capabilities for mutual orientation: not on seeing the things themselves, but on seeing how others are seeing things.

The Austrian school's view of economic development identifies the entrepreneurial process, a process for the mutual coordination of plans, as a key factor in the creation of wealth. The theory of entrepreneurship studies the way the multiple plans of market participants get oriented to one another. It could be that among the important effects information technology is having on the new economy are direct enhancements to this mutual orientation process. Tools that support the way we orient ourselves to one another directly aid in the entrepreneurial coordination of plans, the very process the Austrian school has identified as key to economic prosperity.

When Ludwig Lachmann refers to an orientative approach to economics he
is suggesting that economics ought to be seen as an interpretive study, focusing on the ways in which market participants articulate meaning to one another. Subjectivism should try to understand the processes for the articulation of economic meanings, and the texts that result from such processes. Texts in this sense are words and other articulations (including drawings, graphs, charts, or prices) that have been "written down" in some specific medium, whether on clay tablets, papyrus scrolls, or relational databases. Having been written down, texts are also available for future return: to be reread, interpreted, worked upon, paraphrased, quoted, or even rearranged.

The term "subjectivism" may be unfortunate. The word seems to imply some sort of turn inward into the realm of the mind.2 However, what Austrians are talking about is not inaccessible mental thoughts but human expressions of meaning, and these meanings are fully in the real world. Indeed, they are what make all the difference in economic development, the creation of real wealth. The realm of meaning is best seen as a public realm of expressions, especially of articulations in language. The point of subjectivism is not to turn our attention to some kind of inwardness, but to the public articulations (verbal, written, electronic, etc.) through which market participants attempt to communicate their meanings to one another.

Taking the idea of meaning seriously obliges us to pay more attention in the theory of entrepreneurship to the way market participants engage in acts of articulation and of the reading of articulations. Just as the successful market participant learns to orient him/herself to the articulations of the market, so the economist needs to attend to the communicative processes through which market participants orient themselves to one another.
In his important work Competition and Entrepreneurship, Israel Kirzner developed a perspective on the fundamental nature of entrepreneurship according to which it involves alertness to opportunities to improve on the use of priced factors of production, but is not itself a factor of production in the ordinary sense.3 The entrepreneur's alertness to market opportunities is what impels him or her to act, and through that action to improve on the market's coordination process. The diverse plans of separate individuals come to "dovetail" with one another through the entrepreneur's coordinating actions. Entrepreneurship, in this perspective, is a sort of meta-factor of production: it is the alertness to how best to use factors of production. It is understood to be the essential driving force behind the process of economic development in that it improves the allocation of scarce resources, increasing the market's coordinative capacity and the ability of market participants to get more of what they want for less.

Kirzner's point that entrepreneurship is not merely a factor of production to be optimized along with all the others, but refers to the capacity of the economy to coordinate its uses of all the factors, is an important contribution. But the concept of alertness as he developed it suggests that what is involved in entrepreneurship is primarily a matter of directly seeing something that physically exists in the world, not the communication of meanings among market participants within specific technological and institutional contexts. It is the ability
to "see" opportunities to take advantage of (and unintentionally improve upon) the market's pricing of productive inputs. The entrepreneur "notices" inputs that are underpriced in relation to their (perceived) potential to produce outputs, and acts to rearrange factors of production into a pattern that tries to reap profit from the situation. This very effort to gain pure entrepreneurial profit has the effect of increasing market coordination.

The Kirznerian entrepreneurial act might seem to be the engagement of an isolated, cultureless individual with pre-existing objective circumstances.4 The entrepreneur is the maverick who simply "sees" objectively what others have so far overlooked. The elaborations of entrepreneurship in Kirzner's work are too abstractly described. They depict the entrepreneur as an intermediary in a world of (already somehow meaningful) price signals, almost as if entrepreneurs were peculiar agents who communicate only in money bids and offers, not involved in linguistic behavior or in a linguistically constructed and institutionally contextualized world, but simply spotting numbers and price discrepancies, and making profits from exploiting differences between costs and revenues. There is no explicit place in the analysis where human articulations of meaning happen. Or rather, it is as if the only things we ever say or write are the numbers we call prices, with which we silently "signal" to one another. Even these price signals are somehow taken to already have their meaning ready-made.

I do not think, however, that this is the friendliest or the most useful way to read what Kirzner and other Austrians are trying to do. He may be guilty of underemphasizing the important role of language in the entrepreneurial process, but we do not need to ignore that element of the process.
The Kirznerian entrepreneur is properly understood to be thoroughly immersed in an already linguistically constituted world, where the social relation par excellence is not market exchange but the exchange of words and other articulations. We are already engaged in acts of verbal articulation, trying to say what we mean to one another, before anything like markets can come upon the scene. Markets are an extension of language,5 and they work through concrete acts of articulation, whether oral, written or electronic. You could say that markets simply add whole new kinds of texts (prices, profits, contracts, advertising, etc.) to the conversation.6 Words are still a key part of market communication, but now there are these other kinds of texts, such as prices, circulating in our conversations, texts that take numerical form and contribute to the cognitive processes we call accounting, without which, as Mises claimed in his famous challenge to socialism, modern technology could never have evolved. These articulations take place in a given political, social, and technological environment.

The "seeing" of an entrepreneurial opportunity is best understood not as perception, but as a kind of reading of a meaningful situation in a language-constituted world. This is the kind of approach to entrepreneurship that has been usefully elaborated in Spinosa et al. (1997) as a process of disclosure of new worlds. The point is that the entrepreneur is not merely seeing what is there to be perceived, but rather opening up whole new ways of seeing.
Lachmann (1994: 283) pointed in this kind of direction when he called for an orientative or an interpretive economics. Like Menger and Mises he saw institutions as integral parts of economic processes. He points to the "mode of orientation" through which individual actors coordinate their plans with one another.

    Institutions belong to the realm of culture, not that of nature. They are immersed in history. Although we can observe their operations, our observations cannot disclose to us what meaning their objects have to those enmeshed in them, a meaning that varies from group to group and over time. It is impossible to elucidate such meaning until we realize that the mode of existence of institutions corresponds to, and varies with, the mode of orientation of those who participate in them. Such a mode of orientation is an element of culture, a web of thought – open to interpretation but not measurable.

We need to remember that Lachmann uses the term "institutions" to cover a remarkably wide set of phenomena. He includes organizations, but also regularized practices such as profit/loss accounting, or the use of mailboxes for posting letters. He also includes such phenomena as law, language, money, and prices, and other evolved structures by which human action gets mutually oriented. He describes the orientative function of institutions in his essay "On institutions" (1971: 49–50):

    An institution provides means of orientation to a large number of actors. It enables them to co-ordinate their actions by means of orientation to a common signpost. If the plan is a mental scheme in which the conditions of action are co-ordinated, we may regard institutions, as it were, as orientation schemes of the second order, to which planners orientate their plans as actors orientate their actions to a plan . . .
    Whether we post a letter, wait for a train, or draw a cheque, our action is in each case orientated towards a complex network of human action of which we know enough to make it serve our ends, though we may know next to nothing about the internal working order of these institutions.

Unlike many who seem to see institutions only as constraints on the freedom of action, Lachmann saw them as indispensable aids to human action. They are "signposts" which facilitate the mutual adaptation of plans. Although Lachmann did not explicitly work out a distinct theory of entrepreneurship, the one implicit in his work depicts it as essentially a process of mutual orientation. By "orientation" he meant not only, or even primarily, how we are oriented to the physical world, but rather how we are oriented to one another. He thought that the key to the entrepreneurial function is the ability to interpret, to read the "market signals." Anybody can do the calculations of potential profit or loss, but the profit/loss signals guide the market process
through the selection of those who prove to be better readers of market signals. Lachmann ([1951] 1977: 102–3) put the point this way in a review of Mises' Human Action:

    Profits, those temporary margins between today's cost of complementary factor resources and tomorrow's product prices, are signposts of entrepreneurial success. In a symbolic form they convey knowledge, but the symbols have to be interpreted . . . The market process . . . promotes the rise of those better equipped than others to wrest economic meaning from the happenings of the marketplace . . . The essence of the matter is that the market process promotes the spreading of knowledge through the promotion of those capable of interpreting market data and of thus transforming them into market knowledge, and the elimination of those who cannot read the signs of the market.

Entrepreneurship is not an individual act of perception, a matter of looking at an already existing physical reality; it is an interpretive and communicative process involving the efforts of different persons to communicate meaning about the future to one another. It is perhaps misleading even to present this coordinating process in terms of the activity of "the entrepreneur," the single individual who is more or less alert to opportunities. What we have here is a coordination process taking place among persons who are oriented toward the future. Lachmann (1956: 22–3) makes this point about the importance of the process of interpretation this way: "In a market economy success depends largely on the degree of refinement of one's instruments of interpretation. On the other hand, every act is a source of knowledge to others." What we are really talking about here is not that mysterious; it is simply the reading of the texts of market-relevant articulations.
Among the institutional forms that Austrian economists ought to pay attention to are the articulation processes, and the resulting texts, through which human actors communicate their meanings. And crucial to these articulation processes are the institutional and technological circumstances in which market participants act. As Lachmann suggests, the degree of refinement of the "instruments of interpretation" is key to market success. In Kirzner the question can sometimes seem to be: how do entrepreneurs discover, or create, or share objective knowledge of the physical world? A better way to put the question might be: how do they orient themselves to one another's perspectives on the world, and in particular to one another's articulations?

The philosophical framework of Kirzner's entrepreneur seems to be one that takes "seeing" to be central to the process of mutual coordination. An opportunity is "seen" if the entrepreneur is sufficiently alert, or else it is "overlooked." But looking and seeing are misleading metaphors for what is better rendered as a transaction of articulated meanings. It would be better to understand what is going on here as a dialectic of "seeing and saying."7 The entrepreneur gets at what is going on in the economy not by directly confronting
the world itself, but indirectly, by way of an effort of interpretation of what is being said about the world. We are still trying to get at what opportunities for profit really exist, but the way to get at these opportunities is indirect: through texts, through interpreting the efforts of others to say what they mean.

To get a handle on entrepreneurship we need to ask about the prevailing capabilities of the communicative media through which entrepreneurs are trying to orient themselves to the plans of consumers and producers. How, exactly, do market participants manage to convey their meanings to one another? An opportunity to improve the coordination of plans is not so much "noticed" with the eyes as it is gleaned from conversations. The question shifts from how or why the entrepreneur sees an opportunity others have missed to the question: how do market participants talk, write, show, in some manner say what they mean to their suppliers and customers, their coworkers, potential investors, or stockholders?

The entrepreneur is situated within the context of a complex capital structure, where the market value of intermediate capital goods is a necessary guide to production decisions. Producers of capital goods several stages removed from the final consumer have no way of knowing what is more valuable other than by the information contained in relative prices. The Misesian idea of economic calculation points to the necessity of quantitative profit/loss calculation in order for the remote meanings of consumers to be translated, through the process of imputation of value, into a readable form. Producers of capital goods are suspended in this relative-price world, able to coordinate with consumers only because of this price information. But the point is not that prices alone, as numbers, are sufficient to accomplish this coordination. Permeating the market process are millions of efforts to communicate meaning.
This is what I want, this is what I can do, and this is how I could pay. The pricing of goods and services is a set of additional acts of articulation inserted into a world already buzzing with acts of talking and writing, of descriptions of products, of production possibilities, of alternative technologies, of contracts, etc.

Since entrepreneurship fundamentally involves attempts to communicate, we need to inquire into the communicative-technological environment, the media, if you will, in which such communication can take place. A fundamental change in the tools for writing and reading, such as was provided by the printing press a few centuries ago and by the Internet today, could utterly transform the workings of this sort of process for knowledge creation and sharing. The communicative media, within which ideas are brought into articulation and reinterpreted, have changed dramatically over the past several decades, and this could be expected to have significant effects on our orientative capacities. In other words, entrepreneurship should be conceptualized as a social process of mutual orientation in which tools for improving our ability to make and comprehend articulations would be expected to have a significant impact. What tools like these do, I will claim, is make it easier for people who see the world differently from one another nevertheless to make sense to each other, to orient their actions to one another.
Text and its structure Lachmann’s work Capital and Its Structure depicts the entrepreneur as involved in trying to mesh his/her plans with those of the ultimate consumers, by in some sense “fitting” into an evolving structure of capital. This idea of fit is what Lachmann called “complementarity.” The process of economic production involves the continuous adaptation of heterogeneous parts of the capital structure to one another. It is in this context that he sees the essential role of the entrepreneur as one of figuring out how to specify particular ways of using heterogeneous capital goods that in principle can be used in a multiplicity of alternative ways. Lachmann (1956: 16) identifies the key role of the entrepreneur in terms of deciding how initially to select (or later adapt) particular patterns of use of capital goods. The entrepreneur’s function as regards capital . . . is to specify and make decisions on the concrete form the capital resources shall have. . . . As long as we disregard the heterogeneity of capital, the true function of the entrepreneur must . . . remain hidden. In a homogeneous world there is no scope for the activity of specifying. Early in that book Lachmann (1956: xv) says that he is not pointing “towards the ‘objective’ and quantifiable, but towards the subjective interpretation of phenomena.” Then he gives us an idea what he means by a structure, from a subjectivist point of view. The generic concept of capital without which economists cannot do their work has no measurable counterpart among material objects; it reflects the entrepreneurial appraisal of such objects. Beer barrels and blast furnaces, harbour installations and hotel-room furniture are capital not by virtue of their physical properties but by virtue of their economic functions. Something is capital because the market, the consensus of entrepreneurial minds, regards it as capable of yielding an income. 
    This does not mean that the phenomena of capital cannot be comprehended by clear and unambiguous concepts. The stock of capital used by society does not present a picture of chaos. Its arrangement is not arbitrary. There is some order in it.

Capital is a structure in the sense that it has a meaningful order. The order consists in more or less intricate patterns of complementarity among diverse, specialized capital goods. We can comprehend capital, Lachmann insisted, "by clear and unambiguous concepts," even while we insist that it is built out of subjective orientations, values, and expectations. Like others in the Austrian school, Lachmann sees the process by which entrepreneurs adapt the structure of capital to the changing demands of consumers as central to the process of economic development. Enhancements to capital, which Austrian economists sometimes misleadingly called a "lengthening" of the structure of production, are what drive improvements in standards of living in growing economies.8
Howard Baetjer (1998: 22) refers to this notion of lengthening, but I think he has a better way of putting it with the term "complexifying."

    When new capital goods are developed . . . what is generally involved is the development of new, more complex patterns of complementarity. The new capital must fit with the old (and other new goods) in order to be useful. The lengthening of the capital structure involves what Lachmann (1978: 79) calls a "'division of capital', a specialization of individual capital items." We might call it a "complexifying" of the capital structure, an increasing intricacy of the pattern(s) of complementarity among increasingly specialized capital goods, born in the ongoing growth and division of knowledge.

The adaptation and complexification of the capital structure can be thought of as a sort of social learning, where new knowledge is not only held in people's heads but comes to be embodied in our capital structure, just as it gets embodied in our institutions, practices, laws, prices, recipes, and documents.9 One can say that our capital structure "learns" in the sense that Stewart Brand (1994) meant when he wrote the fascinating book How Buildings Learn: people working with these material things keep adding adaptations to them. Capital goods learn in the sense that people incrementally adapt them so that they come better to fit their environment, and ultimately the desires of the participants.

The reality in which entrepreneurial coordination has to take place is a complex, structured reality, in which parts have definite relationships to other parts. Capital goods are appraised as having the potential to earn a future flow of income because they are thought to be able to fit into the structure. The subjectivity of the structure, Lachmann is saying, does not change the fact that the structural relationships are real, and need to be taken into account.
Entrepreneurs face the challenge of navigating within this complex structure of interconnected meanings, and of adapting elements of the structure so that they fit into the whole. When that entrepreneurial process is working well, social learning in the form of rising standards of living can be the result.

Texts should be thought of as possessing a meaningful, evolving structure of this sort. Just as Austrians challenge those mainstream economists who mistake capital for a homogeneous quantity, so we can question those who tend to see "information" as a homogeneous quantity and media as nothing more than carriers of information. For the same kinds of reasons Austrians have wanted to discuss capital in terms of heterogeneity, complementarity, context, and structure, we should look at texts in these terms. Like capital goods, texts can "learn" in the sense that those who work with them can add on improvements over time. Like capital, texts cannot be adequately appreciated apart from the institutional context within which they are used. Like capital, they are not understandable as an amount of something, say information, but as a heterogeneous structure with complex internal relationships. A text's parts stand in definite meaningful relationships to other parts, and to other texts.10 It
can be fixed in many different kinds of forms, or structures, and these can make it more readable. Thus we can differentiate the content of a text from the structure or form in which it is presented. What meaning a reader gets from a text will often depend greatly on the form in which the text is fixed, as well as the institutional context in which it is used. Different presentational structures can make the same underlying content convey meaning very differently. And yet the structure of a text is not strictly separable from its content. Understanding the content in part involves understanding that structure.

What exactly the "structure" of a text consists of will vary dramatically with different technological media. There is at least a minimal amount of structure in any text, at least in the way the sentences are broken up into paragraphs and larger groupings. Structure helps us to do things with texts: to take the text apart and put it back together, to present the content of a message to an audience, to place it into some kind of larger context, to draw attention to the relationship between different parts of the text, to see one part of it in terms of a larger whole of which it is a part. Structural elements we now take for granted, like spacing between words and page numbers, were not always available to aid readers. The difference these added structural elements have made historically to the effectiveness of writing is very suggestive as we try to think about the knowledge economy. It may be that more complex kinds of production plans could not have evolved without accompanying structural features that made it easier for readers to work with texts. Improvements to our ability to add structures to texts may have had an important effect on our ability to communicate what we mean, especially by allowing us to increase the complexity of what we can say, and thus what we can do.
Complexity enters in two ways here. An articulation by one market participant to another is not just a description of a part of a complexly structured reality; it may also be a complexly structured description of a complexly structured reality. If the prevailing technologies for working with text permit, articulations can become structured in more complex ways that enrich our possibilities for conveying what we mean. In order to work with production plans that reach a certain level of complexity, it is necessary to have not only an exchange of unstructured content, a linear, sequential string of words, but an exchange of complexly structured articulations of content. We find ourselves giving structure to an articulation in order to guide the reader in how to read it. In such situations we are trying to provide the reader with a context in which the content of the message can be interpreted. To characterize structures only as reading aids, however, would be to understate the scope of what they do. The structures in a text can include things that help the reader not just to read it, but also to work with it. The key to the knowledge economy, I will be arguing, is not primarily a matter of getting “access” to quantities of data; it is rather the revolution in ways of working with, and working together with, the data. We do not just possess information; we come to specific articulations in definite contexts, with questions in mind, with distinct purposes guiding our interest.11
30 Don Lavoie

The theory of entrepreneurship ought to attend more to the technological media through which our articulations can be constructed and used. The meanings that subjectivism says we should pay attention to are not some kind of mysterious, inaccessible realm of the mental; they are all around us in the material form of spoken words and written texts, and are available for working with, and for interpretation. The fact that texts have a material reality to them tells us that they cannot adequately be treated as pure content: the particular forms in which they have been fixed, and the contexts in which they are used, can matter. Entrepreneurship has been strengthened, I think, by fundamental improvements in the media in which we have been able to conduct our conversations. To get some perspective on the possible impact the latest technological media may be having on entrepreneurship, it might be useful to review the kinds of transformations of media that have already taken place. The work of the historians of writing can help us to appreciate what textual structures have been in pre-electronic media, and how they have helped in the process of mutual orientation.12

From the very beginning of the human species there has been an important connection between conversational media and economic advancement. How we are able to “converse” makes an enormous difference in our ability to use, convey, and generate economically relevant knowledge, and thus in our wealth and wellbeing. Our media technologies, from talking to writing to printing, to a whole variety of electronic tools, could all be called, in a broad sense, alternative ways of holding “conversations” with one another. We could refer to three overall modes of conversing, going from verbal to written to electronic modes.
In all these media we are exchanging utterances, even though an “utterance” could mean anything from a word or sentence verbally spoken, to a typed document printed, to an increasingly varied set of electronic texts. What the history of writing shows us is the profound changes that occurred as the result of the shift from a purely verbal to a written mode.13 As Walter Ong and others have elaborated, an exclusively oral culture is strikingly different from the kind of verbal-plus-written culture we are in today, and many of the differences have to do with the challenge of preserving knowledge when texts cannot be written down. There is a need for complex procedures for the memorization of ritualized and formulaic verbalizations, sayings, narratives, epic stories, “old wives’ tales,” and so forth, which today look to us like awkward ways to make up for the fact that nobody could preserve the “texts” in written form. The nature of entrepreneurship must have been completely different in such a world. Market processes first emerged in strictly oral cultures, where transactions could only be conducted verbally and sealed with a handshake, and many types of market transactions today (informal deals between corporate executives, auctions, yard sales) continue to occur primarily in verbal discursive modes. But the fact that we have changed from an oral to a literate culture has, of course, utterly changed the way market coordination takes place. In a strictly oral culture a deal essentially needs to be transacted face-to-face, without recourse to preserving in writing those aspects of the transaction that might be repeated in the future. Exchange thus tends to
take the form of bazaar trading, with a premium on squeezing surplus out of the individual deal, as opposed to increasing the likelihood of future, repeated deals.

Writing and reading as a new form of discourse come on the scene after, and, according to many interpretations of the prehistorical record, as a result of, markets. The earliest forms of writing seem to have been accounting records, and these gradually complexify into written accounts in words of market transactions, contractual arrangements, and the like. The fundamental advantages of writing over talking stem from the fact that things “written down” can be returned to, reworked, reinterpreted, or in some way worked upon. The persistence of the written text allows us in principle to make improvements upon what we are saying by revision. The evolution of writing, clearly, has had profound effects on the workings of market institutions. Prices that can be written down, accounts that can be recorded, are quite different from their precursors in oral culture. The possibility of written law stabilizes and clarifies property rights, and permits far more complex types of ownership arrangements. Money itself is writing, and all the various forms of monetary instruments are products of writing. Prices and other components of offers for exchange are not just written but posted in public places to attract buyers. Perhaps most importantly, written contracts and the whole apparatus of contract law become possible. Terms of incredibly complex transactions can be written out and consulted later in case of alleged breach of contract.

I like to think of the comparison Ong elaborates of oral and written culture in terms of a contrast between verbalization and writing/reading, where I deliberately express the first as one thing and the other as two distinct things separated by a slash. The slash, the cleavage between writing and reading, is important.
Oral conversation is a far more integrated process, in which the talking and listening stand in a close and continuous dialectical interplay with one another. By contrast, the writing and the reading of paper-based discourse are thoroughly separated components. The cleavage is both the source of the greatest strength of writing/reading and the source of its greatest weakness. It is a strength in that the word cut loose from its original context can reach a wide range of disconnected readers, but it is a weakness in that we lose the immediate interplay of the saying and the hearing that is taken for granted in verbal conversation.

The age between the rise of writing in the fourth millennium BC, through the codex in the second century AD, and the printing press in the fifteenth century, was a gradual transition during which western culture was increasingly shaped by written texts. A key aspect of this transition was the improvement in our ability to add structure to texts. The transition from cuneiform and hieroglyphics to the alphabetic form of writing introduced new possibilities for ordering texts. The transition from the roll, or volumen, to the codex, the book broken into pages, provided similarly powerful structuring capabilities. These new technologies took off because the alphabetic form of writing and the structure of the book in the form of numbered pages permitted a more complex ability to manage the word. Economic production, I would argue, has enjoyed the same kind of dynamic of increasing complexity through writing. At first, writing down transactions was
only an aid to memory in a fundamentally oral process, but having economically relevant texts available gradually permitted more and more complex types of transactions and production plans to be organized, which in turn led to more complex forms of texts to guide these activities. Not only has the printing revolution brought about a significant increase in the kinds of structures available to writers; it has also dramatically improved our ability to share these more complexly structured texts. Structural hooks such as running heads, indices, tables of contents, page numbers, paragraphs, and so forth, were really tools for sharing text. Conversations about a book could be precisely linked to its text in a manner that allowed a much more precise engagement with the ideas.

In an article entitled “The open society and its media,” Miller et al. (1995) summed up, in a particularly sharp fashion, the different strengths of oral and written modes of conversation.14 I reproduce (with some slight rewordings) a chart Miller et al. supply in their paper to illustrate the relative advantages of verbal and written (under paper technology) discourses. Strengths of each mode are as follows:

Verbal
• Visibility of arguments (can hear an absence)
• Revision after publication
• Small expressions
• Small audiences
• Fast publication
• Immediate feedback

Written
• Persistence of arguments (cumulative)
• Revision before publication
• Large expressions
• Large audiences
• Freedom of entry
• Complex reputation systems
This formulation is of course a simplified one that passes over many of the subtle distinctions one finds in the literature on the history of writing, but it has the advantage of drawing attention to some of the essential issues. Verbal and written modes of communication, Miller et al. are saying, are good for different kinds of communicative tasks. By setting up this kind of contrast between the different strengths of alternative pre-electronic modes of conversation that have been familiar for centuries, Miller et al. are trying to provoke us to think about the combinations of strengths that are not easy to get in either mode. In the pre-electronic age, if one wanted to have a conversation that picked two from column A, and three from column B, it was difficult. There are situations, say, where we might like to engage in a relatively quick interpersonal engagement of the kind talk is good for, but also to make what we say persistent, which writing is good for. We are used to such tradeoffs and routinely find ways to bounce back and forth between modes to make up for the shortcomings of each. What Miller and his collaborators are really getting at is that the possibilities of electronic media can be usefully thought about in relation to trying to reach new combinations of these kinds of strengths.
One could sum up the differential advantages of these media this way: the advantages, by and large, of the paper-based writing medium have been those related to persistence, and the ability to return to and revise what we are saying before we make it public, and afterwards in order to make improvements for the next edition. These persistence-based advantages involve the various kinds of intertextual structures that can be built into any one speaker’s utterance. The advantages of the oral medium, on the other hand, have been those of interpersonal interaction, which is essentially our ability to easily alternate among speakers, and thus to use the feedback of interaction to revise what we are saying after we have made it public.
The convergence of groupware and hypertext

Knowledge comprises two components: knowledge structure and information content. Structure represents a framework used to arrange information to make it meaningful, while content represents the information itself. Knowledge structures therefore provide the context for making sense of information . . . Information and knowledge can be acquired from many different internal and external sources. That knowledge is then subjected to value-adding processes such as labeling, indexing, sorting, abstracting, standardization, integration, recategorization, as well as the process of making the knowledge explicit . . . A well-structured repository allows for a high degree of viewing flexibility. If the repository is conceived as a knowledge “platform,” then many different knowledge views may be derived based on the particular content, format, and medium by which it is presented to the user . . .
(Michael H. Zack and Michael Serino 2000: 307–9)
It is commonly supposed within the business literature that the computer and telecommunications revolutions are somehow ultimately the engine driving economic development, but of course the question is how. There is controversy about exactly what it is that these information technologies are doing that contributes to wealth. Or, to put it in terms we have been using, what is it about making text electronic that enhances our ability to orient our plans to one another? These questions, however, are too big to do justice to here, and I think we are still too early in the media transformations to be able to take a critical perspective on all the different ways computers and telecommunications are changing the way the economy works. What I would like to do here is to illustrate one aspect of the change in media: the ability to make flexible the kind of shared textual structures that were emphasized by the historians of writing. I will briefly review the themes of the business literature on “groupware” and “hypertext,” which claims that these key capabilities of information technology are the force transforming the new economy. Like most business literature, the writings in knowledge management are not typically trying to offer social-scientific explanations of how the world works, but rather practical advice to practitioners. Their question is not
so much how to estimate the effects of groupware and hypertext capabilities on wealth as how best to deploy such capabilities to improve the chances of making one’s organization successful. Several scholarly studies have had difficulty finding clear evidence that investment in information technology in general, or in knowledge management software systems in particular, contributes to worker productivity, and of course there are plenty of ways to waste money on information technology “investments” which do not help productivity at all but which can mess up the statistics.15 This skepticism about information technology’s contribution to productivity has spawned in turn a large but somewhat unfocused management consulting literature, such as Thorp (1998) and Lucas (1999), purporting to show how businesses can overcome the “paradox.” But not all this literature is unfocused. The strand of knowledge management literature I am summarizing here offers some real insight about what software might be doing for some businesses that could be driving the new economy. The strand I have in mind is perhaps best epitomized by the papers collected in the book Knowledge, Groupware and the Internet, edited by David E. Smith.16 Among the contributors are Ikujiro Nonaka, author of an often-cited Harvard Business Review article “The knowledge-creating company,” John Seely Brown and Paul Duguid, co-authors of the book The Social Life of Information, and Michael Zack of the Lotus Institute. When this literature refers to the software capabilities it calls groupware and hypertext, it is generally pointing to the aspects of media that I have been calling, respectively, the interpersonal and the intertextual. The emphasis is on the ability of such software to help market participants share knowledge. The knowledge management literature argues that improvements in “knowledge sharing” have come from the convergence of these groupware and hypertext capabilities.
This phrase “knowledge sharing” is a bit misleading. What we have here is not a matter of sharing some fixed thing called knowledge. It involves tools to enhance the kinds of work that groups of persons can do together. Nonaka (1994) depicts the knowledge-creating process as a spiral of writing and reading, of alternating between rearticulation and reinterpretation. That is, the potency of these new capabilities is said to be rooted in the process of a group of coworkers bringing tacit knowledge into explicit form, and reading off articulations to glean new tacit awareness. The “seeing” of opportunities is enabled through the facilitation of the “saying” of what is meant in texts. These authors understand the point I have been making about understanding text in its context, both institutional and technological. Brown and Duguid (2000: 246), for example, have been insistent in their claim that we need to see institutions as complementary with communicative technologies – as enablers and not just constraints on action:

In an age that emphasizes individual freedoms, institutions are often portrayed merely as constraints on this freedom. In particular, they are portrayed as
monolithic, static, even stagnant constraints. Yet, because they are human creations, they rarely are just that. They change as people change. And they adapt to serve social needs.

Similarly, the technological issue I have been calling textual structure – the way structure facilitates the contextualizing of what is said – is widely appreciated by these authors. The shift from talking about “data processing” to talking about knowledge management in fact signals a fundamental shift in thinking.17 The issue is not the amount of information or data that can be stored, or how far or fast it can be transported. The issue is how meaningful data, that is, knowledge, can be gotten from raw information. Software capabilities are powerful because they allow participants to add more kinds of structure to texts and documents, to permit different views, different contextualizations of the information. Just as advances in writing led to structures such as paragraphs, headings, tables of contents, footnotes, and indices, so these new electronic text advances are leading to new kinds of structures. The technologies that enable this knowledge sharing consist of tools for the writing and reading of shared, structured, modifiable, electronic documents. The terms groupware and hypertext can be understood as emphasizing two different aspects of these tools – groupware emphasizes the interpersonal sharing of documents and hypertext emphasizes the intertextual structuring of the documents. Although each of these aspects can be thought of in isolation from the other, the knowledge management literature is increasingly emphasizing their combination as the key to success. Groupware is more powerful precisely when it incorporates more hypertext capabilities, and vice versa. Groupware begins with the notion of individuals working together in a networked environment, and asks what tools, including textual structures, might enhance their work.
Hypertext begins at the other end. It starts from the context of an individual confronting a structured text, and soon comes to take up the interpersonal context of shared access to the text. Hypertext is not to be equated with the Web as it is now, in the early twenty-first century. We have all had experience, to be sure, with the basic “hypertext” functionality that is supported in HTML, the hypertext markup language, such as the simple jumplink where one clicks on something to go to another document. But while somewhat suggestive, this familiar sense we have from current Web browsers can be misleading. It might be more useful in some ways to think about the use by corporations of complex relational databases that are available to coworkers over local area networks, or document repositories with built-in specialized annotation and search functionality such as Folio Views. The kinds of hypertextual structuring capabilities that have become available within corporations often go way beyond the simple links most of us experience while browsing the Web, and we should expect significant advances in new kinds of structuring capabilities in the future that will make what we see now on the Web look as crude as the manuscripts of the early days of writing look to us today.
Much of the literature on the possibilities of hypertext has focused on the world of science, literature, and scholarship in general. Some of the literature focuses on the individual scholar using information on his own, as distinct from the collaborative work context of groupware. The earliest discussion of hypertext capabilities was the 1945 article by Vannevar Bush imagining a tool to help a single scholar work with information, and one of Doug Engelbart’s pioneering articles on hypertext was entitled “Special considerations of the individual as a user, generator, and retriever of information” (1961). Ted Nelson ([1974] 1987; [1980] 1990), who coined the term hypertext, was more oriented to the context of film and the humanities than to that of collaboration in business organizations.18 The development of hypertext tools, and of the thinking about what they can do for us, has evolved, however, toward the kind of context the groupware literature is talking about: collaboration within the workplace and coordination in the marketplace. The focus on the intertextual comes to embrace the context of the interpersonal.19 Engelbart’s work came to focus primarily on the context of groups of coworkers, and how to augment their ability to work together and share knowledge. The team of developers that attempted to implement Ted Nelson’s vision came increasingly to emphasize practical business applications of the ideas.20 Gradually both the thinking about hypertext and the practical systems involving these capabilities turned more and more to the world of business.21 Similarly, groupware tools and the literature about them are converging from the other end toward hypertext themes. Although simple email may give us a general sense of what the interpersonal capabilities are evolving into, it can be misleading to focus on the limited capabilities of the kind of email systems most people are using today.
The real potential of groupware may be more evident in the more advanced kinds of applications that permit work groups to collaborate more effectively, for example, networked calendaring, scheduling, and workflow applications. Products such as Lotus Notes Domino, Microsoft Exchange, and others (which include but go way beyond email) are closer to what the knowledge management literature is talking about than a basic email system is. Zack and Serino (2000: 306) are getting at the convergence of these kinds of capabilities when they differentiate distributive from collaborative applications:

Distributive applications maintain a repository of explicitly encoded knowledge created and managed for subsequent distribution to knowledge customers within or outside the organization. These applications exhibit a sequential flow of information into and out of a central repository, structured to provide flexible access and views of the knowledge. Their primary focus tends to be on the repository and the explicit knowledge it contains, rather than on the contributors, the users, or the tacit knowledge they may hold. Collaborative applications are focused primarily on supporting interaction among people holding tacit knowledge. . . . These applications directly support interaction and collaboration within and among knowledge-based teams, enabling “teams of teams” to form across knowledge communities.
A distributive application in this terminology is a repository of texts with a complex and to some degree flexible structure, with links, annotations, categorizations, or other tools to make it easier to “navigate” around in it, mark it up, and so forth. But to some extent it may still be conceived of as a tool for an individual researcher, as Vannevar Bush had in mind. The traditional focus of groupware, on the other hand, has not been primarily the texts, the documents with which the group works, but the coordination of a set of tasks and responsibilities. What tools such as these can help with is the organizing of the interdependent tasks of a workgroup. When Zack and Serino (2000: 306) use the term collaborative applications, they are focusing on the way a repository within a collaborative environment becomes a different kind of thing from one conceived only as a store of information:

In contrast to distributive applications, the repository associated with collaborative applications is a by-product of the interaction, rather than the primary focus of the application. This repository of messages is dynamic and its content emergent. The ability to capture and structure emergent communication within a repository provides a more valuable, enduring, and leverageable knowledge by-product than the personal notes or memories of a traditional conversation or meeting.

As a classic exposition of the idea of groupware (Lotus Institute 1995, unpublished manuscript) puts it, what is needed is an integration of what are called “push” and “pull” capabilities. Push is like email (or sending around paper memos): it is the ability to send somebody information, to alert someone of something. Pull is like the Web (or networked databases, or file cabinets): it is the idea of deliberately going after information. Where the power of groupware really shows itself is when these two modes can work in concert with one another.
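The push and pull modes just described, and the payoff of integrating them, can be captured in a minimal sketch. All the names, addresses, and contents here are invented for illustration; the point is only the contrast between a message that merely describes where information sits and one that carries a dereferenceable link to it:

```python
# "Pull" side: a shared repository one deliberately goes to for information.
# Contents and addresses are hypothetical.
repository = {"reports/q3": "Third-quarter figures"}

# "Push" side: per-person message queues, like email inboxes or routed memos.
inboxes = {"ann": []}

def push(recipient, message):
    """Send somebody information; alert someone of something."""
    inboxes[recipient].append(message)

def pull(address):
    """Deliberately go after information in the shared store."""
    return repository[address]

# Disconnected modes: the memo can only describe where to look.
push("ann", "The figures are in the file cabinet on the third floor.")

# Integrated modes: the pushed note carries a pullable address,
# like a URL in an email -- one step from alert to content.
push("ann", {"note": "See the new figures", "link": "reports/q3"})

latest = inboxes["ann"][-1]
print(pull(latest["link"]))  # prints "Third-quarter figures"
```

The design point is that the second message closes the gap between the alert and the content: the recipient moves from push to pull without leaving the medium.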
When the best we could do was to tell someone in a memo that the information needed is to be found in a file cabinet on the third floor, we had not integrated push and pull enough to make knowledge sharing work very well. The simple and now familiar act of sending an email with a URL in it constitutes a powerful kind of functionality that allows us to work with information far more effectively than we could when push and pull were disconnected. The combination of the interpersonal and the intertextual allows organizations to create new knowledge and share existing knowledge in ways that a strictly paper-based communication medium does not support.

The power of electronic media lies in the paradoxical-sounding notion of flexible structures. Structure connotes fixity and persistence, but flexibility seems to connote the opposite. Miller et al. had argued that what gave verbal discussion its power was its flexibility. The interpersonal give and take of verbal conversation is what gives it several of its strengths, what they call the capacity for “revision after publication,” “fast publication,” and “immediate feedback.” They also argued that one of the factors that has given written communication its power has been the opposite kind of strength, what they called “persistence of arguments.” Writing is effective in helping interlocutors to express what they mean
precisely because words “written down,” unlike words spoken, have the quality of fixity. Fixing what is written down leads to the development of textual structures that permit the strength Miller et al. called “revision before publication” to improve the usefulness of the text. But electronic text is fundamentally a new kind of writing in that it also permits flexibility. Paradoxically, you can save an original document in its pure, untouched form, and yet also add to it, say, a layer of markups or annotations of different kinds. What you write into the electronic space can stay; it can be fixed, and so it can be there to return to, to add things to, to reorder, or restructure, or whatever. Its fixity does not prevent it from being the site of a back-and-forth dialogue, at whatever pace one might prefer for the kind of discussion involved. To “buy” the fixity and structure, you do not need to give up the valued capability of relatively quick interpersonal interaction. It seems useful to look at what is going on now in electronic media as essentially an extension of the same sort of evolutionary dynamic that the historians of writing described. The real payoff of electronic media comes from tapping the advantages of the complex structuring of texts that arose in the aftermath of Gutenberg, but now in a context where multiple persons are involved in an interactive process with these structured texts.

We can think of the new flexible structuring capabilities in terms of navigation and annotation. The potential for navigating, for finding one’s way around in a large, complex electronic text, constitutes a fundamental advantage of the new media. What is written down not only gets shaped into a single form with structural elements that aid the reader, as happened with improvements to printing, but can be shaped into a “meta-structure.” It can be given a form in which alternative structures can be selected, at the choice of the reader.
The single hierarchical structure of a table of contents can help the reader of a printed text locate something, but an electronic text can offer the reader a choice among a variety of alternative hierarchical structures. Different readers approach texts with different questions in mind, so equipping those who work with text with a flexible range of alternative ways of viewing the material can add tremendously to the text’s usefulness. Think in this context of the limitations of file cabinets and folders as a way of storing corporate documents, as opposed to a networked database with software that offers the reader alternative hierarchies. A file cabinet takes advantage of the power of alphabetic ordering, but can only take advantage once, so to speak, in that it has to select one kind of order over other possibilities. Say we are keeping folders with information about both coworkers and the tasks they are working on. When we come to the file cabinet to look for information about a person, we would be hoping the filing system has ordered the folders by name. If we come with a different question, like “Who is on this task?,” we hope instead that the information is stored by task.22 Paper documents force us to make a decision, to settle for one hierarchy as the least bad alternative, and stick with it. Clearly, being able to have a networked database instead of a file cabinet to store information would make a major difference in our ability to get to it, to find what we
are looking for, and to present it to others in ways that reflect our own meaning. Now we can say: “let’s view this list of documents in task order, or in order by name, or date, or whatever.” This capacity for restructuring is not only an issue of how we manage to find information we are specifically looking for. It shapes our ability to convey information to others in a contextualized manner that helps us to orient ourselves to one another’s meaning. If I am trying to convey something about the personnel working on several tasks, ordering the information by last name, and juxtaposing to each name the tasks in the project that the person is responsible for, helps the reader to focus on how the labor time of any particular person is allocated. But the same information in order by task, with names appearing under their tasks, conveys quite different meaning, and is useful in different situations. The structuring of the content makes a difference, a big difference, in how it can be read.

Electronic media open up the possibility for a wide variety of other navigational capabilities. Printed works often contain internal cross-references, as when we say something like “see page 5 on this point,” but the jumplink represents a dramatic reduction in the costs of following such connections. Convenience is not mere convenience. Electronic texts can make it so easy to follow links of this kind that in effect the reader’s experience undergoes a qualitative shift. The typical experience in a book culture is that of the linear working through of a text in the author’s order, from beginning to end. The typical experience on the Web is one of following the connections that look promising to the reader.

It is with annotation capabilities that the distinctive differences of electronic media really become clear. Printed materials can of course get annotated.
For a long time people have been inserting annotations into margins, noting page numbers worth comparing, applying highlighter pens to key passages, and so forth. In a corporate context it is common to duplicate a document, have a group of people separately mark it up with comments, collect the marked-up copies, and then produce a new draft. The need to go through such a process reflects the need to bring the interpersonal and the intertextual closer together. But the awkwardness of doing this in paper documents shows the potential of electronic media. Printing had the effect of separating the writing from the reading process. Annotations on paper represent an attempt to introduce interactivity into an inflexible medium, leading to a proliferation of parallel annotations. The annotations in most scholars' personal copies of books and journal articles constitute a single further step of dialogue, a response to the original argument, but the dialogue only rarely includes a third step, a rejoinder by the original author. What I have been calling the slash between writing and reading cuts off the process from what might have been a fruitful, ongoing interchange. Silvio Gaggi (1997: 103) points to the potential for hypertext to remove the "slash," to reconnect what paper writing had separated: In hypertext, as Landow and others describe it, readers can append their own comments and responses, add new nodes or lexias, to any parts of the text that
they are interested in, and they can create new links among the various lexias. Thus, the distinction between reader and writer is attenuated, perhaps even dissolved entirely. The text is no longer a one-way communication system in which information and ideas proceed only from author to reader, but a communication system in which all participants can contribute to and affect the content and direction of the conversation. Being able in electronic media to mark up a single shared text over a network opens up a crucial new kind of advantage. In effect it makes the text into a living forum, something more like words in a verbal dialogue, yet without losing the advantages of fixity. One can view the original version without the annotations and other markups, or one can view a selection of markups that come from certain participants, or that raise certain questions. A hypertext document, for example, can be marked up with a categorizing tool that allows the reader to "collect" the passages of the document that pertain to a certain issue, and then, at a click, to view only those passages juxtaposed to one another. That view of selected quotations can be shared with others looking at the same document, and they can add other categories to it, yielding other selected representations of the document. Moreover, the markups can themselves become additional navigational aids. We are already used to the way our marked-up copies of printed text become more useful once we have engaged with them, say, by marking the key passages with a red pen or adding some marginalia. With electronic media it becomes possible to use the capacity of the computer to search the markups themselves, or to reorganize information in terms of the layers of markup it has accumulated from active readers. The advantages of adding groupware and hypertext capabilities to our repertoire of kinds of articulation can be summed up by reference to the table I borrowed from Miller et al.
for comparing talking with writing. Electronic media are beginning to transcend the limitations of most of the trade-offs identified in that table. The main purpose Miller et al. were trying to serve with this contrast was not only to reiterate the kind of analysis Walter Ong and others have undertaken to compare the two great pre-electronic discourse media we have had up to now, talking and (paper) writing, but to point the way toward the future. Their point was that adding a third mode to our discursive repertoire would make an astonishing difference in society's capacity to work with text, a difference on the order of what happened with the inventions of writing and the printing press. It seems natural to ask what kinds of effects such technological changes could be having on entrepreneurship. Might they be driving not only the wealth-creation process of certain organizations, as many authors in the knowledge management literature believe, but also the economic development of society as a whole? These kinds of technologies are bringing about a profound transformation of the nature of organizations and management, and they may be primary causes of the new economy's prosperity.
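The shared categorizing markup described above, in which readers tag passages of a common text and then view only the passages tagged with a given topic juxtaposed to one another, can be pictured with a small toy sketch. Every name and data item here is hypothetical; this is an illustration of the idea, not a description of any actual groupware system.

```python
# Toy sketch of shared categorizing markup (all names hypothetical):
# passages of one shared text carry reader-supplied category tags, and a
# "view" collects only the passages tagged with a chosen topic, in order.

passages = [
    (1, "Entrepreneurship is a discovery process.", {"entrepreneurship"}),
    (2, "Prices condense dispersed knowledge.", {"knowledge", "prices"}),
    (3, "Alertness drives the market process.", {"entrepreneurship"}),
]

def add_category(doc, index, category):
    """A further reader marks up passage `index` with another category."""
    for i, _text, cats in doc:
        if i == index:
            cats.add(category)

def view(doc, category):
    """Juxtapose only the passages tagged with `category`, in text order."""
    return [text for _i, text, cats in sorted(doc) if category in cats]

add_category(passages, 3, "knowledge")   # a second reader's markup
print(view(passages, "knowledge"))
```

Because the markups live alongside the single shared text rather than in private copies, each reader's categories immediately become navigational aids for every other reader.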
Subjectivism and entrepreneurship 41
Conclusion: mutual orientation and wealth

This paper has tried to view the new electronic technologies through the lens of the literature on the history of writing and the printing press. That literature, in turn, has been viewed through the lens of the Austrian school's notion of subjectively meaningful structures. Significant changes in our ability meaningfully to structure what we are saying have transformed the workings of entrepreneurship in the past, in a manner that may be highly suggestive for thinking about the transformations we are undergoing today. Entrepreneurship is not a direct "seeing" of opportunities. The entrepreneurial process involves the mutual orientation of different persons to one another. Somehow a producer's perspective on what possible production methods might work needs to come into some degree of coordination with a consumer's perspective on what goods are most desirable. I see an opportunity mainly through coming to an understanding of what somebody is "saying" about the world. This means that we need to attend to the economy's conversations, through which we try to explain what we want, what we are trying to do, and so on. Entrepreneurship is fundamentally dependent on the media in which human communication takes place. Creating and sharing electronic information over a network is a fundamentally different communications medium from the creation and sharing of paper documents. The way coworkers and market participants are able to share articulations with one another has been profoundly transformed, especially over the past decade. The theory of entrepreneurship ought to be taking these changes into account.
Computers and modern communications media have undoubtedly contributed to the productivity of labor and other factors. However, it could be that the recent general prosperity is being driven as much by improvements in tools for the exercise of the general coordinative function of entrepreneurship as by measurable increases in the productivity of specific factors of production: labor, capital and land. This kind of orientative contribution of the tools would be difficult to measure in productivity studies, but it may represent one of the main driving forces of wealth. The claim of the knowledge management literature is that certain innovations in information technology are transforming the contemporary workplace and marketplace, and I am arguing that these kinds of changes may be propelling the entrepreneurial process. One of the fundamental changes that information technology may be bringing about is an enhancement of the ability of market participants subjectively to orient themselves to one another and to their world. In other words, something that sounds utterly subjective and mental, the way people orient themselves to one another, and in particular to one another's articulations, may be what underlies the very real, objective improvements in living conditions in the new economy. We make ourselves wealthier by making ourselves better understood.
42 Don Lavoie
Notes

The author would like to thank Bill Tulloh for editing this draft from a longer version of the paper, and Mark Miller and Virgil Storr for helpful comments on the earlier draft.

1 I refer to the contemporary Austrian school in the sense elaborated in Karen Vaughn's book Austrian Economics in America, spanning the period from the work of Carl Menger in the 1870s, through the writings in Europe of Ludwig Mises, Friedrich Hayek, and Ludwig Lachmann, all the way to the work of contemporary American Austrians such as Kirzner, Rizzo, Garrison, and Boettke.
2 The Austrian school has contributed to misunderstanding here. Friedrich Wieser called subjectivism a psychological method, and there are lapses into psychologistic language in all the Austrian writings on subjectivism. If taken seriously, this psychologism would lead to the position László Csontos (1998) calls methodological solipsism. For an attempt to sort out the legitimate from the illegitimate meanings of the words subjective and objective, see my "The progress of subjectivism" (1991a).
3 See Kirzner (1973, 1979, 1985).
4 In my paper "The discovery and interpretation of profit opportunities: Culture and the Kirznerian entrepreneur" (1991b) I developed this point about the culturelessness of Kirzner's notion of the entrepreneur. See also Lavoie and Chamlee-Wright (2000).
5 One objection to this treatment is that ordinary discourse is merely talk, whereas market exchange is a form of action. Two agents who talk to one another are not actually doing anything, whereas two agents who engage in a market transaction are accomplishing something; in particular, they are altering property ownership and thus shifting the whole framework in which future action can take place. But can we really maintain this distinction between communicating and acting? From the Austrian point of view a "merely verbal" exchange is every bit as much a matter of doing something as an exchange of ownership.
6 The point is not to belittle the differences between markets and other communicative processes. I agree that there are important differences here. Of particular importance is that prices make possible a process of economic calculation and the imputation of value appraisements across different stages of the structure of production. There is no way that non-price language can convey the relative value of higher-order goods to entrepreneurs who are several stages away from those goods in the capital structure. I do not want to slight such differences as these, but I would insist that it can be very useful to think of market processes as a particular kind of linguistic communicative process whose similarities to verbal and written discourses are as important as the differences.
7 Brice Wachterhauser (1999) puts the point this way in his recent book, a summary of Hans-Georg Gadamer's philosophical position.
8 Böhm-Bawerk ([1884] 1959) deployed this term in his classic work on capital, as did some other Austrians. It has been thought to be misleading in that it suggests that there would be a single dimension, length, in which to measure the extension of the capital structure. For a useful contemporary elaboration of Austrian capital theory, see Lewin (1999).
9 I get the idea of social learning, if not the phrase, from Hayek, and I have tried to elaborate the idea in my books on the critique of central planning (Lavoie 1985a, 1985b).
10 As with capital, in a textual structure everything is shaped by the meanings things have for the participants, in this case the writers and readers, but the structure is nevertheless real.
11 There is more going on here than an analogy between capital and texts. As Baetjer's book Software As Capital points out, a significant and growing part of the capital structure of the economy is in the form of software tools whose function is precisely the structuring of articulations.
12 I will be drawing on scholars such as Walter Ong, Jack Goody, Elizabeth Eisenstein, Henri-Jean Martin, and Roger Chartier on the way the evolution of writing, the printing press, and the rise of literacy have transformed human society. This literature has been largely neglected by economists, and the neglect goes both ways: the literature on the history of writing has said little explicitly about the impact of writing on economic processes per se, although it is often easy enough to see the implications of broader changes for economic phenomena.
13 See Lavoie 2001, "Subjective orientation and objective wealth: Entrepreneurship and the evolution of the structure of 'text,'" for a more in-depth treatment of these issues.
14 These authors are taking up the same questions the historians of writing were raising about the differences between oral and written culture; however, they are not historians. Like many of the writers in the knowledge management literature whom I will discuss in the next section, they are more practitioners than academics. They are a group of software engineers who have worked extensively on developing groupware and hypertext capabilities.
15 See, for example, Brynjolfsson's often cited paper "The productivity paradox of information technology: review and assessment" (1993). See also the articles and books at the knowledge management web site: http://www.strassmann.com.
16 The book includes a set of influential articles that were published in the 1990s in some of the leading organizational journals, such as Organization Science, Sloan Management Review, and California Management Review.
17 Several of the authors discussed earlier who have elaborated on the history of writing, such as Martin, when they do talk about contemporary technological media, use the language of information and data processing.
18 See also Eric Drexler (1991) for a summary of the implications for scientific work of a full hypertext publishing system along the lines of Nelson's vision.
19 Thus when Nonaka (1994: 35) coins the term the "hypertext organization," trying to draw out aspects of the hypertextual organization of text that have analogies in the organization of organizations, he is emphasizing this convergence. He draws attention not only to the intertextual structures of text, but also to the interpersonal advantages of sharing structured text: Hypertext . . . links related concepts and areas of knowledge to allow a problem to be viewed from many angles. In many ways, this is analogous to the ability of individuals to relate stories in different ways according to the nature of the audience. The same knowledge might be used but in different formats, making it easier to draw relationships between different sets of information. The core feature of the hypertext organization is the ability to switch between various "contexts" of knowledge creation to accommodate changing requirements from situations both inside and outside the organization.
20 The Miller et al. coauthors I have cited here were among the developers who were trying to implement Nelson's vision.
21 When Tim Berners-Lee (1999), who could be called the inventor of the Web, describes the early inspiration for the hypertext markup language, he places emphasis on the need he and others at CERN had to facilitate collaborative work.
22 Or we duplicate copies of information into different filing locations in order to improve the odds of finding what we are looking for, but that leads to other serious problems in trying to keep the copies consistent when the information changes.
References

Baetjer, H. (1998) Software As Capital: An Economic Perspective on Software Engineering, Los Alamitos, CA: IEEE Computer Society.
Berners-Lee, T. and Fischetti, M. (1999) Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web, New York: HarperBusiness.
Böhm-Bawerk, E. von [1884, 1889, 1912] (1959) Capital and Interest, South Holland, IL: Libertarian Press.
Brand, S. (1994) How Buildings Learn: What Happens After They're Built, New York: Penguin.
Brown, J.S. and Duguid, P. (1998) "Organizing knowledge," California Management Review 40(3) (reprinted in D. Smith (ed.) Knowledge, Groupware, and the Internet, Boston: Butterworth-Heinemann, 2000).
Brown, J.S. and Duguid, P. (2000) The Social Life of Information, Cambridge, MA: Harvard Business School Press.
Brynjolfsson, E. (1993) "The productivity paradox of information technology: Review and assessment," Communications of the ACM 36(12): 66–77.
Bush, V. (1945) "As we may think," Atlantic Monthly 176(1): 641–9 (reprinted in T.H. Nelson, Literary Machines, Sausalito, CA: Mindful Press, 1990).
Chartier, R. (ed.) (1989) The Culture of Print: Power and Uses of Print in Early Modern Europe, Princeton, NJ: Princeton University Press.
Chartier, R. (1994) The Order of Books: Readers, Authors, and Libraries in Europe between the Fourteenth and Eighteenth Centuries, trans. L.G. Cochrane, Stanford, CA: Stanford University Press.
Chartier, R. (1995) Forms and Meanings: Texts, Performances, and Audiences from Codex to Computer, Philadelphia: University of Pennsylvania Press.
Csontos, L. (1998) "Subjectivism and ideal types: Lachmann and the methodological legacy of Max Weber," in R. Koppl and G. Mongiovi (eds) Subjectivism and Economic Analysis, New York: Routledge.
Drexler, K.E. (1991) "Hypertext and the evolution of knowledge," Social Intelligence 1(2).
Eisenstein, E.L. (1979) The Printing Press as an Agent of Change: Communications and Cultural Transformations in Early-Modern Europe, New York: Cambridge University Press.
Engelbart, D.C. (1961) "Special considerations of the individual as a user, generator, and retriever of information," American Documentation 12(2) (April).
Gaggi, S. (1997) From Text to Hypertext: Decentering the Subject in Fiction, Film, the Visual Arts, and Electronic Media, Philadelphia: University of Pennsylvania Press.
Goody, J. [1987] (1993) The Interface Between the Written and the Oral, New York: Cambridge University Press.
Kirzner, I.M. (1973) Competition and Entrepreneurship, Chicago: University of Chicago Press.
Kirzner, I.M. (1979) Perception, Opportunity, and Profit: Studies in the Theory of Entrepreneurship, Chicago: University of Chicago Press.
Kirzner, I.M. (1985) Discovery and the Capitalist Process, Chicago: University of Chicago Press.
Lachmann, L.M. (1951) "The science of human action," Economica 18: 412–27 (reprinted in L. Lachmann, Capital, Expectations, and the Market Process: Essays on the Theory of the Market Economy, Kansas City: Sheed, Andrews and McMeel, 1977).
Lachmann, L.M. (1956) Capital and Its Structure, London: Bell and Sons.
Lachmann, L.M. (1971) The Legacy of Max Weber, Berkeley: Glendessary Press.
Lachmann, L.M. (1994) Expectations and the Meaning of Institutions, edited by D. Lavoie, London: Routledge.
Landow, G.P. (1994) Hyper/Text/Theory, Baltimore: Johns Hopkins University Press.
Lavoie, D. (1985a) Rivalry and Central Planning: The Socialist Calculation Debate Reconsidered, New York: Cambridge University Press.
Lavoie, D. (1985b) National Economic Planning: What Is Left?, Cambridge, MA: Ballinger.
Lavoie, D. (1991a) "The progress of subjectivism," in M. Blaug and N. de Marchi (eds) Appraising Modern Economics: Studies in the Methodology of Scientific Research Programmes, Cheltenham, UK: Edward Elgar, pp. 470–86.
Lavoie, D. (1991b) "The discovery and interpretation of profit opportunities: Culture and the Kirznerian entrepreneur," in B. Berger (ed.) The Culture of Entrepreneurship, San Francisco: Institute for Contemporary Studies (trans. into French as Esprit d'Entreprise, Cultures et Societes, Paris: Maxima, 1994).
Lavoie, D. (2001) "Subjective orientation and objective wealth: Entrepreneurship and the evolution of the structure of 'text,'" working paper, School of Public Policy, George Mason University.
Lavoie, D. and Chamlee-Wright, E. (2000) Culture and Enterprise: The Development, Representation, and Morality of Business, New York: Routledge.
Lewin, P. (1999) Capital in Disequilibrium: The Role of Capital in a Changing World, New York: Routledge.
Lucas, H.C. Jr. (1999) Information Technology and the Productivity Paradox: Assessing the Value of Investing in IT, New York: Oxford University Press.
Martin, H.J. (1995) The History and Power of Writing, trans. L.G. Cochrane, Chicago: University of Chicago Press.
Miller, M.S., Tribble, E.D., Pandya, R., and Stiegler, M. (1995) "The open society and its media," in J. Lewis and M. Krummenacker (eds) Prospects in Nanotechnology: Towards Molecular Manufacturing, New York: Wiley.
Mises, L. von [1949] (1966) Human Action: A Treatise on Economics, 3rd edn, revised, Chicago: Henry Regnery Company.
Nelson, T.H. [1974] (1987) Computer Lib/Dream Machines, Redmond, WA: Microsoft Press.
Nelson, T.H. [1980] (1990) Literary Machines, edition 90.1, Sausalito, CA: Mindful Press.
Nonaka, I. (1994) "A dynamic theory of organizational knowledge creation," Organization Science 5(1) (reprinted in D. Smith (ed.) Knowledge, Groupware, and the Internet, Boston: Butterworth-Heinemann, 2000).
Ong, W.J. (1977) Interfaces of the Word: Studies in the Evolution of Consciousness and Culture, Ithaca, NY: Cornell University Press.
Ong, W.J. (1982) Orality and Literacy: The Technologizing of the Word, London: Routledge.
Smith, D.E. (ed.) (2000) Knowledge, Groupware, and the Internet, Boston: Butterworth-Heinemann.
Spinosa, C., Flores, F., and Dreyfus, H.L. (1997) Disclosing New Worlds: Entrepreneurship, Democratic Action, and the Cultivation of Solidarity, Cambridge, MA: MIT Press.
Thorp, J. (1998) The Information Paradox: Realizing the Business Benefits of Information Technology, New York: McGraw-Hill.
Vaughn, K.I. (1993) Austrian Economics in America: The Migration of a Tradition, New York: Cambridge University Press.
Wachterhauser, B.R. (1999) Beyond Being: Gadamer's Post-Platonic Hermeneutical Ontology, Evanston, IL: Northwestern University Press.
Zack, M.H. and McKenney, J.L. (2000) "Social context and interaction in ongoing computer-supported management groups," in D. Smith (ed.) Knowledge, Groupware, and the Internet, Boston: Butterworth-Heinemann.
Zack, M.H. and Serino, M. (2000) "Knowledge management and collaboration technologies," in D. Smith (ed.) Knowledge, Groupware, and the Internet, Boston: Butterworth-Heinemann.
2
Open source software and the economics of organization

Giampaolo Garzarelli
[T]he productivity of social cooperation surpasses in every respect the sum total of the production of isolated individuals.
(von Mises 1981 [1933]: 43)

It should be noted that most inventions will change both the costs of organizing and the costs of using the price mechanism. In such cases, whether the invention tends to make firms larger or smaller will depend on the relative effect on these two sets of costs. For instance, if the telephone reduces the costs of using the price mechanism more than it reduces the costs of organizing, then it will have the effect of reducing the size of the firm.
(Coase 1937: 397, note 3)
Introduction

It is often remarked that innovation in computer technology is profoundly affecting the organization of production and of consumption in contemporary society. For instance, consumers are said to be increasingly participants in the production process, leading to an increase in the modular nature of most products and organizations, and to an increase in the thickness of most markets (e.g. Cox and Alm 1998; Dolan and Meredith 2001). However, it is seldom acknowledged that there is a complementary, and at least equally important, aspect of this technological innovation: its software counterpart (some exceptions are Baetjer 1998 and Lavoie, Chapter 1, this volume). This work attempts to redress this shortcoming by describing some elements of a new type of organization of work, the one generated by open source software development. Open source includes such software success stories as Apache, Perl, Sendmail and Linux. To give but a few recent statistics on the phenomenon – although these figures are subject to frequent fluctuations – as of March 2002 the top web server is Apache, with 53.76 per cent of the market, for a total of 64.37 per cent of all active sites (see Tables 2.1 and 2.2). And according to the Linux Counter, as of 30 April 2002 there are 125,549 registered users and 95,834 registered machines; but the Counter also estimates the worldwide Linux
Table 2.1 Top developers

Developer    February 2002   Per cent   March 2002   Per cent   Change
Apache          22,462,777      58.43   20,492,088      53.76    –4.67
Microsoft       11,198,727      29.13   12,968,860      34.02     4.89
iPlanet          1,123,701       2.92      889,857       2.33    –0.59
Zeus               837,968       2.18      855,103       2.24     0.06

Source: Netcraft.
Note: iPlanet is the sum of sites running iPlanet-Enterprise, Netscape-Enterprise, Netscape-FastTrack, Netscape-Commerce, Netscape-Communications, Netsite-Commerce and Netsite-Communications. Microsoft is the sum of sites running Microsoft-Internet-Information-Server, Microsoft-IIS, Microsoft-IIS-W, Microsoft-PWS-95, and Microsoft-PWS.

Table 2.2 Active sites

Developer    February 2002   Per cent   March 2002   Per cent   Change
Apache          10,147,402      65.18    9,522,954      64.37    –0.81
Microsoft        4,069,193      26.14    3,966,743      26.81     0.67
iPlanet            283,112       1.82      265,826       1.80    –0.02
Zeus               177,225       1.14      170,023       1.15     0.01

Source: Netcraft.
Note: iPlanet is the sum of sites running iPlanet-Enterprise, Netscape-Enterprise, Netscape-FastTrack, Netscape-Commerce, Netscape-Communications, Netsite-Commerce and Netsite-Communications. Microsoft is the sum of sites running Microsoft-Internet-Information-Server, Microsoft-IIS, Microsoft-IIS-W, Microsoft-PWS-95, and Microsoft-PWS.

Table 2.3 Top Linux server vendors

Vendor            Market share %
Compaq            25
IBM               10
HP                 7
Dell               7
Fujitsu Siemens    3
Others            48

Source: IDC, 2000 (Q4 1999 unit shipments), cited in West and Dedrick (2001: Table 2).
users to be 18 million. It is also interesting to note that some of the most successful PC makers have recently also become some of the top Linux server vendors (see Table 2.3).1 The influence of open source both as a business model and as a software development model has been vast (e.g. Release 1.0 1998: 8ff.; DiBona et al. 1999; Rosenberg 2000). For example, Netscape decided to develop an open source browser, Mozilla, in 1998; IBM adopted Apache as the web server for its WebSphere product line; while
Apple ships Apache along with its operating system. And Microsoft, seen by many open sourcers as the ultimate enemy,2 is looking into the possibility of going open source in some products (perhaps because of the pressures originating from the ongoing antitrust litigation) by launching so-called shared source: seemingly, a hybrid of proprietary and open software (D.H. Brown Associates, Inc. 2001).3 The open source philosophy assures a 'self-correcting spontaneous' organization of work that is 'more elaborate and efficient than any amount of central planning could have achieved' (Raymond 2001: 52). By drawing on the recently developed organizational theory of professions and on the classic theory of clubs, the pages that follow attempt to describe how this organization can exist. To this end, it is first of all (and primarily) suggested that the organizational economics of open source software development is so complex that any theory with the ambition to explain it needs to begin by looking at the nature of the knowledge involved in the production and consumption of open source software itself.4 The main advantage of following such a cognitive approach is that it lends itself well to explaining the self-organizing as well as the self-regulating properties of open source economic organization. That is to say, the approach gives solid foundations to the eclectic organizational theory that this exploratory essay proposes. Perhaps the most interesting point that shall emerge is that this 'atypical' organization is at least as good as a firm at sharing rich types of information in real time.5 I submit that the two reasons why this is so are (a) that constituents have symmetry of absorptive capacity, and (b) that software itself is a capital structure embodying knowledge.
Indeed, in this regard, I go so far as to suggest that the distinction between input (knowledge) and output (software) is in some ways amorphous because knowledge and software are not only the common (spontaneous) standards, but also the non-rivalrous network products being shared.
Contextualization

In general, software development is very complicated and involves a substantial amount of experimentation and trial-and-error learning. This renders it a cumulative process in which improvements are incremental rather than radical. Unlike in pharmaceuticals, for example, innovation is not discrete and monolithic, but often builds on previous software. In addition, innovation usually proceeds at a faster pace than in most other industries because numerous individuals simultaneously try multiple approaches to solve the same problem. Clearly, such a process is imbued with uncertainty. But the multiple approaches and the numerous individuals also create a great variety of potential improvements, arguably more than any single individual could, thus increasing the possibilities for success. In turn, the variety leads to new problems and to new trial-and-error learning. What renders all this possible and at the same time makes software so supple is its peculiar nature, namely, its modularity. Modularity is one method to manage
complexity. Programs, especially modern object-oriented ones, are composed of distinct, interacting modules; and it is possible to change part of a module, or an entire module, without knowing all the information about the program the module belongs to, and without altering other modules or the overall purpose of the program (Baetjer 1998). This is possible because through modularization a program hides information within modules while at the same time allowing them to communicate – a principle known as information hiding. Originally introduced by Parnas (1972), information hiding assures that software is extensible, compatible, and reusable.6 According to this principle, in fact, 'system details that are likely to change independently should be the secrets of separate modules; the only assumptions that should appear in the interfaces between modules are those that are considered unlikely to change' (Parnas et al. 1985: 260). Consequently, information hiding stimulates the division and specialization of knowledge, allowing productive knowledge to converge to its most valued use. And all this entails that the ultimate benchmark for assessing the 'efficiency' of a particular piece of software is not so much its ability to perform its tasks as its ability to evolve so as potentially to perform its tasks even better (Baetjer 1998). The traditional, corporate approach to software development is centred on hierarchical relations. Decisions about what software to develop, test or improve come from the top of the hierarchy.
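Parnas's information-hiding principle described above can be made concrete with a minimal sketch. The module and all its names here are hypothetical, chosen only to show a likely-to-change detail kept as the secret of one module behind a small, stable interface.

```python
# Minimal sketch of information hiding (module and names hypothetical):
# the storage layout is the module's secret; callers depend only on the
# interface, so the representation can change without touching them.

class SymbolTable:
    """Public interface: define() and lookup(). How entries are stored
    is deliberately hidden from client code."""

    def __init__(self):
        self._entries = {}            # the secret: today, a dict

    def define(self, name, value):
        """Record a binding; clients assume nothing about storage."""
        self._entries[name] = value

    def lookup(self, name):
        """Return the bound value, or None if the name is undefined."""
        return self._entries.get(name)

# The dict could later become a sorted list or an external store without
# changing define()/lookup(), i.e. without altering other modules.
table = SymbolTable()
table.define("x", 42)
print(table.lookup("x"))  # → 42
```

The interface embodies only the assumptions "considered unlikely to change," in Parnas's phrase, which is what lets many developers work on separate modules with little knowledge of one another's internals.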
Open source software development, in contrast, is practically based on the absence of hierarchy.7 But, as others have pointed out, this does not necessarily imply that open source software projects lack an organizational structure, which can sometimes even be rigid.8 Apparently, it is not rare for an open source project to be terminated in the absence of a meritocratic management structure organizing the development process. Conversely, it is also apparently not rare for a very interesting project to fail to create momentum because of organizational rigidities. What exactly is open source software, then? Tim O'Reilly, founder and CEO of O'Reilly & Associates, a company that publishes many books on open source, offers a concise definition:

Open source is a term that has recently gained currency as a way to describe the tradition of open standards, shared source code, and collaborative development behind software such as the Linux and FreeBSD operating systems, the Apache Web server, the Perl, Tcl, and Python languages, and much of the Internet infrastructure, including Bind (The Berkeley Internet Name Daemon servers that run the Domain Name System), the Sendmail mail server, and many other programs. . . . [But] open source (which is a trademark of the Open Source Initiative) means more than the source code is available. The source must be available for redistribution without restriction and without charge, and the license must permit the creation of modifications and derivative works, and must allow those derivatives to be redistributed under the same terms as the original work.
(O'Reilly 1999: 33–4, emphasis removed)
Open source software 51
Notably, participation in open source projects is voluntary (there is strong self-selection) and supervision is assured on a peer-review basis.9
The origins of open source go back to the so-called hacker culture.10 Hackers are very creative software developers who believe in the unconditional sharing of software code and in mutual help. The advent of the microcomputer diffused this ethos beyond the narrow confines of the academic environments where it originally developed (MIT, Stanford and Carnegie Mellon), and it multiplied digital linkages. In effect, it dematerialized the need to concentrate hackers in specific laboratories, moving their concentration to cyberspace.
Eric Raymond, hacker and author of the very influential open source 'manifesto' The Cathedral and the Bazaar (2001), summarizes the fundamental philosophy underlying the open source community in the context of his discussion of Linux.

Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone. Or, less formally, 'Given enough eyeballs, all bugs are shallow.' I dub this: 'Linus's Law'. In Linus's Law . . . lies the core difference underlying the cathedral-builder and bazaar styles. In the cathedral-builder view of programming, bugs and development problems are tricky, insidious, deep phenomena. It takes months of scrutiny by a dedicated few to develop confidence that you've winkled them all out. Thus the long release intervals, and the inevitable disappointment when long-awaited releases are not perfect. In the bazaar view, on the other hand, you assume that bugs are generally shallow phenomena – or, at least, that they turn shallow pretty quickly when exposed to a thousand eager co-developers pounding on every single new release. Accordingly you release often in order to get more corrections, and as a beneficial side effect you have less to lose if an occasional botch gets out the door.
(Raymond 2001: 30–1, emphasis removed)

Let us try to identify some of the necessary ingredients for an organizational theory of bazaar-style software development.
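Linus's Law admits a simple probabilistic reading (an illustration of mine, not Raymond's own formalization): if each of N independent reviewers spots a given bug with some small probability p, the chance that at least one of them finds it is 1 − (1 − p)^N, which approaches certainty as the co-developer base grows.

```python
def p_bug_found(n_reviewers, p_each=0.01):
    """Probability that at least one of n independent reviewers finds a
    given bug, when each finds it with probability p_each."""
    return 1.0 - (1.0 - p_each) ** n_reviewers

# A handful of 'cathedral' reviewers versus a 'bazaar' of co-developers:
cathedral = p_bug_found(5)      # roughly 0.05
bazaar = p_bug_found(1000)      # close to 1
```

The independence of reviewers is of course a strong assumption; the point is only the qualitative one that detection probability rises steeply with the number of eyeballs.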
The attributes of professions
Deborah Savage, in an innovative piece, proposes the following economic definition of a profession: a 'profession is a network of strategic alliances across ownership boundaries among practitioners who share a core competence' (Savage 1994: 131). The keyword here is competence. As the literature beginning with the resuscitated contributions of Penrose (1995 [1959]) and Richardson (1998: chapter 10) has made clear, production is not as simple as making a soup: it is not merely a question of putting some inputs together (K, L), performing some manipulations f(·) and, voilà, obtaining some output X.11 Rather, it is a complex
process involving abilities, experience, and learning – it is, to put it in more general terms, a cognition-based process encapsulating different routines and capabilities that evolve through time (Nelson and Winter 1982).12 The capabilities involved in producing goods and services are often based on tacit knowledge in the sense of Michael Polanyi (e.g. 1966).13 In the specific case of professions, the knowledge involved is highly tacit and specialized or, to use a catch-all wording, 'esoteric' (Savage 1994: 135–6). This knowledge is the elemental component of professions: it couples competencies, coordinates them, and so on. It therefore offers the rationale for the existence of professions, and it provides for their cohesion and coherence – in a way, for their boundaries as well. In brief, then, professional capabilities are a form of capital representing the productive essence of the network, and more generally coevolving with the network itself. To change viewpoint on the matter: absent esoteric knowledge, professionals and, a fortiori, their coupling would not exist. We apparently face a situation where the division of knowledge (Hayek 1948: chapter 2) drives the division of labour. Professions – like most other organizational forms – then exist for epistemic reasons or, what boils down to the same thing, for Hayekian (qualitative) coordination, that is, for coordination beyond mere price and quantity (compare Malmgren 1961).14 An important corollary is that the fairly symmetric nature of capabilities present in professions assures that 'absorptive capacity' (Cohen and Levinthal 1990) – i.e. a stock of similar capabilities – is virtually always present.15
Indeed, it is the spontaneous orchestration of knowledge generated by the symmetry of absorptive capacity that creates a profession's complex self-organization – with, notably, an absence of ownership structure16 – as well as its external economies, such as uncertainty reduction, mutual monitoring, incentive alignment, trust and, most important for our discussion (as we shall see shortly), reputation. These characteristics do not mean, however, that each professional has exactly the same capabilities; otherwise we would not be in the presence of complex self-organization. Rather, the point is that the capabilities share a rudimentary (esoteric) knowledge base – the core competence noted above – that affords the absorptive capacity, a necessary condition for spontaneous self-organization.
In sum, the general organizational implications of Savage's theory of professions are considerable. The most germane for our purposes seem to be the following:
• The theory allows us to define narrowly the area of operation of a profession because of its emphasis on core competencies – for example, pharmaceuticals, software, semiconductors, etc. – around which other capabilities and routines evolve and revolve.
• It allows us to distinguish professions from other forms of organization, such as firms, because integration of ownership is not a conditio sine qua non.
• Professionals are autonomous and authoritative in their fields, for their competencies allow them, on the one hand, 'to solve routine problems easily and non-routine problems routinely' (Savage 1994: 140) and, on the other, enable them to evaluate, and only be challenged by, other professionals. More concretely, they are independent yet interact in a coordinated and fertile fashion.
• Professions are decentralized networks in that there is no central authority in command.17 The 'organization' of a profession is guaranteed by the exchange of knowledge, which reduces uncertainty and stimulates trust among members. Professions are thus self-organizing.
• Relatedly, there is the role played by reputation as a signal of quality, viz., reputation is a positive externality. Thus professions can be interpreted as self-regulating organizations (a point we shall return to below).
The organizational workings of professions seem to me to approximate well some of the characteristics of the bazaar-style market for ideas that Raymond (2001) depicts in his descriptive analysis of open source. Similarly to open source, in fact, a profession is a capital network investing in network capital. Interestingly, these workings also accord with the observation by Lerner and Tirole (2000: 8–9) that the core of the entire open source community seems to lie in the sophisticated individuals who compose it. Yet the theory of professions illuminates mainly one facet of open source: the supply side. In order to offer a more complete story of open source, we need to extend the theoretical framework of professions to incorporate demand-side considerations more explicitly. We also need to endogenize technology. To this end, it seems necessary to bring the theory of professions together with that of clubs, and to consider the role played by technology.
The additional dynamics of a club
In a seminal article published in 1965, 'An economic theory of clubs', Buchanan described and formalized the institutional properties of a new category of good (or product) lying between the public and private polar extremes, conventionally called a shared good. The good is usually enjoyed only by members participating in a voluntary association – i.e. a club – whose membership may be regulated by dues. The theory of clubs, in a nutshell, studies the different institutional arrangements governing the supply and demand of the shared good.
Since then, the vast literature on clubs has mostly devoted itself to the study of positive and normative issues at the macro-level – for example, decentralization of government and fiscal federalism. But there have also been a few studies concerned with the firm. In particular, Antonelli and Foray (1992) propose a theory of technological cooperation among firms called 'technological clubs'. By means of a simple comparative-static model, they suggest that firms will cooperate in technological endeavours only if the benefits of cooperation outweigh the costs. This is the traditional result we would expect from familiar club models, where the amount of shared good available decreases as the number of users increases (cf. Buchanan 1965: 2–6).
But, interestingly, Antonelli and Foray also underline that this logic is reversed in the case of network products, namely, when 'the performance of the product as well as its utility increases with the increase of the community of users' (Antonelli and Foray 1992: 40). If there are, for example, possible network effects generated by the output, by the process of production or by the technology of production (or by all of these), the familiar exclusion/congestion effects caused by increased club membership may not hold. Because of the standardization that a network product necessarily requires, the possible exclusion/congestion effects generated by increased membership may be overcome by 'the overall growth of the aggregate demand for the production induced by network [effects]'. Therefore, 'the trade-off of the technological cooperation is reversed and now [a] firm [may choose whether] to enter a [technological club] and to standardize its own products according to the ratio between marginal costs of standardization and the marginal revenues of standardization' (Antonelli and Foray 1992: 43).
This raises the question of what the shared good is in our case. In the open source world there seems to be more than one shared good: the software as well as the capabilities of production and of consumption. In the light of our discussion so far, this claim should not be too surprising because, first, software per se is an ever-evolving capital structure that embodies knowledge (Baetjer 1998) and, second, in the open source community both software and capabilities are non-rivalrous (Raymond 2001).18 Indeed, compared with proprietary software, open source would seem to assure an even more productive capital structure because of the free availability of the source code.
Knowledge and software are then not only the common (spontaneous) standards, but also the network products.19 Now, were we in the presence of a more traditional organizational structure – such as one with a non-network product – we would have a congestion problem arising from the difficulty of capability transfer (compare Ellig 2001). But because, as we noted, in open source esoteric knowledge is in reality held in common, congestion is actually determined by the technological state of the art. (We shall return to technology presently.)
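The Antonelli–Foray reversal can be caricatured in a comparative-static sketch (the functional forms and parameter values are mine, chosen purely for illustration): in a conventional club the per-member benefit of a rivalrous shared good is eroded by congestion as membership grows, while for a network product each additional member raises the value of membership.

```python
def conventional_club_benefit(n, resource=100.0, congestion=0.5):
    """Per-member benefit of a rivalrous shared good: the resource is
    divided among n members, and congestion costs rise with crowding."""
    return resource / n - congestion * n

def network_club_benefit(n, link_value=0.1, standardization_cost=2.0):
    """Per-member benefit of a network product: each member gains from
    every other member (Metcalfe-style), net of a fixed cost of
    adopting the common standard."""
    return link_value * (n - 1) - standardization_cost
```

In the first case the benefit falls monotonically in n, so the club has a finite optimal size; in the second it rises with n, which is the reversed trade-off the text describes.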
Reputation and shared context
These observations lead to another interesting issue. Open sourcers, we saw, are not in the trade to maximize profits. Although their initial motivation to modify a program may originate in sheer need,20 what their sharing maximizes, as Raymond repeatedly emphasizes, is reputation; that is, attempting to deliver an ever better product maximizes reputation.21 Algebraically, we can illustrate the process in terms of quality improvements as follows:
q_t(S) = f(q_{t-1}(S), K_t, H_t, C_t)
where q(S) is the quality of software S, K is knowledge, H is complementary hardware, C is complementary software, and t is a time index. The utility (U) function of the open sourcers is:

U_t(S_q) = f(q_t, N_t(S_q))

where N is the number of users of software S, q is quality, and t is a time index.
The qualitative property that this trivial illustration is trying to convey is the following. The endogeneity of reputation, captured by the quality of the software, increases the user base (a positive externality) and the 'utility' of the open sourcers on both the demand and the supply side. This implies that many traditional organizational stories centred on, for example, incentive alignment, monitoring, opportunism, and ownership structures are at best incomplete, for they neglect true Marshallian external economies (or, if you prefer, knowledge spillovers) that act as, for example, self-regulatory monitoring and coordinating systems. The 'poor beleaguered conventional manager is not going to get any [succour] from the monitoring issue; the strongest argument the open source community has is that decentralized peer review trumps all the conventional methods for trying to ensure that details don't get slipped' (Raymond 2001: 59, original emphasis). Interestingly, the discussion has brought us back to Hayek and to the problem of knowledge and its dispersion (Hayek 1948: chapters 2 and 4; Jensen and Meckling 1998 [1992]). That is to say, sometimes shared context may count more than hierarchy for the 'efficient' organization of production and exchange (Ghoshal et al. 1995).
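The two reduced-form relations above can be animated in a toy simulation (the linear functional forms and parameter values are assumptions of mine, chosen only to exhibit the feedback): higher quality enlarges the user base, and a larger base of contributing users feeds back into quality.

```python
def simulate(q0=1.0, n0=10.0, alpha=0.02, beta=0.5, periods=20):
    """Iterate q_t = q_{t-1} + alpha * n_{t-1} (users contribute fixes,
    a stand-in for K_t) and n_t = n0 + beta * q_t (reputation attracts
    users). Returns the quality and user-base paths."""
    qs, ns = [q0], [n0]
    for _ in range(periods):
        q = qs[-1] + alpha * ns[-1]  # quality improves with contributors
        n = n0 + beta * q            # better software draws more users
        qs.append(q)
        ns.append(n)
    return qs, ns
```

Both series rise monotonically: the positive externality on the demand side reinforces the supply-side incentive, which is the reputational feedback the text describes.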
On the role of technology
The reader will have noticed by now that I have not yet said much about information and communication technology. We have talked about it in a standard comparative-static fashion, but I have been deliberately vague about its endogenous role. This topic is briefly considered here.
Arguably, the role played in our story by information and communication technology is one of 'technological convergence' (Rosenberg 1963). Or, to update Rosenberg's notion somewhat, hardware and software represent a 'general purpose technology' (GPT) (e.g. Bresnahan and Trajtenberg 1995). Put simply, a GPT usually emerges to solve a very narrow problem, yet in time its uses diffuse to many other areas of application. For example, we have passed from the PDP-1 minicomputer at MIT in 1961, to the Defense Department's Advanced Research Projects Agency (DARPA), which created ARPANET, the first computer network, to today's personal computer and the Internet.
It would seem that even though GPTs have greatly facilitated collaboration among a great variety and number of individuals, some of the economic interactions are very similar to what other forms of organization, such as professions and clubs,
already do. But thanks to online interactions in real time, it would also seem that GPTs might ultimately give a comparative advantage to a profession/club mode of organization over a hierarchical one. Indeed, a fundamental reason why classical markets often do not work well resides in the need to share rich information in real time (compare, e.g., Kogut and Zander 1992; Langlois and Robertson 1995).22 In the specific case of open source, it appears that the transition to cyberspace for the sharing of rich information in real time was, as it were, smooth because there already existed a more-or-less well-defined core competence and culture.
Policy considerations
The first point to underline is probably that open source spontaneously solves the two fundamental organizational problems defined by Jensen and Meckling (1998 [1992]: 103): 'the rights assignment problem (determining who should exercise a decision right), and the control or agency problem (how to ensure that self-interested decision agents exercise their rights in a way that contributes to the organizational objective)'. When specialized knowledge is symmetric, we saw, open source spontaneously solves the agency problem by means of external economies (Savage 1994, 2000). And, just as Coase (1960) taught us, the ultimate result of this spontaneous interaction is a 'collocation' of production (and consumption) knowledge and decision power in the hands of those who value it most (Jensen and Meckling 1998 [1992]; Savage 2000).23 If this is so, then open source is not only a self-organizing organization but also a self-regulating one where, as Savage (2002: 19) points out, 'self-regulation . . . means coordination of economic activity through voluntary association in an interdependent network, without interference from the government, and without resort to hierarchy'.24
Whenever organizational forms change rapidly because of their strong ties to technology, public policy issues are thornier than usual. Indeed, historically, it seems that every time a new technology or production process is developed, the government intervenes in some fashion to regulate it or to extract rents from it. This point is well encapsulated in the well-known catch-phrase attributed to Faraday. Asked by a politician in 1831 what his recently discovered principle of magnetic induction was good for, Faraday supposedly replied: 'Sir, I do not know what it is good for. However, of one thing I am quite certain, some day you will tax it'.
Since open source, which developed successfully in an environment of little government presence, has generated benefits well beyond its organizational boundaries,25 the implications for policy are quite clear. The government must be sensitive to economic activity that is spontaneously productive, and one way to guarantee this sensitivity is to preserve spheres of autonomy.26 Indeed, any intervention may suffocate the very motivation that drives these types of organization. At the same time, however, this is not to say that we should not think about the possibility of defining some Hayekian abstract rules for interaction among new organizations, as well as between new and more traditional types of organization (see Savage 2002).27
Conclusions and suggestions for future research
In a somewhat desultory fashion, I have attempted to describe some general organizational characteristics of bazaar-style software development. During the description a substantive lesson emerged: the spontaneous organization of work appears to be facilitated mostly in those cases where the constituents already share productive knowledge to some degree or, if you like, where knowledge is already a standard. In the case of open source this is a fortiori so, in that not only is the input (the esoteric knowledge) a sufficient statistic because of common absorptive capacity, but the output (the software, a network product) is itself essentially a standard-setting knowledge structure whose use is vast and whose reorganization is infinite. Relatedly, this suggests that in our story new information and communication technology – even if now virtually indispensable – mostly performed the role of propagator rather than that of originator.
Without wanting to make too much of the point, we should also notice that, thanks to new technology, the organizational economics of open source now seem close to those of the putting-out system.28 The critical organizational difference between the putting-out system and the bazaar is that in the latter there is no external authority controlling production.29 That is to say, the organization of work in open source is one where the labour force is dispersed but connected by means of new technology, and where product supervision is (spontaneously) assured by reputation effects.30
The economics of open source is very complex, and my analysis has only scratched the surface.
Future studies should more closely scrutinize the relation between the structure of software – namely, its modularity – and the organizational structure delineated in the previous pages.31 Further, they could explore the impact of the type of licensing agreement on organizational form, and study the relationship between the legal quandaries linked to traditional software32 and their possible implications for open source and its organization. All these avenues of investigation would naturally lead to the very interesting issue of the origin and evolution of organizational form,33 as well as to that of the trade-off between organizational coherence and flexibility.
Notes
I thank Cristiano Antonelli, Metin M. Cosgel, Steven R. Cunningham, George N. Dafermos, Roger Koppl, Jorge L. Laboy-Bruno, Yasmina Reem Limam, and Mark S. Miller. My intellectual debt to Richard N. Langlois is beyond the few references to his work. The paper was written while I was visiting the Economics Department of the University of Connecticut, Storrs, USA.
1 To be more precise, 'Linux' refers to the kernel of the operating system, while the entire project is called 'GNU/Linux' because it contains parts of the GNU project, started by Richard Stallman in 1984. See, for example, the chapter by Stallman in DiBona et al. (1999: 53–70).
2 See, for example, cWare (nd); but compare Eunice (1998).
3 For Microsoft's reaction to the open source phenomenon see the 'Halloween
documents' (some internal Microsoft memoranda that try to assess competition from open source and that later became public): (accessed 10 February 2000).
4 Take note that by 'consumption of open source' I refer to consumption on the supply side: consumption by open source producers. I do not consider, in other words, 'downstream' consumption of open source, that made by individuals using open source who are not at the same time involved in its production; even if these downstream consumers may make suggestions to the open source community about how to improve the software.
5 'Real time' in the computer science sense of being able to do, evaluate or react to things as they are happening, without (much) delay. Two classic examples of real-time behaviour are a telephone conversation and software that constantly tracks weather conditions in order to offer forecasts. For an organizational application of this notion see Langlois and Robertson (1995: chapter 3).
6 But compare Brooks (1975: 78ff.).
7 In some cases there still is some authority, however. In the case of Linux, for example, Linus Torvalds (or a close collaborator) decides which software code to accept into the Linux kernel.
8 Notably George N. Dafermos in private communication with the author.
9 There are several licenses governing open source. Analyzing these in detail necessitates a study of its own. See especially DiBona et al. (1999: appendix B), Rosenberg (2000: chapters 6 and 7 and appendix A) and Raymond (2001: 73ff.).
10 See in particular Raymond (2001: 1–17, 169–91).
11 The image is Leijonhufvud's (1986: 203).
12 To clarify, next to competence the literature also speaks of 'routines' (Nelson and Winter 1982) and 'capabilities' or 'dynamic capabilities' (Langlois and Robertson 1995). Stricto sensu, routines are what an organization does; they are the economic equivalent of biological genes, an economic memory. Capabilities/dynamic capabilities are what an organization can do, e.g. if circumstances change and redeployment of resources takes place – they are directly complementary to competencies. Competencies are the core abilities that an organization possesses, i.e. what an organization specializes in depends on its competence (although, in time, competencies may change). To schematize: routines ∈ capabilities ∈ competencies. Yet these categories are not mutually exclusive, as the (illustrative) classification might suggest; in fact, all the notions are quite slippery.
13 Polanyi's tacit dimension, which is opposed to the explicit one, is akin in many ways to Ryle's (1971 [1946]) dichotomy between 'knowledge that' (explicit) and 'knowledge how' (tacit); the distinction made by de Solla Price (1965) between technological (how) and scientific (why) knowledge is also relevant here.
14 A point of view, incidentally, compatible with Coase's (1937) original story; see, for example, Langlois and Robertson (1995) and Garzarelli (2001).
15 Although not widely remarked upon (an exception is Langlois and Robertson [1995]), this is, in effect, the flip-side of competencies, at least in normal periods of production and exchange, i.e. those involving little radical innovation.
16 Contra Hansmann (1996).
17 See especially Langlois and Robertson (1995: chapter 5).
18 The suggested idea of the sharing of capabilities shares some of the properties of user-based innovation and user-to-user assistance described in Lakhani and von Hippel (2000).
19 On open standards and networks compare, for example, Garud and Kumaraswamy (1993) and West and Dedrick (2001).
20 Indeed, this is the first lesson offered by Raymond: 'Every good work of software starts by scratching a developer's personal itch' (Raymond 2001: 23, emphasis removed).
21 For example, the '"utility function" Linux hackers are maximizing is not classically economic, but is the intangible reward of their own ego satisfaction and reputation among other hackers' (Raymond 2001: 53).
22 As Raymond (2001: 224, note 10) observes in a related context, the 'open source community, organization form and function match on many levels. The network is everything and everywhere: not just the Internet, but the people doing the work form a distributed, loosely coupled, peer-to-peer network which provides multiple redundancy and degrades very gracefully. In both networks, each node is important only to the extent that other nodes want to cooperate with it'.
23 Miller and Drexler (1988), in a classic essay that greatly influenced Raymond (2001: 225–6), make a similar point. Compare also Baetjer (1998).
24 Compare Raymond (2001: 57ff.).
25 The 'greatest economic contribution' of open source technologies 'may be the new services that they enable' (O'Reilly 1999: 34). The 'Red Hat Wealth Monitor' keeps track of the profits made by Red Hat thanks to open source software, an effort undertaken to encourage reinvestment in the community in order to try to generate even more wealth. Visit: (accessed 21 March 2000). See also Release 1.0 (1998); and compare Lavoie, Chapter 1, this volume.
26 For instance, '[w]eb computing fundamentally depends upon open access because more contacts lead exponentially to more potential value creation. For example, Bob Metcalfe, inventor of Ethernet technology, asserts [that] the value of any number of interconnections – computers, phones, or even cars – potentially equals the square of the number of connections made' (D.H. Brown Associates, Inc. 2000: 8). This generalization should be readily contrasted with 'Brooks's Law' (Brooks 1975). As Raymond acknowledges, in 'The Mythical Man-Month, Fred Brooks observed that programmer time is not fungible; adding developers to a late software project makes it later. He argued that the complexity and communication costs of a project rise with the square of the number of developers, while work only rises linearly. This claim has since become known as "Brooks's Law" and is widely regarded as a truism. But if Brooks's Law were the whole picture, Linux would be impossible' (Raymond 2001: 49–50; see also 220–1, note 4). Compare Langlois (2001).
27 In many ways, this is the classic problem of increasing complexity as the division of labour increases, but in a new guise. See especially Leijonhufvud (1989) and Baetjer (1998).
28 Compare Leijonhufvud (1986) and Langlois (1999).
29 Or, to put it more precisely, authority is still present, yet it is internal in the sense that it is among peers, i.e. its legitimacy is mostly achieved through reputation.
30 This impression should be contrasted with more traditional comparative-static stories assuming exogenous inputs and outputs – hence constant technology and no knowledge combinatorics or growth (e.g. Gurbaxani and Whang 1991; Malone et al. 1987; Picot et al. 1996). These approaches are mostly influenced by the work of Williamson (e.g. 1985) for both method (i.e. the trichotomy among market, hybrid and hierarchy) and core variables (the transaction, asset specificity, contractual safeguards and opportunism); but compare also Williamson (1996).
31 Kogut and Turcanu (1999) emphasize the importance of modularity. For theories of organizational and technological modularity see, for example, Garud and Kumaraswamy (1993), Langlois and Robertson (1995: chapter 5) and Langlois (2001, 2002).
32 On which compare Samuelson et al. (1994).
33 An issue that is relevant to the whole of this chapter. The Mozilla project by Netscape Communications, Inc. – which released the code of Netscape Communicator on 31 March 1998 (the first proprietary code to become open) – would be an interesting case study. See, for example, Rosenberg (2000: 33–8) and Raymond (2001: 61–3, 169–91). Also see: (accessed 31 May 2001).
References
Antonelli, C. and Foray, D. (1992) 'The economics of technological clubs', Economics of Innovation and New Technology 2: 37–47.
Baetjer, H. Jr. (1998) Software as Capital: An Economic Perspective on Software Engineering, Los Alamitos, CA: IEEE Computer Society.
Bresnahan, T.F. and Trajtenberg, M. (1995) 'General purpose technologies: "engines of growth"?', Journal of Econometrics 65(1): 83–108.
Brooks, F.P. Jr. (1975) The Mythical Man-Month: Essays on Software Engineering, Reading, MA: Addison-Wesley.
Buchanan, J.M. (1965) 'An economic theory of clubs', Economica, N.S. 32(125): 1–14.
Coase, R.H. (1937) 'The nature of the firm', Economica, N.S. 4(16): 386–405.
Coase, R.H. (1960) 'The problem of social cost', Journal of Law and Economics 3 (October): 1–44.
Cohen, W.M. and Levinthal, D.A. (1990) 'Absorptive capacity: a new perspective on learning and innovation', Administrative Science Quarterly 35: 128–52.
Cox, M.W. and Alm, R. (1998) The Right Stuff: America's Move to Mass Customization, Federal Reserve Bank of Dallas Annual Report. Online. Available HTTP: (accessed 14 April 2001).
cWare (nd) The Linux Storm. Online. Available HTTP: (accessed 19 March 2000).
D.H. Brown Associates, Inc. (2000) Technology Trends Monthly, May newsletter. Online. Available HTTP: (accessed 8 July 2000).
D.H. Brown Associates, Inc. (2001) Technology Trends Monthly, June newsletter. Online. Available HTTP: (accessed 9 August 2001).
de Solla Price, D.J. (1965) 'Is technology historically independent of science? A study in statistical historiography', Technology and Culture 6(4): 553–67.
DiBona, Ch., Ockman, S., and Stone, M. (eds) (1999) Open Sources: Voices from the Open Source Revolution, Sebastopol, CA: O'Reilly & Associates, Inc. Also online. Available HTTP: (accessed 21 July 2000).
Dolan, K.A. and Meredith, R. (2001) 'Ghost cars, ghost brands', Forbes (April 30). Online. Available HTTP: (accessed 7 May 2001).
Ellig, J. (2001) 'Internal markets and the theory of the firm', Managerial and Decision Economics 22(4–5): 227–37.
Eunice, J. (1998) Beyond the Cathedral, Beyond the Bazaar (May 11). Online. Available HTTP: (accessed 19 March 2000).
Garud, R. and Kumaraswamy, A. (1993) 'Changing competitive dynamics in network industries: an exploration of Sun Microsystems' open systems strategies', Strategic Management Journal 14: 351–69.
Garzarelli, G. (2001) 'Are firms market equilibria in Hayek's sense?', Centro di Metodologia delle Scienze Sociali working paper No. 74, Rome: Luiss-Guido Carli.
Ghoshal, S., Moran, P., and Almeida-Costa, L. (1995) 'The essence of the megacorporation: shared context, not structural hierarchy', Journal of Institutional and Theoretical Economics 151(4): 748–59.
Gurbaxani, V. and Whang, S. (1991) 'The impact of information systems on organizations and markets', Communications of the ACM 34(1): 59–73.
Hansmann, H. (1996) The Ownership of Enterprise, Cambridge, MA: The Belknap Press of Harvard University Press.
Hayek, F.A. von (1948) Individualism and Economic Order, Chicago: University of Chicago Press.
Jensen, M.C. and Meckling, W.H. (1998) 'Specific and general knowledge and organizational structure', in M.C. Jensen (ed.) Foundations of Organizational Strategy, Cambridge, MA: Harvard University Press. Originally published in W. Lard and H. Wijkander (eds) (1992) Contract Economics, Oxford: Blackwell.
Kogut, B. and Turcanu, A. (1999) 'The emergence of e-innovation: insights from open source software development', Wharton School paper, University of Pennsylvania (November 15). Online. Available HTTP: (accessed 25 February 2001).
Kogut, B. and Zander, U. (1992) 'Knowledge of the firm, combinative capabilities, and the replication of technology', Organization Science 3(3): 383–97.
Lakhani, K. and Hippel, E. von (2000) 'How open source software works: "free" user-to-user assistance', MIT Sloan School of Management working paper 4117 (May).
Langlois, R.N. (1999) 'The coevolution of technology and organization in the transition to the factory system', in P.L. Robertson (ed.) Authority and Control in Modern Industry: Theoretical and Empirical Perspectives, London: Routledge.
Langlois, R.N. (2001) The Vanishing Hand: The Changing Dynamics of Industrial Capitalism (August 7), Centre for Institutions, Organizations, and Markets working paper 2001–01, University of Connecticut, Storrs. Online. Available HTTP: (accessed 12 August 2001).
Langlois, R.N. (2002) 'Modularity in technology and organization', Journal of Economic Behavior and Organization 49(1): 19–37.
Langlois, R.N. and Robertson, P.L. (1995) Firms, Markets, and Economic Change: A Dynamic Theory of Business Institutions, London: Routledge.
Leijonhufvud, A. (1986) 'Capitalism and the factory system', in R.N. Langlois (ed.) Economics as a Process: Essays in the New Institutional Economics, New York: Cambridge University Press.
Leijonhufvud, A. (1989) 'Information costs and the division of labour', International Social Science Journal 120 (May): 165–76.
Lerner, J. and Tirole, J. (2000) 'The simple economics of open source', NBER working paper 7600 (March), Cambridge, MA: NBER.
Malmgren, H.B. (1961) 'Information, expectations and the theory of the firm', Quarterly Journal of Economics 75(3): 399–421.
Malone, Th.W., Yates, J., and Benjamin, R.I. (1987) 'Electronic markets and electronic hierarchies', Communications of the ACM 30(6): 484–97.
Miller, M.S. and Drexler, K.E. (1988) 'Markets and computation: agoric open systems', in B.A. Huberman (ed.) The Ecology of Computation, Amsterdam: North-Holland. Also online. Available HTTP: (accessed 5 June 2001).
Mises, L. von (1981) Epistemological Problems of Economics, New York and London: New York University Press (originally published 1933).
Nelson, R.R. and Winter, S.G. (1982) An Evolutionary Theory of Economic Change, Cambridge, MA: The Belknap Press of Harvard University Press.
O'Reilly, T. (1999) 'Lessons from open source software development', Communications of the ACM 41(4): 33–7.
62 Giampaolo Garzarelli Parnas, D.L. (1972) ‘On the criteria to be used in decomposing systems into modules’, Communications of the ACM 15(12): 1053–8. Parnas, D.L., Clemens, P.C., and Weiss, D.M. (1985) ‘The modular structure of complex systems’, IEEE Transactions on Software Engineering 11(3): 259–66. Penrose, E.T. (1995) The Theory of the Growth of the Firm, 3rd edn, with a new Foreword by the author, Oxford: Oxford University Press (originally published in 1959). Picot, A., Ripperger, T., and Wolff, B. (1996) ‘The fading boundaries of the firm: the role of information and communication technology’, Journal of Institutional and Theoretical Economics 152(1): 65–79. Polanyi, M. (1966) The Tacit Dimension, London: Routledge. Raymond, E.S. (2001) The Cathedral and the Bazaar. Musings on Linux and Open Source by an Accidental Revolutionary, revised edn, Sebastopol, CA: O’Reilly & Associates, Inc. Also online. Available HTTP: (accessed 5 February 2000). Release 1.0, The Open-source Revolution (1998, 19 November). Online. Dead HTTP: (accessed February 10, 2000); now available in PDF format from HTTP: (accessed 19 April 2002). Richardson, G.B. (1998) The Economics of Imperfect Knowledge: Collected Papers of G.B. Richardson, Cheltenham, UK: Edward Elgar Publishing, Inc. Rosenberg, D.K. (2000) Open Source: The Unauthorized White Papers, Foster City, CA: M and T Books. Rosenberg, N. (1963) ‘Technological change in the machine tool industry, 1840–1910’, Journal of Economic History 23(4): 414–43. Ryle, G. (1971) ‘Knowing how and knowing that’, in Ryle, G. (ed.) Collected Papers. Volume II: Collected Essays, 1929–1968, New York: Barnes and Noble, Inc (originally published in Proceedings of the Aristotelian Society 46 (1946)). Samuelson, P., Davis, R., Kapor, M.D., and Reichman, J.H. (1994) ‘A manifesto concerning the legal protection of computer programs’, Columbia Law Review 94 (December): 2308–431. Savage, D.A. 
(1994) ‘The professions in theory and history: the case of pharmacy’, Business and Economic History 23(2): 129–60. Savage, D.A. (2002) ‘How networks of professions regulate themselves’, Proceedings of the Fourth Triple Helix Conference, November 6–9, Copenhagen. West, J. and Dedrick, J. (2001) ‘Proprietary vs. open standards in the network era: an examination of the Linux phenomenon’, Proceedings of the Hawaii International Conference on System Sciences (HICSS-34) (January 3–6), Maui, Hawaii, IEEE. Online. Available HTTP: (accessed 5 June 2001). Williamson, O.E. (1985) The Economic Institutions of Capitalism, New York: The Free Press. Williamson, O.E. (1996) ‘The fading boundaries of the firm: comment’, Journal of Institutional and Theoretical Economics 152(1): 85–7.
3
The digital path Smart contracts and the Third World Mark S. Miller and Marc Stiegler
Introduction

Hernando de Soto, in The Mystery of Capital (de Soto 2000), shows that the poor of the world have, in his terminology, assets vastly in excess of their capital. In one study, de Soto's associates surveyed neighborhoods in various poor countries, assessing the value of buildings which were not formally titled. The extrapolated value of just the informally owned buildings in the Third World amounted to $9.3 trillion – more than half the combined value of all publicly traded US companies. In identifying a crucial mystery – the failure of these assets to serve as capital for their owners – de Soto has identified a great opportunity for economic betterment.

De Soto's focus is on the informal sector – that sphere of economic activity that occurs outside the official formal legal system. Most of the economic activity of the Third World's poor occurs in the informal sector. Despite the nonofficial status of the informal systems of laws and property in this sector, they are nevertheless quite real, and form the foundations on which these informal economies function. However, the formal and informal sectors are not otherwise equivalent. The poor pay a great price for informality – most of all in the difficulty of capital formation.

As a simple example, the house you live in, from which no one would attempt to evict you, is an asset. The recognition in your local community of the legitimacy of your claim to the house makes this asset effectively your property. A mortgage on that house would be capital. (In countries that have become rich, mortgages in particular have been a major source of highly decentralized investment, seeding many family businesses.) But just because no one can evict you from your house, this does not mean a bank dares accept it as collateral for a loan. The distinction is one of credibility of property rights transfer at a distance, i.e.
the ability to engage in binding contracts such that the new owners could be confident they could indeed evict you as part of the contract, despite their distance from your community. De Soto hopes to bring the power of capital to the poor by strengthening their property rights in accord with historical precedents in the advanced economies – that is, by reforming and extending the legal systems and bureaucracies of their various national governments. This is no easy task: the benefits of reform flow to
the now-marginalized property owners, but the benefits of inertia rest with the power holders. De Soto argues powerfully that property-system reform should be a primary political objective, and we concur. This chapter, however, explores another path to de Soto's objective, one opened by new technologies of the Net. Given the exponential rate at which the costs of electronics and wireless communications are falling, the cost of the technology itself should rapidly become a non-issue, even for the world's poorest. Because binding contracts for ownership transfer lie at the heart of capital formation, what if traditional contracts were supplemented and/or supplanted with smart contracts? Smart contracts will enable cooperation among mutually suspicious parties, often without need for legal recourse. Could such a jurisdiction-free contracting mechanism, accessible over the Net, dramatically increase capital liquidity, spawning a flood of new wealth in the poorest areas of the world?

The rest of the chapter is organized as follows:

• Networks of trust synthesizes ideas from de Soto and Francis Fukuyama to suggest the strong role played by widely trusted intermediary institutions, or trust hubs, in forming working large-scale trust networks, especially the institutions of title and law. We explain de Soto's program, which we call the governmental path, to address the absence of these institutions by bringing the informals into governmental systems of formal law and property. We make explicit the conflict faced on this path between local knowledge and global transferability. We propose an alternate jurisdiction-free digital path, made possible by new technologies, that could leverage the existing wide recognition of First World institutions to short-circuit the slow growth process of the other paths.
• Smart contracts are contracts as program code, where the terms of the contract are enforced by the logic of the program's execution. In a series of steps, from the basic metaphor of contracts as board games, through the nature of contract-created derivative rights, to compositions of games to turn assets into capital, we explain how smart contracts can resolve the conflict – gaining the benefits of global transferability without sacrificing local knowledge.
• Backing and legitimacy: Why would a change of electronic title be locally honored as a transfer of control of the actual assets? The governmental path provides backing by coercive enforcement. How may the non-coercive digital path address these issues instead? By ratings – independent estimates of the likelihood that a title listing would be honored, like credit reports for systems of local law. And by video contracts, to bridge between the above abstract world and local systems of largely unwritten arrangements. An example shows how the outcome of a smart contract may gain local legitimacy.
• Limitations and hazards: What other novel problems will we encounter on the digital path? First, smart contracts will be unable to express the subtle richness of contracts written in natural language, leading to techniques for combining the two kinds of contract elements into split contracts. Second, a naïvely deployed smart contract system could exacerbate rather than diminish the dangers of regulatory capture – the participants could end up with less vulnerability to local governments but more vulnerability to distant governments, over which they exert even less influence. This section explains how both of these problems may be addressed. The digital path being untested, we can be sure it will encounter further problems, indeed more problems than we can anticipate. This section just scratches the surface.
• Why the Third World first? Why may the Third World lead the First World in the transition to smart contract-mediated global commerce? Because of differences in the nature of legitimacy in these two worlds. Because the First World has already paid the homogenization costs – it has already lost the diversity the digital path could have preserved. And because the Third World lacks a working installed base of law and property, which could lead to the leapfrogging seen with cell phones in Eastern Europe. Besides being first through this transition, the Third World's success may cause the First World to follow along.
• The rule of law and not of men: In one respect at least, the character of life on the digital path may be more similar to the past than to the present. It would be an almost literal realization of the classical liberal ideal.
Networks of trust

Why are some societies so much better able to generate wealth than others? During the great oppressions of the twentieth century, many people, including ourselves, held the naïve view that once oppression was out of the way, markets would bloom and take care of the people. This has proven tragically wrong. The years since then have shown that the absence of oppression was not enough. Free speech was not enough. The end of socialism was not enough. The universal desire for capitalism was not enough. In order to successfully help, we must first solve this riddle. Many people have tried, and the answers proposed by Hernando de Soto and Francis Fukuyama are especially insightful and complementary.

Fukuyama's Trust (Fukuyama 1995) makes many perceptive observations about how the world's cultures differ in their attitudes and proclivities towards trust – how easily, and under what conditions, members of a particular culture come to trust each other. In high-trust societies, mutually trusting relationships are easily formed. In low-trust societies they are not. In Fukuyama's taxonomy, the third major category, familial-trust societies (or Confucian societies), is not between the other two on a spectrum. Rather, it is a different stable pattern characterized by dense networks of high trust within families, but rather low trust between families. Fukuyama shows how these different patterns of trust seem to explain some of the observed differences in the patterns of businesses that arise in these cultures. Complex cooperative arrangements require trust, so not surprisingly, Fukuyama's high-trust societies are the ones best able to generate vast amounts of wealth.
But to understand the success of the First World requires something more than Fukuyama's analysis. No matter what the culture, simple cognitive limitations prevent any of us from knowing, much less trusting, more than a very small fraction of the members of our societies (Hayek 1937). Nevertheless, in the First World massive numbers of strangers meet, trade, do business, negotiate, and sign contracts, despite lack of any prior knowledge of, or reasons to trust in, each other. How is this possible?

De Soto's earlier book, The Other Path (de Soto 1989), tells a complementary story. De Soto can also be understood as explaining differences in economic organization according to differences in the possibilities for trust. However, de Soto's emphasis is not culture but institutions, and their lack. De Soto's portrayal of the poor within a Third World village is not one of culturally based low trust. Rather, it is the painful lack of the various widely trusted intermediary institutions that catalyze commerce at a distance, and that we normally take for granted in the First World.

Trust relationships can be thought of as analogous to the airport hub-and-spoke pattern (Figure 3.1). Many small local networks are interconnected on a wider scale through major hubs. Although this pattern is a partial centralization, it is not a hierarchy – there is, for example, no central hub of hubs. Logically, it is peer to peer, but it is built with a backbone architecture due to the economics of the system. The Net itself has mostly the same architecture, as does the highway system. In all cases, a sparsely connected actual network acts for most purposes like a densely connected network. For example, from any airport you can fly to any other airport, almost as if there were flights between every pair of airports. For many purposes, it acts like the network shown on the right of the figure.
Similarly, in the First World, two strangers can meet and conduct business as if they had prior knowledge of and trust in each other, by virtue of their reliance on a mutually recognizing backbone of widely trusted intermediary institutions, or trust hubs. These hubs are in the business of securing these relationships to minimize the risks their customers face from each other; this often requires them to absorb some of these risks onto themselves. The economies of scale available
Figure 3.1 Hubs of high trust.
to a hub can help tremendously with these risks. Historically, western societies have developed specialized hubs that bundle trust with other expertise: one trusts Citibank not only because Citibank has a demonstrated history of reliably backing their loans, but also because they are experts in loan and risk management, which are necessary elements of reliability in that field: an organization that attempted to engage in banking without expertise in these fields could not be trustworthy no matter how honorable the employees and executives of the organization might be. Other examples of familiar trust hubs include title companies, insurance, escrow, exchanges and auction houses, underwriters, consumer reports, Roger Ebert, notaries, arbiters, courts and cops, money, etc. The list is endless.

From a simple graph-theoretic point of view (inspired by Granovetter 1973; London 1997, personal communication), we can analyze the Fukuyama low-trust world (in which the fanout from each node is small) and the de Soto missing-hub world, and immediately recognize which effect has the greater impact on a society's effectiveness: it is the absence of the hubs that ultimately prevents large-scale complex cooperative arrangements from forming. Even with a high cultural proclivity for individual-to-individual trust, in the absence of hubs the resulting virtual network would at best form small islands of densely connected networks, only loosely connected to each other. The resulting picture resembles both Fukuyama's portrayal of familial-trust societies and de Soto's portrayal of networks of villages of informals.

The division of labor is limited by the extent of the market.
(Adam Smith)

Modern civilization has given man undreamt of powers largely because, without understanding it, he has developed methods of utilizing more knowledge and resources than any one mind is aware of.
(Friedrich Hayek 1978)

The virtual network of Figure 3.1 forms one large market with great extent, enabling a great division of knowledge and labor. The virtual network of Figure 3.2 is one of many separate markets, barely connected to each other, and each individually of minor extent.

But should not this situation be an ideal growth medium for hubs? If there is great market need for them, then surely there is great demand and great opportunity. Indeed, this is the situation from which the hub backbone grew spontaneously in the West. With the absence of government oppression we should indeed expect it to grow here as well. However, the West's backbone of hubs is the result of slow growth processes; it builds slowly over time. The widespread trust needed by these institutions can be seen as a form of capital that takes a long time to accumulate. One of the depressing features of the pictures painted by both Fukuyama and de Soto is that the only hope they see for these societies is home-grown, with each
Figure 3.2 Low-trust tragedy.
individual Third World nation bootstrapping itself through all these steps, including, for de Soto, the evolution of its own hubs, or the reform and transformation of each society's national government into a system that may be widely trusted. Well, it took a long time for the West. If they must recapitulate our path, it will take them a long time as well, a time during which desperate poverty will remorselessly prevail. What enablers are available now that were unavailable when the West made this transition? Might these enablers be used to help accelerate today's poor through this part of the process of capital formation?

The special roles of title and law

In The Other Path, de Soto explained the informal economy and its lack of institutions in general. Eleven years later, after much investigation both of the phenomenon of persistent poverty and an extraordinary uncovering of the history of how the West overcame these problems, in The Mystery of Capital de Soto has narrowed his focus to the crucial role played by the institution of title transfer. While all these hubs are valuable for the formation of wealth-creating societies, not all are equally crucial. De Soto's analysis suggests that the institutions most needed to get the ball rolling, and which are most painfully absent in the informal sectors of the Third World, are credible systems of title transfer. It is through widely trusted title registries that banks can become confident that the collateral against which they make a loan will indeed become theirs in the event of default. This requires not just trust in the title company itself, but credibility that a transfer of title on the books will be honored as a transfer of ownership in reality.
Title registries with this level of credibility enable rights transfer at a distance: people who have never met one another and probably never will can engage in asset transfers and capital formation with the confidence that they will acquire the goods specified in the contract. Although de Soto documents the creation of extralegal title companies in the informal sector, these do not currently seem able to provide the credibility at a distance needed for this transition.
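The core operation a credible registry secures can be made concrete with a small sketch. This illustration is ours, not de Soto's; the names (`Registry`, `transfer`, the asset and party labels) are hypothetical, and a real registry would of course add authentication, audit trails, and the local-law subtleties discussed below.

```python
# Minimal sketch of a title registry: a trusted mapping from assets to owners,
# where the transfer check is what enables rights transfer at a distance.
class Registry:
    def __init__(self):
        self._owner = {}  # asset id -> current owner

    def register(self, asset, owner):
        assert asset not in self._owner, "asset already titled"
        self._owner[asset] = owner

    def transfer(self, asset, seller, buyer):
        # The registry's value is that this check, and the update below, are
        # trusted at a distance: a bank never needs to visit the village.
        assert self._owner[asset] == seller, "seller does not hold title"
        self._owner[asset] = buyer

    def owner(self, asset):
        return self._owner[asset]

reg = Registry()
reg.register("house-42", "alice")
# e.g. the bank takes title to the collateral on default of the loan:
reg.transfer("house-42", "alice", "first-national-bank")
assert reg.owner("house-42") == "first-national-bank"
```

The point of the sketch is that the mechanism is trivial; what is scarce is the credibility that the update on the books will be honored as a change of ownership in reality.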
The governmental path

Among currently existing choices, perhaps the only organizations that can fill this crucial titling role in a nation are formal ones backed by government itself. A government bundles together widespread recognition, some sense of legitimacy, and powers of enforcement. This is the strategy de Soto has adopted, converting extralegal assets, village by village, into officially acknowledged parts of the formal economy. The strategy has been wildly successful in bringing the poor into the modern world. Working with the government of Peru over four years, de Soto's organization helped a quarter of a million people formalize many of their assets, creating new capital and producing $2.1 billion of new tax revenues for the government of Peru. One may hope and expect that these demonstrated tax revenues, if nothing else, will tempt other governments to follow suit.

De Soto's is not the first attempt to title the informals' property and bring them into the formal sector, but it is the first such attempt in the Third World to work on a large scale. De Soto documents previous well-intentioned efforts, with surveyors, geographic information systems, interviews of informals to ascertain who owns what, and formal title registries backed by the formal legal system – all the obviously necessary ingredients. Why did these previous attempts all fail? Because the formals did not appreciate that the informals already had worked-out systems of law, rights, and obligations, negotiated over time and idiosyncratic village by village. These systems are sometimes called the people's law. Instead, the formals' approach to the situation was "We have a legal system. You don't. Here's ours." Although the title listings reflected a snapshot of who owns what, the legal system governing these title listings did nothing to reflect the complex informally negotiated arrangements needed to understand what rights someone actually held to a particular asset.
Given this mismatch, the informals proceeded to ignore the formal title registries and trade assets in the way they always had. The title registries were not updated to reflect changes of actual ownership, and so rapidly became even more irrelevant. De Soto's special insight in this situation has been that the government must discover and respect the local laws, and work out, at considerable cost in time and effort, a way to integrate those local laws with the national systems.

The conflict: local knowledge versus global transferability

The difficulty comes from an inherent conflict between local knowledge and global transferability – local knowledge of the idiosyncratic people's law, conventions, and negotiated arrangements in force in each village, versus the need to move the governance of title transfer to hubs, whose wide scope would seem to require them to operate from a more homogenized set of rules. This tension is acute on the governmental path, as the homogenized set of rules is not even per title company, but is rather the official legal system itself. Governmental legal systems are hardly the wonders of adaptability de Soto's program would seem to require. Even with the best of intentions, an accommodation between the two must
rapidly turn into a Procrustean bed. However, de Soto offers no alternative. Although difficult, he is somehow successfully making this path work, and he documents how it did work when formal US law, slowly and painfully, absorbed the informal Wild West. As if this path were not difficult enough, this whole process faces enormous obstacles from many different factions, notably bureaucracies and lawyers within the national sphere that see this as an assault on their prerogatives, as de Soto also documents. The process never becomes easy: each step of progress is another major upheaval in the perceptions and preferences of entrenched groups dedicated to protecting the status quo.

The digital path

Can we sidestep this brutally painful process? Perhaps, with the Net, cryptography, and capability-secure platforms – languages (Hewitt et al. 1973; Miller et al. 2000; Rees 1996; Tribble et al. 1995) and operating systems (Hardy 1985; Shapiro 1999) able to run hostile code safely and flexibly. Describing how existing incentives and new technologies could give rise to new wealth-creating institutions may help us coordinate our activities to move the world in this direction. We describe these possibilities in mostly abstract terms until the section on video contracts, in which we introduce another technological ingredient – one for connecting this abstract world to the concrete lives of the poor.

National borders aren't even speed bumps on the information superhighway.
(Tim May)

Due to the Net, purely electronic goods and services can now be purchased from across the world as easily as from next door. Consumers of these goods and services have already escaped old limits of geography and jurisdiction (Johnson and Post 1996).
If the functions normally provided by trust hubs were offered by well-known First World trust hubs as purely electronic services, those in need of such widely trusted intermediary services could escape as well – escape from the crushing assumption that such services can only be provided by institutions backed by their own governments. Instead, they could reach across the Net to use these services, and begin to bootstrap themselves out of their poverty by participating in the global networks of commerce (Figure 3.3). Many First World trust hubs are already widely known and plausibly trusted in the Third World because of the pervasive spread of western media: shows ranging from CNN to Dallas and Baywatch have granted name recognition and an aura of respectability to First World organizations that most governments can only envy. (However one may feel about this process, it is occurring, so we may as well put it to good use.) Using First World hubs, villages on a global scale could become part of a global trust network. For example, if a person in village A wants to sell a tractor to a person in village D, a couple of villages away, they could easily use a title registry run by Citibank in New York to execute the transfer. In a similar
Figure 3.3 Remote-trust bootstrapping.
fashion, the tractor may be securitized, transforming it into capital. And in a state such as Russia, a title listing with Citibank would, ironically, have more legitimacy because the titling institution is beyond the reach of their own government. Once the villages of the world join this global village, it will be much easier for them to grow their own high-trust hubs as well: an entity becomes widely trusted by consistently and visibly performing in accordance with various contracts – contracts being managed by hubs that are already widely trusted. With a working trust backbone, highly trustworthy behavior gets the visibility it needs to more rapidly accumulate its own reputation-capital.
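The remote-trust bootstrapping of Figures 3.1–3.3 can be restated in the chapter's own graph-theoretic terms with a toy computation. The sketch below is ours (the node names are hypothetical): village networks that are internally dense but mutually disconnected become one virtually complete trust network once a single widely trusted hub gains one spoke into each village.

```python
# Toy graph comparison: why missing hubs, more than low pairwise trust,
# fragment a trust network -- and why one hub reconnects it.
from itertools import combinations

def reachable(edges, start):
    """Breadth-first search: the set of nodes reachable from `start`."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# Three "villages": internally dense (high local trust), mutually disconnected.
villages = [["a1", "a2", "a3"], ["b1", "b2", "b3"], ["c1", "c2", "c3"]]
island_edges = [pair for v in villages for pair in combinations(v, 2)]

# Without a hub, each node can transact only within its own village.
assert len(reachable(island_edges, "a1")) == 3

# One trusted hub with a single spoke into each village: every node can now
# transact (indirectly) with every other node in the virtual network.
hub_edges = island_edges + [("hub", v[0]) for v in villages]
assert len(reachable(hub_edges, "a1")) == 10  # 9 villagers + the hub
```

Three spokes thus do the work of dozens of pairwise village-to-village trust relationships, which is the economic logic behind the backbone architecture described above.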
Smart contracts

How might such hubs deal with the idiosyncrasies of each village's people's law – the idiosyncrasies that sabotage traditional governmental attempts to capitalize village assets – without taking on the impossible burden of learning all this local knowledge itself, and without imposing the costs of homogenization? By the use of smart contracts.

In smart contracts, a software program is the operational embodiment of a contract (Szabo 1997), where the program's behavior enforces the terms of the contract, or at least raises the cost of violating the contract. Szabo introduces the concept with the example of a drink vending machine. A drink vending machine is a very primitive example of a smart contract being executed on a contract host – the vending software executing on the vending machine hardware. This combination of contract and contract host is a partially trusted intermediary between the drink manufacturer and the purchaser. It escrows drinks and money, and performs an exchange of those goods when both have been presented. There is even a rollback process, in which it returns the money if the drink cannot be delivered.

Traditional contracts are understood to be backed by a coercive enforcement system made of courts and cops. However, the vending machine does not have the option of such recourse following a breach. In what sense is it a contract?
The vending machine as contract would indeed require separate enforcement if it dispensed the drink first and then demanded payment. However, by escrowing both drinks and payment before dispensing either, it also dispenses with the need for separate enforcement. Instead of enforcement, the contract creates an inescapable arrangement. It cannot prevent the customer from walking away before the game is over, but a customer who walks away from a contract in progress leaves behind any assets escrowed by the contract at that point (Miller et al. 2000). (The vending machine is partially and asymmetrically trusted, as both parties know that it is ultimately an agent only of the drink manufacturer. Other smart contract scenarios involve more symmetric mutually trusted third parties.)

Although conventional coercive recourse is still often possible on the Net, for more and more Net commerce these costs are too great, and the jurisdictional issues potentially too messy. Instead, Net businesses have been engaging in rich and rapid experimentation with cooperative arrangements that require no coercive recourse (Krecké, Chapter 14, this volume). The most common arrangements involve not actual escrow, but reputation feedback and credit (Friedman 2001; Steckbeck and Boettke, Chapter 10, this volume). This has a similar logic, in that a participant effectively secures their good performance with the value of their reputation capital. Such arrangements are messier and less amenable to automation than escrow, but they do substantially reduce capital costs. Both kinds of arrangements have their place and will compete in the market.
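The escrow logic of the vending machine can be written down directly. This is our own minimal sketch in the spirit of Szabo's example (the class and method names are ours): neither good changes hands until both are held in escrow, so no after-the-fact enforcement is needed, and rollback handles the case where the exchange cannot complete.

```python
# Sketch of the vending machine as an escrowing smart contract.
class VendingContract:
    PRICE = 2  # coins per drink

    def __init__(self, drinks):
        self.drinks = drinks  # escrowed by the manufacturer up front
        self.coins = 0        # escrowed by the customer, coin by coin

    def insert_coin(self):
        self.coins += 1

    def press_button(self):
        if self.coins < self.PRICE:
            return None                   # payment not fully escrowed: no drink
        if self.drinks == 0:
            refund, self.coins = self.coins, 0
            return ("refund", refund)     # rollback: return the money
        self.coins -= self.PRICE          # the exchange happens atomically:
        self.drinks -= 1                  # money and drink swap together
        return ("drink", 1)

m = VendingContract(drinks=1)
m.insert_coin()
assert m.press_button() is None           # underpaid: nothing dispensed
m.insert_coin()
assert m.press_button() == ("drink", 1)   # both sides escrowed: exchange
```

A customer who walks away mid-transaction simply abandons the coins already escrowed, which is exactly the "inescapable arrangement" described above.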
In this chapter we explore escrow-based smart contracts, not because we expect this form to dominate, but because their logic is vastly easier to explain; because they are easier to build, and so will occur sooner; and most of all because they apply to participants with no prior reputation, which helps lower barriers to entry. Likewise, for the electronic systems of title (called issuers), in this chapter we assume systems that provide for instant settlement (e-gold). Although delayed settlement may substantially reduce capital costs (Selgin 2001, personal communication), it would turn smart contracts into explosions of complexity.

Contracts as games

A basic metaphor for smart contracts is the board game. When two people negotiate a contract, they are jointly designing the rules of a game they would both be willing to play. Once they commit to playing this game, the players may then make moves, but only moves judged legal by the rules given the current board state. Each move potentially changes the board state, changing which moves are legal during the next turn. For example, Figure 3.4 shows the six possible board states of a simple negotiation and exchange game (Tribble and Farmer 1991). Let us say Alice is playing the left side of the board and Bob the right. In the initial board state, A, neither of the pieces is on the board. The gold bar, representing money, is off the board on the left, which portrays its possession by Alice at this time. For concreteness, let us say the knight represents stock. Bob might offer a certain amount of stock to Alice by placing it on the right square of the board, taking us
The digital path 73
Figure 3.4 Simple negotiation game.
to board state B. Alice might not respond soon enough, in which case Bob may withdraw his offer by taking back the knight, bringing us back to state A. That is why the first transition arrow is shown as bidirectional. Or Alice may respond to Bob’s offer with a certain amount of money, by placing it on the left square of the board, taking us to state D. At this point, either party may still decide they are unsatisfied, withdraw their piece, and re-enter the loop of bidirectional arrows (at B or C). Or, while in board state D, Bob may pick up the money offered by Alice, accepting Alice’s offer. This is the irreversible commitment step shown by the bold unidirectional arrow, and takes us to state E. From here, the only possible move is for Alice to pick up Bob’s knight, leaving the game in terminal state F.

How is this contract self-enforcing? What prevents cheating? Who needs to trust whom with what? To answer these questions, we must start by explaining what is happening on whose computer. We assume that each player trusts their own computer (a dangerous assumption, but we cannot proceed without it). We also assume that player X cannot trust player Y’s computer any more or less than they trust player Y. Under these assumptions, we can treat a computer and its proprietor as a single unit for purposes of analysis.

The execution of this game actually involves five parties, as illustrated in Figure 3.5. The two players are, of course, Alice and Bob. The contract host serves the same role as our vending machine – it is the third party mutually trusted to execute the contract/program faithfully. The contract can be any program Alice and Bob mutually agree on, written to run on a capability system – a secure programming language or operating system suitable for writing smart contracts (Miller et al. 2000). This contract/program functions as the board manager for the game they have agreed to play.
(For example, a board manager for chess is a program that enables two people to play with each other, maintains the board state, and only allows legal moves. A board manager does not itself play the game.)
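The board manager for the negotiation game of Figure 3.4 can be sketched as a table of legal moves. This is our own illustration: the state names follow the walkthrough above, though the exact lettering of the money-only state (we use C) is our assumption about the figure. Note that the manager only enforces the rules; it plays neither side.

```python
# A sketch of a board manager: it tracks the board state and rejects
# illegal moves, but does not itself play the game. State names A-F
# follow Figure 3.4 as described in the text.

LEGAL_MOVES = {
    # (current state, player, move) -> next state
    ("A", "bob", "place_stock"): "B",
    ("B", "bob", "withdraw_stock"): "A",
    ("A", "alice", "place_money"): "C",
    ("C", "alice", "withdraw_money"): "A",
    ("B", "alice", "place_money"): "D",
    ("C", "bob", "place_stock"): "D",
    ("D", "alice", "withdraw_money"): "B",
    ("D", "bob", "withdraw_stock"): "C",
    ("D", "bob", "take_money"): "E",    # irreversible commitment step
    ("E", "alice", "take_stock"): "F",  # terminal state
}

class BoardManager:
    def __init__(self):
        self.state = "A"

    def move(self, player, move):
        key = (self.state, player, move)
        if key not in LEGAL_MOVES:
            raise ValueError(f"illegal move {move} by {player} in state {self.state}")
        self.state = LEGAL_MOVES[key]
        return self.state
```

A typical honest run is place_stock, place_money, take_money, take_stock; any attempt to grab a piece out of turn is simply not in the table and is refused.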
Figure 3.5 Separation of duties.
Unlike the vending machine, the contract host need not have any prior knowledge of the contract. Once Alice and Bob agree on the text of a board manager and on a mutually trusted contract host, they upload the board manager to the contract host, which then verifies for them that they have agreed on the same contract, and dispenses to each the right to play their respective sides of the game. These rights are shown as the arrows pointing at the respective chairs. The contract can embody the knowledge of acceptable arrangements local to Alice and Bob, local custom, prior handshakes, etc., limited only by what Alice and Bob can agree on, and what they can manage to express in this new medium. To the extent the contract host can be trusted at all, it can be trusted to run this contract faithfully, despite its ignorance of the local knowledge that gives this contract meaning. This is the first step in resolving the conflict between local knowledge and global transferability.

(Telling the tale this way hides a further division of labor. Most players will not program up their own custom contract, but will instead select a “boilerplate” contract/program off the shelf and fill in the blanks. These customizable contracts may have been contributed by earlier players who did write their own contract, or they may be created by specialists, either speculatively or for hire. Or perhaps they will use a simplified contract construction kit, whose user interface might resemble a drawing package specialized for drawing our board-state-transition diagrams. Such an interface may even enable some players to overcome hurdles of language and literacy. In any case, the simplified story of the custom contract still shows well the logic by which the system operates, without needing to speculate on the possible organization of the market for smart contract creation.)
The two remaining players, the “$-issuer” and the “stock-issuer,” transform the movement of pieces into a transfer of third-party-assayable transferable electronic rights, or erights. A $-issuer, or more conventionally a bank, is effectively a title company for money. For money on record at the bank, the rights to the money change hands by the transfer of quantity between accounts – shown in the figure as purses within the issuers. When Alice places the gold bar on the board, her computer, the $-issuer, and the contract host engage in a three-way cryptographic transaction that brings about the transfer of title, at the $-issuer, of that much money from Alice to the contract host. An honest contract host would consider this money to be only a piece on the board, which can be picked up (transferred to the possession of a player) according to whatever may be the rules of the game. We refer to this as oblivious escrow – the contract host, merely by running the contract, ensures that erights in escrow can only be released under the agreed conditions, without needing to understand those conditions or those erights.

A dishonest contract host could abscond with the money instead, which is why contract hosts need to be widely trusted. A widely trusted contract host presumably has a valuable reputation at stake, which helps secure its honest behavior. (More sophisticated cryptographic protocols are possible which further limit a player’s vulnerability to a dishonest contract host or issuer (Beaver and Wool 1998), but these are beyond the scope of this chapter.)

The money is an eright in part because it is third-party assayable. Even though Bob does not currently possess this eright, Bob, through the contract host, can determine which issuer backs this eright, and what the value of this eright is according to this issuer.
With such assayability, the $-issuer, the stock-issuer, and the contract host each need not have any prior knowledge or trust of any of the other four players. For the game to be meaningful, Alice and Bob must have prior knowledge and trust in both issuers and the contract host, but not in each other. Even if Alice and Bob are both use-once pseudonymous identities with no apparent physical location (Vinge 1984), under these conditions, they can transact with each other as if they fully trusted each other. The hubs – the issuers and the contract host – thereby succeed at providing virtual trust connectivity between their spokes: Alice and Bob.
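The issuer’s role – a “title company” holding purses, with transfer into escrow and third-party assay – can be sketched as follows. This is our own simplification: the real three-way step would be a cryptographic protocol, which we reduce here to a plain book-entry transfer.

```python
# A sketch of an issuer holding purses. Placing the gold bar on the board
# is modelled as a title transfer from Alice's purse to the contract
# host's purse; the host holds it obliviously. Names are illustrative.

class Issuer:
    def __init__(self, name):
        self.name = name
        self.purses = {}

    def open_purse(self, owner, balance=0):
        self.purses[owner] = self.purses.get(owner, 0) + balance

    def transfer(self, src, dst, amount):
        if self.purses.get(src, 0) < amount:
            raise ValueError("insufficient balance")
        self.purses[src] -= amount
        self.purses[dst] = self.purses.get(dst, 0) + amount

    def assay(self, owner):
        # Third-party assayability: anyone may ask which issuer backs a
        # purse and what it is worth according to that issuer.
        return {"issuer": self.name, "amount": self.purses.get(owner, 0)}

dollars = Issuer("$-issuer")
dollars.open_purse("alice", 100)
dollars.open_purse("contract-host")
dollars.transfer("alice", "contract-host", 40)   # Alice's piece goes on the board
```

Bob, without possessing the escrowed money, can still call `assay` through the contract host to learn who backs the piece and what it is worth.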
Assets + Contracts × Time + ?? = Capital

The smart contracts explained so far – the vending machine and the exchange game – cannot turn assets into capital. To do so requires contracts that unfold over time, like a mortgage. To explain how such unfolding creates ever more abstract forms of property, we step through a simpler example contract, the covered call option.

Alice has an option when she has the right, but not the obligation, to engage in some action at some agreed price before some deadline. Alice has a call option when she has the right to buy some agreed asset, let us say stock, at some agreed price before the deadline. The option is a covered call option when Alice’s counterparty, Bob, escrows up front the stock Alice may decide to purchase.
Figure 3.6 Covered call option.
We may visualize this as the game shown in Figure 3.6. In the initial board state A, the stock is already on the board. While in this state, neither Alice nor Bob may pick up the stock. A new element in the game is the game clock, attached to a transition arrow, which only permits that transition after a deadline has expired. Should the deadline expire while the game is still in state A, Bob could then pick up his stock and go home, leaving the game in terminal state B. Or, before the deadline expires, Alice may decide to exercise the option. She may place a gold piece on the left square. Unlike the previous game, in this game the acceptable amounts are predetermined by the rules. The left square only accepts the amount of money agreed on when the game was constructed. If Alice places this amount of money on the board, this is the irrevocable commitment step shown as the bold unidirectional arrow, taking us to state C. After this move, the only remaining legal moves are for Alice to pick up the stock, and for Bob to pick up the money.

What is so different about this contract? During the interval of time from the start of the game until the game leaves state A (whether by expiration or exercise), Alice has something valuable. During this interval, Alice has the option to buy this stock for a given price. This is a very different kind of value than the value of stock or money themselves. The value of this new right derives from the value of the stock and the money, but whereas they are very simple literal kinds of rights, this new right is somehow more abstract. In Wall Street terminology, the new right is a derivative of the more literal underlying rights. In de Soto’s terminology, if the literal instruments are physical assets, then abstract rights derived by contracts about these instruments are a step towards being capital. However, there is something missing from this picture.
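The covered call game, with its game clock, can be sketched as a small state machine before we turn to what is missing. This is our own illustration; we model time as an integer tick supplied by the caller rather than a real clock.

```python
# A sketch of the covered call option of Figure 3.6. The stock is already
# escrowed in state A; the clock-guarded transition lets Bob reclaim it
# only after the deadline; Alice's exercise must match the exact strike.

class CoveredCall:
    def __init__(self, strike, deadline):
        self.strike = strike      # amount predetermined by the rules
        self.deadline = deadline
        self.state = "A"          # stock already on the board

    def reclaim(self, now):
        # Bob may take his stock home only after the deadline expires.
        if self.state == "A" and now > self.deadline:
            self.state = "B"      # terminal: option expired
            return True
        return False

    def exercise(self, amount, now):
        # Alice's irrevocable commitment: exact strike, before expiry.
        if self.state == "A" and now <= self.deadline and amount == self.strike:
            self.state = "C"      # Alice takes stock, Bob takes money
            return True
        return False
```

While the game remains in state A, Alice holds something valuable that is neither the stock nor the money – the derived right the surrounding text describes.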
Through the system so far depicted, Alice can trade those erights managed by issuers: money and stock. The contract host, despite its complete ignorance that it has done so, has created a new valuable right, owned by Alice. But in the picture so far, Alice has no ability to trade this new right. This needs to be repaired, in order for these new rights to truly be capital, and in order for yet more abstract forms of capital to be derived
from them. De Soto’s capital, besides being an abstraction built from widely tradeable rights, is also itself widely tradeable, enabling further abstraction – the creation of yet more forms of capital.

Networks of games

To make this new right Alice holds widely tradeable, we need to turn this right – the right to continue playing the left side of this ongoing game – into an eright, a third-party-assayable transferable electronic right. To do so requires an issuer for this new right. Since the contract host is already managing access to this chair, we may as well have it double as the issuer for the eright to sit in this chair. Just as Alice could tell the $-issuer to transfer some of her money from her purse to someone else’s, we can enable Alice to tell the contract host to transfer her eright to sit in this chair to someone else. The contract host would then revoke Alice’s access to the chair and issue fresh access to the other player, much as the $-issuer would with Alice’s money.

With this ability to compose networks of games, it seems we have the ability to express the full range of contract layering used in modern finance. Not that modern finance is directly relevant to the needs of the poor, but it is a good test of the generality of our framework.

But wait. In order to resolve our conflict, this structure decouples knowledge in a way quite different from anything in the financial world. We have the contract host issuing rights it does not understand, since these rights are produced by games it runs, but does not understand. The contract host is not in a position to vouch for any meaningful property of the rights it is issuing, so how can widespread trust in the contract host translate into credible global transferability of derived rights? How can this derived right be assayable if even its issuer does not understand it? Let us walk through the example depicted in Figure 3.7.
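Before following the Figure 3.7 example, the chair-transfer mechanism just described – revoke the old player’s access, issue fresh access to the new one – can be sketched. The token-based access here is our own illustration of that revoke-and-reissue step.

```python
# A sketch of the contract host doubling as issuer of the eright to sit
# in a chair of an ongoing game. Transferring the chair revokes the old
# holder's access token and issues a fresh one. Names are illustrative.

import secrets

class ContractHost:
    def __init__(self):
        self.chairs = {}   # chair name -> (holder, access token)

    def seat(self, chair, holder):
        token = secrets.token_hex(8)
        self.chairs[chair] = (holder, token)
        return token

    def transfer_chair(self, chair, old_token, new_holder):
        holder, token = self.chairs[chair]
        if old_token != token:
            raise PermissionError("only the current holder may transfer")
        return self.seat(chair, new_holder)   # old token is now revoked

    def holder(self, chair):
        return self.chairs[chair][0]
```

After Alice transfers her chair to Fred, her old token no longer works – just as her old claim on escrowed money would lapse after a purse transfer at the $-issuer.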
Alice starts out simply as a player of the original options game, hosted by contract host 1, whose five participants are enclosed in the horizontal rectangle. While Alice finds herself in the resulting valuable situation – while the options game is in state A – she encounters Fred, who would also find this situation valuable. Fred, were he convinced that Alice’s chair-sitting rights mean what Alice says they mean, would be willing to play a different game on contract host 2, perhaps our original negotiation game, in which this right, issued by contract host 1, would appear as a movable piece. Alice appears in the same role in both these games – as a player. Contract host 1 also appears in both these games – it appears as the contract host in game 1, and it appears in game 2 as an issuer of the right to sit in the left chair of game 1. Hence the tilted overlap of the rectangles.

Unfortunately, having just met, Fred and Alice do not trust each other any more than Alice and Bob do. Fortunately, Fred does have prior knowledge and trust of contract host 1. Unfortunately, contract host 1 has no idea if the right to sit in this chair of this game means what Alice claims it means, or anything else. Fortunately, with Alice’s consent, Fred can ask contract host 1 for the text of the contract (the source code of the program), the current state of the board,
Figure 3.7 Layered games.
and the assays of all the pieces presently on the board (i.e. escrowed by the game). In theory, these should be sufficient for Fred to figure out what these derived rights are, so third-party assayability has been provided in theory, but not yet in practice. Alice, who presumably understands the game she is playing, can help Fred figure this out, and Fred can accept Alice’s help, without any trust required between Fred and Alice. Should the contract truly be idiosyncratic to local knowledge shared by Alice and Bob, knowledge to which Fred has no access, he may not find the derived rights comprehensible at reasonable cost, and will move on. More commonly, if the contract is understandable to some number of others, including some Fred trusts, Fred may turn to them for advice on the contract’s meaning – the analog of legal advice. With this step, derived rights become erights about which other contracts can be written, deriving yet further erights.
Resolving the conflict

The above steps, applied to informally owned assets, could resolve the conflict de Soto explains, providing global transferability without resort to procrustean homogenization of legal systems.

The first step, the separation of the contract from the contract host, allows the contract to specialize in capturing local knowledge, while the contract host specializes in providing the trust connectivity needed for global transferability. Our new technological enablers – the ability of the contract host safely and faithfully to run rights-handling code it neither trusts nor understands – allow these two specialties to be combined without conflict.

Our second step is contracts that unfold over time, creating derived rights. This step would be applied twice. First, to model the base rights as if they were derived from prior simpler rights to more literal physical objects. Informally owned assets are not literal physical objects themselves; they are rights derived from these objects according to informal local laws and negotiated arrangements. To the extent the logic of these arrangements can be successfully expressed in the new language of smart contracts – with more literal physical objects represented as underlying assets, as if these had once been separately owned – then the smart contract will have preserved this local knowledge in a form that can be uploaded to widely trusted contract hosts.

The third step turns the local-knowledge-embodying rights created in the previous steps into widely tradeable erights. This step allows these three steps to be applied repeatedly, creating complex networks of contracts that build on each other. This allows the second step to be applied yet again to create yet more abstract erights, like the mortgage, setting loose the power of capital formation.
The collateral would be, not the physical property itself, but the rights to the property held by the original property holder, according to customary law in that community, as recorded at the time of titling. Although the conflict is resolved in this scenario, there will remain pressures for homogenizing capital-creating contracts; but these are normal market pressures. In the above scenario, Alice will no longer lose the sale to Fred through Fred’s lack of trust in Alice. But she is still in danger of losing the sale through Fred’s difficulty understanding the contract’s meaning. Further, only standardized contracts can create fungible assets tradeable on large exchanges. Fortunately, these pressures can gradually work themselves out over time, and in competition with the benefits provided by custom contracts, well after starting on the digital path. By contrast, on the governmental path pervasive rule homogenization is the necessary first step, and therefore also a major barrier to starting on that path.
Backing and legitimacy

For purely electronic assets, like fiat money and stock, at this point we have, perhaps, an adequate picture. The title listing for these assets is the reality of ownership – there is no separate issue of physical control that may or may not follow
along. To explain how smart contracts may be applied to Wall Street, we could perhaps stop here. But we have not yet succeeded at the mission we set out on – to establish credibility of title transfer at a distance, in order to unlock the potential capital in the poor’s $9.3 trillion of extralegal buildings, land, etc. Actual control of such physical assets is determined, not by their title listing, but by consensus of the governing community in question. Why wouldn’t the digital path fail in the same way the pre-de Soto governmental attempts failed? Why would these communities consider these title transfers legitimate and honor them?

The governmental path has an advantage here. Governments employ a vast coercive apparatus to enforce the outcomes they claim are legitimate, such as eviction following a title transfer. Still, even with this advantage, the previous attempts failed when title listings not locally seen as legitimate were not honored. Under these circumstances, even governments mostly understood that coercive enforcement would have had too terrible a cost. De Soto documents well the repeated victories of squatters over governmental law. The missing ingredient was the need to accommodate pre-existing local arrangements. These are the source of the legitimacy governing current control of the property, and any new system can only be seen as legitimate if it incorporates and builds on the old. On this issue, the digital path has the huge advantage over the governmental one explained previously.

But what of the disadvantage? The Net is a purely non-coercive medium: it transmits only information – effectively speech – but cannot transmit force. Smart contracts can change their electronic records which claim to be about the world – such as title – but they cannot force the world to follow along (Friedman 2001).
Unlike government-based title transfer, these changes of title are not backed by a coercive enforcement apparatus. How may we compensate for this lack? For concreteness, in order to establish possibility, we propose here two complementary techniques, but we do not presume to foresee what the actual outcome of the market discovery process will be. This should be an area ripe for entrepreneurial invention.

Ratings

The issue of credibility does not require all title listings to have high credibility. Rather, it is adequate for distant traders, who have no knowledge of a particular governing village, to nevertheless have some reliable basis for judging the credibility of particular title listings. An obvious answer is to introduce another trusted intermediary institution into this market – title insurance. Once the digital path matures, this solution may be ideal, but as a way to get started it has a fatal problem – it requires a crippling up-front capital investment, in order to cover the massive potential liabilities.

Rather, another familiar form of intermediary may be adapted to this situation. Bond-rating agencies provide the market with an estimate of the likelihood that a business or government will actually meet the obligations represented by its outstanding bonds. The rating agency does not attempt to estimate what the price of the bond should be – that is left to the market, which takes these likelihoods into
account. A bond-rating agency does not put its money where its mouth is – it does not issue bond insurance to back its likelihood estimates. Rather, it backs its estimates with its reputation, which can become quite valuable, but is still cheaper than issuing insurance. Similarly, in our situation, we can imagine a market of third-party raters that post judgments of the likelihood that transfer of a given title will actually be honored (Stanley 2001, personal communication). Simply recording each village’s track record of honoring past title transfers, and assuming the future will be like the recent past, is a low-overhead procedure that is plausibly adequate. And it places each village in an iterated game with the system as a whole, providing it an incentive to treat these titles as legitimate claims, subjecting them to the local tradition’s means of enforcement. We can think of this as a credit report, not for an individual, but for a village and its system of local law.

Video contracts

But legitimacy is more than just preserving local arrangements, and it is more than just incentives or force. It is knowing in your bones what the right thing is, based on the kinds of evidence you are used to taking into account. One day a distant voice on the phone tells Bob to vacate his family home because, it claims, his late father Sam took out a loan from a distant bank Bob has never heard of. Bob believes that if this were true, Bob would have heard about it from Sam. The evidence presented by these disembodied strangers is impersonal electronic records Bob can hardly understand, much less verify. Perhaps Bob is told the veracity of these records can be verified by cryptographic means. Why should Bob believe them? Absent coercive power to evict, why should Bob even take them seriously? Why not just stay in his home?
Bob’s community does have power of various sorts over him, probably including the power to evict, if there is sufficient local consensus on the legitimacy of the claims of the outsider. However, Bob is a known member of the community. Why would they take an outsider’s word over his? The incentives produced by rating agencies are far too weak an answer.

Into this situation, let us introduce Szabo’s Video Contract (Szabo 1998). As Szabo explains, a contract is supposed to represent a meeting of minds, but complex lawyer-written contracts on paper no longer plausibly meet this standard. Verbal contracts often do at the moment of the handshake – both parties have just had a rich conversational interaction, full of all the conscious and subconscious cues we use to understand what each other understands – so they plausibly have a good sense of what they jointly mean to agree to. However, memory is fleeting, so people turned instead to paper to record agreements, trading away richness, sincerity, and vividness in order to get permanence. At the time it was a necessary trade. But no longer. Using video cameras, also rapidly dropping in price, contracting parties could now record their conversation about the contract’s meaning, and store it with the contract for future reference. Ideally, a title listing could store
the entire chain of videos for the chain of contracts by which the property changed hands – starting with the initial video interview at the time the initial title record was created.

The outsider could show, to both Bob and his community, the video of the conversation with Sam about the meaning of the contract. For most communities, given general confidence the video is not fake, seeing Sam explain clearly what rights he’s trading away will be enough to establish legitimacy. After all, that’s Sam talking, not some outsider making dry claims about Sam’s past intentions. Such recordings should also help overcome barriers of language and literacy. Open areas to explore include technological means to inhibit forgery (like time-stamping digital notaries), and how one presents the results of such integrity checks as evidence properly credible to non-computer people.
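One forgery-inhibiting technique mentioned above – a time-stamping digital notary – can be sketched briefly. This is our own simplified illustration: we stand in for the notary with a local log, where a real notary would sign its entries and keep them append-only.

```python
# A sketch of time-stamped integrity checking for a contract video: hash
# the recording at notarization time; any later alteration of the bytes
# changes the digest and so fails verification. The notary is simulated.

import hashlib

def digest(video_bytes):
    return hashlib.sha256(video_bytes).hexdigest()

notary_log = {}   # stand-in for a digital notary's append-only record

def notarize(video_bytes, timestamp):
    notary_log[digest(video_bytes)] = timestamp

def verify(video_bytes):
    # Returns the notarized timestamp, or None if the bytes were altered.
    return notary_log.get(digest(video_bytes))
```

This establishes only that the bytes existed at the notarized time; presenting that fact as evidence credible to non-computer people remains, as the text says, an open area.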
Limitations and hazards

The governmental path, having been previously navigated, has limitations, problems, and dangers we can anticipate. The digital path is untrod and mostly unknown, and subject to the blind spots of wishful thinking. In this section we take a first stab at some of the problems lurking ahead of us, but many more issues remain.

Incorporating human language, perception, and judgment

Many pre-existing and desired arrangements will not be expressible purely as smart contracts. Conventional contracts make use of the rich expressiveness of human language, perception, and judgment, all of which are vastly more subtle and sophisticated than any currently automatable alternative. Between the poles of fully human contracts and fully automated contracts is a spectrum of arrangements we call split contracts – the contract is split between automated and non-automated parts. The first smart contracting system, Amix (Miller 1999; Walker 1994), demonstrates some of the ways to design split contracts so the two parts can play together well, enabling us to take advantage of the strengths of both. For example, a split contract could consist of:
•	an automated game, adequate if a dispute does not arise;
•	natural language text expressing what the game could not, relevant only during a dispute;
•	an agreement on which person or institution should read the text and arbitrate the outcome of a dispute.
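The three-part structure above can be sketched as a data structure. This is our own illustration, not Amix’s design: the automated game settles the typical case, and the arbiter reads the text only once a dispute is declared.

```python
# A sketch of a split contract: an automated part for the ordinary case,
# natural-language text, and an agreed arbiter consulted only under
# dispute. The example clause and arbiter are hypothetical.

class SplitContract:
    def __init__(self, game_outcome, legal_text, arbiter):
        self.game_outcome = game_outcome  # result of the automated game
        self.legal_text = legal_text      # relevant only during a dispute
        self.arbiter = arbiter            # person/institution both sides accept
        self.disputed = False

    def declare_dispute(self):
        # From a game-centric perspective, this is just another move.
        self.disputed = True

    def outcome(self):
        if not self.disputed:
            return self.game_outcome          # typical, automated case
        return self.arbiter(self.legal_text)  # outcome of last resort

c = SplitContract(
    game_outcome="stock to Alice",
    legal_text="If the harvest fails, the sale is void.",
    arbiter=lambda text: "sale void per clause: " + text,
)
```

As the following paragraph notes, which part is the “real” contract depends on whether one takes the game-centric or the paper-contract-centric perspective.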
From a game-centric perspective, the ability to declare the outcome to be in dispute is only another move in the game, and the arbiter is only another player. From a paper-contract-centric perspective, the text as interpreted by the arbiter is the outcome of last resort, and so is the “real” contract – the game is only a lighter-weight approximation for typical non-disputed cases.
Another form of split would be by layers. Whereas the logic of a mortgage game may be fully automated, the rights being put up as collateral are limited to those held by the original property holder according to the governing village’s law. The contract needed to represent these latter may often remain mostly non-automated. In Figure 3.7, contract host 1, as issuer of these locally defined rights, would provide Fred as well with the text or video, and the identity of the agreed arbiter. Fred would then take these into account in assessing the value of Alice’s chair – the rights Alice would like to use as collateral.

Regulatory capture versus regulatory arbitrage

Historically, the growth of widely trusted large-scale institutions in the West – and the corresponding partial concentration of economic activity into the trust backbone – made possible the rise of the large regulatory state. This concentration, despite its benefits, also dramatically lowered the cost of regulation, as there were far fewer places in the economy that needed monitoring, such as the banks. These concentrations made economic activities of various sorts subject to regulatory capture.

Although the Net has allowed the consumers of electronic goods and services to escape limitations of geography and jurisdiction, so far it has not provided the same escape for producers, especially those with worldwide name recognition. These remain tied to some government, and subject to its decrees. On de Soto’s governmental path, the informals come to be dependent on the integrity of their own governments, in danger of local regulatory capture. On the digital path, by having their contracts rely on the trustworthiness of First-World trust hubs, have we not just transferred this vulnerability from their own governments to those of the First World, over which they have even less influence?

The Net treats censorship as damage and routes around it.
(John Gilmore)

The nature of the dangers depends on the nature of the architecture (Lessig 1999). The architectures of the first generation of electronic media – radio and television – amplified censorship and diminished free speech (de Sola Pool 1984). The architecture of the Net has dramatically turned this around, creating actual freedom of speech more absolutely than even the best constitutions. Might a decent architecture for distributed smart contracting treat regulation as damage and route around it?

The most powerful answer is already implicit in the architecture of the digital path – a diversity of contract hosts, spread across competing jurisdictions, themselves competing to establish a reputation for operating honestly. Any one government going bad would endanger many contracts and much property, but will cause a flight of electronic business towards climates expected to remain freer.
The field of fault-tolerant computing studies how to build reliable systems from unreliable components. For example, for certain demanding applications an individual actual computer may be considered unreliable, but a reliable virtual computer may be synthesized from several actual computers by comparing the outcome of each step in a kind of voting process. Due to dangers of regulatory capture as well as internal corruption, an actual First-World trust hub may be considered an analogously unreliable component. From a set of these we may synthesize a reliable virtual trust hub in a variety of ways, such as a voting protocol in which a quorum of, let us say, five out of seven actual contract hosts have to agree on an outcome in order for it to be considered an outcome of the synthetic virtual contract host, and in order for that game’s issuers to honor the outcome. These issuers themselves can be virtual reliable issuers in this same sense (Szabo 1999).

Making such technologies work is tricky, so we should not try to achieve transjurisdictional fault tolerance before we get started, but we should also make sure not to paint ourselves into a corner – we need to understand how a simpler working system could incrementally grow to support such fault-tolerant protocols.
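The quorum idea – five of seven actual hosts must agree before an outcome counts as the virtual host’s outcome – reduces to a short voting function. This sketch of ours shows only the vote-counting step, not the cryptographic protocol a real system would need.

```python
# A sketch of synthesizing a reliable virtual contract host from
# unreliable actual hosts: an outcome stands only if a quorum of the
# hosts report it. Outcome strings here are illustrative.

from collections import Counter

def virtual_outcome(host_outcomes, quorum=5):
    """host_outcomes: the outcome each actual contract host reports.
    Returns the agreed outcome, or None if no quorum is reached."""
    winner, votes = Counter(host_outcomes).most_common(1)[0]
    return winner if votes >= quorum else None

# Seven hosts, one of which (captured or corrupt) reports differently:
reports = ["paid", "paid", "paid", "paid", "paid", "defaulted", "paid"]
```

A single captured host cannot change the virtual outcome; only a coordinated failure of several jurisdictions at once could.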
Why the Third World first?
This new world of Net-based, jurisdiction-free, coercionless smart contracting – the digital path – is an option for the First World as well as the Third. Both groups stand to gain tremendously by this transition. Virtually all progress to date towards the digital path (Johnson and Post 1996; Krecké, Chapter 14, this volume; Lessig 1999) has been made in the First World. Nevertheless, once technology costs become inconsequential, we expect the Third World to overtake and then lead the First in making this transition. Why?
Comparative legitimacy
Primarily because, once again, of the issue of legitimacy. The character of legitimacy in the First World is quite different from the legitimacy we have been discussing among the Third World’s informals. First-World societies, having made the transition to the governmental path long ago, have enjoyed great wealth, but have paid a subtle price in flexibility. In most rich First-World countries the issue of legitimacy is inextricably coupled to legality. For a business in these countries to be judged legitimate by the culture, and for others to consider their dealings with this business legitimate, the business must be legal – it must operate within the formal legal system. By contrast, among the informals the formal legal system has no monopoly on legitimacy – many extralegal institutions enjoy widespread popular legitimacy. As explained above, a new system of law will only be seen as legitimate if it accommodates, incorporates, and builds on pre-existing systems of legitimacy. The pre-existing formal and informal systems each present a very different kind of great complexity, and the effort to accommodate this complexity – to express these rules in the language of smart contracts – can seriously impede these transitions.
The digital path 85
Among the informals the great complexity comes from the sheer number of local arrangements that need to be expressed. The people’s law of each individual village, being largely unwritten, may be simpler than formal law. More important, the very informality of these systems allows them to compromise. As long as an adequate spirit of the law is uploaded, imperfect expressions can often be judged good enough. This lets the transition get started incrementally, village by village, and imperfectly. By contrast, although there are far fewer separate systems of formal law, each is a vast growth of complexity that no one even pretends to understand. However, because of the formality with which this law is administered, and the absence of competitive pressures, this system brooks no compromise except through politically driven change. This is a high enough bar that a smart contract system, to be legitimate by this standard, might emerge so slowly as to be a non-issue.
Homogenization costs
The costs of rule homogenization discussed above, to be paid on the governmental path, have already been paid and largely forgotten in the First World. The First World has already lost this great source of diversity, so the digital path’s option to avoid paying these costs is not a selling point there.
Cell phones in eastern Europe
Cell phones first became society-wide hits in poor countries with terrible telecommunications, not in rich high-tech societies. (My own observations of Prague versus Silicon Valley in 1998 corroborate this – cell phones were everywhere in Prague.) For societies without a working phone network, cell phones offered a huge advantage over the prior situation. They offered a much smaller improvement in societies where the land lines work. Those without an adequate prior system were able to leapfrog more quickly to a better system.
There is no necessary sequence of telecommunication systems that each society must separately recapitulate. Likewise, the informals have no access to working global networks of trust and commerce. The digital path offers them tremendous new opportunities. In the West, it provides a smaller improvement, and an improvement over a system many consider imperfect but adequate.
Saving the First World
Should the Third World be the first to succeed at the digital path, and should this in fact unleash its potential capital, causing markets to bloom and creating vast wealth, how would this affect the First World? Formal laws in the First World do change under political pressure. One of the more effective sources of pressure comes from those who stand to gain from large-scale commerce with the rest of the world. Once a significant part of the world’s economy
is occurring on the digital path, First-World businesses would face a choice – trade with these networks or stay legal-legitimate. This is an unpleasant choice both for them and for their governments. The pressures will be great to legalize trade with these jurisdiction-free networks of commerce. Once such unregulatable trade is made legal, the dam will have burst. What will be the character of the resulting world?
The rule of law and not of men
Surprisingly perhaps, the character of the digital path may best be described as a pure form of the classical liberal ideal – the rule of law and not of men. Indeed, the digital path could realize the meaning of those words more literally than anything the original classical liberals could have conceived. This is not just a cheap play on words. The ideal they described was a neutral, simple framework of rules, enforced impartially and justly, providing for cooperation without vulnerability – protecting individuals from each other while enabling them to cooperate with each other. A key means of enabling cooperation was the original right of contract, under which almost any mutually acceptable arrangement could be made binding, with the law serving as the mutually trusted intermediary for securing the arrangement. With smart contracts, the encoded rules themselves become the logic of their own enforcement, subject only to the honesty, not the judgment or skill, of a diverse market of competing contract hosts. This competition forms a vastly stronger and fully decentralized system of checks and balances. The Third World could rise on an enhanced version of the principles on which the West grew rich.
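The claim that the encoded rules become the logic of their own enforcement can be made concrete with a toy sketch. Everything below (the class, the parties, the states) is invented for illustration and is not drawn from any actual contract-host system described in this chapter: the point is only that funds move as the rules dictate, with no discretion left to either party or to the host.

```python
class EscrowContract:
    """Toy contract object hosted by a contract host: the encoded rules
    alone determine when funds are released, not anyone's later wishes."""

    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.state = "funded"

    def confirm_delivery(self, caller):
        # Only the buyer, and only while funds are held, can trigger payment.
        if self.state == "funded" and caller == self.buyer:
            self.state = "paid"
            return ("release", self.seller, self.amount)
        raise PermissionError("rule violated: no discretion to override")

    def refund_timeout(self, caller, timed_out):
        # Anyone may invoke the timeout rule, but it fires only if due.
        if self.state == "funded" and timed_out:
            self.state = "refunded"
            return ("release", self.buyer, self.amount)
        raise PermissionError("rule violated: timeout not reached")
```

A dishonest host could still refuse to run the rules as written, which is exactly the residual threat the quorum-of-hosts arrangement discussed earlier is meant to address.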
Acknowledgments These ideas have formed over much time and many valuable conversations, for which we thank Darius Bacon, Jack Birner, Greg Burch, K. Eric Drexler, Charles Evans, Robert Gerrard, John Gilmore, Michael Glenn, Ian Grigg, Robin Hanson, Doug Jackson, Ken Kahn, Don Lavoie, Ted Nelson, Zooko (Bryce Wilcox-O’Hearn), Gayle Pergamit, Chris Peterson, Jonathan Shapiro, Terry Stanley, Nick Szabo, E.-Dean Tribble, Bill Tulloh, Ka-Ping Yee, and the members of the e-lang mailing list.
References Beaver, D. and Wool, A. (1998) “Quorum-based secure multi-party computation,” in Lecture Notes in Computer Science, Springer Verlag. Online. Available HTTP:
(accessed May 2001). de Sola Pool, I. (1984) Technologies of Freedom, Cambridge, MA: Harvard University Press. de Soto, H. (1989) The Other Path, New York: Harper and Row.
de Soto, H. (2000) The Mystery of Capital, New York: Basic Books. Chapter 1 online. Available HTTP: (accessed May 2001). Friedman, D. (2001) “Contracts in cyberspace,” The Berkeley Law and Economics Working Papers vol. 2001, no. 2. Online. Available HTTP: (accessed Feb 2003). Fukuyama, F. (1995) Trust: The Social Virtues and the Creation of Prosperity, New York: Free Press. Granovetter, M. (1973) “The strength of weak ties,” American Journal of Sociology 78: 1360–80. Hardy, N. (1985) “The KeyKOS architecture,” Operating Systems Review September: 8–25. Updated version online. Available HTTP: (accessed May 2001). Hayek, F. (1937) “Economics and knowledge,” Economica (reprinted in J.M. Buchanan (ed.) Essays on Cost, LSE (1973), Weidenfeld and Nicolson). Online. Available HTTP: (accessed May 2001). Hayek, F. (1978) New Studies in Philosophy, Politics, Economics and the History of Ideas, Chicago: University of Chicago Press. Hewitt, C., Bishop, P., and Steiger, R. (1973) “A universal modular actor formalism for artificial intelligence,” International Joint Conference on Artificial Intelligence (August), San Francisco: Morgan Kaufmann, pp. 235–45. Johnson, D.R. and Post, D.G. (1996) “Law and borders – the rise of law in cyberspace,” Stanford Law Review 48: 1367. Online. Available HTTP: (accessed May 2001). Lessig, L. (1999) Code, and Other Laws of Cyberspace, New York: Basic Books. Excerpts online. Available HTTP: (accessed May 2001). Miller, M.S. (1999) “Observations on AMIX, the American information exchange.” Online. Available HTTP: (accessed May 2001). Miller, M.S., Morningstar, C., and Frantz, B. (2000) “Capability-based financial instruments,” Proceedings of Financial Cryptography 2000, Springer-Verlag. Online. Available HTTP: (accessed May 2001). Rees, J. (1996) “A security kernel based on the lambda-calculus,” MIT AI Memo No. 1564, MIT, Cambridge, MA. Online. Available HTTP: (accessed May 2001). Shapiro, J.S.
(1999) “EROS: A capability system,” unpublished Ph.D. thesis, University of Pennsylvania. Online. Available HTTP: (accessed Feb 2003). Szabo, N. (1997) “Formalizing and securing relationships on public networks,” First Monday 2(9). Updated version online. Available HTTP: (accessed May 2001). Szabo, N. (1998) “Video contracts.” Online. Available HTTP: (accessed May 2001). Szabo, N. (1999) “Secure property titles with owner authority.” Online. Available HTTP: (accessed May 2001). Tribble, E.D. and Farmer, R. (1991) derived from work done for AMIX, the American Information Exchange, Palo Alto, CA.
Tribble, E.D., Miller, M.S., Hardy, N., and Krieger, D. (1995) “Joule: Distributed application foundations.” Online. Available HTTP: http://www.agorics.com/joule.html (accessed May 2001). Vinge, V. (1984) True Names, Bluejay Books. Online. Available HTTP: (accessed Feb 2003). Walker, J. (1994) “Understanding AMIX,” in J. Walker (ed.) The Autodesk File, 4th edn. Online. Available HTTP: (accessed May 2001).
Part II
Some history
4
High-tech Hayekians Don Lavoie, Howard Baetjer, and William Tulloh, with comments by Howard Baetjer, Marc Stiegler, and Pietro Terna
Original text of “Prefatory note: the origins of ‘The Agorics Project’ ” by Don Lavoie published in Market Process (1990) Vol. 8, Spring, pp. 116–19. Readers of this journal are accustomed to cross-disciplinary explorations from economics to a number of other sciences, but until now, to my knowledge, there have not been any attempts to communicate with the field of computer science. In September of 1989 at George Mason University there began something that is being called the “Agorics Project” in which graduate students from the Artificial Intelligence Laboratory at GMU’s Department of Computer Science joined several economists at the Market Processes Center to investigate a number of topics of mutual interest. The name “agorics” is borrowed from some research that moves in the opposite direction across that disciplinary boundary, trying to use economics in computer science, but the aim of our group is to explore some ways that economics might benefit from looking into certain developments in computer science. The accompanying article is the product of several months of conversations among computer scientists and market process economists who have begun to delve into the five distinct research areas the article describes. The substance of the topics we have been discussing is summarized there, but in this note I would like to supply a bit of historical background that might help to explain how it happened that several of us here at the Market Processes Center have suddenly found ourselves looking into these new research areas. I will tell the story in a rather autobiographical manner, not so much to try to claim all the credit for bringing computer science issues to the attention of market process economists as to try to share the blame. 
Although I had always wanted to find a way to take advantage of my background in computer science, it was only as a result of numerous discussions I have had with the various individuals I will be mentioning that I finally decided to return to the computer field. Only time will tell what gains may result from this sort of intellectual interchange. Computers are my first love. The field of computer science in which I have been the most interested is artificial intelligence. When I was first introduced to computers while still a high-school student, I was trying to design a program to play poker. As an undergraduate majoring in computer science at Worcester Polytechnic Institute, I did a senior
project with two other students that produced a program to “simulate” the process of music composition. Based on discrete event techniques, the program wrote “music” step by step according to the rules of the Baroque fugue. I presented the results with my co-authors at a professional meeting of computer scientists, who seemed to like it, but we were acutely aware that if it had been a meeting of musicians it would not have been such a hit. Whether trying to play card games or compose music on computers, the experience of trying to simulate what human minds do was humbling. There is probably no better way to realize the serious limitations of computers and the amazing subtleties of human intelligence than to attempt to reproduce the thinking process. It was easy to get the programs to mechanically obey specific rules, but it was something else to try to replicate Bach. While at college, I first became interested in economics. Ever since, I have wanted to find a way to legitimately use computers in economics. My attraction to the market process school came from the way it addresses the questions about the nature of human knowledge which I had been thinking about from the standpoint of artificial intelligence. Most of the ways computers have been used by economists have not interested me in the slightest, because of the way they treat action as purely mechanical. But I was not convinced that computers were necessarily “mechanistic” in the relevant sense. I began to explore the possibility of designing a non-mechanistic kind of simulation with which market process economics could be comfortable. My exposure to discrete event simulation techniques as an undergraduate suggested to me that such aspects of choice as creativity and divergent perceptions, which have not been modeled well by mainstream economics, might be “simulatable” with such methods.
It might be possible to construct computer simulations that could be used as mental experiments for developing market process theory. When I began graduate school in economics at New York University I continued to think about the possibility of such computer simulations, and intended to write my dissertation on the topic. The seminar I gave on the idea to the weekly Austrian colloquium at NYU was not exactly met with enthusiasm, but it was greeted with enough openness and tolerance to encourage me to keep thinking about it. I continued working on it for about a year, getting so far as to design the main structure of the program. Eventually, however, I talked myself out of the project. I was sufficiently concerned that I would get nothing useful out of it that I decided not to risk my Ph.D. on it, and chose a less risky topic in the history of thought. (The first economics paper I had written was for a computer science class arguing, rather unpersuasively, why computers could not answer Mises’s 1920 challenge to socialism, and I had always wanted to make that case more complete.) So until recently, computer programming has been merely the way I worked myself through graduate school, and has had nothing to do with my economics research. A few years later, as a more secure, soon-to-be-tenured faculty member at GMU, I began thinking about the idea again. I started discussing it with a graduate student, Ralph Rector, who also had computer programming experience, and who took an immediate interest. We had regular sessions for a couple of
months, exploring some of the possibilities as well as the apparent obstacles and methodological objections. As a result of these discussions my thinking advanced considerably about how such simulations might be constructed. For the first time, I had found someone to talk with at length about the kind of project I had in mind, and I was reinforced in my belief that in principle it might be feasible. Ralph developed some of the main components of an algorithm for decision-making under radical uncertainty, and our discussions helped to resolve several of the conceptual difficulties I had originally worried about. Yet I still had serious doubts, and never really turned my research back in the direction of computer science. Although Ralph had not really given up, I ended up talking him out of the simulation project on the grounds that it was too high a risk for a dissertation topic. Now I am rejuvenating the idea for a third time. Above all I have one of my graduate students, Bill Tulloh, along with two computer scientists from Palo Alto, Mark Miller and Eric Drexler, to thank for getting me back into computer science in general, and the simulation idea in particular. Bill became interested in the Miller and Drexler articles as a result of his explorations of recent developments in the study of complex phenomena and self-ordering systems, an interest stemming from a paper he wrote for Jack High’s graduate course in market process economics. Bill alerted me to two path-breaking papers by Miller and Drexler, which used Hayekian ideas to attack certain classic problems within computer science. The understanding these software engineers exhibited of Hayek seemed to me to be far more sophisticated than that of most economists, and the number of ways in which contemporary computer science appeared to be closely related to Hayekian themes was astonishing.
Bill also introduced me to dozens of interesting articles in Artificial Intelligence and other areas, which renewed my interest in my first field of scholarship. He also introduced me to Bob Crosby, secretary of the Washington Evolutionary Systems Society, whose network of contacts and enthusiasm for our work have helped to further fuel this interest. A group of graduate students at the Center, including Ralph Rector, as well as Howard Baetjer, Fred Foldvary, and Kevin Lacobie, were also impressed by Miller and Drexler’s work. I wrote an enthusiastic letter to the co-authors, who responded in kind, and suggested that a few of us come out to Silicon Valley to visit them. They also introduced us to a third person, Phil Salin (see his article on “The Ecology of Decisions” elsewhere in this issue of Market Process). Salin had introduced Miller and Drexler to Hayek. I began a series of lengthy phone conversations with the three of them, and decided to take them up on their offer to visit so that we could talk over the simulation idea, as well as several other possible research topics their papers hinted at. It turned out that Ralph was already far along on another dissertation topic and unable to join us, but I was able to bring Bill Tulloh and Howie Baetjer with me. The trip was co-sponsored by the Market Process Center and the three hosts, Miller, Drexler, and Salin. The trip was, in our judgment, a resounding success. Indeed, I think I can say that Howie, Bill and I had one of the most intellectually stimulating experiences of our lives.
The accompanying paper is a product of that stimulation. The literature the paper surveys was mostly unearthed by Bill, who almost single-handedly wrote the bibliographic appendix and collected the references. The main body of the text was mostly put together by Howie and myself. The underlying ideas, as we point out, are mostly those of our gracious hosts, as reinterpreted in our own words. Although as a result of our remarkable visit I have added several new research areas to my interest in the computer idea, they have not supplanted but only reinforced that original interest. The last of the five possible research topics described in the co-authored article sums up this idea of using computer simulations as mental experiments for economic theory. Although building such simulations for the purpose of doing economic theory was certainly not the point of the Miller and Drexler papers, their work has obvious implications for this kind of research, and it has, for me, put a whole new slant on the simulation idea. Many of the reasons that had twice dissuaded me from attempting to develop the simulations no longer seem so daunting. For one thing, Artificial Intelligence (AI) research has matured a great deal since the days I studied it in college, and the techniques being used now in machine learning seem much more promising than anything I had known about in my earlier flirtations with the simulation idea. Conversations I have had with Mark Miller after our trip have convinced me that it is an idea worth exploring more seriously. So we have begun to do just that. The Agorics Project, an informal discussion group that is looking into ways computer science might be useful to market process economics, was launched at the Center for the Study of Market Processes in the fall 1989 semester.
The team has so far been composed of Howard Baetjer, David Burns, Kevin Lacobie, Kurt Schuler, Bill Tulloh, and myself, of the GMU Economics Department, and Hugo de Garis and Pawel Stefanski, of the Department of Computer Science. We have had on our discussion agenda all five of the topics mentioned in the co-authored article, though so far the one we have talked about the most is the simulation idea. Two distinct programming simulation projects aimed at the replication of a Mengerian process for the evolution of money have been initiated by the team, one by de Garis and myself, and one by Stefanski and Baetjer. Although these simulations are still at a rather rudimentary stage, we have been encouraged by the results – and even more by the process. The process of trying to design programs that articulate the logical structure of our economic theories has been a wonderfully educational experience. We don’t feel so much that we are learning computer science, although that must be happening too, but that we are learning Mengerian economics. The group is continuing to meet on an informal basis throughout the spring 1990 semester and this summer, and in the spring of 1991 I will be teaching a new special topics graduate course on “Economics and Computer Science,” in the hopes of investigating the various topics more systematically. We welcome any suggestions our readers may have as we begin to explore the possible gains from intellectual exchange in this new form of interdisciplinary research.
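The team's actual Mengerian simulations are not reproduced here, but the shape of such a program can be suggested with a deliberately minimal sketch. Everything in it (the parameters, the belief-update rule, the acceptance probability) is an assumption of this illustration, not a description of the de Garis–Lavoie or Stefanski–Baetjer projects: the point is only how a common medium of exchange can emerge from decentralized trades.

```python
import random

def mengerian_money(n_agents=200, n_goods=5, rounds=3000, seed=1):
    """Toy Mengerian process: agents accept an offered good with
    probability proportional to how saleable they believe it is, and
    observed acceptances raise that belief, so one good can snowball
    into a common medium of exchange."""
    rng = random.Random(seed)
    # Each agent's estimate of each good's saleableness, initially equal.
    beliefs = [[1.0] * n_goods for _ in range(n_agents)]
    for _ in range(rounds):
        a, b = rng.sample(range(n_agents), 2)  # random trading pair
        offered = rng.randrange(n_goods)       # good a offers to b
        if rng.random() < beliefs[b][offered] / sum(beliefs[b]):
            # Acceptance makes the good look more saleable to both parties.
            beliefs[a][offered] += 1.0
            beliefs[b][offered] += 1.0
    avg = [sum(bel[g] for bel in beliefs) / n_agents for g in range(n_goods)]
    return avg.index(max(avg))  # index of the emergent money-good
```

Which good wins depends on early accidents of acceptance, which is just the path-dependence one would expect from a Mengerian account; the interesting result is that some good reliably pulls ahead at all.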
Original text of “High-tech Hayekians: some possible research topics in the economics of computation” by Don Lavoie, Howard Baetjer, and William Tulloh published in Market Process (1990) Vol. 8, Spring, pp. 119–46. In a thoroughly intriguing set of papers recently published in a book edited by B.A. Huberman entitled The Ecology of Computation, two computer scientists, Mark S. Miller and K. Eric Drexler, have made major advances in what might be called the Economics of Computation, a new, largely unexplored subdiscipline on the interface of economics and computer science. What makes this research especially interesting to several of us at the Center for the Study of Market Processes is that the kind of economics Miller and Drexler are using to overcome their own problems in computer science is “our kind.” This was no accident. It turns out that Miller and Drexler were specifically alerted to the economic writings of Hayek by Phil Salin. Reading Miller and Drexler’s papers, we were struck by the depth of understanding these computer scientists have of our literature, especially the work on property rights economics and the spontaneous order theory of F.A. Hayek. The success of these pioneering papers suggests that there may be some interesting research possibilities in computer science for the application of market process ideas. In August of 1989, the three of us made a trip to Palo Alto to visit Miller and Drexler at the Xanadu Operating Company and the American Information Exchange Corporation, or “AMIX,” the two software design companies where Miller and Salin work, and for whom Drexler is a consultant. We were extremely impressed. Our time in Palo Alto was so replete with ideas, discussion and new insights that we can hardly begin to summarize them all. We came hoping to find one or two possible research topics. We returned with too many possible topics for the three of us to handle.
Throughout the visit the conversations always involved Hayekian ideas, but at the same time they almost always involved computer science. Miller, Drexler, Salin et al. consider market process ideas to be of enormous practical usefulness in the design of commercial software. In this note we would like to introduce the reader to the most important ideas we encountered, and indicate why we think market process economists should look into this fascinating work. In all, we spoke to some twenty to thirty young computer scientists who are energetically applying market process insights to some very ambitious software ventures. They are not primarily academics, but they have academic interests. These academic interests, however, are focused on practical, real-world applications. How shall we describe this group? Each of them seems to be a rich brew of scientist, entrepreneur, programmer, and philosopher. Each has a firm understanding of market principles and a deep appreciation for the general social principle of non-intervention, for reasons to be described shortly. The organizations they are involved in are start-up companies that are developing software to help people manage information in ways that enable knowledge to evolve more rapidly. We visited them in their homes and in their offices. The animation of our discussions rarely slackened, even in the cars on the way to restaurants.
The three people most intensely involved with market process ideas are as follows:
Mark S. Miller: software engineer formerly with Xerox Palo Alto Research Center, flowing fountain of ideas and enthusiasm, co-author of four of the essays in the Huberman book, and now chief architect for Xanadu. Mark has derived many insights about how to design computer programs from studying Hayek.
K. Eric Drexler: formerly with the artificial intelligence laboratory at MIT, now with the Department of Computer Science at Stanford University, and author of Engines of Creation: The Coming Era of Nanotechnology, an investigation of the nature and implications of startling advances in technology toward which we are moving. Nanotechnology refers to the next step beyond microtechnology: the ability to achieve technological control over matter at the molecular level. With serious interests ranging from molecular engineering and nanomachinery to Austrian economics, Drexler defies categorization in customary terms. He might best be described as a 1990s version of a Renaissance thinker. He assists with the Xanadu design work, and with his wife Chris Peterson runs the Foresight Institute, an organization devoted to preparing the world for nanotechnology.
Phil Salin: an entrepreneur, formerly in the telecommunications and space transportation industries and now on Xanadu’s board of directors. He came to see the importance of Hayek’s thought for understanding the business world. He was instrumental in bringing the Xanadu group to Palo Alto.
We also got the chance to visit with Ted Kaehler, a software engineer with Apple Computer Corporation and an expert on the field of Artificial Intelligence known as “machine learning,” especially the approach called “neural networks.” Ted is very interested in using market principles in the development of AI systems.
We also spoke with Marc Stiegler, vice-president of Software Engineering at Xanadu and a professional writer of science fiction. He brings to Xanadu a constant reminder of the practical steps that need to be taken on a daily basis to reach the company’s ambitious goals. We spoke to many other programmers on the Xanadu and AMIX teams. All shared an expectant outlook on the future, for computer science and for society. And yet these individuals are software engineers focused on practical concerns. They are not just speculating, but busy with the details of building systems that are immensely practical, however astounding they may be. These individuals are coauthors with us, in a sense, of this paper, since the whole is a summary of our discussions with them. The development of the research ideas outlined below was very much a team effort. As usual with such efforts, the total output exceeds the sum of the inputs. We are grateful to have been on the team. It would be useful to distinguish five different kinds of research areas that our discussions hit upon, within each of which we found several specific possible projects. Any given project may easily overlap these research areas, but it will help to keep them distinct, separating them by what each takes as its primary focus.
Process-oriented case studies of the computer industry As with most media from which things are built, whether the thing is a cathedral, a bacterium, a sonnet, a fugue or a word processor, architecture dominates material. To understand clay is not to understand the pot. What a pot is all about can be appreciated better by understanding the creators and users of the pot and their need both to inform the material with meaning and to extract meaning from the form. There is a qualitative difference between the computer as a medium of expression and clay or paper. Like the genetic apparatus of a living cell, the computer can read, write and follow its own markings to levels of self-interpretation whose intellectual limits are still not understood. Hence the task for someone who wants to understand software is not simply to see the pot instead of the clay. It is to see in pots thrown by beginners (for all are beginners in the fledgling profession of computer science) the possibility of the Chinese porcelain and Limoges to come. (Alan Kay 1984: 53)
One type of research project which frequently came up was simply to use the tools of market process economics to examine the specifics of the computer industry. Such studies would be in keeping with the overall thrust of the research combining theoretical and empirical study that has been going forward at the Market Processes Center over the last several years. Not the least important reason economics might study computation is that a growing and increasingly important part of the economy consists of the computer industry. One example of the kind of topic we have in mind here is considering software as capital. It is a cliché that we are entering the information age, and much of the capital “equipment” that counts today is in the form of instructions to computers. Software is a special kind of capital good with some economically interesting characteristics. Market process economics, since it concentrates on the way knowledge is used in society, may in fact find this industry especially intriguing. Increasingly computers play a central role in the process of knowledge transmission. Studying software as capital may help us illuminate the market process approach to capital theory. Few kinds of tools today are more important than software, for software increasingly directs our “hard” tools. But software does not fit the assumptions made about capital goods in mainstream theory. It is reproducible at negligible cost; its use by one agent does not preclude its use by another; the major costs associated with it are information costs (often neglected in neoclassical theorizing); it does not wear out when used. Market process economics emphasizes that capital is a complex structure, involving time and uncertainty, and it views markets as disequilibrium processes. The capital goods we call software are especially heterogeneous; not only do different programs accomplish entirely different tasks, but they are written in many different, incompatible languages. 
The patterns of complementarity of software-capital are exceptionally complex: most programs can run only on certain kinds of machines, performing tasks useful only in certain kinds of endeavors. A given CAD–CAM (computer aided design–computer assisted manufacture) system, for example, may require a particular computer with particular specifications, and a particular manufacturing device. In the software industry the detailed timing of events is particularly important. Change is so rapid that a product not released in time is doomed, and capital destruction, through the advance of knowledge, occurs constantly. Old programs and machines created at great cost and bought at great price become obsolete in months.

Conceiving of software as capital goods calls into serious question the common treatment of capital in economic models as accumulating solely out of saving. In many growth models, for instance, new capital in a given period is defined as the unconsumed product of the previous period. This model implies that the most important thing new capital requires is previous physical output. It ignores what the Austrian view emphasizes: the role of knowledge and creativity. Beer barrels and blast furnaces are more than the saved wood and steel from a previous period. They are additionally the embodiment of careful plans and much accumulated knowledge. The new capital is a combination of the physical wood and steel with this essential knowledge.

With software, the knowledge aspect of capital reaches its logical extreme. Software has virtually no physical being. It is essentially a pattern of on and off switches in the computer. As such, it is pure knowledge. No physical output from a previous period is required to produce it, except perhaps for the Coca-Cola (aka “programming fluid”) and Twinkies the software engineer consumes as he generates the code. The knowledge and creativity of the programmer are crucial.

Other studies that fall within this research category include studies of the evolution of standards and of the development of interoperability. What becomes an industry standard is of immense importance in determining the path an industry takes.
As, say, some particular operating system appears to be outstripping others in popularity, as MS-DOS did in the microcomputer industry, for example, the first versions of new programs tend to be written for that operating system, to improve their chances of acceptance. This is often the case whether or not that operating system is really the best available for the program’s purpose. In this way, as software developers bet on an uncertain future, industry standards develop. “Interoperability” means making it possible for software that is built to one standard to be used with another standard via “software adapters.” With interoperability, now available in limited degree, software developers have less at stake in using some less-preferred programming language or operating system that better suits their needs. Decisions to choose on the merits rather than by projections of others’ preferences will tend to influence the evolution of industry standards. What are the economic causes and consequences of the evolution of a particular industry standard? What would be the economic consequences of more widespread interoperability? These are interesting questions worthy of research. In sum, studying the computer industry seems likely to inform our understanding of economic processes from a new, illuminating perspective.
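The snowballing standards process described above is easy to explore in a toy model. The following sketch is a simple Polya-urn style simulation, and is purely illustrative: the labels, the share-proportional adoption rule, and all numbers are assumptions made here for the example, not a claim about how MS-DOS actually won.

```python
import random

# Toy path dependence in standards adoption: each new developer tends
# to write for whichever operating system already looks more popular,
# so small early leads can snowball into an industry standard.

def adoption_run(developers=1000, seed=None):
    rng = random.Random(seed)
    shares = {"DOS-like": 1, "rival": 1}  # one early adopter each
    for _ in range(developers):
        total = sum(shares.values())
        # probability of choosing a standard rises with its current share
        if rng.random() < shares["DOS-like"] / total:
            shares["DOS-like"] += 1
        else:
            shares["rival"] += 1
    return shares

# Different random histories can lock in different winners, even though
# the two standards start out identical on the merits.
print(adoption_run(seed=1))
print(adoption_run(seed=4))
```

The point of such an exercise is precisely the one made in the text: which standard wins depends on the accidents of early adoption, not only on which standard is "really the best."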
Information technology and the evolution of knowledge and discourse

Knowledge evolves, and media are important to the evolution of knowledge. Hypertext publishing promises faster and less expensive means for expressing new ideas, transmitting them to other people, and evaluating them in a social context. Links, in particular, will enable critics to attach their remarks to their targets, making criticism more effective by letting readers see it. Hypertext publishing should bring emergent benefits in forming intellectual communities, building consensus, and extending the range and efficiency of intellectual effort.

(Drexler 1987: 16)
Both the Xanadu and AMIX groups pay much attention to evolutionary processes, which is one reason Hayek’s work is so attractive to them. They are particularly interested in the evolution of knowledge. One of the crucial questions which they focus on is the evolution of ideas in society. In this regard, the hypertext publishing system under development at Xanadu is of great importance. It will, they hope, provide a market for information of a scientific and scholarly kind that will more closely approach the efficiency standards of our current markets for goods and services.

The market for scholarly ideas is now badly compartmentalized, due to the nature of our institutions for dispersing information. One important aspect of the limitations on information dispersal is the one-way nature of references in scholarly literature. Suppose Professor Mistaken writes a persuasive but deeply flawed article. Suppose few see the flaws, while so many are persuaded that a large supportive literature results. Anyone encountering a part of this literature will see references to Mistaken’s original article. References thus go upstream towards original articles. But it may be that Mistaken’s article also provokes a devastating refutation by Professor Clearsighted. This refutation may be of great interest to those who read Mistaken’s original article, but with our present technology of publishing ideas on paper, there is no way for Mistaken’s readers to be alerted to the debunking provided by Clearsighted. The supportive literature following Mistaken will cite Mistaken but either ignore Professor Clearsighted or minimize her refutations. In a hypertext system such as that being developed at Xanadu, original work may be linked downstream to subsequent articles and comments.
In our example, for instance, Professor Clearsighted can link her comments directly to Mistaken’s original article, so that readers of Mistaken’s article may learn of the existence of the refutation, and be able, at the touch of a button, to see it or an abstract of it. The refutation by Clearsighted may similarly and easily be linked to Mistaken’s rejoinder, and indeed to the whole literature consequent on his original article. Scholars investigating this area of thought in a hypertext system would in the first place know that a controversy exists, and in the second place be able to see both (or more) sides of it with ease. The improved cross-referencing of, and access to, all sides of an issue should foster an improved evolution of knowledge. A potential problem with this system of multidirectional linking is that the user may get buried underneath worthless “refutations” by crackpots. The Xanadu system will include provisions for filtering systems whereby users may choose their own criteria for the kinds of cross-references to be brought to their attention. These devices would seem to overcome the possible problem of having charlatans
clutter the system with nonsense. In the first place, one would have to pay a fee for each item published on the system. In the second place, most users would choose to filter out comments that others had adjudged valueless and comments by individuals with poor reputations.1 In other words, though anyone could publish at will on a hypertext system, if one develops a bad reputation, very few will ever see his work.

Another difficulty of today’s paper publishing environment is that turn-around times are extensive. Mistaken’s persuasive but flawed article can be the last word on a subject for a year or so, before Clearsighted can get her responses accepted in a journal and then published. Even then, many of those who have read Mistaken may not have ready access to the journal in which Clearsighted is published. In a hypertext publishing environment, by contrast, Clearsighted’s responses can be available literally within hours of the publication of Mistaken’s original article. Thus a hypertext system seems able to inhibit the spread of bad ideas at their very roots. Refutations of bad ideas could immediately become known, and unorthodox new ideas, if sensible, could more rapidly gain the support of fair-minded thinkers.

The research program for economists that is suggested by hypertext publishing falls within the field of the philosophy of science. It involves interpreting and explaining the effects of technology on the shape of economic research in the past, and projecting how advances such as hypertext might reshape it in the future. In the early days of economics as a profession, book printing was the leading edge of technology, and accordingly most economics consisted of verbal arguments. In more recent years the advent of number-crunching computers has enabled the development of more complex mathematical modeling and econometric techniques.
But today’s microcomputers, through word processing, have also begun to dramatically enhance economists’ ability to present verbal arguments. (This very sentence, for instance, was all but effortlessly inserted in the fourth draft of this article, with no necessity for retyping or cutting and pasting.) The advent of hypertext might revitalize the rich literary tradition of economics in at least three ways. First, it would facilitate research by drastically reducing time spent among library shelves. Second, it would enable easier reference to, and more rapid dissemination of, verbal arguments. Third, it would provide a medium through which any number of economists might easily carry on open-ended written discussions with one another, with no inconvenience of time and distance.
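The Mistaken/Clearsighted example suggests how little machinery bidirectional links and reputation filtering actually require. The following sketch is only a toy: the Library class, its method names, and the reputation scores are hypothetical inventions for illustration, and a real hypertext publishing system such as Xanadu would of course be far more elaborate.

```python
# A minimal sketch of two-way linking with reputation filtering.
# A published item records links in both directions, and a reader's
# filter hides comments from authors below a chosen reputation score.

class Library:
    def __init__(self):
        self.docs = {}        # title -> author
        self.backlinks = {}   # title -> titles commenting on it
        self.reputation = {}  # author -> score set by community ratings

    def publish(self, title, author, comments_on=None):
        self.docs[title] = author
        self.backlinks.setdefault(title, [])
        if comments_on is not None:
            # the downstream link that readers of the original will see
            self.backlinks[comments_on].append(title)

    def responses(self, title, min_reputation=0):
        """Backlinks, filtered by each reader's own criteria."""
        return [t for t in self.backlinks[title]
                if self.reputation.get(self.docs[t], 0) >= min_reputation]

lib = Library()
lib.reputation = {"Mistaken": 3, "Clearsighted": 9, "Crackpot": 0}
lib.publish("Persuasive but flawed", "Mistaken")
lib.publish("A devastating refutation", "Clearsighted",
            comments_on="Persuasive but flawed")
lib.publish("Nonsense", "Crackpot", comments_on="Persuasive but flawed")

print(lib.responses("Persuasive but flawed", min_reputation=5))
# only Clearsighted's refutation passes the filter
```

Readers of Mistaken's article see the downstream links, but each reader's own threshold decides which of them are worth showing.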
Complexity, coordination and the evolution of programming practices

Two extreme forms of organization are the command economy and the market economy. The former attempts to make economic tradeoffs in a rational, centrally formulated plan, and to implement that plan through detailed central direction of productive activity. The latter allows economic tradeoffs to be made by local decision makers, guided by price signals and constrained by general rules. Should one expect markets to be applicable to processor time, memory space, and computational services inside computers? Steel mills, farms, insurance
companies, software firms – even vending machines – all provide their goods and services in a market context; a mechanism that spans so wide a range may well be stretched further.

(Miller and Drexler 1988b: 137)
The type of research that would be most directly a follow-up of the paper “Markets and Computation: Agoric Open Systems” by Miller and Drexler would involve examining the history, and the possible future, of computer programming practices as illustrative of economic principles. Inside computers things are going on which have some amazing similarities – and of course some significant differences – to what goes on in human economies. Many of the problems that arise in human economic systems have their analogs in well-known problems in computational systems. Economics has been conceived by many of its practitioners (e.g. Ludwig Mises, Lionel Robbins, Gary Becker) as applicable in general to the study of choice, the making of decisions, the application of scarce means to valued ends. Programmers are faced with difficult choices of how to make the best use of scarce computational resources.

F. A. Hayek has recast the economic problem in terms of how societies may make effective use of the knowledge that is so widely dispersed among all the people they comprise. Hayek has argued that coordination in complex systems such as human economies exceeds the capabilities of central planning and direction. Coordination is achievable in complex economies only through decentralized decision-making processes: through specialization and the division of labor, through property rights, and through the price system. Programmers are now facing similar problems of complexity. As programs and distributed computation systems grow larger, they are outrunning the capacity of rational central planning. Coping with complexity seems to depend on decentralization and on giving computational “objects” property rights in their data and algorithms. Perhaps it will even come to depend on the use of price information about resource need and availability that can emerge from competitive bidding among those objects.
As the following paragraphs indicate, there is much that could be done in the direction of elaborating on these analogies between economics and programming practices, and using them to develop a better understanding of economics. It doesn’t matter where one stands on the question of how similar computational and market processes are for one to see the possible value of this research program. Even if the differences are enormous, explaining exactly what they are could be exceedingly valuable to both fields.

The division of labor and modularity

An appreciation of the advantages of the division of labor is embodied in the programming principle of modularity. The earliest programs were linear, undivided sequences of instructions, but with the evolution of programming, practical considerations forced a dividing up of the problem into discrete modules. The extensive use of subroutines and structured programming enhanced the
ability of programmers to solve their problems. They broke down the whole into manageable chunks, as it were, whose activities were known and clearly bounded. For the various subroutines to operate effectively, they need a certain amount of autonomy – if other parts of the program interfere with them in unexpected ways, the result is a crashed program or nonsense. Or, we might say, the subroutines’ “rights” to what is “theirs” need to be respected. The analogy to property rights is very close.

Property rights and object-oriented programming

The practical advantages that property rights give the economy can be provided for computer programs by what is called “object-oriented programming” (the common acronym in the computer language literature is OOPS, for object-oriented programming systems). In object-oriented programming, the different kinds of tasks that the program must carry out are assigned to “objects,” essentially autonomous sections of code whose workings cannot be interfered with by other parts of the program, because the boundaries between objects are clear and respected. One subroutine’s data cannot, “intentionally” or by accident, interfere with data “belonging to” another subroutine. One advantage to the programmer is that he need not hold in his mind all at once the myriad possible options and combinations of calculations and data. The programmer need not know how an object works, only what it does. Another advantage is that if an object is directed to do something it cannot do, it simply returns a message that it “does not understand,” instead of acting on the bad instructions in a senseless or destructive way. The program’s “labor” is thus not only divided up among many parts, but the “rights” of these parts are respected. The integrity of the various data structures and algorithms in OOPS provides especially welcome clarity in very large programs written by teams of programmers.
In this setting the likelihood of mutual interference would be very great – analogous to the tragedy of the commons – without the “property rights” structure of OOPS.

The use of knowledge in computation

At one point Roger Gregory, a founder of Xanadu, made a comment which helped us understand why his group is so interested in market process economics. The reason is that programmers, in their day-to-day experience, cannot help but learn the severe difficulties in getting large, centrally planned systems to work properly. The bigger and more complex their own programs, the more difficulty they have with bugs. Virtually never does a large program work as intended the first time, even though the programmer has absolute control over every aspect of the system. Programmers spend many frustrated hours debugging their own work. Accordingly, they tend to be very dubious about the ability of government planners to develop successful systems in human society, where the complexity is far greater and the ability to control is far less.
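The “property rights” of objects described in this section can be illustrated in a few lines. In the sketch below, an object owns its balance and answers only the messages it understands; anything else gets a polite refusal rather than destructive action. The Account example and its message protocol are hypothetical, chosen only to make the OOPS analogy concrete.

```python
class Account:
    """A toy "object" that owns its data and rejects messages it
    does not understand, instead of acting destructively on them."""

    def __init__(self, balance):
        self._balance = balance  # private by convention: outsiders keep out

    def deposit(self, amount):
        self._balance += amount
        return self._balance

    def send(self, message, *args):
        """Smalltalk-style message dispatch: an unknown message gets a
        polite refusal rather than a crash or a senseless action."""
        handler = getattr(self, message, None)
        if callable(handler):
            return handler(*args)
        return f"{message}: does not understand"

acct = Account(100)
print(acct.send("deposit", 50))   # 150
print(acct.send("format_disk"))   # format_disk: does not understand
```

The object's boundary is respected in both directions: other code cannot quietly corrupt its balance, and bad instructions bounce off harmlessly.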
What this suggests, of course, is Hayek’s line of thought that the more complex a system is, the more necessary it becomes that the orderliness of the system grow out of the interaction of relatively autonomous parts. The nature of complexity is arguably the central issue in computer science today, as it is in Hayekian economics. As Mises and Hayek have emphasized, the reason that central planning of economies does not and cannot work is that human society is too complex. Programmers are beginning to realize that “central planning” of computational systems is fraught with the same difficulties.

So far we have been describing economic insights that are already widely appreciated by computer scientists, albeit not in the same terms economists use. The most innovative aspect of Miller and Drexler’s papers is their introduction of “agoric systems.” Derived from the Greek word for marketplace, agoric systems aim at solving the problem of maintaining coordination in complex computational systems by the same means as in complex economies: by a price system. As Miller and Drexler (1988b: 163) put it:

Experience in human society and abstract analysis in economics both indicate that market mechanisms and price systems can be surprisingly effective in coordinating actions in complex systems. They integrate knowledge from diverse sources; they are robust in the face of experimentation; they encourage cooperative relationships; and they are inherently parallel in operation. All these properties are of value not just in society, but in computational systems: markets are an abstraction that need not be limited to societies of talking primates.

Miller and Drexler are concerned about the efficiency of computer resource use, both within particular programs running on single computers and across extended computational networks over large geographical areas.
In both cases they envision allocation of scarce computational resources – such as disk or memory space and processor time – being determined by a market process among computational objects. As in the economy, it is not enough that agents have property rights; it is necessary also that they be able to communicate their special knowledge of time and place.

Traditional programming practices have been based on a central planning approach, deliberately deciding on tradeoffs, such as that between the speed at which the program completes its task and the core space it takes to do so. Computer time has traditionally been allocated on a time-share or first-come-first-served basis, or by some other fixed prioritizing system. Miller and Drexler wish to persuade the computer community to drop this central planning model for allocating computational resources. Instead, programs should be designed so that their different parts would “bid competitively” for, say, the “rental” of memory space, which would be more expensive per millisecond than disk space, just as downtown property rents at a higher rate than rural land. Likewise, in large, distributed systems, the various firms, individuals, research centers and so on would bid for the computational goods they need. Presumably this bidding and asking
would be carried out by the computers themselves, according to pre-programmed instructions. We might find certain computational resources in extremely high demand, or in volatile markets, changing their prices several times a day – or several times a second.

Imagining computation markets of the future

Miller and Drexler envision the evolution of what they call agoric open systems – extensive networks of computer resources interacting according to market signals. Within vast computational networks, the complexity of resource allocation problems would grow without limit. Not only would a price system be indispensable to the efficient allocation of resources within such networks, but it would also facilitate the discovery of new knowledge and the development of new resources. Such open systems, free of the encumbrances of central planners, would most likely evolve swiftly and in unexpected ways. Given secure property rights and price information to indicate profit opportunities, entrepreneurs could be expected to develop and market new software and information services quite rapidly.

Secure property rights are essential. Owners of computational resources, such as agents containing algorithms, need to be able to sell the services of their agents without having the algorithm itself be copyable. The challenge here is to develop secure operating systems. Suppose, for example, that a researcher at George Mason University wanted to purchase the use of a proprietary data set from Alpha Data Corporation and massage that data with proprietary algorithms marketed by Beta Statistical Services, on a superfast computer owned by Gamma Processing Services. The operating system needs to assure that Alpha cannot steal Beta’s algorithms, that Beta cannot steal Alpha’s data set, and that neither Gamma nor the George Mason researcher can steal either.
These firms would thus underproduce their services if they feared that their products could be easily copied by any who used them. In their articles, Miller and Drexler propose a number of ways in which this problem might be overcome. In independent work, part of the problem apparently has already been overcome. Norm Hardy, senior scientist of Key Logic Corporation, whom we met at Xanadu, has developed an operating system called KeyKOS which accomplishes what many suspected to be impossible: it assures by some technical means (itself an important patented invention) the integrity of computational resources in an open, interconnected system. To return to the above example, the system in effect would create a virtual black box in Gamma’s computer, in which Alpha’s data and Beta’s algorithms are combined. The box is inaccessible to anyone, and it self-destructs once the desired results have been forwarded to the George Mason researcher. In the sort of agoric open systems envisioned by Miller and Drexler, there would be a vigorous market for computational resources, which could be sold on a peruse basis, given a secure operating system. Royalties would be paid to the owners of given objects, which might be used in a variety of applications. Programmers would be able to develop software by adding their own algorithms on to existing
algorithms. They would not need to understand all the workings of what they use, only the results. Among other advantages, this would save the tremendous amount of time now used by programmers in the trivial redevelopment of capabilities that have already been well worked out. Most important, however, is the increased rapidity with which new products could be developed.
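The competitive bidding for memory described in this section can be made concrete with a toy auction. In the sketch below, components of a program bid for scarce memory units and the winners pay a price set by the highest losing bid; the component names, the bids, and the uniform-price rule are all illustrative assumptions, not the actual Miller–Drexler mechanism.

```python
# Toy "agoric" allocation: program components bid for scarce memory
# units, and the highest bidders win.

def allocate(bids, units_available):
    """Uniform-price auction: award units to the highest bidders;
    every winner pays the highest losing bid (0 if no one loses)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winners = [name for name, _ in ranked[:units_available]]
    losing_bids = [bid for _, bid in ranked[units_available:]]
    price = max(losing_bids, default=0)
    return winners, price

bids = {"cache": 9, "parser": 7, "logger": 2, "spellcheck": 1}
winners, price = allocate(bids, units_available=2)
print(winners, price)  # ['cache', 'parser'] 2
```

The bids here are fixed numbers, but in an agoric system they would themselves be computed by the objects from their own local knowledge of how urgently they need fast memory – which is exactly the knowledge a central scheduler lacks.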
Mind as a spontaneous order: what is (artificial) intelligence?

If multilayered networks succeed in fulfilling their promise, researchers will have to give up the conviction of Descartes, Husserl, and early Wittgenstein that the only way to produce intelligent behavior is to mirror the world with a formal theory in the mind. Worse, one may have to give up the more basic intuition at the source of philosophy that there must be a theory of every aspect of reality – that is, there must be elements and principles in terms of which one can account for the intelligibility of any domain. Neural networks may show that Heidegger, later Wittgenstein, and Rosenblatt were right in thinking that we behave intelligently in the world without having a theory of that world.

(Dreyfus and Dreyfus 1989: 35)
The field of computer science that market process economists would be apt to find the most fascinating is Artificial Intelligence (AI). The traditional approaches to AI, still dominant in the specialization area known as “Expert Systems,” take intelligence to be an algorithmic, mechanical process. Although there are many commercially successful applications of these traditional AI systems, they have been extremely disappointing in terms of their ability to exhibit anything that deserves the name “intelligence.” Indeed, precisely the aspects of intelligence that market process economists consider the most important, such as learning, creativity, and imagination, have proven to be the most difficult to produce artificially.

Over the past decade, however, a revolution has been occurring in AI researchers’ thinking about thinking. The newer approaches, sometimes called Emergent AI, conceive of mental processes as complex, spontaneous ordering processes. Emergent AI traces its origins to early contributions to neural networks such as those of Donald O. Hebb (1949), whom Hayek cites favorably in The Sensory Order, and Frank Rosenblatt (1958, 1962). These efforts had at one time been discredited by the more rationalistic approaches, but they are today making a dramatic comeback. As Sherry Turkle (1989: 247–8) put it in an article contrasting “the two AIs”:

Emergent AI has not been inspired by the orderly terrain of logic. The ideas about machine intelligence that it puts forward are not so much about teaching the computer as about allowing the machine to learn. This AI does not suggest that the computer be given rules to follow but tries to set up a system of independent elements within a computer from whose interactions intelligence is expected to emerge.
The critiques in the new AI literature of the failings of the rationalist approach to AI sound remarkably similar to Hayekian economists’ critiques of the rational choice model. Even the very same philosophical traditions market process economists have used – post-Kuhnian philosophy of science, the later Wittgenstein, and contemporary phenomenology and hermeneutics – have been used by the newer AI researchers as the basis of their critique of Cartesian rationalism. A great deal of work has been done in the new Emergent AI literature that simulates complex ordering processes. For example, there are the “genetic algorithms” and classifier systems approaches developed by John Holland, and the connectionist or neural networks approaches.

A significant faction of the AI community thus finds itself arriving at essentially the same conclusions about the nature of the human mind as Austrian economists. The process of mind is a decentralized, competitive process. No CPU exists in the brain. Marvin Minsky’s conception of the way the mind works in The Society of Mind is remarkably similar to Hayek’s in The Sensory Order. Hayek draws significant methodological conclusions from his view of the nature of the mind, for example in his essays “Rules, Perception and Intelligibility” and “The Primacy of the Abstract.”

Some of the interesting work that has been done within AI directly introduces market principles into the design of models of learning. Some initial research along these lines has been done by Ted Kaehler and Hadon Nash at Apple, in a system they named Derby. This approach tries to introduce market bidding processes and monetary cost calculations into a neural network model in order to generate artificial learning. In his work on classifier systems, John Holland makes explicit use of market processes for reinforcing a program’s successful rules of action.
“We use competition as the vehicle for credit assignment,” he says. “To do this we treat each rule as a ‘middleman’ in a complex economy” (Holland et al. 1986: 380). He speaks of suppliers, consumers, capital, bids, and payments. This type of research involves using economic principles to advance AI research, but our interest in it is not primarily altruistic; the point is not so much to help AI achieve its aspirations, but to see what it can teach us economists about the nature of human intelligence.

It is too early to tell whether these approaches to AI will be more successful in replicating learning, creativity, inductive thinking, and so forth than the traditional approaches, but it is already clear that the emergent AI approach is able to do some things the older approaches could not. Even if one believes many current approaches to AI are utterly unworkable, their very failures might be worthy of closer study. The fact that mainstream AI research has not yet been able to reproduce certain key aspects of human intelligence may be highly significant. Books critical of mainstream AI, such as Hubert Dreyfus’s What Computers Can’t Do, and Terry Winograd and Fernando Flores’s Understanding Computers and Cognition, present a powerful critique of the rational choice model, one from which we could borrow in our own efforts to challenge mainstream economics.

Understanding how the human mind works is not only of interest in that the mind is conceived as a specific instance of spontaneous order processes, and as such may display some interesting analogies to market orders. There is also the
point that, for a subjectivist approach to economics, what the market order is, is an interplay of purposeful human minds. We need to know as much as we can about how human beings think and communicate for our substantive economics. AI, in its failures and its achievements, is wonderfully informative on these issues. Even the skeptic about AI research, and computer modeling in general, could see that these simulation methods raise some provocative research questions for market process economics. To what extent have genuine learning and creativity been simulated in this research? How much does it matter that natural language simulation has not yet gotten very far? Just how different are these different levels of spontaneous order, as we advance from biological to animal and human cognitive processes, and to market processes, and what properties do they share?
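Holland’s “middleman” metaphor can be sketched directly. In the toy below, a fixed chain of rules repeatedly fires; each rule pays a fraction of its strength upstream to its supplier, and the environment rewards the final rule, so credit gradually flows back to the early rules that set the stage. The strengths, bid fraction, and reward are hypothetical numbers, and real classifier systems are far richer than this fixed chain.

```python
# Toy version of Holland-style "bucket brigade" credit assignment:
# each rule in a chain acts as a middleman in a miniature economy,
# paying its bid upstream and (if it is last) collecting the reward.

def bucket_brigade(strengths, bid_fraction, reward, rounds):
    """Run a fixed chain of rules repeatedly; return final strengths."""
    s = list(strengths)
    n = len(s)
    for _ in range(rounds):
        bids = [bid_fraction * x for x in s]
        for i in range(n):
            s[i] -= bids[i]          # each rule pays its bid...
            if i > 0:
                s[i - 1] += bids[i]  # ...to its upstream supplier
        s[-1] += reward              # the environment pays the last rule
    return s

final = bucket_brigade([10.0, 10.0, 10.0], bid_fraction=0.1, reward=5.0, rounds=50)
print(final)  # reward has flowed back up the chain to the earlier rules
```

Nothing “assigns” credit from the center; profitable positions in the chain simply accumulate strength through their trades, which is precisely the market analogy Holland intends.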
Computation as a discovery procedure: possibilities for agoric mental experiments

Underlying our approach to this subject is our conviction that “computer science” is not a science and that its significance has little to do with computers. The computer revolution is a revolution in the way we think and in the way we express what we think. The essence of this change is the emergence of what might best be called procedural epistemology – the study of the structure of knowledge from an imperative point of view, as opposed to the more declarative point of view taken by classical mathematical subjects. Mathematics provides a framework for dealing precisely with notions of “what is.” Computation provides a framework for dealing precisely with notions of “how to.”

(Abelson and Sussman 1985: xvi)
Perhaps the research direction we talked about which will prove to be most controversial involves not so much using economics to study computers as the other way around, the direct use of computer modeling techniques to develop economic theory. This would be a matter of expanding on ideas sketched in another of Miller and Drexler’s papers, “Comparative Ecology: A Computational Perspective.” Up to now we have been talking about applying more or less standard market process economics to some computer-oriented topics. Those first topics could be done in words, as it were, but this one could also involve actually doing some computer programming of our own.2 The idea here is that we could try to improve our understanding of the market order by developing spontaneous order simulations on a computer. We might be able at least to illuminate existing ideas in market process economics, and we might conceivably develop substantive new ones, by doing mental experiments within artificial minds. How, it might be asked, could a school which stresses the non-mechanical nature of human action find itself trying to simulate action on electronic machines? After all, market process economists think an alternative approach to the subject matter is necessary precisely because neoclassical economics has tried to subsume action into a mechanistic model. But what we mean by “mechanical” has been shaped by the age when machines
108 Don Lavoie, Howard Baetjer, and William Tulloh

were extremely crude and rigid things. As computers advance, they increasingly challenge our ideas about what “machines” are capable of. In principle, there is no reason why computers themselves could not become sufficiently “nonmechanistic” to be of interest to market process economics. As the previous section pointed out, research in AI aspires to reproduce on electronic machines exactly the sorts of phenomena, such as creativity and learning from experience, in which market process economics is interested. The market process approach has never been against models as such, but has only objected to modeling that loses sight of certain non-mechanical aspects of human choice. If the aspects of human action that the school considers the most important cannot be handled with the modeling tools of the mainstream, it may be necessary to devise better tools. We have to admit at this point that we are not at all sure ourselves whether market process economics can usefully deploy computer simulation methods. But the only way to tell whether computer simulations can serve as such tools is to try to build some and see what can be done with them. Market process oriented economists have often pointed out that the mathematics of differential calculus that has played such a central role in mainstream economics is not the appropriate mathematics for studying the economy. James Buchanan, for example, has suggested that game theory, since it can deal with the interplay of strategic choices through time, would constitute a more appropriate mathematics. Kenneth Boulding has pointed to topology as an alternative mathematics, because it can deal with shapes rather than quantities. Without disagreeing with the reasons game theory and topology might be useful formalizations, we suspect that the appropriate formalization for economics might not be a mathematics at all.
Computer programming may constitute the kind of formalization most conducive to the development of market process theory (see note 2). It is a formal medium for articulating the “how to” of dynamic processes, rather than the “what is” of timeless end-states with which mathematics is concerned. Mathematical modeling has often distracted economics from paying attention to the processes that market process economics emphasizes. Computer “simulations” of spontaneous order processes might prove to be the kind of modeling approach that is process-oriented enough to help rather than obstruct economic theorizing. Thus it could constitute a useful new complement to the traditional procedures of theorizing that market process economists now employ. It is important to be clear about just what is meant by “simulation” here. It certainly has nothing to do with efforts to build direct simulations of specific real world markets, or of the overall economy. The “worlds” in the computer would be radically simplified, and the “agents” would be only “artificially” intelligent, which is to say, at this point in AI research, they would be rather stupid. But these agents may still be more like humans than the optimizing agent of mainstream neoclassical theorizing: they may be equipped with the capacity to learn from experience. Yet there would be no pretensions of capturing the complexities of reality within a model, or of being able to derive predictions about reality directly from the simulation exercises. The notion here is rather of using computers as an aid for conducting complicated mental experiments. It would not be empirical but theoretical research.
High-tech Hayekians 109

On the other hand, it would be more experimental, in a sense, than most theoretical research is today. This kind of simulation would differ from most contemporary theorizing in that the purpose of the modeling exercise would not be to devise a whole deterministic mechanism, such as is the goal of IS/LM and Rational Expectations models in macroeconomics, or general equilibrium theory in microeconomics. Rather, the aim would be to set up constraining conditions, specifying institutional environments or decision rules for agents, and then to run the simulation in order to see what happens. The idea is not to create a mathematical model that already implies its conclusions in its premises. Rather, it is to run the simulations as mental experiments, where what is of interest is not so much what the end results are as how the process works. And we, the programmers, would not know how the process was going to come out until we ran the mental experiments. The order would emerge not by the programmer’s design, but by the spontaneous interplay of its component parts. One of the first projects that needs to be undertaken is to see just how much there is in existing computer modeling, inside or outside of economics, that we might critically examine as illuminating the properties of spontaneous order processes. The various evolutionary process modeling strategies mentioned in the previous section, which are being used in AI, in theoretical evolutionary biology, and in theoretical ecology, could be reinterpreted as referring to economic institutions or economies instead of brains or species. Existing program designs could be examined from a Hayekian perspective and modified in order to illuminate selected properties of spontaneous order processes. Or completely new programs could be developed with markets more directly in mind.
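To make the idea of specifying decision rules and then running the simulation to see what happens concrete, here is a minimal sketch of such a mental experiment. It is our own illustrative construction, not a program from the original text, and the function name and parameters are invented: agents repeatedly play a two-by-two coordination game (say, which side of the road to drive on) and tend to adopt whichever convention has been paying off.

```python
import random

def run_convention_experiment(n_agents=50, rounds=200, seed=1):
    """Agents repeatedly play a 2x2 coordination game against random
    partners.  After each round every agent, with probability 1/2,
    adopts whichever convention earned the higher total payoff.  The
    programmer never chooses the winner; one convention simply emerges
    from the interplay of the parts."""
    rng = random.Random(seed)
    choices = [rng.choice(("left", "right")) for _ in range(n_agents)]
    for _ in range(rounds):
        payoff = {"left": 0, "right": 0}
        for _ in range(n_agents):              # random pairwise meetings
            a, b = rng.sample(range(n_agents), 2)
            if choices[a] == choices[b]:       # coordination pays
                payoff[choices[a]] += 1
        better = max(payoff, key=payoff.get)   # the more remunerative rule
        choices = [better if rng.random() < 0.5 else c for c in choices]
    share = choices.count("left") / n_agents
    return max(share, 1 - share)               # size of the dominant convention
```

Which convention wins depends on accidents of the early rounds, but some convention comes to dominate; the order is of nobody's design.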
Imagine, for example, trying to contrive Mengerian simulations in which a medium of exchange spontaneously evolves, or free banking simulations that evolve clearinghouses and stable monies. In industrial organization, it might be possible to investigate how firm sizes vary with different industry characteristics, and how industries evolve as markets and technology change. Business cycle effects might be studied: could we, for example, observe changes in a simulated capital structure as a result of an injection of credit? We might probe constitutional economics by running a series of parallel simulations differing only in certain fundamental rules such as property rights and contract sanctity. What different emergent properties of the economic order would we observe? Of course we need to be careful about the unrealistic nature of these mental experiments, and not let the modeling become an end in itself. The crippling vice of most economic theory today is its “model-fetishism.” Economists get preoccupied with the game of modeling itself, and forget the need to interpret the mental experiment. Although most game theory in economics suffers as much from formalism as general equilibrium theory and macroeconomics do, the iterated game theory work of Robert Axelrod is very much the kind of research we are thinking of here. There, the computer tournament was couched in a substantive interpretive effort. The mental experiment was not just a game played for its own sake, but a heuristic vehicle to help us think about the evolution of human cooperation.
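A Mengerian simulation of the sort imagined above can be sketched in a few lines. The model below is a deliberately crude construction of ours, with invented names and numbers: an agent accepts a good it cannot consume only if enough other agents already accept it, so that saleability feeds on itself.

```python
import random

def emergence_of_money(rounds=30, n_agents=200, seed=3):
    """Toy Mengerian cascade: each agent accepts a good in indirect
    exchange only if the fraction of agents already accepting it meets
    the agent's own saleability threshold.  A good that starts only
    slightly more saleable than the rest snowballs into a universally
    accepted medium of exchange, while the others drop out of use."""
    rng = random.Random(seed)
    # heterogeneous saleability thresholds across agents
    thresholds = [rng.uniform(0.2, 0.5) for _ in range(n_agents)]
    # initial acceptance shares of four goods; good 0 has a slight edge
    share = [0.40, 0.25, 0.22, 0.20]
    for _ in range(rounds):
        share = [sum(1 for t in thresholds if s >= t) / n_agents
                 for s in share]
    return share
```

With these particular numbers the first good ends up accepted by everyone and the others by no one, although no agent set out to establish a money.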
Other than Axelrod’s work, probably the closest thing in the existing economics literature to this kind of simulation would be what is called experimental economics. Whereas our simulations would not use human subjects, as most economic experimentation does, the experimental economists’ focus on the design and functioning of market institutions is very much in the spirit of what we have in mind. Moreover, the use of non-human agents would in many instances allow for greater flexibility in the design of spontaneous order simulations. Instead of using rats, we could use artificial agents. As Ted Kaehler pointed out, this is a step down in many dimensions of intelligence, but there are other advantages of computer experiments which suggest that some interesting theory development might be possible along these lines. The more Hayekian contributors to the experimental literature, such as Vernon Smith and Ron Heiner, will undoubtedly have many useful ideas for us here, but otherwise we believe this kind of theoretical spontaneous order simulation is unexplored territory for market process economics. Miller, Drexler, and Salin deserve our thanks for introducing Hayekian ideas to the computer science community, and we certainly encourage computer scientists to follow up directly on their work. Conceivably, we economists might be able to introduce theoretical principles to computer scientists that could help them address their problems. But a more likely benefit of our taking up these questions is that, by applying our existing economics to the study of computational processes, we might help improve our economics, and that may help us think more clearly about human economies, which is, after all, what we are really interested in.
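Axelrod's tournament setting, referred to above, is easy to reconstruct in miniature. The sketch below is our own simplified version, not Axelrod's original code; the payoff constants follow the standard prisoner's dilemma convention (T=5, R=3, P=1, S=0).

```python
def play_match(strat_a, strat_b, rounds=50):
    """Iterated prisoner's dilemma: each strategy is a function from
    the opponent's history of moves to 'C' (cooperate) or 'D' (defect)."""
    payoff = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        pts_a, pts_b = payoff[(move_a, move_b)]
        score_a += pts_a
        score_b += pts_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"
```

Two tit-for-tat players earn the full cooperative payoff (150 each over 50 rounds), while tit for tat against a persistent defector loses only the first round and then punishes. Pairing off a whole population of such strategies reproduces the tournament.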
Notes

Copyright 1990, Mercatus Center (formerly Center for the Study of Market Processes). Permission to reproduce this text is gratefully acknowledged.

1 Plans for Xanadu also include provisions for reader feedback as to the value of different articles or notes, and for royalty payments per use to all authors, even of short comments.

2 This is not necessarily as radical a change from writing “in words” as we are implying here. Computer programs are written in order to be read by people, and not merely to be run on machines. As Abelson and Sussman (1985: xv) put it in their classic textbook on programming, a computer program is not just a way of getting a computer to do something, it is “a novel formal medium for expressing ideas about methodology. Thus, programs must be written for people to read, and only incidentally for machines to execute.”
Appendix: annotated bibliography

The interface of computer science and market process economics provides a wealth of exciting research opportunities. By studying both the application of market process ideas to computer science and the ways in which developments in computer science enrich our understanding of market processes, we can expand the interplay between theoretical insights and empirical praxis. In this bibliographical note, we
hope to alert interested readers to some of these sources. While this list is far from exhaustive, we hope that it will provide a useful starting point for those wishing to pursue these research opportunities.

Process-oriented case studies of the computer industry

The computer industry, with its fast growth and rapid technological change, provides fertile soil for studying such traditional market process concerns as the role of entrepreneurship (Kirzner 1973), the production and use of knowledge and information (Hayek 1948), and the peculiarities of software as capital (Lachmann 1978). The computer industry, however, is not only an increasingly important sector in its own right, but is also one that is ushering in new forms of market interaction in a broad variety of industries. The emergence of electronic markets (Malone et al. 1989) in such areas as computerized airline reservation systems, program trading on increasingly interlinked electronic exchanges, as well as computer-aided buying and selling through home-shopping systems, has become an important research and policy topic. More broadly, the rapid evolution of information and communication technologies and the peculiarities of information as a good underscore the important relationship between the legal environment and technical change (Liggio and Palmer 1988). The interplay of technological, contractual, common-law, and legislative solutions to such problems as the assignment of property rights to intangible goods, for example, the broadcast spectrum and intellectual property (Palmer 1988), is an exciting and relatively unexplored research opportunity. In addition, the blurring of the distinctions between the various communications media (print, broadcasting and common carriers), as evidenced by the recent breakup of AT&T, highlights the relationship between innovation, public policy, and market competition (Huber 1987; Mink and Ellig 1989; Pool 1983).
A further example of the important dynamic interactions between technical and policy responses to change in the computer industry can be found in the emergence of industry standards, whether as the outcome of the market process, or imposed by legislation, or through agreement forged in standards committees (David 1987; Katz and Shapiro 1985). The problems of compatibility and interoperability between various software and hardware components highlight the role of market dialogue in the shaping of expectations and the formation of consensus in reaching an industry standard. In addition, the process of standard formation is constantly being threatened by the entrepreneurial discovery process (Hayek 1978a), leading to the search for technical adapters and innovations that would make such standards obsolete. Moreover, this process may exhibit a high degree of historical path dependency, as the dominance of the technologically inferior QWERTY keyboard demonstrates (David 1986; Gould 1987). An additional area of interest to market process economists is the role that computers have played in the socialist calculation debate. For many theorists, the computer was seen as the answer to the challenge of Mises and Hayek to the workability of central planning (Lange 1967). In light of recent trends in computer
science towards decentralized means of coordination, the bold claims of future computopians take on an ironic twist (Lavoie 1990). A recent attempt to implement a version of a computopian plan in Chile (Beer 1975) could be usefully examined by market process economists.

Information technology and the evolution of knowledge and discourse

While it is commonplace to hear how computers will transform our society, reality seems to move much more slowly than the promised dramatic effects. Nevertheless, as computers become more widespread and interconnected, the use of computer-aided dialogue and communication could have important repercussions on the evolution of knowledge and scientific discourse (Bush 1945). These new knowledge media (Stefik 1988) could have an effect not seen since Gutenberg. As hypertext systems emerge (Drexler 1986, 1987; Nelson 1973), they could enhance the evolution of scientific knowledge through more rapid dissemination of knowledge, and more efficient means of criticism and debate. These developments could have important implications for the spread of economic ideas, and the pattern of discourse within economics (Colander and Coats 1989; McCloskey 1985; Tullock 1965).

Complexity, coordination, and the evolution of programming practices

As software applications become more advanced, and computer systems more powerful, a central problem that emerges is how to cope with the rapidly expanding complexity of software systems. In computer systems the knowledge embodied in the software is more important than the physical hardware upon which it runs. As Alan Kay expresses it, “architecture dominates material” (Kay 1984). Computer programming is properly seen as a medium of expression of ideas, a product of the human mind (Abelson and Sussman 1985).
The techniques used to grapple with the intellectual complexity of large software systems (Brooks 1975) could be usefully studied in comparison to techniques in economics. While the complexity of an economic system is obviously much greater than that of even the most complex software system (Hayek 1967b), the methods used to maintain intellectual coherence and to improve the use of knowledge may have analogs in each system. Such issues and techniques as modularity, specialization, embodied knowledge, and the use of local knowledge have their counterparts in each system. Miller and Drexler have usefully applied the analogy between economic institutions and programming practices through their insight that the latest development in programming languages (object-oriented programming) can be viewed as the reinvention of the concept of property rights in the domain of computer science (Miller and Drexler 1988b). Insights from the economics of property rights could perhaps be usefully applied to the development of software systems (Anderson and Hill 1975; Demsetz 1967; Hayek 1989).
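Miller and Drexler's analogy can be made concrete with a few lines of object-oriented code. The example below is our own illustration, with an invented Account class: the object's internal state is its "property," and other objects can affect it only through the messages its owner agrees to honor.

```python
class Account:
    """An object 'owns' its balance: no outside code reads or writes
    the state directly, so every change passes through the checks the
    owner has chosen to impose (the analog of a property right)."""

    def __init__(self, balance):
        self._balance = balance      # internal state, private by convention

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def withdraw(self, amount):
        if amount > self._balance:   # the owner may refuse the request
            raise ValueError("insufficient funds")
        self._balance -= amount
        return amount

    def balance(self):
        return self._balance

# interaction happens by message passing, never by direct manipulation
payer, payee = Account(100), Account(0)
payee.deposit(payer.withdraw(30))
```

Encapsulation thus plays the role that enforceable ownership plays in a market: it lets many independently written components interact without any one of them needing, or being able, to meddle in the internals of the others.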
Object-oriented programming techniques such as encapsulation, which allows the separation of internal state from external behavior as well as the coupling of data and procedures, promise to expand the range of a programmer’s conceptual control. Object-oriented programs perform computations by passing messages between various objects, which can be viewed in terms of their real-world analogs (Cox 1986; Shriver and Wegner 1988; Thomas 1989). While current software systems can be very complex, their complexity is sure to increase as computation becomes more distributed across networks of heterogeneous computers with different users pursuing their own particular goals. As Miller and Drexler point out, central planning techniques are no longer adequate for the coordination of these open systems, and a more decentralized coordinating mechanism is needed (Bond and Gasser 1988; Hewitt 1985; Huberman 1988; Kahn and Miller 1988). An important aspect of these open, decentralized systems will be the need to maintain the security of the proprietary data and software of the different agents (Hardy 1988; Miller et al. 1987). An additional aspect of the combination of large distributed systems and object-oriented programming is the promise it holds for the more rapid evolution of software applications (Drexler 1988, 1989; Miller and Drexler 1988b). Agoric open systems can take advantage of object-oriented programming’s ability to provide opportunities for easy reuse and recombination of components, and incremental improvements.

Mind as a spontaneous order: what is (artificial) intelligence?

A further area of interest for market process economists refers to the insights regarding the nature of rationality that have been achieved through the attempt to create artificially intelligent computers. While these attempts have yielded many interesting applications, they have had little success in creating anything resembling intelligence.
However, much has been learned by the attempt. Economists have much to gain from the successes and failures of artificial intelligence. The work of Herbert Simon, of course, has been influential in both economics and computer science (Newell and Simon 1972; Simon 1983, 1985), and has been instrumental in bringing to the attention of economists the nature of the computational demands placed upon their perfectly optimizing agents. The inability of humans to fulfill the demands of the optimizing agents has become increasingly recognized (Kahneman et al. 1982), as have the implications that these less than perfect agents have for economic theory (Heiner 1983). The limitations of attempting to design intelligence as a mechanistic, decision-making process have led to a shift towards a view of intelligence as an emergent property of a complex learning process (Drexler 1989; Graubard 1989). The mind is seen as a spontaneous order process in which the resulting intelligence is greater than is possible by design (Hayek 1952). The mind is viewed as being subject to competitive and cooperative pressures like other complex, evolving systems. A variety of metaphors has been explored in attempting to create an emergent approach to artificial intelligence. Perhaps the best known are the connectionist or neural network approaches, which attempt to mimic the
neural process of the brain itself. A wide variety of connectionist approaches are currently being attempted (Cowan and Sharp 1989; Edelman 1987; Hillis 1985; McClelland and Rumelhart 1986; Reeke and Edelman 1989; Schwartz 1989), including an attempt that applies some of the agoric insights to the problem of attributing success to various “neurons” (Kaehler et al. 1988). In addition to the neural metaphor, computer scientists have attempted to apply social metaphors, recognizing the greater social intelligence (Lavoie 1985) that emerges out of the interaction of less intelligent parts (Campbell 1989; Kornfeld and Hewitt 1981; Minsky 1986). Genetic and evolutionary analogies from biology have also been influential (Langton 1987). These approaches include the Eurisko project of Doug Lenat (Lenat 1983; Lenat and Brown 1988), and the genetic algorithm approach pioneered by John Holland (Booker et al. 1989; De Jong 1988; Goldberg 1989; Holland 1975). In addition, work on classifier systems, also pioneered by John Holland, has attempted to build a parallel rule-based learning system that combines the rule discovery properties of genetic algorithms with an economic model for the problem of credit assignment (Booker et al. 1989; Holland 1986; Holland et al. 1986). These developments in computer science may improve our understanding of both a wide variety of spontaneous order processes and the nature of intelligence. The failure of traditional Cartesian rationalist approaches to artificial intelligence has prompted a critique similar to much of the market process economists’ critique of neoclassical rationality (Dreyfus 1972; Dreyfus and Dreyfus 1985, 1986, 1989; Winograd and Flores 1986).
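Holland's genetic algorithm can be sketched in its simplest form. The toy below is our own minimal version, with an invented fitness function (simply the count of 1-bits, a standard textbook exercise rather than an economic application): selection, crossover, and mutation together discover good bit strings that no one designed.

```python
import random

def genetic_algorithm(bits=20, pop_size=30, generations=60, seed=5):
    """Minimal Holland-style genetic algorithm: tournament selection,
    one-point crossover, and rare mutation evolve a population of bit
    strings toward higher fitness (here, the number of 1s)."""
    rng = random.Random(seed)

    def fitness(string):
        return sum(string)

    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        next_pop = []
        for _ in range(pop_size):
            # each parent is the fittest of a random sample of three
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, bits)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(bits):                  # occasional mutation
                if rng.random() < 0.01:
                    child[i] = 1 - child[i]
            next_pop.append(child)
        pop = next_pop
    return max(fitness(s) for s in pop)
```

After a few dozen generations the best string is at or near the optimum of 20, although no individual step of the algorithm aims at it.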
These critiques have emphasized the important role that language and social interaction play in intelligence, as well as the limitations of the knowledge-as-representation approach (Dascal 1989) in capturing the tacit and context-dependent nature of knowledge (Polanyi 1962).

Computation as a discovery procedure: possibilities of agoric mental experiments

The advances in artificial intelligence and computer programming suggest that these techniques could be usefully applied to the experimental modeling of the complex processes of interaction that occur in economic systems. The goal of this type of modeling is not a predictive model that tries to simulate reality, but rather mental experiments to help us better understand spontaneous order processes. One of the closest examples to the approach being suggested here is the work of Robert Axelrod on the evolution of cooperation. Axelrod’s mixture of theoretical insights, a computer tournament, and empirical case studies has proved to be both influential and illuminating (Axelrod 1984). His work has inspired a wide range of further empirical work and theoretical insights (Axelrod and Dion 1988), including the use of genetic algorithms to generate strategies that in certain situations improved on the performance of “tit for tat” (Axelrod 1987; J. Miller 1989). Another area that highlights the importance of the dialogic interplay between theory and empirical observation is the fast-growing field of experimental
economics (Plott 1982; Smith 1982). While the majority of experiments to date have focused primarily on the relatively straightforward auction-type institutions, the experimental results have demonstrated the important role played by the exchange institutions – the system of property rights in communication and exchange. As Vernon Smith notes, “it is not possible to design a resource allocation experiment without designing an institution in all of its detail” (1982: 923). This detailed focus on the institutional constraints is perhaps the most valuable aspect of the experimental approach. The focus to date in this young field has been on relatively simple market institutions, and on static outcomes, not on the dynamic adjustment and learning processes (Heiner 1985). The complexity of keeping track of these adjustment processes suggests a fruitful role for computers. Computer-aided experimental markets, such as the computerized double auction mechanism (PLATO), have already helped to illuminate these dynamic processes (Smith et al. 1988). Furthermore, a group at the Santa Fe Institute has already combined Axelrod’s computer tournament model with the experimental focus on market institutions, by running a computerized double auction tournament (Rust et al. 1989). Motivating much of this interest in computer modeling of spontaneous order processes is a dissatisfaction with traditional equilibrium approaches to capturing the evolutionary and self-ordering aspects of the market process. The development of order analysis as an alternative to equilibrium-bound theorizing can be enhanced by a better understanding of the workings of spontaneous order processes (Boettke et al. 1986; Buchanan 1982; Hayek 1973, 1989; Horwitz 1989), and of the nature of the rule systems and institutional order that help guide the ordering processes (Brennan and Buchanan 1985; Buchanan 1986; Langlois 1986).
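The institutional detail Smith insists on is visible even in a radically simplified double auction. The clearinghouse sketch below is our own illustration, not the PLATO mechanism: the highest bids are matched with the lowest asks for as long as they cross, each trade executing at the midpoint of the matched pair.

```python
def double_auction(bids, asks):
    """Clearinghouse double auction: match the highest remaining bid
    with the lowest remaining ask while the bid is at least the ask;
    each matched pair trades at the midpoint of bid and ask."""
    bids = sorted(bids, reverse=True)
    asks = sorted(asks)
    prices = []
    for bid, ask in zip(bids, asks):
        if bid < ask:                    # remaining orders no longer cross
            break
        prices.append((bid + ask) / 2)
    return prices

# buyers' valuations and sellers' costs: three mutually beneficial
# trades exist, and the institution finds exactly those three
trade_prices = double_auction([10, 9, 8, 4, 3], [2, 3, 5, 9, 11])
```

Here `trade_prices` comes out as [6.0, 6.0, 6.5]. Even this rudimentary rule set determines who trades and at what price, which is Smith's point: the experimenter cannot avoid designing an institution in all of its detail.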
This increasing interest in evolutionary approaches to economics (Allen 1988; Anderson et al. 1988; Day 1987; Holland 1988; Nelson and Winter 1982; Silverberg 1988) has been fueled in part by developments in the new views of science (Lavoie 1989; Prigogine and Stengers 1984). Further work in developing “agoric mental experiments” can begin by examining the current work in computer science that uses market principles. Bernardo Huberman and his coworkers at Xerox PARC, building on the work of Drexler and Miller (1988), have developed a computerized market allocation mechanism called Spawn (Waldspurger et al. 1989), and have explored the dynamic properties of distributed computer systems (Ceccatto and Huberman 1989; Huberman 1988, 1989a, b; Huberman and Hogg 1988; Huberman and Lumer 1989; Kephart et al. 1989a, b). Market-based models for computation have also been explored by Tom Malone at MIT (Malone 1988; Malone et al. 1988), and by Ferguson (1989). Nascent attempts to apply these computer techniques to economics have been made by members of the Santa Fe Institute (Marimon et al. 1989), and by the Agoric Research Group at George Mason University (De Garis and Lavoie 1989).
References

Abelson, H. and Sussman, G.J. (1985) Structure and Interpretation of Computer Programs, Cambridge, MA: MIT Press.
Allen, P. (1988) “Evolution, innovation, and economics,” in G. Dosi et al. (eds) Technical Change and Economic Theory, New York: Pinter Publishers.
Anderson, P.W., Arrow, K.J., and Pines, D. (eds) (1988) The Economy as an Evolving Complex System, Redwood City, CA: Addison-Wesley.
Anderson, T.L. and Hill, P.J. (1975) “The evolution of property rights: A study of the American West,” Journal of Law and Economics, April.
Axelrod, R. (1984) The Evolution of Cooperation, New York: Basic Books.
Axelrod, R. (1987) “The evolution of strategies in the iterated prisoner’s dilemma,” in L.D. Davis (ed.) Genetic Algorithms and Simulated Annealing, Los Altos, CA: Morgan Kaufmann.
Axelrod, R. and Dion, D. (1988) “The further evolution of cooperation,” Science 242, 9 December.
Beer, S. (1975) Platform for Change, New York: Wiley.
Boettke, P., Horwitz, S., and Prychitko, D.L. (1986) “Beyond equilibrium economics: Reflections on the uniqueness of the Austrian tradition,” Market Process 4(2), Fall.
Bond, A.H. and Gasser, L. (eds) (1988) Readings in Distributed Artificial Intelligence, San Mateo, CA: Morgan Kaufmann.
Booker, L.B., Goldberg, D.E., and Holland, J.H. (1989) “Classifier systems and genetic algorithms,” Artificial Intelligence 40.
Brennan, G. and Buchanan, J.M. (1985) The Reason of Rules: Constitutional Political Economy, Cambridge: Cambridge University Press.
Brooks, F.P. Jr. (1975) The Mythical Man-Month, Reading, MA: Addison-Wesley.
Buchanan, J.M. (1982) “Order defined in the process of its emergence,” Literature of Liberty 5(4), Winter; reprinted in J.M. Buchanan, Liberty, Market and State, New York: New York University Press, 1986.
Buchanan, J.M. (1986) Liberty, Market and State: Political Economy in the 80’s, New York: New York University Press.
Bush, V. (1945) “As we may think,” Atlantic Monthly, July.
Campbell, J.
(1989) The Improbable Machine: What the Upheavals in Artificial Intelligence Research Reveal about How the Mind Really Works, New York: Simon and Schuster.
Ceccatto, H.A. and Huberman, B.A. (1989) “Persistence of nonoptimal strategies,” Proceedings of the National Academy of Sciences 86, May.
Colander, D. and Coats, A.W. (eds) (1989) The Spread of Economic Ideas, Cambridge: Cambridge University Press.
Cowan, J.D. and Sharp, D.H. (1989) “Neural nets and artificial intelligence,” in S.R. Graubard (ed.) The Artificial Intelligence Debate: False Starts, Real Foundations, Cambridge, MA: MIT Press.
Cox, B.J. (1986) Object Oriented Programming: An Evolutionary Approach, Reading, MA: Addison-Wesley.
Dascal, M. (1989) “Artificial intelligence and philosophy: The knowledge of representation,” Systems Research 6(1).
David, P. (1986) “Understanding the economics of QWERTY: The necessity of history,” in W. Parker (ed.) Economic History and the Modern Economist, Oxford: Basil Blackwell.
David, P. (1987) “Some new standards for the economics of standardization in the information age,” in P. Dasgupta and P. Stoneman (eds) Economic Policy and Technological Performance, Cambridge: Cambridge University Press.
Day, R. (1987) “The evolving economy,” European Journal of Operational Research 30, June.
De Garis, H. and Lavoie, D. (1989) “Evolution of the money concept: A computational model,” unpublished manuscript, George Mason University.
De Jong, K. (1988) “Learning with genetic algorithms,” Machine Learning 3.
Demsetz, H. (1967) “Toward a theory of property rights,” American Economic Review, May.
Drexler, K.E. (1986) Engines of Creation: The Coming Era of Nanotechnology, New York: Doubleday.
Drexler, K.E. (1987) Hypertext Publishing and the Evolution of Knowledge, Palo Alto, CA: Foresight Institute.
Drexler, K.E. (1988) “Biological and nanomechanical systems: contrasts in evolutionary capacity,” in C. Langton (ed.) Artificial Life, Redwood City, CA: Addison-Wesley.
Drexler, K.E. (1989) “AI directions,” Foresight Update 5, March.
Drexler, K.E. and Miller, M.S. (1988) “Incentive engineering for computational resource management,” in B.A. Huberman (ed.) The Ecology of Computation, Amsterdam: North-Holland.
Dreyfus, H.L. (1972) What Computers Can’t Do: A Critique of Artificial Reason, New York: Harper and Row.
Dreyfus, H.L. and Dreyfus, S.E. (1985) Mind over Machine, New York: The Free Press.
Dreyfus, H.L. and Dreyfus, S.E. (1986) “Why expert systems do not exhibit expertise,” IEEE Expert.
Dreyfus, H.L. and Dreyfus, S.E. (1989) “Making a mind versus modelling the brain: Artificial intelligence back at a branchpoint,” in S. Graubard (ed.) The Artificial Intelligence Debate: False Starts, Real Foundations, Cambridge, MA: MIT Press.
Edelman, G.M. (1987) Neural Darwinism: The Theory of Neuronal Group Selection, New York: Basic Books.
Ferguson, D.D. (1989) “The application of microeconomics to the design of resource allocation and control algorithms,” Ph.D. thesis, Columbia University.
Goldberg, D.E. (1989) Genetic Algorithms in Search, Optimization, and Machine Learning, Reading, MA: Addison-Wesley.
Gould, S.J.
(1987) “The panda’s thumb of technology,” Natural History, January.
Graubard, S.R. (ed.) (1989) The Artificial Intelligence Debate: False Starts, Real Foundations, Cambridge, MA: MIT Press.
Hardy, N. (1988) “KeyKOS architecture,” manuscript, Santa Clara, CA: Key Logic.
Hayek, F.A. (1948) “The use of knowledge in society,” in Individualism and Economic Order, Chicago: University of Chicago Press.
Hayek, F.A. (1952) The Sensory Order: An Inquiry into the Foundations of Theoretical Psychology, Chicago: University of Chicago Press.
Hayek, F.A. (1967a) “Rules, perception and intelligibility,” in Studies in Philosophy, Politics and Economics, Chicago: University of Chicago Press.
Hayek, F.A. (1967b) “The theory of complex phenomena,” in Studies in Philosophy, Politics and Economics, Chicago: University of Chicago Press.
Hayek, F.A. (1973) “Cosmos and taxis,” chapter 2 in Law, Legislation and Liberty: Rules and Order, vol. 1, Chicago: University of Chicago Press.
Hayek, F.A. (1978a) “Competition as a discovery procedure,” in New Studies in Philosophy, Politics, Economics and the History of Ideas, Chicago: University of Chicago Press.
Hayek, F.A. (1978b) “The primacy of the abstract,” in New Studies in Philosophy, Politics, Economics and the History of Ideas, Chicago: University of Chicago Press.
Hayek, F.A. (1989) The Fatal Conceit: The Errors of Socialism, Chicago: University of Chicago Press.
118 Don Lavoie, Howard Baetjer, and William Tulloh Hebb, D.O. (1949) The Organization of Behavior, New York: John Wiley. Heiner, R.A. (1983) “The origin of predictable behavior,” American Economic Review, September. Heiner, R.A. (1985) “Experimental economics: Comment,” American Economic Review, March. Hewitt, C. (1985) “The challenge of open systems,” Byte, April. Hillis, D.W. (1985) The Connection Machine, Cambridge, MA: MIT Press. Holland, J. (1975) Adaptation in Natural and Artificial Systems, Ann Arbor: University of Michigan Press. Holland, J. (1986) “Escaping brittleness: The possibilities of general-purpose learning algorithms applied to parallel rule-based systems,” in R. S. Michalski et al. (eds) Machine Learning II, Los Altos, CA: Morgan-Kaufmann. Holland, J. (1988) “The global economy as an adaptive process,” in P. Anderson, K. Arrow, and P. Pines (eds) The Economy as an Evolving Complex System, New York: Addison-Wesley. Holland, J., Holyoak, K.J., Nisbet, R.E. and Thagard, P.R. (1986) Induction: Processes of Inference, Learning, and Discovery, Cambridge, MA: MIT Press. Horwitz, S. (1989) “The private basis of monetary order: An evolutionary approach to money and the market process,” Ph.D. dissertation, George Mason University. Huber, P. (1987) “The geodesic network: 1987 report of competition in the telephone industry,” Washington, DC: Government Printing Office. Huberman, B.A. (ed.) (1988) The Ecology of Computation, Amsterdam: North-Holland. Huberman, B.A. (1989a) “An ecology of machines: How chaos arises in computer networks,” The Sciences, July/August. Huberman, B.A. (1989b) “The performance of cooperative processes,” unpublished manuscript, Xerox PARC. Huberman, B.A. and Hogg, T. (1988) “The behavior of computational ecologies,” in B.A. Huberman (ed.) The Ecology of Computation, Amsterdam: North-Holland. Huberman, B.A. and Lumer, E. (1989) “Dynamics of adaptive controls,” unpublished manuscript, Xerox PARC. Kaehler, T., Nash, R., and Miller, M.S. 
(1988) “Betting, bribery, and bankruptcy: A simulated economy learns to predict,” unpublished manuscript. Kahn, K. and Miller, M.S. (1988) “Language design and open systems” in B. Huberman (ed.) The Ecology of Computation, Amsterdam: North-Holland. Kahneman, D., Slovic, P., and Tversky, A. (1982) Judgement Under Uncertainty: Heuristics and Biases, New York: Cambridge University Press. Katz, M.L. and Shapiro, C. (1985) “Network externalities, competition, compatibility,” American Economic Review, May. Kay, A. (1984) “Computer software,” Scientific American, September. Kephart, J.O., Hogg, T., and Huberman, B.A. (1989a) “Dynamics of computational ecosystems: Implications for distributed artificial intelligence,” unpublished manuscript, Xerox PARC. Kephart, J.O., Hogg, T., and Huberman, B.A. (1989b) “Collective behavior of predictive agents,” manuscript, Xerox, PARC. Kirzner, I.M. (1973) Competition and entrepreneurship, Chicago: University of Chicago Press. Kornfeld, W.A. and Hewett, C. (1981) “The scientific community metaphor,” in Transactions on Systems, Man, and Cybernetics. Lachmann, L.M. (1978) Capital and Its Structure, Kansas City: Sheed Andrews and McMeel.
High-tech Hayekians 119 Lange, O. (1967) “The computer and the market,” in C.H. Feinstein (ed.) Socialism, Capitalism and Economic Growth: Essays Presented to Maurice Dobb, Cambridge: Cambridge University Press. Langlois, R.N. (1986) Economics as a Process: Essays in the New Institutional Economics, New York: Cambridge University Press. Langton, C.G. (1987) Artificial Life: The Proceedings of an Interdisciplinary Workshop on the Synthesis and Simulation of Living Systems, Redwood City, CA: Addison-Wesley. Lavoie, D. (1985) National Economic Planning: What is Left?, Cambridge, MA: Ballinger Publishing Co. Lavoie, D. (1989) “Economic chaos or spontaneous order? Implications for political economy of the new view of science,” Cato Journal 8(3), Winter. Lavoie, D. (1990) “Computation, incentives, and discovery: The cognitive function of markets in market socialism,” The Annals, January. Lenat, D.B. (1983) “The role of heuristics in learning by discovery: Three case studies” in R.S. Michalski et al. (eds) Machine Learning: An Artificial Intelligence Approach I, Palo Alto, CA: Tioga Publishing. Lenat, D.B. and Brown, J.S. (1988) “Why AM and Eurisko appear to work,” in B. Huberman (ed.) The Ecology of Computation, Amsterdam: North-Holland. Liggio, L.P. and Palmer, T.G. (1988) “Freedom and the law: A comment on Professor Aranson’s article,” Harvard Journal of Law and Public Policy. Malone, T.W. (1988) “Modelling coordination in organizations and markets,” in A. Bond and L. Glassner (eds) Readings in Distributed Artificial Intelligence, San Mateo, CA: Morgan Kaufmann. Malone, T.W., Fikes, R.E., Grant, K.R. and Howard, M.T. (1988) “Enterprise: A marketlike task scheduler for distributed computing environments,” in B.A. Huberman (ed.) The Ecology of Computation, Amsterdam: North-Holland. Malone, T.W., Yates, J., and Benjamin, R. (1989) “The logic of electronic markets,” Harvard Business Review, May–June. Marimon, R., McGrattan, E., and Sargent, T.J. 
(1989) “Money as a medium of exchange in an economy with artificially intelligent agents,” unpublished manuscript, Sante Fe Institute. McClelland, J.L. and Rumelhart, D.E. et al. (1986) Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vols. 1 and 2, Cambridge, MA: MIT Press. McCloskey, D. (1985) The Rhetoric of Economics, Madison: University of Wisconsin Press. Miller, J.H. (1989) “Coevolution of automata in the repeated prisoner’s dilemma,” unpublished manuscript, Sante Fe Institute. Miller, M.S., Bobrow, D.G., Tribble, E.D., and Jacobs, L. (1987) “Logical secrets,” in E. Shapiro (ed.) Concurrent Prolog: Collected Papers, Cambridge, MA: MIT Press. Miller, M.S. and Drexler, K.E. (1988a) “Comparative ecology: A computational perspective,” in B.A. Huberman (ed.) The Ecology of Computation, Amsterdam: NorthHolland. Miller, M.S. and Drexler, K.E. (1988b) “Markets and computation: Agoric open systems,” in B.A. Huberman (ed.) The Ecology of Computation, Amsterdam: North-Holland. Mink, P. and Ellig, J. (1989) “The courts, the congress, and the future of telecommunications,” Washington, DC: Citizens for a Sound Economy Foundation. Minsky, M. (1986) The Society of Mind, New York: Simon and Schuster. Nelson, R.R. and Winter, S.G. (1982) An Evolutionary Theory of Economic Change, Cambridge, MA: Harvard University Press. Nelson, T. (1973) Computer Lib, self-published.
120 Don Lavoie, Howard Baetjer, and William Tulloh Newell, A. and Simon, H. (1972) Human Problem Solving, Englewood Cliffs, NJ: Prentice Hall. Palmer, T. (1988) “Intellectual property,” Market Process, Fall. Plott, C. (1982) “Industrial organization theory and experimental economics,” Journal of Economic Literature. Polanyi, M. (1962) Personal Knowledge: Towards a Post-Critical Philosophy, Chicago: Chicago University Press. Pool, I. de Sola. (1983) Technologies of Freedom, Cambridge, MA: Harvard University Press. Prigogine, I. and Stengers, I. (1984) Order Out of Chaos: Man’s New Dialogue with Nature, New York: Bantam Books. Reeke, G.N. Jr. and Edelman, G.M. (1989) “Real brains and artificial intelligence,” in S.R. Grauman (ed.) The Artificial Intelligence Debate: False Starts, Real Foundations, Cambridge, MA: MIT Press. Rosenblatt, F. (1958) “The perceptron, a probabilistic model for information storage and organization in the brain,” Psychological Review 62: 386. Rosenblatt, F. (1962) “Strategic approaches to the study of brain models,” in H. von Foerster (ed.) Principles of Self Organization, Elmsford, NY: Pergamon Press. Rust, J., Palmer, R., and Miller, J. (1989) “A double auction market for computerized traders,” unpublished manuscript, Sante Fe Institute, August. Schwartz, J.T. (1989) “The new connectionism: Developing relationships between neuroscience and artificial intelligence,” in S.R. Graubard (ed.) The Artificial Intelligence Debate: False Starts, Real Foundations, Cambridge, MA: MIT Press. Shriver, B. and Wegner, P. (eds) (1988) Research Directions in Object Oriented Programming, Cambridge, MA: MIT Press. Silverberg, G. (1988) “Modelling economic dynamics and technical change: Mathematical approaches to self-organization and evolution,” in G. Dosi et al. (eds) Economic Theory and Technical Change, New York: Printers Publishers. Simon, H. (1983) Reason in Human Affairs, Stanford, CA: Stanford University Press. Simon, H. 
(1985) Sciences of the Artificial, Cambridge, MA: MIT Press. Smith, V.L. (1982) “Microeconomic systems as an experimental science,” American Economic Review. Smith, V.L., Suchanek, G.L. and Williams, A.W. (1988) “Bubbles, crashes, and endogenous expectations in experimental spot asset markets,” Econometrica, September. Sokolowski, R. (1989) “Natural and artificial intelligence,” in S.R. Graubard (ed.) The Artificial Intelligence Debate: False Starts, Real Foundations, Cambridge, MA: MIT Press. Stefik, M.J. (1988) “The next knowledge medium,” in B.A. Huberman (ed.) The Ecology of Computation, Amsterdam: North-Holland. Thomas, D. (1989) “What’s in an object,” Byte, March. Tullock, G. (1965) The Organization of Inquiry, Durham, NC: Duke University Press. Turkle, S. (1989) “Artificial intelligence and psychoanalysis: A new alliance,” in S.R. Graubard (ed.) The Artificial Intelligence Debate: False Starts, Real Foundations, Cambridge, MA: MIT Press. Waldspurger, C.A., Hogg, T., Huberman, B.A., Kephart, J.O., and Stornetta, S. (1989) “Spawn: A distributed computational economy,” unpublished manuscript, Xerox PARC. Winograd, T. and Flores, F. (1986) Understanding Computers and Cognition, New York: Addison-Wesley.
Comment after twelve years by Howard Baetjer
On rereading “High-tech Hayekians” twelve years after we published it, I feel very lucky to have been involved in the agorics project, then and now. The exposure to both the ideas and the people involved has strongly shaped my career. I must have written the first section, “Process-oriented case studies of the computer industry,” because many of its recommendations for further study I soon afterwards developed into my dissertation, and from there into my book, Software as Capital: An Economic Perspective on Software Engineering. We say in the article, “Studying software as capital may help us illuminate the market process approach to capital theory”; it does. In particular, because “software has virtually no physical being,” thinking about software as a category of capital good usefully focuses our attention on the knowledge embodied in all capital goods. My book develops this idea at length. Indeed, it argues that capital is best understood as embodied knowledge: the knowledge is the key aspect of any capital good; the physical steel or copper or silicon in which that knowledge is embodied is secondary. In both the book and more recent work I have been developing this idea into a critique of mainstream growth theory, which treats technology and capital as entirely separate, independent variables. From the high-tech Hayekian viewpoint, this is wrongheaded: technology is in the capital goods (including “human capital”) we use; to separate technology from capital is to obscure one of the main processes that drive economic growth. The second section was about the promise of hypertext for improving human discourse and the evolution of knowledge. While we were mistaken about the course we expected hypertext development to take, we were correct about the tremendous importance of hypertext to the evolution of knowledge and society.
I cannot help smiling that when we wrote the article, we were clueless that the Web was just around the corner. Few knew about hypertext then. Now, everyone uses it. We had high hopes that Xanadu would be built as envisioned by Mark Miller and others; we did not imagine what we should have – that world-wide hypertext publishing would not be consciously planned and built, but instead would evolve in a highly distributed way, a classic Hayekian spontaneous order. Yet, to those of us enchanted by the vision of Xanadu, the Web seems a primitive, frustrating tool. One very important capability it lacks, that Xanadu promised, is backlinking, the ability for a reader to see links to (rather than from) a document (with appropriate filtering, of course). Our comments about how such a capability would enhance scholarly exchange (in the second paragraph of the section) are as important today as they were then. Don Lavoie and I (with Don in the lead as always) investigated the capabilities of hypertext in scholarly learning pretty far, in our hypertext-based graduate courses. Never have I had so much fun – such excitement – in developing courses. The hypertext software we used was Folio VIEWS. We would put our syllabus, software demos, space for student work, course readings (including whole books
– VIEWS is a marvelously efficient platform) – everything – into a Folio “infobase,” one per course, shared by all the students. From within this infobase, students could create and see links to any part of any course document, and to comments by other students. In the most successful graduate course I ever taught, I had ten students literally around the world, and the hypertext-based discussion was as rich and illuminating as any I have experienced. It spoiled me. As I write this note, I am frustrated that I cannot give my readers context by linking them directly to particular passages in the text of “High-tech Hayekians,” as I could have in my Lavoie-style hypertext course. That experience was a real proof of concept. I look forward to further evolution of the Web, in hopes that the kinds of tools we developed (inspired by Xanadu), and others we never imagined, will become generally available. The third section deals with the idea that most fascinated the economists among us – Miller and Drexler’s anticipated “agoric open systems.” These are operating systems for computation that would use internal pricing of computational resources such as processor time and space in RAM. While we correctly anticipated that object-oriented programming would greatly enhance software development by giving software the benefits of a kind of property rights, so far the further step to networked markets for computational resources has not emerged. I wish I had the time to investigate why not . . . or why not yet. The last section proposed Austrian, process-oriented computer simulations of economic processes. In this area, too, I had the privilege of working under Don Lavoie’s guidance. Kevin Lacobie and I, with expert guidance from Mark Miller and Dean Tribble, actually built a couple of programs that aimed at simulating Carl Menger’s story of the evolution of money from barter.
We each gave the effort a couple of terms, devoting most of our time to developing simple agents capable of barter. Neither our agents nor the environments in which they interacted ever developed very far, and money certainly never emerged. Nevertheless, I believe the general enterprise of developing Austrian economic simulations is full of promise. I have a strong hunch that it can complement the experimental economics being done by Vernon Smith and his team. With more time, programming expertise, and learning from experience, immensely valuable demonstrations might be developed. Despite missing our goal of seeing money emerge, I found the experience of working on the simulations extremely valuable (as well as great fun). Why? Because it rubbed my nose in the marvelous complexity of even the simplest aspects of everyday exchange. Consider preferring at the margin, reciprocating, valuing one good in terms of another, anticipating what A might want that B offers. Commonplace human actions, yes? But try to embody them in code! I found myself awed by the fantastic complexity of both the individual decision-making and the interpersonal institutions that underlie the human action we take for granted and carry out effortlessly.
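The difficulty described here – getting even barter, let alone the emergence of money, out of coded agents – can be suggested with a toy sketch in the spirit of Menger’s story. Everything in it (the goods, the saleability numbers, the acceptance rule) is an invented illustration, far simpler than the simulations the comment describes:

```python
import random

# Toy sketch of Menger's story: goods differ in "saleability" (the chance
# a random trader accepts them), and agents blocked by the double
# coincidence of wants may accept a MORE saleable good in order to trade
# it away later. All goods and parameters here are hypothetical.

SALEABILITY = {"grain": 0.9, "tools": 0.5, "cattle": 0.2}
GOODS = list(SALEABILITY)

class Agent:
    def __init__(self, rng):
        self.holds = rng.choice(GOODS)
        self.wants = rng.choice([g for g in GOODS if g != self.holds])

def meet(a, b, rng, medium_counts):
    # Direct barter when wants coincide.
    if a.wants == b.holds and b.wants == a.holds:
        a.holds, b.holds = b.holds, a.holds
        return
    # Indirect exchange: take the other's good if it is more saleable than
    # what you currently hold, counting it as a medium of exchange.
    for giver, taker in ((a, b), (b, a)):
        offered = giver.holds
        if (SALEABILITY[offered] > SALEABILITY[taker.holds]
                and rng.random() < SALEABILITY[offered]):
            giver.holds, taker.holds = taker.holds, giver.holds
            medium_counts[offered] += 1
            return

def simulate(n_agents=100, rounds=500, seed=0):
    rng = random.Random(seed)
    agents = [Agent(rng) for _ in range(n_agents)]
    medium_counts = {g: 0 for g in GOODS}
    for _ in range(rounds):
        rng.shuffle(agents)
        for i in range(0, n_agents - 1, 2):
            meet(agents[i], agents[i + 1], rng, medium_counts)
    return medium_counts

if __name__ == "__main__":
    print(simulate())
```

With these invented numbers the most saleable good ends up doing nearly all the intermediary work – the self-reinforcing convergence Menger describes – while remaining, as the comment notes, a long way from genuine monetary exchange.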
Hayek’s road by Marc Stiegler
Hayek was, is, and will continue to be, painfully right. Twelve years ago, the high-tech Hayekians knew that the people of the former Soviet Union had a long hard road to travel to reach a future in which they worked with the power of free markets. However, we had no idea just how difficult a road we ourselves would have to walk to align our efforts with those same free markets. In our arrogance, we believed that we could be, in our own way, outliers in the system: we would exploit the fact that occasional great leaps could be taken in a Hayekian world even though most progress is made in tiny steps. We would become “great leapers.” And so we built Xanadu and AMIX, the elegant complete solutions to the problems of global hypertext and electronic markets. And we failed. Instead, as Hayek himself would have predicted, the world was immensely enriched by the HTML browser and eBay. And though the HTML browser/Web server is a cartoonist’s caricature of Xanadu, it is good enough that it transformed society. It has taken a full decade for some of us to align our efforts with the intellectual knowledge we had even then. Indeed, it can be argued that we still have not completed our difficult journey. Though we have tried ever harder to give up the “great leaps,” you can still see large leaps trying to break free from our projects at the slightest encouragement. These attempts to leap far ahead are perhaps a result of seeing a bright future clearly, and of being too impatient to take a thousand little steps where just one, it would seem, could suffice. Yet clarity of vision and a certain impatience are trademarks of the high-tech Hayekian. For this reason it may be that, although our leaps may improve in style and grace, they will never end. Fortunately, just as Hayek would predict that most of these leaps will fail, he would also predict that, just once in a very rare while, they will succeed.
After all, to some extent this is how Hayek himself succeeded, at the end of a long hard road.
A comment on “High-tech Hayekians” in the perspective of agent-based simulation of economic realities: sound results urgently needed by Pietro Terna

[T]he aim would be to set up constraining conditions, specifying institutional environments or decision rules for agents, and then to run the simulation in order to see what happens. The idea is not to create a mathematical model that already implies its conclusions in its premises. Rather, it is to run the simulation as mental experiments, where what is of interest is not what the end results are so much as how the process works. And we, the programmers, would not know how the process was going to come out until we ran the mental experiments. The order would emerge not by the programmer’s design, but by the spontaneous interplay of its component parts.
(Lavoie et al. 1990: 135)
I am this kind of programmer, and I was beginning to work in this perspective in the early 1990s, when the article was published: what are the most important changes we can observe after more than a decade? First of all, it is now absolutely clear (Gilbert and Terna 2000) that, as Ostrom (1988) first observed, three different “symbol systems” are available to social scientists: the familiar ones of verbal argumentation and mathematics, but also a third way, computer simulation. Computer simulation, or computational modeling, involves representing a model as a computer program, and computer programs can be used to model either quantitative theories or qualitative ones. The logic of developing models using computer simulation is not very different from the logic used for the more familiar statistical models: in either case, there is some phenomenon that we as researchers want to understand better. The key question now is: which tool is the most suitable for this “third way,” and what skills should researchers possess? Pace Luna and Stefansson (2000), we need a common language to help economists employ simulation as one of their tools of analysis: a sort of lingua franca that is widely understood and relatively easy to adopt. However, “easy” covers a lot of ground, and in our case it does not eliminate the need for excellent programming skills, even for social scientists. Is the lack of this common language a cause of the slow (terribly slow) progress in obtaining results in our field, despite the quantity and the quality of the work done in the last ten years?
In the meantime a lot of work has been done, as reported recently by Tesfatsion (2001); we also have two specialized electronic journals, the Journal of Artificial Societies and Social Simulation (JASSS) (jasss.soc.surrey.ac.uk) and the Electronic Journal of Evolutionary Modeling and Economic Dynamics (e-JEMED) (www.e-jemed.org/), but the real problem remains results. A strong internal critique is that of Pryor (2000) in a self-ironic paper: looking backwards, an unknown author in 2028 (when an asteroid crashed into Earth) observes “that in a typical complexity book in the late 1990s . . . almost all of the essays have no real empirical applications, aside from a few interesting anecdotes.” We urgently need to obtain sound results, useful for the real world, and agent-based simulations of enterprise behavior may help. In this field it is possible to move from pure abstract theory to concrete problems of the organization of the firm and knowledge problems (for an outline of some work in progress, see web.econ.unito.it/terna/jve/jve.pdf). Beyond the skills required to develop this kind of modeling technique and the need to find workable applications to the real world, there is the important problem of the interdisciplinary methodology adopted. It is well known that interdisciplinary and exploratory studies have less impact than monodisciplinary mainstream ones. This is another reason why finding concrete results becomes more and more necessary. As a positive concluding remark, it is worth noting that we are now working toward a unification goal, where the complexity paradigm confirms Hayek’s theory of spontaneous order, as well as his laissez-faire economic philosophy (Kilpatrick 2001): a good goal for programmers working on agent-based simulations!
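The programme in the passage quoted at the start of this comment – fix an institutional environment and agents’ decision rules, then run the simulation to see what order emerges – is exactly the skeleton of today’s agent-based models. As a minimal illustration, here is a one-dimensional Schelling-style neighbourhood model: a stock example of emergent order chosen only for brevity, not an example from the text, with all parameters invented:

```python
import random

# Minimal agent-based sketch: two types of agents on a line; an agent is
# "unhappy" when fewer than half of its nearby neighbours share its type,
# and unhappy agents relocate to a random vacant cell. Clustering emerges
# from the local rule, not from any global design.

def neighbours(grid, i, radius=2):
    lo, hi = max(0, i - radius), min(len(grid), i + radius + 1)
    return [grid[j] for j in range(lo, hi) if j != i and grid[j] is not None]

def unhappy(grid, i):
    if grid[i] is None:
        return False
    near = neighbours(grid, i)
    return bool(near) and sum(n == grid[i] for n in near) / len(near) < 0.5

def sweep(grid, rng):
    empties = [i for i, c in enumerate(grid) if c is None]
    for i in [j for j in range(len(grid)) if unhappy(grid, j)]:
        if grid[i] is None or not empties:
            continue
        j = empties.pop(rng.randrange(len(empties)))
        grid[j], grid[i] = grid[i], None
        empties.append(i)

def same_type_share(grid):
    # Share of adjacent occupied pairs holding the same type.
    pairs = [(a, b) for a, b in zip(grid, grid[1:]) if a and b]
    return sum(a == b for a, b in pairs) / len(pairs)

def run(sweeps=150, seed=1):
    rng = random.Random(seed)
    grid = ["A"] * 25 + ["B"] * 25 + [None] * 10
    rng.shuffle(grid)
    before = same_type_share(grid)
    for _ in range(sweeps):
        sweep(grid, rng)
    return before, same_type_share(grid)
```

Nothing in the code aims at clustering, yet the same-type share typically rises well above its random starting level – the spontaneous-order point of the quotation in miniature, and a reminder that “what is of interest is not what the end results are so much as how the process works.”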
References
Gilbert, N. and Terna, P. (2000) “How to build and use agent-based models in social science,” Mind & Society 1: 57–72.
Kilpatrick, H.E. Jr. (2001) “Complexity, spontaneous order and Friedrich Hayek: Are spontaneous order and complexity essentially the same thing?” Complexity 6(4): 16–20.
Lavoie, D., Baetjer, H., and Tulloh, W. (1990) “High-tech Hayekians: Some possible research topics in the economics of computation,” Market Process 8, Spring: 120–47.
Luna, F. and Stefansson, B. (eds) (2000) Economic Simulations in Swarm: Agent-Based Modelling and Object Oriented Programming, Dordrecht and London: Kluwer Academic.
Ostrom, T. (1988) “Computer simulation: The third symbol system,” Journal of Experimental Social Psychology 24: 381–92.
Pryor, F.L. (2000) “Looking backwards: Complexity theory in 2028,” in D. Colander (ed.) The Complexity Vision and the Teaching of Economics, Cheltenham: Edward Elgar, pp. 63–9.
Tesfatsion, L. (2001) “Agent-based computational economics: Growing economies from the bottom up,” ISU Economics Working Paper no. 1.
5 The new economy as a co-ordinating device
Some Mengerian foundations
Elise Tosi and Dominique Torre
Introduction
Recent historical work has stressed the relevance of Hayekian insights for analysing the co-ordination patterns of market economies (Aimar 1999; Bianchi 1994; Garretsen 1994; Kirzner 1997). At the same time, standard views on market co-ordination have been strongly influenced by the renewed interpretation of the Mengerian theory of money. Over several decades, in different settings, Menger repeatedly expounded his genetic conception of the ‘origin of money’ (Menger 1985, 1994, 1963, 1892a, 1892b). Writing a few years after the first edition of Jevons’ Theory of Political Economy, but without explicit reference to the Jevonian analysis of barter and monetary exchange, he provided a lasting solution to the problem of the ‘double coincidence of wants’. His now well-known methodology (see Aimar 1996; Arena and Gloria 1997; Hodgson 1992; O’Driscoll 1986) consists in focusing on the self-reinforcing process by which money emerges from the set of commodities according to their initial degrees of marketability:

Men have been led, . . . , each by his own economic interests, without convention, without legal compulsion, nay, even without any regard to the common interest, to exchange goods destined for exchange for other goods equally destined for exchange, but more saleable.
(Menger 1892a: 248)

The microeconomic foundations Menger provides for the use of money are free from any assumption that money must first serve as an intertemporal store of value. They fit rather well with the line of analysis pursued by the more advanced monetary search models introduced by N. Kiyotaki and R. Wright (Kiyotaki and Wright 1989, 1991) and later extended to various applications of the monetary form of exchange. These works have developed the Mengerian principles of analysis without great difficulty, applying them to a world with fiat money, credit and remunerated assets.
The current stage of development of what may be considered one of the most active strands of the Mengerian tradition in contemporary economic theory motivates related research on Menger’s conception of other forms of co-ordination. When Menger refers to these social links, he explicitly rejects any holist or historicist interpretation of the role of the State, and more generally of inter-individual arrangements involving law and positive institutions. An essential part of the Problems of Sociology (Menger 1963) is devoted to presenting an organic conception of these other economic ‘institutions’:

The question of the origin of a number of social structures can be answered in a similar way. These likewise serve the common welfare, indeed, even cause it, without being regularly the result of an intention of society directed toward advancing this welfare.
(Menger 1963: 155–6)

The present-day system of money and markets, present day law, the modern state, etc., offer just as many examples of institutions which are presented to us as the result of the combined effectiveness of individually and socially teleological powers, or, in other words, of ‘organic’ and ‘positive’ factors.
(Menger 1963: 158)

The remainder of this chapter investigates the possibility of founding an analysis of the new economy on these Mengerian suggestions. Our intuition is that this new form of economic activity cannot be reduced to a simple evolution in techniques and in the content of goods. The second section therefore sets out the essential arguments of the technological approach to the new economy. The third section deals with the information economics approach, applied to the analysis of the nature and consequences of information goods. The fourth section, more in harmony with Menger’s point of view, develops the impact of the specific co-ordination device constituted by network architecture in the new economy. The final section presents the main conclusions.
The new economy as a new form of production
A basic line of analysis of the new economy examines the implications of the characteristics of immaterial goods from the consumer’s and the producer’s points of view. On the consumption side, immaterial goods are subject to a principle of subordination in the Gorman–Lancaster consumption set: in the hierarchy of wants, they emerge only at an advanced stage of economic development. More fundamentally, focusing on the production of these new goods, one may point to their specific cost structure. Fixed costs are very high; conversely, variable and hence marginal costs are close to zero. Whenever the good in question is information, ‘information is costly to produce, but costless to reproduce’ (Shapiro and Varian 1998: 9). The first unit has a positive cost, but subsequent units are free. The economy of the Internet and other communication technologies appears from this standpoint as a form of production of immaterial goods that deepens some
previously existing properties of modern production technologies. From this point of view, there would be no major discontinuity between the IBM model and the Microsoft/AOL one. On this line of reasoning, the IBM-generation technologies can be regarded as a transitional pattern between the old ‘Fordist’ model and the new Microsoft/AOL standard. The former takes advantage of the generalization of constant returns to scale, while the latter radically improves on existing cost-minimizing strategies by generating new needs that can be satisfied by suitable immaterial goods. This form of industry thus radicalizes a more general tendency to develop methods of production free from variable costs and mainly subject to research and development and other fixed costs. The new economy would then be defined precisely as the sector generating purely input-saving methods of production, and thus economies of scale. The technologies of the new economy also rely in good part on technological complementarities: for instance, hardware and processors, or hardware and software. The compatibility of the different systems then becomes a key issue and leads to a redefinition of producers’ strategic behaviour; this is a further source of increasing returns. Moreover, in order to increase their market shares, suppliers have a strong incentive to exploit technological flexibility: they produce different versions of the same good and thereby enlarge the demand base. This specific structure of costs and returns has important consequences for the competitive process between firms in the new economy. Two possible market structures are usually distinguished. A monopolistic market may emerge, dominated by the largest (but not necessarily the most efficient) firm. Alternatively, differentiated products may be supplied; in this case differentiation has a relatively low cost and prevents the exit of most competitors.
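The cost structure invoked above – a large first-copy cost and a near-zero marginal cost – can be made concrete with a small numeric sketch (all figures hypothetical):

```python
# Hypothetical information-good cost structure: a large fixed cost F for
# the first copy and a tiny marginal cost c for every further copy, so
# that average cost per copy falls toward c as output grows.

F = 1_000_000   # first-copy (fixed) cost, illustrative
c = 0.01        # marginal cost per additional copy, illustrative

def average_cost(q):
    """Total cost per copy when q copies are produced."""
    return (F + c * q) / q

for q in (1_000, 100_000, 10_000_000):
    print(f"{q:>12,} copies: average cost = {average_cost(q):,.2f}")
```

Average cost never stops falling as output grows, which is why the largest producer, and not necessarily the most efficient one, can come to dominate the market.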
High fixed costs make competition genuinely fierce. When no firm is able to produce at a scale sufficient to enjoy a constant average cost, the introduction of a new fixed-cost-saving method is enough to drive out certain competitors. But when one of them already operates under decreasing costs, it can enjoy a monopoly position. More generally, ‘the Internet makes it cheaper to design products remotely; reduces the need for vast inventories; provides a better means to target, communicate with, and service customers; cuts the costs of delivering many services and entertainment; and helps companies remove layers of bureaucratic fat’ (Litan 2001: 16). The immaterial nature of the goods of the new economy also has consequences for the location of activity. One of the most immediate consequences of the immaterial nature of the Internet is that transportation costs do not matter in the setting up of a new firm. Empirical evidence nevertheless confirms that activities remain geographically concentrated in the production sectors that make intensive use of Internet connections. The persistence of clusters in activities free from variable costs challenges an approach founded purely on increasing returns in production (Quah 2000) and suggests that extra-technological reasons are required to explain the geography of the Internet.
The new economy as a co-ordinating device 129
Information economics and the new economy
Information economics provides an apparently adequate framework for the analysis of the new economy, both in the most advanced textbooks (see Shapiro and Varian 1998) and in the research programmes applied to Internet strategies. Information is relevant when economic performance exhibits imperfections. Information has individual utility as soon as agents do not all share the same knowledge, or all the knowledge related to both the economic goods that are supplied and their economic partners. In order to account for these possible forms of lack of information, the relevant typology distinguishes between symmetric and asymmetric information. Information is symmetric when the level of uncertainty is the same for all agents. In the case of asymmetry, certain agents or certain classes of agents possess information of better quality than others. The new economy presents relevant examples of both kinds of informational context. Risk and uncertainty in the activities of the new economy are at the origin of imperfect (symmetric) information. A large part of the start-ups have a very short life span. This unpredictability of the start-ups' future seems to be the consequence of two phenomena. Like other new firms involved in new sectors, Internet actors face important uncertainty about future activity outcomes. Moreover, given their cost structure, they must endure long years of negative returns before reaching their break-even point. The result is the difficulty of funding their activity during the first years of their existence. Banks and stock markets do not offer relevant solutions, whereas new financial intermediaries such as venture capitalists or business angels substitute for them more or less efficiently. While this kind of uncertainty is initially symmetric, the difficulties of raising funds induce start-up entrepreneurs to hide part of their information.
In a sense, the new economy is a world of 'crooks and heroes' (Volle 2000). The new firm must take high risks and seeks to have them borne by creditors. These creditors generally have no real possibility of assessing the risk level of the project or of evaluating the quality of their investment. Their protection is, first, a higher interest rate and, second, an adequate level of risk diversification. At this point, the relationships between start-ups of the new economy and their fund-providers are largely constrained by information asymmetry. Moral hazard and adverse selection are the well-known consequences of this kind of uncertainty. At the first stage of its development, information economics considered information only as a given datum of the economic context. It was then observed that the amount of available information could be increased, at a cost. The so-called new economy can indeed be considered as the economic sector that produces information and thus bears the cost of information. At this stage, there are two different reasons to refer to information economics when dealing with the new economy: information is the content of the produced goods, and imperfect information governs the relationship between producers and their economic partners.
Information goods are usually assigned to two categories. In the case of search goods, the quality of the good is directly observable and is disclosed after simple inspection. For the consumer, search goods only require low information costs, measurable in terms of search time. Experience goods are, in contrast, goods whose qualities cannot be disclosed by inspection but only after prior consumption. The quality of the information provided by Internet sites cannot be verified until after a time lag. When the market is imperfect, the informational content of experience goods takes a long time to become publicly available. When interindividual relationships are established in a network, the opportunities to reveal information publicly are multiplied. The reputation phenomenon is enhanced. If the produced good has an informational content, there is then no reason to suppose that this information is of poor quality. If that were the case, suppliers would be immediately penalized by a bad reputation effect. Private experience of a good is then not really useful on the Net, since public information generally reveals the level of quality of the available goods and services. The specialized literature frequently refers to the episode of Intel's Pentium processor in 1994. During that year, Intel commercialized this processor, which was supposed to be highly reliable. Private experience – the experience of the processor's failure – led consumers to react on the Internet, and Intel was forced to improve its components. Reputation is not only the consequence of a good circulation of information between agents on the network. Reputation frequently requires specific investments able to increase the identification of the brand by consumers. For these reasons, one generally considers that reputation implies imperfect competition, positive profits and a large firm.
Mobile telephony presents a very small number of operators, hardware producers and infrastructure suppliers using reputation as a strategic weapon. The information content of goods in the new economy has other well-identified consequences on competitive processes. A sufficient level of specialization may be sought by competitors on the Internet, in order to increase their reputation on particular segments. Modularization is another feature of the Internet economy. 'The modularization of the industry is one of the essential replies to the increasing demand for diversified and customised products. This enables to mass-produce basic components, that are assembled along different ways to get a wide variety of products and services' (Brousseau 2000: 3). Economic considerations on uncertainty and information are not absent from Mengerian developments. According to Menger, uncertainty finds its roots in the capitalistic form of production:

uncertainty with respect to the quantity and the quality of product one has at one's disposal through possession of the corresponding goods of higher order is greater in some branches of production than it is in others . . . The greater or less degree of uncertainty in predicting the quality and quantity of a product that men will have at their disposal due to their possession of the goods of higher order required for their production, depends upon the
greater or less degree of completeness of their knowledge of the elements of the causal process of production, and upon the greater or less degree of control they can exercise over these elements . . . This uncertainty is one of the most important factors in the economic uncertainty of man.
(Menger 1950: 51–2)

This form of production has no apparent similarities with the production of information goods. Nevertheless, it reveals that for Menger the levels of uncertainty and information are linked with the form of the production process. The longer the roundabout production process, the higher the uncertainty that arises when the output depends on stochastic components of the economic context. The new economy does have specificities related to the form of its costs. These specificities enhance uncertainty in the same way that the length of the production period does in Menger's examples. Moreover, the new goods introduced in many sectors of the new economy present a level of intrinsic uncertainty as regards their marketability, in a way comparable to Menger's examples of 'a grower of hops, a hunter and even a pearl-fisher' (Menger 1950: 51). An analysis founded only on the information content of the goods of the new economy would not sufficiently stress the specific nature of the link these goods create between agents. The most interesting line of analysis of the new economy therefore insists on the network form of co-ordination induced by the production and circulation of information goods.
The new economy as a new form of co-ordination
Prior to his developments on the role of information goods, Menger already suggests a somewhat academic distinction between material and immaterial goods. No fundamental distinction related to the nature of the good would be relevant except for the four prerequisites which 'must be simultaneously present: (i) a human need, (ii) such properties as render the thing capable of being brought into a causal connection with the satisfaction of this need, (iii) human knowledge of this causal connection, (iv) command of the thing sufficient to direct it to the satisfaction of the need' (Menger 1995: 35). Nevertheless, his own illustrations concerning immaterial goods differ somewhat from the traditional examples:

Of special interest are the goods that have been treated by some writers in our discipline as a special class of goods called 'relationships'. In this category are firms, goodwill, copyrights, patents, trade licenses, authors' rights, and also, according to some writers, family connections, friendship, love, religious and scientific fellowship, etc . . . The fact [that these things] are actually goods is shown, even without appeal for further proof, by the fact that we often encounter them as objects of commerce.
(Menger 1995: 38–9)
One of Menger's goals is to depart from a view whose main disadvantage would be its inadequacy to his gestating organicist conceptions. He therefore insists on the ability of the notion of good to integrate such 'relationships':

If, as is true of customer good-will, firms, monopoly rights, etc. . . . , these useful actions or inactions are of such a kind that we can dispose of them, there is no reason why we should not classify them as goods, without finding it necessary to resort to the obscure concept of 'relationships', and without bringing these 'relationships' into contrast with other goods as a special category.
(Menger 1995: 39–40)

However, these 'actions' belong to the range of things that we now consider as means of co-ordination between individual decisions. A few pages later, after the developments concerning the need for information in the roundabout production process, Menger introduces a new topic that is really in harmony with the paragraphs devoted to 'relationships'. These lines concern the importance of middlemen, without considering transaction costs, transportation costs and other trade imperfections, but stressing the informational role of these intermediaries:

As soon as a society reaches a certain level of civilisation, the growing division of labour causes the development of a special professional class which operates as an intermediary in exchanges and performs for the other members (shipping, distribution, the storing of the goods, etc. . . . ), but also the task of keeping records of the available quantities.
Thus, we observe that a specific class of people has a special professional interest in compiling data about the quantity of goods, so-called stocks in the widest sense of the word, currently at the disposal of the various peoples and nations whose trade they mediate. The data they compile cover trading regions that are smaller or larger (single counties, provinces, or even entire countries or continents) according to the position the intermediaries in question occupy in commercial life. They have, moreover, an interest in many other general kinds of information, but we will have occasion to discuss this at a later point.
(Menger 1995: 69)

The place these intermediaries occupy in the production and trade processes is not very far from the mission of our Internet. Information, which is the major content of the services supplied by these interconnections, facilitates other economic activities, just as Mengerian intermediaries did. Like those intermediaries, the network belongs to the category of 'relationships'. Now, we do not consider it as a good, but we are interested in the Mengerian
distinction between direct and indirect disposal of some good, which has important consequences in terms of information:

A person with consumption goods directly at his disposal is certain of their quality. But a person who has only indirect command of them, through possession of the corresponding goods of higher order, cannot determine with the same certainty the quantity and quality of the goods of first order that will be at his disposal at the end of the production process.
(Menger 1995: 51)

Like Mengerian specialists, the Internet has to improve the information of agents whose position could be assimilated to that of 'persons who have only indirect command' of some kind of good. This 'specific class of people [that] has a special professional interest in compiling data about the quantity of goods' is now a network composed of complementary nodes and links. This network generates externalities, since the value of a subscription rises with the number of subscribers. In the same order of ideas, the network's growth is fed by positive feedback effects, stemming from the existence of complementarities and intrinsically linked with network organization. The value of any item is enhanced by the increased availability of complementary goods, so that supply acts positively on demand through feedback effects. In other words, the feedback effects result from the existence of scale economies on the demand side:

The driving force behind the Internet is Network externalities – the fact that the value of a set of computers increases with the number of computers that are interconnected. The value of the connectivity arises . . . from the benefits for each individual to be able to communicate with others; the more users on the network, the more benefit . . . There are also indirect benefits associated with a large network. The more members of a network, the more likely it is that new services will be offered over the network.
(Cave and Mason 2001: 3)

A direct consequence of these externalities is the so-called Metcalfe's law: the value of a network increases like the square of the number of its participants. Considering the challenge represented by firm size, competitors may engage in a rather complex strategy. At the first stage, co-operation rather than competition is the means to ensure the network's performance and reliability; indeed, it is the relevant solution to provide the necessary compatibility between standards, and also to avoid inefficient demand lock-ins. When the initial supply of all competitors is insufficient, increasing returns are too weak to prevent the network from imploding. Therefore, a minimum customer base is required to ensure the profitability of the network. 'In the presence of strong network externalities, a monopolist exclusive holder of a technology may have an incentive to invite competitors and even subsidise them' (Economides 1995: 22).
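Metcalfe's law can be illustrated with a short sketch (the proportionality constant is an arbitrary assumption): counting the possible pairwise connections among n participants gives a value that grows like the square of network size.

```python
# Hedged illustration of Metcalfe's law: if each pair of participants can
# communicate, the number of possible connections among n participants is
# n * (n - 1) / 2, so network value grows roughly like the square of its
# size. The constant k is an arbitrary assumption, not an estimate.

def metcalfe_value(n: int, k: float = 1.0) -> float:
    """Network value proportional to the number of possible pairs."""
    return k * n * (n - 1) / 2

v_small = metcalfe_value(1_000)
v_large = metcalfe_value(2_000)
print(v_large / v_small)  # close to 4: doubling participants roughly quadruples value
```

This quadratic growth is what gives the installed base its strategic weight in the backbone competition discussed next.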
Then, when the network is installed, firms begin competing in order to control the whole system. In the case of the Internet, and according to the Katz and Shapiro model of sponsorship,

each backbone has an installed base and otherwise competes for unattached customers. The benefit derived by a customer from joining a backbone is an increasing function of the size of his or her backbones and of the quality of interconnection with the other backbones. This quality of interconnection is a strategic variable, and because 'it takes two to tango', the quality of interconnection is governed by the preferences of the backbone which values interconnection the least.
(Cremer et al. 1999: 2)

Recent literature extends the analysis of such a competitive process. The resulting conclusions weigh the advantages for competitors of gaining a larger share of the network against the penalties induced by the simultaneous withdrawal of some customers as a consequence of a lesser compatibility of existing standards. In this setting, innovation is responsible for standards differentiation; it is motivated by 'network-share'-seeking activity:

When connectivity between the two networks is degraded, both backbones face a demand reduction, as their customers' access to each others deteriorates. Second, a degradation of connectivity creates a quality differentiation between the two networks. The larger backbone, which relies relatively less on access to the other backbone's customers, gains a competitive advantage, and competition between the two backbones is softened. However, when other backbones are present, a similar quality differentiation effect handicaps both backbones relative to the other ones.
(Cremer et al. 1999: 2)

When the backbones are not too few, and when they are of equal size, none of them has an incentive to degrade interconnection.
A backbone that degraded the quality of its interconnection with an equal-sized backbone would not gain a sufficient advantage over it, and its position would deteriorate relative to the installed base of the other backbones. Multihoming on the customers' part becomes the essential demand-side response to the backbones' rational choices of standard. During the different stages of a network's life (adoption, performance), trade-offs emerge between flexibility/system openness and control. 'Innovation requires openness. In order to stimulate innovation, the existing institutional framework has to be flexible and incomplete to be able to welcome new practices' (Brousseau 2000: 4). Thus, the co-operative stage induces competitors to adopt adaptive technologies which increase their compatibility potential. When the network reaches a sufficient size, less flexibility is needed but, as a counterpart, the degree of control of the existing standard must increase in order to maintain the leadership position.
Prices are not the only way to compete in the new economy. Indeed, price differentiation is bounded by a tendency to supply free products. When the network stands in a transitory position, between two temporary equilibria, adequate price-war behaviours develop among competitors. Special low prices may be set to capture extra demand share, while the incurred costs are paid back by the exploitation of more profitable segments related to the same standard. Flat prices may be the end point of sequential price contraction. The emergence of a standard puts limits on differentiation and increases price competition: when compatibility problems are solved and when network externalities are enjoyed, it becomes very difficult to implement a differentiation strategy. Price war then remains the only option. Although marginal cost may be zero, fixed costs have to be incurred: advertising substitutes for final consumers' payments, and oligopolists may be induced to compete for advertisers. This competition has contrasting effects on consumers' well-being: consumers gain from network externalities, but they lose from the lack of variety and from the remaining imperfections in technological compatibility. Whatever the originality of competitive behaviour within the new economy, at this stage of the reflection network analysis could only offer incremental developments of the imperfect-markets chapters of industrial organization handbooks. But the Internet, and more generally information and knowledge-based networks, reveal a fundamentally different content. Like Mengerian 'relationships', this kind of network can be considered as a specific configuration of the market and, even more, as a singular co-ordination device. Standard microeconomic analysis elaborates a widely accepted distinction between activities and forms of reallocation or co-ordinating processes.
Consumption, production, financing, etc. are elementary activities which contribute both to the individual satisfaction of agents and to national wealth and growth. Market, State and contractual agreements are the most representative forms of co-ordination. Relationships are the essential component of these interindividual connections. Information and knowledge-based networks are another form of economic connection between decentralized agents. But this form of co-ordination seems to break the usual distinction between activities and co-ordinating processes. Consumption goods generally have characteristics that are independent of the way they are allocated. Information goods have just the opposite property. Their value is essentially determined by the form and the size of the network of which they constitute a fundamental component. The representation of the market is founded on the interaction of the two opposite forces of supply and demand, which contribute through different equilibrium categories to define the level of co-ordination. Within information and knowledge-based networks, supply and demand cannot generally be isolated; in fact, they lose all relevance. Consumers do not consume search engines, access providers, sites, etc. but use them in order to get the goods they have decided to purchase or to access the services they need. Relationships of this kind can be considered as specific intermediaries that form a sort of bijection with the network, as middlemen do with the market.
If real markets were of the Walrasian kind, there would be no room for middlemen. Conversely, middlemen exist because of market imperfections, like transaction costs, missing information or poor-quality information (Rubinstein and Wolinsky 1987). But Internet intermediaries exist as constitutive elements of the Web architecture. Indeed, the total decentralization of subscribers requires the intervention of a large class of operators which guarantees the reliability and the efficiency of the network. In a world characterized by the wide diffusion of Internet services, 'the introduction of advanced information technologies greatly reduces the cost of search, leaving little room for [other] match-making middlemen. These middlemen services are being replaced by capital-intensive services that rely on large databases and sophisticated search engines' (Wimmer et al. 2000: 411). 'The main role of intermediaries is to gather information on users . . . so as to help different classes of agents, in particular buyers and sellers of one specific good, to find each other' (Caillaud and Julien 2000: 1). The activity of middlemen involves specific competitive relationships between users and intermediaries. 'The combination of . . . asymmetric network externality and third degree price discrimination opens the possibility of cross-subsidisation among the others . . . An intermediary may subsidise the participation of some agents in order to increase its attractiveness for other participants' (Caillaud and Julien 2000: 2). The network's configuration has important effects on pricing in the new economy. In the case of the Internet, free access is a sort of norm but takes different modalities. Recent literature evokes three categories of prices dealt with by McKnight and Bailey (1995). Flat-rate pricing corresponds to the payment of a fee to connect to the network, irrespective of connection time.
Usage-sensitive pricing makes the price depend on the degree of congestion on the demand side. Finally, in the transaction-based model, pricing is a function of the characteristics of the transaction. With flat-rate pricing, access is unrestricted for customers, the marginal cost of a connection is zero and there is no way to avoid congestion. This pricing method is rather well established in a large part of the Internet, in the US as well as in Europe:

The underlying economic problem is an old one, known as the tragedy of the commons . . . Users of any common and freely-accessed resource have a tendency to over-exploit: each will use the resource until the private (marginal) cost of doing so equals the private (marginal) benefit, ignoring the social consequences of their actions.
(Cave and Mason 2001: 26)

The other pricing patterns are economic responses to the inefficiency of flat-rate pricing. Mackie-Mason and Varian (1997) suggest solving the problem of pricing Internet services by introducing a 'smart market' principle. This solution involves a zero usage price when there is no congestion. Only on the congested part of the network is a priority order introduced, according to bids attached to the messages by users. This solution is hard to implement but copes well with the tendency to consider the Internet as a free-access co-ordinating
scheme, except in the more congested (attractive?) parts of the network. Other pricing formulas have been introduced in the literature, such as the 'Paris Metro pricing' proposal of Odlyzko (1997), which suggests differentiating between two types of access: a zero price for congested access and a non-zero price for less congested access, without any other differentiation of the supplied services. Recent developments in mathematical models able to test the properties of these different principles have not yet provided a definitive answer to the problem of selecting the optimal pricing rule among them (compare Gupta et al. 1997 and Mason 2000). The new economy has special kinds of intermediaries; it also generates special sources of co-ordination failure. While the network structure is compatible with a high degree of connectivity between agents and thus apparently reaches Pareto optimality easily, the structure of costs and, even more, the effect of externalities induce underexploitation of the network's capacities. Pre-emption and replacement are the most representative effects occurring during the adoption stage. For a technology consumer, the decision to adopt a new technology depends only on the relative weights of the fixed and variable advantages generated by the adoption. When the firm already runs an old technology and faces adjustment costs, its natural decision is to postpone the adoption of the new technology. Conversely, when the firm enters the network, it has some tendency to adopt the new technology pre-emptively in order to avoid subsequent adaptation costs. Neither of these decisions is socially efficient. When the technology has already been adopted, other kinds of co-ordination failures are likely to emerge. Some of them stem from the free disposal, while sunk costs are to be borne by advertisers.
Every site can be considered as composed of two elements: the information needed by the consumer and the advertising message which covers the cost. While the total utility of each consultation is the sum of the utilities resulting respectively from the consumption of the informational content and of the advertising message, the fee paid by advertisers is directly connected to the second component of the site, the former determining only the size of demand. In such a situation, the individual optimality of the advertisers' choice does not correspond to social efficiency, both because of the flat marginal cost and because of the neglect of the utility resulting from the first component of the site. More generally, 'the current Internet pricing structures are unlikely to be optimal. The large heterogeneity in uses of the Internet means that equal treatment of all sources (the current Internet practice) is inefficient' (Cave and Mason 2001: 26). In a recent work, Cremer focuses on a special consequence of network externalities. When we get close to universal access to the network, 'there are new services which can be offered, or rather there exist new opportunities to deliver . . . broadcasting services' (Cremer 2000). Thus, during the process of access diffusion, the returns to adoption are initially decreasing, as the newly connected people attach less and less value to the network. Then the network generates increasing returns when it can be used for the systematic diffusion of large-scale messages such as professional correspondence, phone bills or electricity invoices.
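Cremer's U-shaped pattern of adoption returns can be sketched under purely assumed functional forms (both components below are hypothetical illustrations, not estimates from the cited model): later adopters value access less, but a broadcast use of the network only becomes valuable near universal coverage, so the marginal return first falls and then rises.

```python
# Toy sketch of the U-shaped adoption returns described above, under
# assumed functional forms: the 'individual' component falls linearly as
# coverage grows (later adopters value the network less), while the
# 'broadcast' component -- valuable only near universal access -- rises
# steeply with coverage. Neither form is taken from Cremer (2000).

def marginal_adoption_value(coverage: float) -> float:
    """Assumed marginal value of one more subscriber at a given coverage in [0, 1]."""
    individual = 1.0 - coverage        # later adopters attach less value to access
    broadcast = 2.0 * coverage ** 4    # broadcast uses need near-universal reach
    return individual + broadcast

early, middle, late = (marginal_adoption_value(x) for x in (0.1, 0.5, 0.95))
assert early > middle   # returns first decrease during diffusion...
assert late > middle    # ...then increase again near universal access
```

The non-monotonic shape is what produces the multiple equilibria, and hence the co-ordination problems, discussed in the next paragraph.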
Increasing returns are at the origin of multiple equilibria and subsequent inefficiencies. These co-ordination problems call for active policies. On the one hand, such policies must 'guarantee the consistency of the property rights system, and the enforceability of contracts. This is essential to enable innovators, that create intangibles, to efficiently exchange and recombine those intangibles' (Brousseau 2000: 4–5). On the other hand, they have to guarantee the Pareto efficiency of the network. If there are new relationships in our modern economies, the kind of intermediation generated by the new economy is undoubtedly the most relevant one. Just as money and middlemen are the way to co-ordinate market transactions, the Internet structure is the appropriate form of co-ordination of network relations. The process of price determination in the new economy is quite different from the one at work in the 'old economy'. However, the networks representative of the new economy, like the markets of the 'old economy', are imperfect co-ordination devices. Both systems are compatible with the existence of other 'organically created institutions' (Menger 1963: 57), like the State, or contracts, that Menger would have considered as ways to improve the co-ordination of economic activities.
Conclusion
This chapter investigates the relevance of a Mengerian methodology for analysing the microeconomic foundations of the new economy. This new configuration of economic activity can be considered as an extreme consequence of the technological properties of new methods of production. This view, recalled in the second section, cannot distinguish the new economy from other production processes with increasing returns. Another rationalization of the new economy is provided by information economics. In line with its previous developments, this currently accepted microeconomic approach presents the new economy as the expression of more general concepts such as experience goods or search goods, underlining the relevant consequences of uncertainty and asymmetry for the properties of these goods. The fourth section tries to base an alternative approach to the new economy on Mengerian developments related to relationships – special kinds of goods, according to Menger, and special forms of intermediation that could be just as relevant when markets co-ordinate economic actions as when networks provide the essential economic link between agents. Finally, it may be suggested that the so-called new economy is an adequate solution to the information problem arising both from the increasing complexity of roundabout production processes and from the enlargement of the range of qualities of consumption goods. Operators of this 'sector' could thus correspond to some members of this 'special professional class which operates as an intermediary in exchanges and performs for the other members' (Menger 1950: 69).
Acknowledgments The authors would like to thank J. Birner, J.P. Centi and C. Schmidt for their helpful critiques and suggestions. They are particularly grateful to B. Caldwell for his comments on a first version of this chapter. The usual disclaimer applies.
References
Aimar, T. (1996) 'Money and uncertainty in the economic thought of Carl Menger', in C. Schmidt (ed.) Uncertainty in Economic Thought, Aldershot: Edward Elgar.
Aimar, T. (1999) 'Time, co-ordination and ignorance: A comparison between Hayek and Lachmann', History of Economic Ideas VII(1–2): 137–65.
Aréna, R. and Gloria, S. (1997) 'Menger and Walras on money: A comparative view', History of Economic Thought Conference, Bristol, 3–5 September.
Bianchi, M. (1994) 'Hayek's spontaneous order: The correct versus the corrigible society', in J. Birner and R. van Zijp (eds) Hayek, Co-ordination and Evolution, London: Routledge, pp. 232–51.
Brousseau, E. (2000) 'E-economy, new growth regime and public policies', The United Nations Economic and Social Commission for Western Asia (ESCWA), Beirut, Lebanon, 15–16 May.
Caillaud, B. and Julien, B. (2000) 'Competing cybermediaries', mimeo, September.
Cave, M. and Mason, R. (2001) 'The economics and regulation of the Internet', paper prepared for the OXREP issue on economics and the Internet, mimeo.
Cremer, J. (2000) 'Network externalities and universal service obligation in the Internet', European Economic Review 44.
Cremer, J., Rey, P. and Tirole, J. (1999) 'Connectivity of the commercial Internet', GREMAQ, May.
Economides, N. (1996) 'The economics of networks', International Journal of Industrial Organization 14(2).
Garretsen, R.W. (1994) 'The relevance of Hayek for mainstream economics', in J. Birner and R. van Zijp (eds) Hayek, Co-ordination and Evolution, London: Routledge, pp. 94–108.
Gupta, A., Stahl, D.O. and Whinston, A.B. (1997) 'A stochastic equilibrium model of Internet pricing', Journal of Economic Dynamics and Control 21(4–5): 697–722.
Hodgson, G.M. (1992) 'Carl Menger's theory of the evolution of money: Some problems', Review of Political Economy 4(4).
Kirzner, I. (1997) 'Entrepreneurial discovery and the competitive market process: An Austrian approach', Journal of Economic Literature 35(1), March: 60–85.
Kiyotaki, N. and Wright, R. (1989) 'On money as a medium of exchange', Journal of Political Economy 97(4): 927–54.
Kiyotaki, N. and Wright, R. (1991) 'A contribution to the pure theory of money', Journal of Economic Theory 53: 215–35.
Litan, R. (2001) 'The Internet economy', Foreign Policy: 16–24.
Mackie-Mason, J.K. and Varian, H.R. (1997) 'Economic FAQs about the Internet', in L.W. McKnight and J.P. Bailey (eds) Internet Economics, Cambridge, MA: MIT Press.
McKnight, L.W. and Bailey, J.P. (1995) 'An introduction to Internet economics', MIT Workshop on Internet Economics, March, http://www.press.umich.edu/.
Mason, R. (2000) 'Simple competitive Internet pricing', European Economic Review 44: 1045–56.
Menger, C. (1892a) 'On the origin of money', Economic Journal 2(6): 239–55.
Menger, C. (1892b) 'La monnaie, mesure de la valeur', Revue d'Economie Politique IV: 159–75.
Menger, C. (1950) Principles of Economics, Glencoe, IL: Free Press (reproduced in I. Kirzner (ed.) Classics in Austrian Economics, London: William Pickering, 1995, Vol. I) (English translation by J. Dingwall and B.F. Hoselitz of Grundsätze der Volkswirthschaftslehre, Vienna: Wilhelm Braumüller, 1871).
Menger, C. (1963) Problems of Economics and Sociology, Urbana: University of Illinois Press (English translation by F.J. Nock of Untersuchungen über die Methode der Socialwissenschaften und der Politischen Oekonomie insbesondere, 1883).
Menger, C. (1985) Investigations into the Method of the Social Sciences with Special Reference to Economics, New York and London: New York University Press.
Menger, C. (1994) Lectures to Crown Prince Rudolf of Austria, ed. E.W. Streissler and M. Streissler, Aldershot: Edward Elgar (English translation by M. Streissler and D.F. Good of the 1876 German original).
Odlyzko, A. (1997) 'A modest proposal for preventing Internet congestion', AT&T Labs research mimeo.
O'Driscoll, G.P. (1986) 'Money: Menger's evolutionary theory', History of Political Economy 18(4): 601–16.
Quah, D. (2000) 'Internet cluster emergence', European Economic Review 44: 1032–44.
Rubinstein, A. and Wolinsky, A. (1987) 'Middlemen', The Quarterly Journal of Economics, August.
Shapiro, C. and Varian, H.R. (1998) Information Rules: A Strategic Guide to the Network Economy, Boston: Harvard Business School Press.
Volle, M. (2000) E-conomie, Paris: Economica.
Wimmer, B.S., Townsend, A. and Chezum, B.E. (2000) 'Information technologies and the middleman: The changing role of information intermediaries in an information-rich economy', Journal of Labor Research XXI(3), Summer: 407–18.
Part III
The organization of the firm
6
“Austrian” determinants of economic organization in the knowledge economy Nicolai J. Foss
Introduction There is evidence that firm organization is currently undergoing profound change as a result of the emergence of the knowledge economy, not only with respect to the firm’s vertical and horizontal boundaries (Helper et al. 2000), but also with respect to internal organization (Gittleman et al. 1998; Ichniowski et al. 1996; Mendelsson and Pillai 1999; OECD 1999; Osterman 2000). Factors such as increased differentiation of tastes on the demand side (e.g. Milgrom and Roberts 1990), acceleration of innovation and technological development on the supply side (D’Aveni 1994), and changes in the composition of labor on the input side (Tomlinson 1999) are argued to be important drivers behind the current dynamics of economic organization. However, it is not completely evident how to understand the nature of the relevant causal forces. Why, for example, may these drivers produce changes in internal organization towards more decentralized forms? From the perspective of, for example, standard incentive theory, this is something of a puzzle, since the crucial revelation principle implies that even with asymmetric information, centralization will (weakly) dominate decentralization. It is well-known that this result depends on a highly unrealistic zero communication cost assumption. This assumption is similar to the market socialist assumption of “glass walls” mysteriously arising under socialism, an assumption that was harshly criticized by Hayek in the context of the socialist calculation debate. In fact, a number of economists, Austrians and non-Austrians alike, who have addressed economic organization in the knowledge economy have drawn upon Austrian (more precisely, Hayekian) ideas on the need for decentralization fostered by the presence of dispersed knowledge (Cowen and Parker 1997; Ellig 1993, 1997; Ellig and Gable 1993; Foss 1999, 2000; Ghoshal et al. 1995; Hodgson 1998; Jensen and Meckling 1992; Jensen and Wruck 1994). 
Briefly, the overall conclusions that emerge from a number of these studies are that hierarchy and planning methods are as problematic inside firms as they have proved to be outside firms, that firms need to harness the ability of markets to utilize, exchange and build information rapidly in response to changing contingencies, and that extensive delegation of decision rights and the use of high-powered incentives to support this are imperative. It is easy to see how
this may imply a skeptical attitude to the analytical dichotomization between planned firms and unplanned markets, present not only in Coase (1937) (and most of post-Coasian organizational economics), but also in central Austrian contributions such as Mises (1936, 1944, 1949) and Hayek (1973). This chapter links up with recent applications of Austrian ideas to issues of organizational design. In particular, I focus on internal organization issues. Like a number of recent contributors, I discuss the implications for internal organization of the Hayekian notion that the dispersed and subjective character of knowledge is a strongly binding constraint on the use of planned coordination (Cowen and Parker 1997; Ellig and Gable 1993; Ghoshal et al. 1995; Jensen and Meckling 1992; Jensen and Wruck 1993). However, I argue that – contrary to some of these contributors – it does not follow that firms should emulate markets as far as possible. In his critique of market socialism, Mises (1949) pointed to the folly of “playing markets,” and I draw on his overall argument that bringing coordination mechanisms characteristic of market organization into a planned organization is inherently problematic. I also draw on Mises’ (1949) related insight that the mixed economy is inherently unstable, as well as on his insights into property rights and ownership (Foss 1994a; Mises 1936, 1949). In modern parlance, Mises argued that the economic institutions of capitalism are strongly complementary so that (unhampered) capitalism is a stable system, consisting of interlocking elements, such that changes away from pure capitalism will result in serious allocative inefficiencies.
I use this fundamental notion to argue that firms are also systems of complementary elements and that this fact places constraints on the extent to which firms may be made “market-like.” In particular, I argue with Mises that “[t]he function of the entrepreneur cannot be separated from the direction of the employment of factors of production for the accomplishment of definite tasks. The entrepreneur controls the factors of production” (Mises 1949: 306). The “direction” and “control” undertaken by “the entrepreneur” is qualitatively different from allocation by means of the price mechanism, since it relies on authority that is backed up by the entrepreneur’s ownership of the alienable (non-human) means of production. In other words, Misesian arguments are used to criticize arguments derived from Hayekian insights that firms should emulate markets to the largest possible extent. Thus, Mises sides, as it were, with the Coasian notion that markets and hierarchies are indeed different mechanisms for resource allocation.1
The knowledge economy: a challenge to economic organization The concept of the “knowledge economy” is used in a loose sense to refer to the overall tendencies that many industries become increasingly “knowledge-intensive,” that an increasing share of the workforce is constituted by “knowledge workers” (Tomlinson 1999), and that commercially useful knowledge becomes increasingly distributed and needs to be accessed from several sources, an increasing number of which lie outside the boundaries of firms (Liebeskind et al. 1995; Matusik and Hill 1998). Although they have so far had little to say here, economists should
face the challenge of examining economic organization in the context of the emerging knowledge economy. Fundamentally, this is because recent discussion goes right to the heart of the crucial and perennial issues in the theory of economic organization, challenging us to rethink issues such as, What are the limits to resource allocation by means of authority? What do we mean by authority? What defines the boundaries of firms? How do we distinguish an independent contractor from an employee? The need to revisit such fundamental questions is prompted by the numerous writers who claim that the knowledge economy will fundamentally change the answers we provide, and that traditional answers are no longer valid for the knowledge economy (see e.g. contributions to Myers 1996; Prusak 1997). The following section presents a brief review of these arguments. Recent claims about economic organization in a knowledge economy: review Although a number of fields and sub-fields are involved in the ongoing discussion of efficient organization in the context of the emerging knowledge economy, some distinct themes are discernible. Overall, a consensus seems to be emerging that tasks and activities in the knowledge economy need to be coordinated in a manner that is very different from the management of traditional manufacturing activities, with profound transforming implications for the authority relation and the internal organization and boundaries of firms. For example, Cowen and Parker (1997) explain that Market changes are moving manufacturing farther and farther away from steady-state, low variety, long-batch production runs, relevant to Taylorist methods, to high variety and small runs . . . Organizations are adopting new forms of decentralization to cope with the instability, uncertainty, and pace of change of the market-place . . . 
In cluster or network working, employees of undifferentiated rank may operate temporarily on a certain task or tasks in teams. The clusters are largely autonomous and engage in decentralized decision-making and planning . . . They are conducive to individual initiative (“intrapreneurship”) and faster decision-taking. They facilitate organizational flexibility. (Cowen and Parker 1997) A number of writers also claim that not only the internal organization of firms but also their boundaries will be profoundly affected by the emerging knowledge economy. Specifically, because of the growing importance in knowledge-intensive industries of being able to access knowledge from multiple sources, knowledge-based networks (Harryson 2000) increasingly become the relevant dimension for understanding the organization of economic activities. Such networks typically cut across the legal boundaries of the firm. Networks, so the argument goes, are particularly useful organizational arrangements for sourcing and transferring knowledge because the costs of pricing knowledge (in a market) or transferring it
(in a hierarchy) (Liebeskind et al. 1995: 7; Powell 1990: 304) often exceed the costs of transmitting knowledge within an informal network. Furthermore, the increased reliance on knowledge networks tends to erode authority-based definitions of the boundaries of the firm, because authority increasingly shifts to expert individuals who control crucial information resources and may not be employees of the firm. As Zucker argues: While bureaucratic authority is by definition located within the firm’s boundaries, expert authority depends on the information resources available to an individual, and not on the authority of office. Thus, authority may be located within the organization . . . but when an external authority market can provide information that leads to greater effectiveness, then authority tends to migrate into the market. (Zucker 1991: 164) To the extent that important knowledge assets are increasingly controlled by employees (“knowledge workers”) themselves, traditional authority relations are fading into insignificance. This is partly a result of the increased bargaining power on the part of knowledge workers (stemming from the control over critical knowledge assets), and partly a result of the increasingly specialist nature of knowledge work (Hodgson 1998). The specialist nature of knowledge work implies that principals/employers become ignorant about (some of) the actions that are open to agents/employees, thus making the exercise of authority through direction increasingly inefficient. The combined effect of the increased importance of knowledge assets that are controlled by knowledge workers themselves and of the increasingly specialist nature of knowledge work is to wreck the traditional economist’s criterion of what distinguishes market transactions from hierarchical transactions (Zingales 2000). 
Thus, whether direction by means of order giving (Coase 1937; Demsetz 1991; Simon 1951; Williamson 1985) and backed up by the ownership of alienable assets (Hart and Moore 1990) obtains or not is increasingly irrelevant for understanding the organization of economic activities in a knowledge economy (Grandori 2001). Finally, with respect to theories rather than phenomena, a number of writers are quite explicit that the advent of the knowledge economy increasingly questions the relevance of Coasian organizational economics, with its rather strong dichotomization between allocation of resources by means of authority and by means of prices (e.g. Boisot 1998; Helper et al. 2000). Thus, Cowen and Parker argue that firms and markets are not exactly the same, but rather they differ in empirical terms. They refer to different means of organizing economic activity, albeit means that do not differ substantially in kind. . . . This . . . view does not seek to find a clear-cut distinction between firms and markets. Rather the difference between the firm and the market as a resource allocator involves what might more usefully be viewed as subtle differences relating to contracting. (Cowen and Parker 1997: 15)
Much of the following is essentially an argument that, contrary to Cowen and Parker, more is involved in the choice between firms and markets than “subtle differences relating to contracting.” However, before this argument can be presented, it is necessary to consider the Hayekian insights that lead some to conclude that there is not much of a difference between firms and markets.
Interpretation: organizing in Hayekian settings Hayekian knowledge settings It is no coincidence that so many of those who write on economic organization in the emerging knowledge economy cite Hayek’s work, particularly his 1945 paper, “The use of knowledge in society.” This is because the basic propositions (or generalizing statements) about the role of knowledge in production that drive recent debate about organization in the knowledge economy have a strongly Hayekian flavor. It is possible to discern at least two such propositions in recent debate. When these two propositions are met, I shall say that “Hayekian settings” obtain. The first one is a claim that because of the increased need for diverse, specialized knowledge in production, commercially relevant knowledge is becoming increasingly distributed or dispersed in the sense of Hayek (1945) (e.g. Coombs and Metcalfe 2000; Ghoshal et al. 1995; Hodgson 1998). “Distributed” or “dispersed” knowledge is knowledge that is not possessed by any single mind and which may be private, fleeting, tacit and subjectively held (O’Driscoll and Rizzo 1985) but which it may nevertheless be necessary to somehow mobilize (e.g. through a price system) for the carrying out of a productive task or a complex of such tasks, as, of course, Hayek (1945) famously argued. The second proposition that appears to be at the heart of much discussion of organization in the knowledge economy is that because of the increased importance of sourcing specialist knowledge, knowledge assets controlled by individual agents (“knowledge workers”) are becoming increasingly important in production (e.g. Boisot 1998; Hodgson 1998) in the sense of accounting for a greater part of the value-added of goods.
Economic organization in the Hayekian setting In the interpretation adopted here, the two Hayekian propositions above drive recent debate on economic organization in the knowledge economy in the specific sense that the phenomena they describe are claimed causally to produce a host of other phenomena that relate to economic organization (Hodgson 1998). For analytical purposes, I interpret recent debate to assert that the phenomena of knowledge becoming increasingly dispersed and important in production influence authority relations, the boundaries of the firm and the ways in which various mechanisms of coordination may be combined inside firms in regular and predictable ways. Specifically, in the emerging knowledge economy, as approximated by the two Hayekian propositions above, traditional authority relations vanish (Hodgson
1998; Zucker 1991); the boundaries of firms blur because of the increasing importance of knowledge networks that transcend those boundaries; and coordination mechanisms (i.e. authority, norms, teams, prices, etc.) will increasingly be combined in new, innovative ways (producing what is often referred to as “new organizational forms”). This final claim implies that organizational forms do not come in a few rigid, discrete forms, but that coordination mechanisms can be combined in a multitude of ways (e.g. Grandori 1997; Helper et al. 2000). In particular, firms may adopt coordination mechanisms that we normally think of as characteristic of the market rather than of planned coordination, in particular pricing, entrepreneurial control over resources, and high-powered incentives (Miles et al. 1997). In the following, I present and discuss some of these arguments. It is convenient to begin with a discussion of the meaning of authority. Authority: Coase and Simon Although Max Weber had much of interest to offer on the topic of authority in the beginning of the twentieth century, it was not until Coase (1937) that an economic conceptualization of authority was offered. Coase’s understanding, supplemented with a later contribution by Simon (1951), still provides most economists’ working definition of authority. Moreover, Coase initiated the tendency to see the employment contract and the authority relation as the defining characteristic of the firm. In Coase (1937), the employment contract is explained as one whereby the factor, for a certain remuneration (which may be fixed or fluctuating) agrees to obey the directions of an entrepreneur within certain limits. The essence of the contract is that it should only state the limits to the powers of the entrepreneur. Within these limits, he can therefore direct the other factors of production. 
(Coase 1937: 242)
This contractually agreed upon right to “direct the other factors of production” is, of course, authority. A later paper by Herbert Simon (1951) provided a formalization of Coase’s notion of the employment relationship and a clarification of the notion of authority, which is defined as obtaining when a “boss” is permitted by a “worker” to select actions, A0 ⊆ A, where A is the set of the worker’s possible behaviors. More or less authority is then simply defined as making the set A0 larger or smaller.
Some puzzles relating to authority
The notion of authority in the context of an employment contract as the defining characteristic of the firm raises several puzzles. In the present context, four such puzzles are particularly relevant:
1 What is ultimately the source of the employer’s authority? In other words, why exactly is it that the employee accepts to be directed when non-alienable assets (i.e. human capital) cannot be traded?
2 What happens to the notion of authority in the sense of Coase and Simon if the employer does not possess full knowledge of the employee’s action set (i.e. the actions that he can take when uncertainty is resolved), so that the employee can take actions about which the employer has no knowledge?
3 What happens to the notion of authority if the employee is better informed than the employer with respect to how certain tasks should (optimally) be carried out? In the works of Coase and Simon, there is an implicit assumption that the employer is at least as well informed, and presumably better, about the efficiency implications of alternative actions.
4 What happens to the notion of authority if employees control knowledge assets that are “within their heads” and which may give them substantial bargaining power?
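Simon’s formalization, and the gap that puzzle 2 opens in it, can be sketched in a few lines. The following toy illustration is not from the original texts; the concrete action names and sets are invented purely for exposition.

```python
# Toy illustration of Simon's (1951) authority relation: the worker's
# action set A, the permitted subset A0 the boss may select from, and
# the gap that "hidden knowledge" opens up. All actions are invented.

A = {"write_report", "visit_client", "redesign_process", "train_peer"}

# Authority obtains when the boss may pick any action in A0, with A0 a
# subset of A.
A0 = {"write_report", "visit_client"}
assert A0 <= A

# "More authority" is simply a larger permitted set A0.
A0_wide = A0 | {"train_peer"}
assert A0 < A0_wide <= A

# Hidden knowledge (puzzle 2): the employer only knows part of A, so
# some feasible actions can never be directed, however wide A0 is.
A_known_to_boss = {"write_report", "visit_client", "train_peer"}
hidden = A - A_known_to_boss
print(sorted(hidden))  # prints ['redesign_process']
```

The point of the sketch is that widening A0 is useless against hidden knowledge: authority can only range over actions the employer can name.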
The latter three questions concern the limits of authority in Hayekian settings, while the first question asks about the sources of the employer’s bargaining power over the employee. Only the first question has been given extensive treatment in the economics of organization. In fact, it has been one of the classic points of contention in a long-standing debate. The sources of authority Thus, to writers such as Alchian and Demsetz (1972) and Cheung (1983), it is not meaningful to assume that an employer can force an employee to do what the employer wants in the absence of coercion. As Alchian and Demsetz (1972) argue, an implication of this view is that the distinction between the authority-based and the price-based modes of allocation emphasized by Coase (1937) is superficial; there is no economic difference between “firing” one’s grocer and firing one’s secretary. In fact, it does not make much sense to speak of firms as well-defined entities at all (Cheung 1983). Note that this “nexus of contracts” position is remarkably close to the position that in a knowledge-based economy, the firm/market boundary is unclear and the notion of authority elusive at best.2 The work of Oliver Hart and others (Hart 1995, 1996; Hart and Moore 1990) – called the incomplete-contracts literature – provides a response to the Alchian and Demsetz/Cheung view. To some extent it does so by changing the terms of the debate: whereas Weber, Coase, Simon, and others focused on direct authority over (non-alienable) human assets, the incomplete-contracts literature rather explains authority over human assets as something that is indirectly acquired through ownership of alienable assets. Two kinds of assets are distinguished, namely alienable (i.e. non-human) and non-alienable (i.e. human) assets. 
The basic distinction between an independent contractor and an employee, that is, between an inter-firm and an intra-firm transaction, now turns on who owns the alienable assets that an agent (whether independent or employee) utilizes in his work. An independent contractor owns his tools, etc., while an employee does not. The
importance of asset ownership derives from the fact that the willingness of an agent to undertake a non-contractible investment (say, exertion of effort or investment in human capital), which is specific to the asset, depends on who owns the asset. The parties to a relation (whether customer and grocer, or employer and employee) are seen as being in a bargaining situation, each having an outside option. Given this, the division of the surplus from the relation will depend on who owns the alienable assets in the relation, since the pattern of ownership will influence the parties’ outside options.3 In turn, the expectation of this division feeds back into the investments that the parties are willing to make. Efficiency considerations then suggest that authority (i.e. ownership of the alienable assets) should be allocated to the agent who makes the most important (non-contractible) relation-specific investment. Of course, the incomplete-contracts approach is a neoclassical approach, not an Austrian one. However, some interesting aspects in it make it complementary to a Misesian perspective, notably the emphasis on ownership as backing up authority, the argument that the boundaries of the firm lie where the entrepreneur’s ownership of alienable assets stops, and the implication that the entrepreneur assumes his directing role because his inputs to the venture are those that matter the most for the total monetary surplus. I shall rely on these basic ideas in criticizing some of the claims put forward by writers on economic organization in the knowledge economy. The Hayekian challenge to authority However, we are still left with puzzles (2) to (4). The incomplete-contracts perspective does not provide an immediate answer to these. They are of a different nature, since they do not directly concern the issue of the sources of authority; rather, they are about the reach and efficiency of authority in Hayekian settings. 
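The incomplete-contracts logic reviewed above, in which asset ownership shapes outside options, outside options shape surplus division, and the anticipated division shapes investment incentives, can be made concrete with a stylized calculation. All the numbers below are invented for illustration and are not drawn from Hart and Moore (1990).

```python
# Stylized incomplete-contracts sketch: ownership of the alienable
# asset raises an agent's outside option, which (via a 50/50 split of
# the remaining bargaining surplus) raises the return to his
# relation-specific investment. All payoff numbers are invented.

def payoff(invest, owns_asset):
    """Agent's payoff from a relation, given investment and ownership."""
    total = 10.0 + (4.0 if invest else 0.0)   # joint surplus of the relation
    outside = 3.0 if owns_asset else 0.0      # option if bargaining fails
    if invest and owns_asset:
        outside += 3.0  # the investment is partly specific to the asset
    # Division: outside option plus half of the remaining surplus.
    share = outside + (total - outside) / 2
    return share - (3.0 if invest else 0.0)   # investment costs 3

# The agent finds the (non-contractible) investment worthwhile only
# when he owns the asset he works with.
print(payoff(True, True) > payoff(False, True))    # owner invests
print(payoff(True, False) > payoff(False, False))  # non-owner does not
```

Under these parameters the owner invests (payoff 7.0 versus 6.5) while the non-owner does not (4.0 versus 5.0), which is the efficiency rationale for allocating ownership, and hence authority, to the agent whose relation-specific investment matters most.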
In one interpretation, the planning problems posed by Hayekian distributed knowledge have become increasingly pressing for firms (see Cowen and Parker 1997). Thus, because of the increased importance of specialist workers and the increased knowledge-intensity of production, coping with the problem posed by Hayekian distributed knowledge has moved from being a problem for socialist managers and dirigiste bureaucrats to also being a problem confronted by managers of (at least large) firms in capitalist economies. However, the fact that firms exist is prima facie evidence that they can somehow cope with the problem and/or that there are offsetting benefits of firm organization.4 Distributed knowledge and delegated rights How may the Hayekian knowledge-problem be handled in firms? One way is to suppress distributed knowledge as far as possible by discouraging local initiative, indoctrinating employees harshly, and operating with rigid routines and operating procedures. In a dynamic economy, this is, however, bound to lead to disaster. Something else must be done. As Mises (1949: 303) emphasized, “entrepreneurs
are not omnipresent. They cannot themselves attend to the manifold tasks which are incumbent upon them,” so that coping with distributed knowledge leads in the direction of decentralization (cf. also Hayek 1945: 83–4), through delegation of decision rights to managers (Mises 1949: 305). Mises also recognized that delegation leads to agency problems, but argued that the system of double-entry bookkeeping and other control measures may partly cope with such problems. Thus, in the Misesian scheme, an organizational equilibrium obtains where decision rights are delegated in such a way that the benefits of delegation in terms of better utilizing local knowledge are balanced against the costs of delegation in terms of agency losses (as in Jensen and Meckling 1992). This provides a useful perspective on many of those new organizational forms that are argued to be characteristic of the knowledge economy (cf. Cowen and Parker 1997), such as team-organization, “molecular forms”, and other manifestations of organizational delegation and decentralization. These are prompted by a market-driven pressure to delegate decision rights (e.g. to better serve customer preferences) and structure reward schemes in such a way that optimal tradeoffs are reached. Hayekian settings and economic organization However, while an Austrian perspective is useful for understanding why firms adopt team organization, etc., we are still left with the puzzle of why such teams are organized inside firms, being subject to the exercise of authority. Moving teams out of firms would appear to yield net benefits, since incentives would be further strengthened. Adding to the puzzle is that authority in the sense of Coase or Simon appears to play at best a very limited role under Hayekian dispersed knowledge. 
This is because the Coase/Simon notion of authority assumes that a directing principal is at least as knowledgeable about the relevant tasks as the agent being directed. The ownership-based notion of authority developed by Hart also seems to play only a limited role under Hayekian distributed knowledge. This is because the assets that in Hart’s scheme confer authority are physical assets. However, as numerous writers have emphasized, an important aspect of the knowledge economy is precisely that physical assets are of strongly waning importance (e.g. Boisot 1998). Of course, the implication is that ownership over such assets is an increasingly ineffective source of bargaining power and that, therefore, authority must wane as bargaining power increasingly becomes more symmetrically distributed over the owners of knowledge assets. Since the boundaries of the firm are (also) defined in terms of legally recognized ownership of the firm’s alienable, primarily physical, assets, and since such assets are of declining economic and commercial importance, it is obvious that the very notion of the firm’s boundaries is becoming increasingly fuzzy, or perhaps even irrelevant. Finally, because authority declines in importance as knowledge becomes distributed and knowledge inputs increase in importance, resort to other coordination mechanisms is necessary. Thus, firms increasingly rely on high-powered incentives, implement employee stock-ownership programs, invest in building “corporate cultures,” try to price corporate resources to the largest possible extent, etc. An
outcome of this is the emergence of “new organizational forms.” The theoretical implication is that various mechanisms for coordinating resources are combined to a much larger extent than hitherto assumed in, for example, organizational economics, where economic activities are normally assumed to be organized across three discrete governance structures: firms, markets, and hybrids (e.g. Williamson 1996). In sum, arguments can be made that Hayekian settings, where knowledge is distributed and knowledge inputs are more important than physical inputs, present real problems for the exercise of authority in firms, make the boundaries of firms blur, and remove many of the constraints on the malleability of coordination mechanisms. The following section discusses the reach of these arguments.
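Before moving on, the Misesian organizational equilibrium described earlier, where the benefits of delegation in using local knowledge are balanced against agency losses (cf. Jensen and Meckling 1992), can be illustrated with a toy calculation. The functional forms below are invented purely for exposition; nothing in the Austrian texts specifies them.

```python
# Toy version of the delegation tradeoff: decision rights are delegated
# up to the point where the marginal benefit of using local knowledge
# equals the marginal agency cost. Functional forms are invented.

def net_value(d, benefit_scale=10.0, agency_scale=4.0):
    """Net value of delegating a share d (0 to 1) of decision rights."""
    # Concave benefit: local knowledge is used better, at a declining rate.
    benefit = benefit_scale * (1 - (1 - d) ** 2) / 2
    # Convex agency cost: control losses grow faster as delegation widens.
    cost = agency_scale * d ** 2
    return benefit - cost

# Search a grid for the "organizational equilibrium" (optimal delegation):
# an interior solution, i.e. neither full centralization nor full delegation.
grid = [i / 100 for i in range(101)]
d_star = max(grid, key=net_value)
print(f"optimal delegation share: {d_star:.2f}")
```

With these made-up parameters the optimum is interior (analytically d* = 5/9), which is the point of the Misesian scheme: some delegation beats both pure central planning and complete decentralization.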
Discussion In this section, I assume that Hayekian settings obtain and discuss the implications of such settings for the Coasian themes of authority, the boundaries of the firm, and the combinability of coordination mechanisms. The underlying perspective is Misesian, in the sense that I shall throughout assume the existence of a speculating entrepreneur who is ultimately in charge of the business venture in the sense that he determines “the general plan for the utilization of resources” (Mises 1949: 303), hires the managers and “technicians, i.e. people who have the ability and skill to perform definite kinds and quantities of work” (ibid.), determines “the expansion and contraction of the size of the business and its main sections” as well as “the enterprise’s financial structure” (Mises 1949: 307), and acquires ownership of the firm’s alienable assets. Nobody denies that in the emerging knowledge economy, there will still be a need for such enterprising agents. On the contrary, many recent writings on the knowledge economy very strongly stress entrepreneurship (e.g. Miles et al. 1997). What is being claimed is rather that the entrepreneur will no longer be able to exercise much authority, that the boundaries of his venture will become ill-defined (if not in a formal, legal sense, then in an economic and commercial sense), and that his venture can rely on all sorts of combinations of coordination mechanisms, in particular that he can offer employees incentives that in terms of their strength (i.e. the way in which they link effort and rewards/punishment) are very close to the incentives provided under market contracting, effectively mimicking the effects of market pricing. I discuss the three issues of authority, boundaries and malleability of coordination mechanisms seriatim. Authority in Hayekian settings In this section, the strategy is to examine the role of authority in Hayekian settings. 
“Austrian” determinants of economic organization 153

Since I later discuss the importance for economic organization of the distinction between physical and knowledge assets, here I only concentrate on the distributed knowledge aspect of Hayekian settings. One way of doing so is to focus on “hidden knowledge” (Minkler 1993) in relations between a principal (e.g. the Misesian entrepreneur) and an agent (e.g. a hired manager). That is, it will be assumed that the problem facing a principal is not just that he is uninformed about what state of nature has been revealed or about the realization of the agent’s effort (i.e. hidden information), as in the usual agency model (Holmström 1979), but that the agent’s knowledge is superior to that of the principal with respect to certain production possibilities (i.e. hidden knowledge). The principal may be ignorant of some members of the set of possible actions open to the agent, or the agent may be better informed than the employer with respect to how certain tasks should (optimally) be carried out, or both. As I shall argue, it is possible to explain the presence of authority in such a setting. I discuss the relevant explanations under the headings of “the need for urgent coordination,” “decisive information,” “economies of scale in decision-making,” and “defining incentive systems.”

The need for urgent coordination

While Hayek (1945) did much to identify the benefits of the price system, in the context of alienable property rights, in coping with distributed knowledge and unexpected disturbances, he arguably neglected those situations where efficiency requires that adaptation be “coordinated” rather than “autonomous” (Williamson 1996). Coordinated adaptation or action may be required when actions or activities are complementary (K. Foss 2000; Milgrom and Roberts 1990), for example, when it is important to make some choice urgently (even a possibly highly inefficient one), because doing nothing is worse. In such cases, it may be better to have somebody pick a strategy and make everybody play this strategy, if the inefficiencies from picking a bad strategy are smaller than the inefficiencies from delaying a coordinated solution.
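This trade-off can be put in stylized form. The notation below is purely illustrative (it is mine, not Bolton and Farrell's):

```latex
% Stylized urgent-coordination trade-off (illustrative notation).
% A central authority picks a common strategy immediately; with
% probability p it picks the wrong one, at welfare cost L.
% Decentralized adjustment eventually finds the right strategy,
% but only after a delay that costs D. Centralization is then the
% better mechanism whenever
\[
\underbrace{pL}_{\text{expected cost, centralized}}
\;<\;
\underbrace{D}_{\text{cost of delay, decentralized}}
\]
% i.e. when the planner's informational disadvantage (p, L) is small
% relative to the urgency of coordination (D).
```

This is exactly the comparative statics of Bolton and Farrell's conclusion quoted in the next paragraph: less important private information (small p, L) and more essential coordination (large D) favor the central solution.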
In the context of a specific model of this trade-off, Bolton and Farrell conclude that “the less important the private information that the planner lacks and the more essential coordination is, the more attractive the central planning solution is” (1990: 805). Moreover, the decentralized solution performs poorly if urgency is important. Centralization is assumed not to involve delay and is therefore a good mechanism for dealing with emergencies, a conclusion Bolton and Farrell argue is consistent with the observed tendency of firms to rely on centralized authority in cases of emergency.

Decisive information

Even under distributed knowledge, where the centralized decision-maker by definition does not possess (at least some) local information, he may in many cases still hold the information that is decisive. Loosely, information is (strongly) decisive if – in a setting involving many cooperating individuals – a decision can reasonably be made on the basis of this information without involving other pieces of information (Casson 1994). According to Casson (1994), the extent to which a problem involving the knowledge of several individuals has decisiveness features, together with the cost at which knowledge can be communicated, helps to explain the allocation of decision rights. The general principle is that decision rights
will tend to be concentrated in the hands of the individual who has access to the decisive information, and particularly so the more costly it is to communicate this information. This provides a further argument for authority under hidden knowledge. If the knowledge possessed by, for example, managers is not decisive, if the knowledge possessed by the entrepreneur is decisive, and if it is costly to communicate the entrepreneur’s knowledge, then overall decision rights should be concentrated in the hands of the entrepreneur; that is, he should assume ultimate authority in the firm.

Economies of scale in decision-making

Demsetz (1988) argues that economies of scale in managing are a neglected factor in the explanation of the existence of firms and the understanding of authority, although he does not explain the underlying reasoning. The relevant economies may relate both to managing the internal relations between agents inside the firm and to managing relations with outside agents (customers, suppliers, government agencies) (Hermalin 1998). Not only may there be scale economies in such activities; there may also be substantial learning economies. Other agents may be happy to let a central agent incur the effort costs of negotiating, learning about potential suppliers, etc., and compensate him accordingly.

Defining incentive systems

It is hard to deny that Hayekian settings pose special problems for the use of monitoring mechanisms and incentive pay (Aghion and Tirole 1997; Foss 1999; Minkler 1993). Minkler (1993: 23) argues that “if the worker knows more than the entrepreneur, it is pointless for the entrepreneur to monitor the worker,” which implies that, to the extent that monitoring is a precondition for the exercise of direction, using the authority mechanism also seems to become “pointless.” However, even under hidden knowledge, there may still be a role for authority.
For example, even under hidden knowledge the principal may be able to form conjectures about the financial results of the agent’s activities. He can check whether these conjectures are actually confirmed using the control systems of the firm. Both Knight (1921), discussing business “judgment,” and Mises (1949: 303), discussing the entrepreneur delegating responsibilities to managers, clearly allowed for this possibility. Neither of them assumed that entrepreneurs would have full knowledge of their managers’ action set; still, they did assume that the entrepreneur can rationally delegate decisions to managers and control them. Hidden knowledge does not imply that subjective performance measurement becomes impossible. In fact, it may be conjectured that the more we depart from simple settings where employees are very easily monitored, and the more complicated the control problem becomes, the more likely it is that the entrepreneur will choose to rely on multiple incentive instruments to influence employee behavior (Henderson 2000). In a dynamic economy, maintaining coherence between such instruments may be a recurrent task. Economies of scale
in this task may dictate that this activity is centralized. Moreover, centralization is required to the extent that externalities arise when the instruments are controlled by separate firms and transaction costs hinder the internalization of these externalities. Both arguments point towards the centralization of decision rights.

To sum up, it has been argued that it is possible to give efficiency explanations of authority in the context of Hayekian settings, approximated here by hidden knowledge. This is not to say that authority relations will remain unaffected by the arguably increasing importance of Hayekian settings. To be sure, authority as understood by Coase and Simon – as a relation in which the principal has superior knowledge and can observe all contingencies that require a response by some employee – is an increasingly unrealistic conceptualization of authority. Actually, this kind of authority was already descriptively too narrow with respect to the business firms that Mises (1949) discussed. Mises clearly recognized that in many firms decision rights are allocated by the entrepreneur (and the board of directors) to lower levels, presumably in order better to cope with distributed knowledge, an insight that is not present in Coase (1937) and Simon (1951). However, Mises also understood that such rights are circumscribed in an attempt to cope with the control problem that follows from delegation.5 Thus, decision rights are delegated in firms, but they are delegated as means to an end (Hayek 1973); their use is monitored (Jensen and Meckling 1992), and top management reserves ultimate decision rights for itself (Baker et al. 1999). This suggests that authority in the sense of direction and centralized decision-making – which, as Mises emphasized, does not require detailed knowledge about a subordinate’s knowledge or available actions – may persist in Hayekian settings.
By implication, even in “knowledge-based” firms there may be a need for centralized coordination. As I shall argue next, when there is such a need, it is often efficient to centralize ownership of alienable assets. In turn, this suggests that centralized coordination is a feature of firms rather than markets.

Ownership and the boundaries of firms

In the previous section, I did not say much about what backs up authority. I did, however, hint that ownership would play a key role; the purpose of the present section is to go further into ownership issues, and therefore into the issue of the boundaries of the firm. The argument that will be critically discussed is that, as knowledge assets become relatively more important in production, the boundaries of firms will blur, at least to the extent that these are defined in terms of legally recognized ownership of the firm’s alienable assets. It is possible to use the incomplete contracts framework (Hart 1995; Hart 1996; Hart and Moore 1990), summarized very briefly earlier, to gain an understanding of the implications of knowledge assets for the boundaries of the firm. Recall that in this approach, asset ownership is central because it provides the bargaining lever that backs up authority, and authority may have important efficiency implications, as argued earlier. I argue that when there is a need for centralized coordination, efficiency considerations often suggest a need for also
concentrating asset ownership. The key to this argument is to introduce knowledge assets explicitly. In fact, it is possible to dispense entirely with physical assets and discuss a purely knowledge-based firm.6 For simplicity, assume that two agents interact and that one of them, the entrepreneur, owns a knowledge asset that is “inside his head” (e.g. an entrepreneurial idea), while the other agent, the scientist, owns the only other asset in the relation, which we may assume to be a patent. Both assets are necessary to create value in the relation, and they are (strictly) complementary, so that the one is of zero value without the other. It is prohibitively costly to communicate the knowledge embodied in the entrepreneurial idea from the entrepreneur to the scientist, so it is effectively non-alienable. Moreover, it is not possible to write a comprehensive contract governing the use of the assets in all contingencies. Given this, we may ask who should own the (alienable) patent, which – in terms of the incomplete contracts approach – is the same as asking who should own the firm. In this setting, if the entrepreneur makes an effort investment, that is, elaborates on his idea and creates extra value, the scientist can effect a hold-up on the entrepreneur, since the latter needs access to the patent to create value (and the contract is incomplete). Of course, the reverse also holds, so that if the scientist makes an effort investment, for example, develops a spin-off patent, the entrepreneur can hold up the scientist by threatening to withdraw from the relation. One can show (details in Hart and Moore 1990 and Brynjolfsson 1994) that because of the externality problem that the hold-up threat creates, every agent underinvests; specifically, each party invests to the point where the marginal cost of effort investment equals one-half of the marginal value (because they are assumed to split the extra surplus 50:50).
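The underinvestment result can be reproduced in a few lines. The following is the standard textbook sketch, with functional notation of my own choosing rather than the exact model of Hart and Moore:

```latex
% Hold-up and underinvestment: a standard sketch (notation mine).
% Let e be a party's effort investment, c(e) its cost (convex),
% and v(e) the value it creates (concave).
% First best: maximize v(e) - c(e), giving
\[
v'(e^{*}) = c'(e^{*}).
\]
% Under an incomplete contract with 50:50 ex post bargaining, each
% party captures only half of the marginal value of its own
% investment, so it chooses \hat{e} such that
\[
\tfrac{1}{2}\,v'(\hat{e}) = c'(\hat{e}),
\]
% and concavity of v together with convexity of c implies
% \hat{e} < e^{*}: both parties underinvest relative to the
% first best.
```

The externality is visible in the second condition: half of the marginal value created by each party's investment accrues to the other party in bargaining, and is therefore ignored at the margin.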
Suppose now that the entrepreneur owns both the patent and the entrepreneurial idea. This will strengthen the entrepreneur’s incentives (the scientist cannot hold him up anymore), while it will leave the scientist’s incentives unaffected. Therefore, this ownership arrangement should be chosen. The conclusion is that it is possible to speak of the boundaries of the firm in terms of ownership – even in a situation where all relevant productive assets are knowledge assets. However, this does not yet demonstrate the point made earlier, namely that concentration of coordination tasks produces a need for concentration of ownership. We can address this issue, however, by assuming that one of the agents, the entrepreneur, has decisive information (in the sense discussed earlier). While efficiency may require that this agent have decision rights amounting to authority (as argued earlier), should he also be an owner? Consider a bigger “knowledge-based” firm in which there is a group of scientists, each of whom owns a patent. The entrepreneur, who is again equipped with a non-alienable entrepreneurial idea in which he may invest further, aggregates information from the messages of the scientists and directs their efforts. His knowledge is decisive in the sense that without it, all actions of the other agents produce zero value. The entrepreneur may improve on this decisive knowledge. Each agent needs access to his own patent and to the entrepreneur’s direction in order to be productive. Given these assumptions, we again have the hold-up
problem. Any one of the scientists can hold up the entrepreneur on his investment, leading the entrepreneur to choose inefficient investment levels. However, if the entrepreneur is given ownership of the alienable assets, that is, the patents, the hold-up problem disappears. Thus, this ownership arrangement should be chosen.

The combinability of coordination mechanisms

Earlier paragraphs have established that it is possible to give efficiency reasons for authority, as well as for legal/ownership-based notions of the boundaries of the firm, in Hayekian settings. Moreover, it was argued that there is a connection between authority and ownership, and that this link also exists in Hayekian settings. This prompts the question of whether there are other necessary “links” between organizational elements. At this stage, it is pertinent to turn once more to Mises’ work. As he (1949: 709) explained, there are inherent contradictions involved in “playing market,” that is, introducing pricing in the context of hierarchy. With reference to various socialist schemes of his day that tried to preserve some market relations while eliminating capital and financial markets, Mises argued that these schemes would be unworkable. To an important extent this is a matter of the sheer impossibility of rational calculation when crucial markets are eliminated. But Mises also placed much emphasis on property rights and ownership issues (particularly Mises 1936). Thus, he was aware that the concentration of ultimate decision-making rights and responsibilities, and therefore ownership, in the hands of a central planning board would dilute the incentives of socialist managers. While planning authorities could (and, according to the schemes of the day, should) delegate rights to make production and investment decisions to managers, these rights could not be used efficiently.
First, since managers could not be sure that they would not be overruled by the planning authorities, they were not likely to take a long view, notably in their investment decisions. Moreover, since managers were not the ultimate owners, they were not the full residual claimants of their decisions and, hence, would not make efficient decisions. Thus, in addition to his “pure” calculation argument, Mises also put forward property rights arguments as to why the attempt to “play market” under socialism would only lead to inefficiencies. Firms have the great advantage relative to socialist planning boards that they may, to a much larger extent, rely on the prices of outside markets. Thus, the Misesian calculation problem, while constraining the efficient size of firms (Klein 1996), does not imply that firm organization is “impossible.” However, some of the property rights insights into socialism also apply to firms. In particular, a good deal of recent analytical energy has been devoted to the commitment problems of delegation in firms (e.g. Baker et al. 1999; Miller 1992; Williamson 1985). A main conclusion is that credible delegation may be very hard to achieve, since reneging on a promise to delegate will in many cases be extremely tempting, and those to whom rights are delegated anticipate this.7
In a recent treatment, the problem is stated in the following way (cf. Baker et al. 1999). Assume that a subordinate initiates a project.8 Assume further that the manager has information that is necessary to perform an assessment of the project, but that he decides upfront to ratify any project that the subordinate proposes. Effectively, this amounts to full informal delegation of the rights to initiate and ratify projects – “informal,” because the formal right to ratify is still in the hands of the manager and because that right cannot be allocated to the subordinate through a court-enforceable contract (cf. Williamson 1996). Because the subordinate values being given freedom, this will induce more effort in searching for new projects (Aghion and Tirole 1997). The expected benefits of these increased efforts may outweigh the expected costs of the bad projects that the manager has to ratify. However, the problem is that because the manager has information about the state of a project (“bad” or “good”), he may be tempted to renege on his promise to delegate decision authority, that is, to intervene in a “selective” manner. But if he overrules the subordinate, the latter will lose trust in him and hold back on effort. Clearly, in this game a number of equilibria are feasible. The particular equilibrium that emerges will be determined by the discount rate of the manager, the specific trigger strategy followed by the subordinate (e.g. will he lose trust in the manager for all future periods if he is overruled?), and how much the manager values his reputation for not reneging relative to the benefits of reneging on a bad project (for details and extensions, see Baker et al. 1999). An implication is that mixing very different coordination mechanisms may lead to efficiency losses, and may not be feasible for this reason.
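The flavor of the repeated-game logic can be captured in a single condition. This is a stylized rendering of the familiar relational-contracting argument; the symbols are mine, not Baker et al.'s:

```latex
% Credible informal delegation: a stylized one-line condition.
% G      = manager's one-period gain from reneging (overruling a bad project)
% B      = per-period surplus from the subordinate's extra search effort
%          under delegation
% \delta = manager's discount factor
% If the subordinate plays a grim trigger (withholds effort forever
% once overruled), the manager honors delegation if and only if
\[
G \;\le\; \frac{\delta}{1-\delta}\,B .
\]
% A patient manager (\delta near 1) can sustain informal delegation;
% an impatient one cannot, and "selective intervention" reappears.
```

Milder trigger strategies (e.g. withholding trust for a finite number of periods) shrink the right-hand side and make delegation correspondingly harder to sustain, which is why the equilibrium depends on the factors listed above.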
The basic problem is that emulating market organization inside firms amounts to “playing market.” Unlike independent agents in markets, corporate employees never possess ultimate decision rights. They are not full owners. This means that those who possess ultimate decision rights can always overrule employees. Thus, there are incentive limits to the extent to which market principles can be applied inside firms. These insights suggest, on the most basic level, that coordination mechanisms are not simply combinable in an arbitrary fashion. Ultimately, this is because authority and ownership will continue to be important in the knowledge economy, and it is the inherent tension between ownership and delegated rights that creates the incentive problem. In a sense, this is an application, at the level of the firm, of Mises’ demonstration that the various elements that make up the capitalist market economy are complementary; one cannot simply take a subset of these away, say, unhampered capital markets, and substitute for them elements that are characteristic of a different system.9
Illustration: trying spaghetti in Oticon

Founded in 1904 and based mainly in Denmark, Oticon (now William Demant Holding A/S) is a world leader in the hearing aid industry.10 For a number of years, from the beginning to the mid-1990s, Oticon was one of the best-known and most admired examples of radical organizational turnaround. The turnaround aimed at increasing employee empowerment and responsibility, reducing product
development cycles, increasing contact with customers, mobilizing dispersed and “hidden” existing knowledge, and building new knowledge. These goals were to be reached by means of a radical project-based organizational structure that would be explicitly “knowledge-based” (Kolind 1990) and “anthropocentric,” yet based on “free market forces” (Lyregaard 1993). It was to be capable of combining and recombining skills in a flexible manner, such that skills and other resources would move to those (new) uses where they were most highly valued. The new organization amounted to breaking down the earlier functional, department-based organization into an almost completely flat, almost 100 percent project-based organization. Departments gave way to “competence centers” (e.g. in mechanical engineering, audiology, etc.) that broke with the boundaries imposed by the old departments. Rather than being assigned tasks from above, employees could now choose which projects they would join. All projects were to be announced on an electronic bulletin board, where employees who would like to join them could sign up. The much-noted “multi-job” principle meant, first, that employees were not restricted in the number of projects they could join, and, second, that employees were actively encouraged (and in the beginning actually required) to develop and apply skills outside their existing skill portfolio. Project managers were free to manage their projects as they preferred, and accordingly received a considerable amount of decision-making power. For example, they received the right to negotiate salaries.11 The project team was required to undertake all the tasks connected with product development until the product was successfully introduced in all markets.
Finally, although project teams were self-organizing and basically left to mind their own business once their projects were ratified, they were still to meet with a “Project and Product Committee” once every three months for ongoing project evaluation.12

Interpreting spaghetti

From an Austrian economics point of view, the immediately noticeable aspect of the spaghetti organization is the importance of the market metaphor in the design of the new administrative structure (Lyregaard 1993). In the Oticon simulation of the market, employees were given many and far-reaching decision rights. Development projects could be initiated by, in principle, any employee, just like entrepreneurs in a market setting. Project groups were self-organizing in much the same way that, for example, partnerships are self-organizing. The setting of salaries was decentralized to project leaders. Most hierarchical levels were eliminated and formal titles done away with. Thus, the intention was that the organization should mimic the market in such dimensions as flexibility, autonomy, flatness, etc. A major problem that besets centralized decision-making systems – in large firms as well as in centralized economies – is that they have difficulty efficiently mobilizing and utilizing important local knowledge, such as the precise characteristics of specific processes, employees, machines, or customer preferences. As Hayek (1945) explained, the main problem is that much of this knowledge is transitory, fleeting
and/or tacit, and therefore costly to articulate and transfer to a (corporate) center. Markets are not plagued by these types of problems to the same extent. Rather than involving the transfer of costly-to-transfer knowledge to those with decision rights (as in a command economy or a centralized firm), markets tend to economize on the costs of transferring knowledge by instead allocating decision rights to those who possess the relevant knowledge (Hayek 1945; Jensen and Meckling 1992). The Oticon spaghetti organization was very much an attempt to mimic the market in these dimensions. Thus, a basic problem in the old organization had been that commercially important knowledge simply did not reach the relevant decision-makers. By giving project teams extensive decision rights, making ideas for projects public, and requiring that teams/project groups possess the necessary complementary skills for a particular marketing, research or development task, the spaghetti organization stimulated a co-location of decision rights with local knowledge. Those who held the relevant knowledge were also to have the authority to decide over the use of company resources, at least within limits.

Decision rights

It is the same co-location that takes place in a well-functioning market. However, Oticon remained a firm; its use of “free market forces” (Lyregaard 1993) was fundamentally a simulation, for the full decentralization of decision rights that characterizes market organization never took place in Oticon (nor could it).
In lieu of a distinct price mechanism that could coordinate actions, the market-like spaghetti organization was to be kept together by a shared set of values (Kolind 1994), advanced information technology, the charismatic leadership of CEO Lars Kolind himself, and, last but certainly not least, by a committee staffed by Kolind and three other managers, the primary purpose of which was to approve or reject proposed projects (the Projects and Products Committee). This committee and the strongly overlapping top management committee were the real holders of power – they possessed ultimate decision rights. In general, firms confront a problem that markets confront to a smaller degree, namely that of making sure that decision rights are utilized efficiently – in other words, the problem of moral hazard. There was no a priori guarantee that project leaders and other employees would act in the interest of the firm. Several of the components of the spaghetti organization may be seen as responses to this fundamental agency problem. The right to allocate resources to a particular project may be broken down into the rights to (1) initiate a project, (2) ratify projects, (3) implement projects, and (4) monitor and evaluate projects (cf. Fama and Jensen 1983). For reasons of efficiency, firms usually do not concentrate these rights in the same hands; rather, initiation and implementation rights may be controlled by one person (or team), while ratification and monitoring rights are controlled by other persons, usually hierarchical superiors.13 This allocation of control rights corresponds to that of the Oticon spaghetti organization. Thus, anybody could initiate a project, in the sense of sketching, making preliminary plans, doing the required calculations, making contacts, etc.
However, projects had to be evaluated by the Projects and Products Committee (PPC), which was staffed by Kolind, the development manager, the marketing manager and the support manager. The PPC was the real holder of power in Oticon; frequent ex post intervention on the part of the Committee made that clear to everybody. Project teams were required to report to the Committee every three months, and the Committee could at any time halt or close down projects, something which happened quite frequently. Thus, decision management (i.e. initiation and daily project management) was separated from decision control (i.e. project evaluation and monitoring). The internal market was, in actuality, very much a managed one. Although a considerable amount of variety was indeed allowed to evolve, the selection over this variety was very much guided by the visible hand of the PPC (Lovas and Ghoshal 2000).

Retreating from spaghetti

A retreat from the radical spaghetti organization that Kolind had implemented in 1991 began long before he resigned as CEO in 1998. In 1996, Oticon headquarters was divided into three “business teams” (“team advanced,” “team technology,” and “team high volume”), which function as overall administrative units around projects. Each business team is managed by two team leaders, namely a technician and a person with marketing or human resource skills. These teams report directly to Niels Jakobsen, the new CEO. In addition to the business teams, a “Competence Center” is in charge of all projects and their financing, and of an operational group controlling administration, IT, logistics, sales, and exports. It is one of the successors to the abandoned PPC; however, its style of managing the projects is very different. In particular, care is taken to avoid the erratic behavior with respect to intervening in already approved projects that characterized the PPC.
The team leaders and the head of the Competence Center comprise, together with the CEO, the “Development Group,” which is essentially the senior executive group and is in charge of overall strategy making. Much of the initiative with respect to starting new projects is taken by the Development Group. Many of the decision-making rights held earlier by project leaders have now been concentrated in the hands of the Competence Center or the managers of the business teams. Project leaders are appointed by the Competence Center; the right to be a project leader is not something that one grabs, as under the spaghetti organization. Although multi-jobs/multi-tasking are still allowed, this practice is no longer directly encouraged, and its prevalence has been much reduced. Although Oticon is still characterized by considerable decentralization and delegation of rights, many of the crucial elements of the spaghetti organization have been abandoned. What happened?

Some basic design problems

Insights from organizational economics and Austrian economics suggest that although the spaghetti organization was characterized by substantial coherence between its complementary elements, it was still beset by a number of
problems that may arguably have been among the causes of its partial abandonment about five years later. First, the spaghetti organization eliminated most hierarchical levels, leading to a problem in the allocation of managerial competence: hierarchy could no longer be used as a sorting mechanism for allocating skills, so that those with more decisive knowledge would obtain authority over those with less decisive knowledge. Second, from an incentive perspective, the extremely flat spaghetti organization sacrificed an incentive instrument, since it abolished tournaments between managers. Third, the multi-job principle led to severe coordination problems, because project leaders had very little guarantee that they could actually carry a project to its end, given that anybody on the project could leave at will upon noticing a superior opportunity in the internal job market. Apparently, reputation mechanisms were not sufficient to cope with this problem. Fourth, contrary to the aim of making Oticon a knowledge-sharing environment, knowledge tended to be held back within projects, because of the widespread, and correct, perception that projects were essentially in competition over resources. Monitoring systems apparently could not cope satisfactorily with these problems.14 Fifth, influence activities (Milgrom 1988) were important under the spaghetti organization: personal relations with those who staffed the committee became paramount for having a project ratified by the committee (Eskerod 1998: 80).15 However, perhaps the most important incentive problem related to the behavior of the PPC.

Selective intervention in Oticon

It is arguable that one of the reasons why the spaghetti organization was changed into a more hierarchical organization has to do with the sort of problems described by the notion of selective intervention. Thus, the official rhetoric of a flexible market-based structure, with substantial autonomy and the management team (i.e.
the PPC) acting as little more than facilitator and coordinator (Kolind 1990; Lyregaard 1993), was increasingly at odds with the frequent selective intervention on the part of the PPC, with the “dynamics of interventionism,” if you like. Selective intervention was partly motivated by the fact that the “PPC does not make general written plans, which are accessible to the rest of the organization . . . if this were done, plans would have to be adjusted or remade in an ever-continuing process, because the old plans had become outdated” (Eskerod 1998: 80). Thus, instead of drafting and continuously revising plans under the impact of changing contingencies, the PPC preferred to intervene directly in projects. In fact, this was taken by the PPC to be a quite natural feature of a flexible, project-oriented organization (Eskerod 1998: 89). However, it also led to diluted incentives and strongly harmed intrinsic motivation (as documented at length by Eskerod 1997, 1998). The present Oticon organization is characterized by a much more consistent approach towards projects on the part of the Competence Center (one of the descendants of the PPC). Projects are rarely stopped or abandoned, and there is a stated policy of sticking to ratified projects. First, projects now rest on generally more secure ground, having been more carefully researched beforehand. Second,
“Austrian” determinants of economic organization 163

the wish to avoid harming motivation (i.e. diluting incentives) by overruling ongoing projects is strongly stressed. Apparently, the present Oticon management has realized the need to commit credibly to a policy of non-interference with ongoing projects. In contrast, one of the main problems of the old spaghetti organization was that Kolind and the PPC never committed in this way; nor, apparently, did they intend to do so. Kolind’s view appears to have been that in important respects and in many situations, they were likely to possess the decisive overall knowledge, and that efficient utilization of resources dictated intervening in, and sometimes closing down, projects. However, that view clashed at a basic level with the rhetoric of widespread delegation of decision rights.
Conclusion

Understanding the dynamics of economic organization, such as what will happen to authority relations, the boundaries of firms and firms’ use of distinct coordination mechanisms, is a task of almost forbidding complexity. Yet a combination of organizational economics and Austrian insights, primarily represented by the works of Hayek and Mises, provides some useful insights. The approach of this paper has been to try to distill some key assumptions and propositions that characterize much of this literature, and to examine these in the light of organizational economics and Austrian economics. Thus, it has been argued that much of the recent discussion of economic organization in the knowledge economy may be distilled into a basic assertion: the kind of knowledge that Hayek (1945) talked about represents an increasingly binding constraint on the exercise of authority, makes the boundaries of firms blur, and necessitates the use of multiple coordination instruments to utilize this knowledge efficiently. To the extent that firm hierarchies increasingly do flatten, functions are spun off in an attempt to improve incentives, delegation increases, and so on, much of this may be interpreted using insights originally put forward by Hayek, as a number of writers have already pointed out (Cowen and Parker 1997; Ellig 1993; Ellig and Gable 1993; Foss 1999; Ghoshal et al. 1995). On the other hand, while Austrian insights are useful for interpreting recent claims, they are also useful for understanding the reach of those claims. In particular, Misesian insights are helpful here, and it may be argued that there is a certain imbalance in the above writings because of their neglect of these insights. 
Thus, I have argued that Mises’ insights into entrepreneurship, property rights and the complementarity of elements in economic systems are useful for claiming a role for authority and the boundaries of firms, as well as for upholding the notion that there are discrete organizational forms (e.g. firms, markets and hybrids), and that coordination mechanisms cannot be combined arbitrarily.
Notes

Some parts of this chapter draw on material in Foss (2000, 2001). The comments of Pierre Garrouste are appreciated.
1 Foss (1994b) argued that in many important respects, the Austrians anticipated ideas that have become prominent in the modern economics of organization. The arguments developed in the present chapter go beyond those in Foss (1994b) by putting more stress on Misesian arguments. Klein (1996) is an application of the Misesian calculation argument to the issue of the boundaries of the firm, and Klein and Klein (2000) treat corporate governance issues in a Misesian manner.
2 And see Cowen and Parker (1997) for a statement of the Alchian and Demsetz/Cheung view in this context.
3 For example, if the employer owns all the alienable assets, the employee can still quit if he dislikes the employer’s orders (as in Alchian and Demsetz 1972), but he cannot take the assets with him, and the employer can ensure that if the employee leaves, somebody else can take over the job.
4 Such as the superior ability of firms to organize transactions characterized by high levels of relation-specific investments (Grossman and Hart 1986; Hart and Moore 1990; Williamson 1985, 1996).
5 For example, the right to use an asset in certain ways may be delegated; however, it is understood that that right does not entail the right to, for example, use the asset in the service of a competitor firm.
6 This is because the crucial issue is (contrary to the thrust of some contributions, e.g. Boisot 1998) not whether assets are material or immaterial, but whether they are alienable or non-alienable.
7 Transaction cost economist Oliver Williamson has referred to these kinds of problems with his concept of the “impossibility of (efficient) selective intervention.” The main problem is that incentives are diluted. This is because the option to intervene “can be exercised both for good cause (to support expected net gains) and for bad (to support the subgoals of the intervenor)” (Williamson 1996: 150–1). Promises to intervene only for good cause can never be credible, Williamson argues, because they are unenforceable.
8 This should be understood in a broad sense: a “project” may refer to many different types of decisions or clusters of decisions.
9 See also Milgrom and Roberts (1990) for an important discussion of complementarities.
10 The history of Oticon prior to the introduction of the spaghetti organization is extensively covered in Poulsen (1993) and Morsing (1995), and more briefly in Gould (1994) and Lovas and Ghoshal (2000: 877–8).
11 Although the variance of the distribution of salaries increased as a result of the new reward schemes that characterized the spaghetti organization, average salaries do not appear to have changed.
12 Complementary measures were taken to back up these initiatives. For example, Kolind introduced an employee stock program, which was motivated by the need to raise additional money for the transformation, and he invested DKK 26 million of his own funds in Oticon.
13 Exceptions may occur when giving subordinates more extensive rights (e.g. a package of initiation, ratification and implementation rights) strengthens employee incentives (see Aghion and Tirole 1997; Baker et al. 1999 for analyses of this).
14 Possibly as a reflection of these problems, the most crucial variable in determining salary changes in the present organization is the degree to which an employee contributes to knowledge-sharing.
15 Foss (2000) discusses whether these design mistakes were remediable, concluding that they were not.
References

Aghion, P. and Tirole, J. (1997) “Formal and real authority in organization,” Journal of Political Economy 105: 1–29.
Alchian, A.A. and Demsetz, H. (1972) “Production, information costs, and economic organization,” in A.A. Alchian (1977) Economic Forces at Work, Indianapolis: Liberty Press.
Baker, G., Gibbons, R. and Murphy, K.J. (1999) “Informal authority in organizations,” Journal of Law, Economics and Organization 15: 56–73.
Boisot, M. (1998) Knowledge Assets: Securing Competitive Advantage in the Information Economy, Oxford: Oxford University Press.
Bolton, P. and Farrell, J. (1990) “Decentralization, duplication, and delay,” Journal of Political Economy 98: 803–26.
Brynjolfsson, E. (1994) “Information assets, technology, and organization,” Management Science 40: 1645–62.
Casson, M. (1994) “Why are firms hierarchical?,” International Journal of the Economics of Business 1: 47–76.
Cheung, S.N.S. (1983) “The contractual nature of the firm,” Journal of Law and Economics 26: 1–22.
Coase, R.H. (1937) “The nature of the firm,” in N.J. Foss (ed.) (1999) The Theory of the Firm: Critical Perspectives in Business and Management, vol. II, London: Routledge.
Coombes, R. and Metcalfe, J.S. (2000) “Organizing for innovation: Co-ordinating distributed innovation capabilities,” in N.J. Foss and V. Mahnke (eds) Competence, Governance and Entrepreneurship, Oxford: Oxford University Press.
Cowen, T. and Parker, D. (1997) Markets in the Firm: A Market Process Approach to Management, London: The Institute of Economic Affairs.
D’Aveni, R. (1994) Hypercompetition: The Dynamics of Strategic Maneuvering, New York: Basic Books.
Demsetz, H. (1988) “The theory of the firm revisited,” Journal of Law, Economics, and Organization 4: 141–61.
Ellig, J. (1993) “Internal pricing for corporate services,” working paper in Market-Based Management, Centre for the Study of Market Processes, George Mason University.
Ellig, J. (1997) “From Austrian economics to market-based management,” Journal of Private Enterprise 13: 133–46.
Ellig, J. and Gable, W. (1993) Introduction to Market-Based Management, Fairfax, VA: Center for Market Processes.
Eskerod, P. (1997) “Nye perspektiver på fordeling af menneskelige ressourcer i et projektorganiseret multiprojekt-miljø,” Ph.D. dissertation, Sønderborg: Handelshøjskole Syd.
Eskerod, P. (1998) “Organising by projects: Experiences from Oticon’s product development function,” in M. Morsing and K. Eiberg (eds) (1998) Managing the Unmanageable For a Decade, Hellerup: Oticon.
Fama, E. and Jensen, M.C. (1983) “Separation of ownership and control,” Journal of Law and Economics 26: 301–25.
Foss, K. (2000) “Organizing technological interdependencies: A coordination perspective on the firm,” forthcoming in Industrial and Corporate Change. Online. Available HTTP: (accessed 15 January 2002).
Foss, N.J. (1994a) “Ludwig von Mises: Precursor of property rights economics,” in N.J. Foss (ed.) The Austrian School and Modern Economics: Essays in Reassessment, Copenhagen: Munksgaard.
Foss, N.J. (1994b) “The theory of the firm: The Austrians as precursors and critics of contemporary theory,” Review of Austrian Economics 7(1): 31–65.
Foss, N.J. (1999) “The use of knowledge in firms,” Journal of Institutional and Theoretical Economics 155: 458–86.
Foss, N.J. (2000) “Internal disaggregation in Oticon: An organizational economics interpretation of the rise and decline of the Oticon spaghetti organization.” Online. Available HTTP: (accessed 15 January 2002).
Foss, N.J. (2001) “Economic organization in the knowledge economy: Some Austrian insights,” in N.J. Foss and P.G. Klein (eds) (2001) Entrepreneurship and the Firm: Austrian Perspectives on Economic Organization, Aldershot: Edward Elgar.
Ghoshal, S., Moran, P., and Almeida-Costa, L. (1995) “The essence of the megacorporation: Shared context, not structural hierarchy,” Journal of Institutional and Theoretical Economics 151: 748–59.
Gittleman, M., Horrigan, M., and Joyce, M. (1998) “‘Flexible’ workplace practices: Evidence from a nationally representative survey,” Industrial and Labor Relations Review 52: 99–115.
Gould, R.M. (1994) “Revolution at Oticon A/S: The spaghetti organization,” in S. Dutta and J.-F. Manzoni (eds) (1999) Process Re-engineering, Organizational Change and Performance Improvement, London: McGraw-Hill.
Grandori, A. (1997) “Governance structures, coordination mechanisms and cognitive models,” Journal of Management and Governance 1: 29–42.
Grandori, A. (2001) Organizations and Economic Behavior, London: Routledge.
Harryson, S.J. (2000) Managing Know-Who Based Companies, Cheltenham: Edward Elgar.
Hart, O. (1995) Firms, Contracts, and Financial Structure, Oxford: Oxford University Press.
Hart, O. (1996) “An economist’s view of authority,” Rationality and Society 8: 371–86.
Hart, O. and Moore, J. (1990) “Property rights and the nature of the firm,” Journal of Political Economy 98: 1119–58.
Hayek, F.A. von (1945) “The use of knowledge in society,” in F.A. von Hayek (1948) Individualism and Economic Order, Chicago: University of Chicago Press.
Hayek, F.A. von (1946) “The meaning of competition,” in F.A. von Hayek (1948) Individualism and Economic Order, Chicago: University of Chicago Press.
Hayek, F.A. von (1973) Law, Legislation, and Liberty, vol. 1: Rules and Order, Chicago: University of Chicago Press.
Helper, S., MacDuffie, J.P., and Sabel, C. (2000) “Pragmatic collaborations: Advancing knowledge while controlling opportunism,” Industrial and Corporate Change 9: 443–87.
Henderson, R.I. (2000) Compensation Management in a Knowledge-Based World, London: Prentice-Hall.
Hermalin, B. (1998) “The firm as a non-economy: Some comments on Holmstrom,” Journal of Law, Economics and Organization 15: 103–5.
Hodgson, G. (1998) Economics and Utopia, London: Routledge.
Holmström, B. (1979) “Moral hazard and observability,” Bell Journal of Economics 10: 74–91.
Ichniowski, C., Kochan, T.A., Levine, D., Olson, C., and Strauss, G. (1996) “What works at work: Overview and assessment,” Industrial Relations 35: 299–333.
Jensen, M.C. and Meckling, W.H. (1992) “Specific and general knowledge and organizational structure,” in L. Werin and H. Wijkander (eds) (1992) Contract Economics, Oxford: Blackwell.
Jensen, M.C. and Wruck, K. (1994) “Science, specific knowledge and total quality management,” in M.C. Jensen (1998) Foundations of Organizational Strategy, Cambridge, MA: Harvard University Press.
Klein, P. (1996) “Economic calculation and the limits of organization,” Review of Austrian Economics 9: 3–28.
Klein, P. and Klein, S. (2000) “Do entrepreneurs make predictable mistakes? Evidence from corporate divestitures,” unpublished paper.
Knight, F.H. (1921) Risk, Uncertainty, and Profit, 1965 reprint, New York: Augustus M. Kelley.
Kolind, L. (1990) “Think the unthinkable,” in M. Morsing and K. Eiberg (eds) (1998) Managing the Unmanageable For a Decade, Hellerup: Oticon.
Kolind, L. (1994) “The knowledge-based enterprise,” in M. Morsing and K. Eiberg (eds) (1998) Managing the Unmanageable For a Decade, Hellerup: Oticon.
Liebeskind, J.P., Oliver, A.L., Zucker, L.G., and Brewer, M.B. (1995) Social Networks, Learning, and Flexibility: Sourcing Scientific Knowledge in New Biotechnology Firms, Cambridge: NBER working paper no. W5320.
Lovas, B. and Ghoshal, S. (2000) “Strategy as guided evolution,” Strategic Management Journal 21: 875–96.
Lyregaard, P.-E. (1993) “Oticon: Erfaringer og faldgruber,” in S. Hildebrandt and L.H. Alken (eds) På vej mod helhedssyn i ledelse, Ankerhus.
Matusik, S.F. and Hill, C.W.L. (1998) “The utilization of contingent work, knowledge creation, and competitive advantage,” Academy of Management Review 23: 680–97.
Mendelson, H. and Pillai, R.R. (1999) “Information age organizations, dynamics, and performance,” Journal of Economic Behavior and Organization 38: 253–81.
Miles, R.E., Snow, C.C., Mathews, J.A., Miles, G., and Coleman, H.J. Jr (1997) “Organizing in the knowledge age: Anticipating the cellular form,” Academy of Management Executive 11: 7–20.
Milgrom, P. (1988) “Employment contracts, influence activities and efficient organization design,” Journal of Political Economy 96: 42–60.
Milgrom, P. and Roberts, J. (1990) “The economics of modern manufacturing: Technology, strategy and organization,” American Economic Review 80: 511–28.
Miller, G. (1992) Managerial Dilemmas, Cambridge: Cambridge University Press.
Minkler, A.P. (1993) “Knowledge and internal organization,” Journal of Economic Behavior and Organization 21: 17–30.
Mises, L. von (1936) Socialism, Indianapolis: Liberty Press.
Mises, L. von (1944) Bureaucracy, New Haven: Yale University Press.
Mises, L. von (1949) Human Action, San Francisco: Fox and Wilkes.
Morsing, M. (1995) Omstigning til Paradis? Oticon i processen fra hierarki til spaghetti, Copenhagen: Copenhagen Business School Press.
Myers, P.S. (1996) Knowledge Management and Organizational Design, Boston: Butterworth-Heinemann.
O’Driscoll, G.P. and Rizzo, M. (1985) The Economics of Time and Ignorance, Oxford: Basil Blackwell.
OECD (1999) Employment Outlook, Paris: OECD.
Osterman, P. (2000) “Work organization in an era of restructuring: Trends in diffusion and effects on employee welfare,” Industrial and Labor Relations Review 53: 179–96.
Poulsen, P.T. (1993) Tænk det utænkelige – revolutionen i Oticon, Copenhagen: Schultz.
Powell, W. (1990) “Neither market, nor hierarchy: Network forms of organization,” Research in Organizational Behavior 12: 295–336.
Prusak, L. (1997) Knowledge in Organizations, Boston: Butterworth-Heinemann.
Simon, H.A. (1951) “A formal theory of the employment relationship,” in Models of Bounded Rationality, Cambridge, MA: MIT Press.
Tomlinson, M. (1999) “The learning economy and embodied knowledge flows in Great Britain,” Journal of Evolutionary Economics 9: 431–51.
Williamson, O.E. (1985) The Economic Institutions of Capitalism, New York: Free Press.
Williamson, O.E. (1996) The Mechanisms of Governance, Oxford: Oxford University Press.
Zingales, L. (2000) “In search of new foundations,” forthcoming in Journal of Finance.
Zucker, L. (1991) “Markets for bureaucratic authority and control: Information quality in professions and services,” Research in the Sociology of Organizations 8: 157–90.
7 The new economy and the Austrian theory of the firm

Philippe Dulbecco and Pierre Garrouste
Introduction

So-called “new economy” approaches highlight the role played by innovation and information in all economic activities. As far as the behavior of the firm is concerned, the question raised by the new economy bears on the kinds of coordination forms, both internal and external, that may permit the exploitation of information, henceforth both richer and less expensive, in order to innovate. Indeed, the transformation pressure induced by the new economy seems to reward organizations that adapt and innovate. The analysis of the information systems of firms, and the study of the conditions for consistency between these systems and the coordination mechanisms implemented by firms, should yield an understanding of the behavior of the firm in an environment characterized by a strong information content. However, such an analysis is unable to answer what represents, from our point of view, the key challenge posed to the firm by the new economy: its ability to transform information into knowledge in order to innovate. What is needed is an analytical framework whose focus is on how economic transformation results from a combination of external pressure, internal innovative capabilities and coordination mechanisms. The aim of this chapter is to make use of the analytical framework provided by the Austrian analysis of production and knowledge (Dulbecco and Garrouste 1999) in order to propose a theory of the firm that may frame a study of such a transformation process and consequently offer an answer to the question of the coordination of economic activities in a knowledge-based economy. The first section reviews the main characteristics of the new economy. Our purpose here is to highlight some challenges to firms attributable to the emergence and expansion of the new information and communication technology (ICT) paradigm. We argue that such challenges cannot be reduced to informational aspects. 
Indeed, any study of the exploitation by the firm of a wider and wider range of information must include an analysis of the process by which information turns into knowledge in order to innovate. The second section evaluates theories of the firm according to their ability to provide instruments for the analysis at the firm level of the effects of this new ICT paradigm. We defend the idea that most theories
of the firm, by focusing mostly on the informational aspects of the new economy, are unable to develop the analytical categories required for appreciating the whole process of information exploitation. The third section sets out what we propose to call an Austrian theory of the firm. Such a theory offers a detailed analysis of the coordination over time of the plans of firms, one that articulates the notions of information and capabilities. A final section concludes.
What’s new for the firm in the new economy?

The term “new economy” immediately points to a newly arising sector – the information and communication industries – and a new way of driving the whole economy. Thus, if the first, narrow, meaning of the new economy represented less than 10 percent of GDP and around 4 percent of employment, the second, broad, definition embraces all the using sectors and hence the whole economy (Cohen and Debonneuil 1998). Indeed, it appears more and more obvious today that technical innovations coming from the information and communication industries1 have generated, by means of their diffusion in other industries, a new productive pattern.2 Investments related to ICT have increased considerably in recent years, and ICT diffusion, owing to a technological wave based on new applications such as the World Wide Web or the Navigator, has accelerated since the mid-1990s.

The new economy as an economy in permanent change

The consequences for firms of the development of the new economy relate mainly to this second, broad, sense. ICT first contributed to the rise in productivity; the evolution is particularly obvious for activities related to the processing, storage and exchange of information (Steinmuller 1999).3 Second, it appears that because of ICT, science has become more efficient, and more intertwined with enterprise. Third, ICT has reduced externalization costs, contributing to the development of interfirm networks. Finally, ICT has played a major role both in the acceleration of innovation processes and in the decrease of life-cycle durations. These phenomena combine to make change and innovation the most important elements of any economic activity, if not the main economic activity itself (Carter 1994; Foray and Lundvall 1996). By following a very short technological cycle, ICT represents the main source of a kind of permanent upheaval (Foray 2000; Lundvall and Nielsen 1999). 
The cumulative character of the introduction of ICT inside the firm (Zuscovitch 1983), combined with the rapid evolution of ICT, forces firms into a permanent process of adoption and adaptation (Benghozi and Cohendet 1999). Moreover, changes initially induced by the use of ICT are amplified by the actions of agents who benefit from previous changes, and who become more and more motivated to intensify their actions (Carter 1994): change gives rise to change. A new regime based on innovation and permanent change substitutes for the old one, the latter being characterized by a short construction period for new capacities articulated with a relatively long exploitation period. Recall that
in Europe more than 30 percent of manufacturing turnover comes from new products (OECD 2000). The objectives pursued by firms which choose to confront this challenge are today clearly identified (Askenazy 1999). If cost reduction and the optimization of the production process remain major issues, the behavior of the firm is today fundamentally oriented toward objectives such as reactivity, flexibility and quality – that is, the necessity continuously to adapt to demand.

Innovation and knowledge in the new economy

The new economy is often labeled a knowledge-based economy (OECD 2000). If economics has long conceived of knowledge as information,4 a growing number of works have recently made it possible to distinguish between the two concepts.5 In this view knowledge refers mainly to the capacity for generating, extrapolating and inferring both new knowledge and new information, whereas information remains a set of more or less formatted and structured data, unable by itself to engender new information (Foray 2000; Langlois and Garrouste 1997; Steinmuller 1999). Knowledge is thus conceived as a particular output – linked to a simple informational input – that may contribute to the creation of wealth (Dibiaggio 1999). The role played by knowledge in the new economy must be appreciated in light of the innovation constraints introduced by ICT. Following Foray (2000: 33), it is possible to identify the main disruptions now introduced by innovation and change into the economic process. Innovation: (1) cancels the technical links built in the past, (2) renders most equipment obsolete and depreciates competencies, (3) destabilizes productive organizations and complicates the coordination issue, and (4) exacerbates uncertainty concerning the quality of products and consequently the information asymmetries of any transaction. 
When changes are infrequent and of small magnitude, it is easy to show that firms can adapt by covering, in advance, the additional costs induced by innovation out of current revenues (Dulbecco et al. 1995; Gaffard 1995). However, this is not the case when innovation and change become the rules of the game, that is, when firms have to adapt at any time. Permanent innovation then requires: (1) an increased level of training, (2) specific competencies favoring adaptability, flexibility and mobility, (3) investments in systems that permit good access to information, and (4) the elaboration of complex coordination procedures covering all economic functions (Foray 2000). The point is that these four elements all refer to the very notion of knowledge; the innovative regime explains the key role henceforth played by knowledge in society. Learning and change are indeed closely related (Antonelli 1999), and the causality works both ways (Lundvall and Nielsen 1999). On the one hand, learning is an important and necessary input in the innovation process. On the other hand, changes impose learning on all agents affected by those changes. It is often argued in this perspective that the last decade has been characterized by an acceleration in both knowledge creation and knowledge destruction (Lundvall and Nielsen 1999).
ICT has made a great deal of information more easily accessible to many people, but it has also made skills and competencies obsolete. Such an important evolution is confirmed, first, by the increase in the share of intangible assets in the real capital stock (Kendrick 1994),6 second, by the continuous expansion of the knowledge industries7 (OECD 1998), and third, by the growth of highly qualified manpower in total employment (OECD 1996). Today, however, the importance given to knowledge must not lead firms to underestimate the role played by both information and production inside the new regime. Indeed, innovation constraints contribute to the establishment of complex interactions between knowledge, information and production. Knowledge is connected to information, as information represents a principal input into knowledge, and knowledge is also connected to firms’ physical assets, as capital constitutes the foundation for all innovative strategies (Amendola and Gaffard 1988). In other words, the issue becomes understanding how firms, characterized by their specific capital asset base, may transform information into knowledge in order to innovate. Here the organizational component plays a major role.

Knowledge and coordination in the new economy

None of the results associated with the new economy would have been accomplished without a profound transformation of the organization8 of the firm. The ICT revolution is not only a technical revolution, it is also an organizational revolution: ICT diffusion requires an evolution of firms’ organization in order to implement and exploit new technologies and hence to benefit from these innovations.9 In most cases, ICT is not confined to communication processes, but contributes to the whole production and decision system of the firm (Benghozi and Cohendet 1999). The study of the relationship between the development of ICT and the organization of firms is not new. 
It constitutes, in particular, the heart of the “new techno-economic paradigm” theory (Perez 1985). Indeed, as is now well known, changes in the techno-economic paradigm are based on combinations of radical innovations in product, process and organization. Several commentators, emphasizing the profound transformations involved in large but also small firms, have described these changes as a “cultural and social-economical revolution” (Freeman 1987). These analyses rest on an implicit causality running from the technology to the organization: the changes induced by the development of ICT must give rise to major changes in the organizational structure of firms if their potential is to be fully exploited. Other works refute this technological determinism by highlighting the existence of a reverse causation, from the organization to the technology. From a dynamic perspective, Brousseau and Rallet (1999) divide this latter causation process into three phases. The first relates to the introduction of ICT into the firm, inasmuch as this introduction is fundamentally constrained by the prevailing organizational scheme. The second is associated with the changes this introduction induces in the organization’s efficiency. This last movement is in turn limited by the firm’s organization and environment.
However, today it is more and more obvious that these interactions are more complex and cannot be reduced to a simple causation. The idea is that the organization and ICT co-determine each other: the organization adapts ICT to a prevailing function, while ICT simultaneously opens new solutions that transform the organization (Benghozi and Cohendet 1999). ICT is consequently efficient for firms capable of integrating it, that is, firms whose organization squares with that implicitly required by those technologies. This complex co-evolution process confirms, if confirmation were needed, that the challenge posed to the firm by the development and diffusion of ICT cannot be reduced to an informational one, but must be correlated with a wider knowledge perspective. Finally, it is the whole coordination10 issue which is addressed by the development of the knowledge-based economy. Indeed, the exploitation and creation of knowledge exhibit a rather collective dimension, one that is both internal and external to firms. The expansion of so-called “collaborative technologies” (Foray 2000) contributes to the emergence and development of new industrial organization patterns. One may notice in this perspective a significant expansion in the numbers of both cooperative agreements and mergers during the 1990s. Between 1991 and 1999 the value of international mergers increased six-fold,11 while the number of new cooperative agreements rose from 1,000 in 1989 to 7,000 in 1999.12 These data must again be linked with the innovation constraint: it is rare indeed that enterprises innovate in isolation. For example, in Austria 61 percent of firms creating new products are associated with one or more partners; this ratio reaches 83 percent in Spain and 97 percent in Denmark (OECD 2000). What is new for the firm in the new economy results from the very wide and intensive diffusion of ICT into the whole economic system. 
This diffusion contributes to the development of a new economic paradigm based on innovation and continuous change. As innovation requires knowledge, and knowledge results mainly from particular coordination mechanisms, the challenge posed by the new economy to the theory of the firm is to provide an analysis which articulates the informational and knowledge components of innovation along with the productive one. In this context, ICT poses a new question: is it possible to analyze the impacts of ICT by using the idea that firms are only “information processors” (Cohendet and Llerena 1999)? If the capacity to create, diffuse or exploit knowledge, and not only information, becomes one of the main sources of competitive advantage (OECD 2000), is it not necessary to conceive the firm as a “knowledge processor”?
The new economy and the behavior of the firm conceived as a processor of information

The economics of information is typically the part of our discipline that is centrally concerned with the emergence and development of ICT. Indeed, the basic idea of the economics of information is to treat the central part of economic analysis as the study of how individuals deal with the acquisition, transmission and utilization of different kinds of information (Stiglitz 1985, 2000).13 In fact, because individuals operate under imperfect and asymmetric information, and because it is costly for
them to acquire information, individual as well as organizational behavior is conditioned by their ability to solve those problems. What can be called information-based theories of the firm are also concerned with those kinds of problems, inasmuch as they all more or less assume, even if they do not consider it to be equally central, that the existence and the functioning of the firm are linked to some problem of information. “Why are firms, or any other forms of organization, needed? The answer is, basically, because information is not free”14 (Barzel 2001). In a different way, according to Holmström, bargaining generates information, and this feature is a central point of the Hart–Moore model. Indeed,

the model captures very nicely the essence of market competition: the right to exit the relationship and the information and incentives that the ensuing bargaining process generates.
(Holmström 1999: 85)

However, this information is costly and someone has to invest effort in finding the information needed to reach an agreement. Stiglitz also assumes that the existence of incomplete markets and contracts is due to imperfections in knowledge, including asymmetries of information, and he adds that

transactions costs provide the major alternative explanation for incomplete contracts, but it seems plausible that if information were perfect – if all contingencies could have been anticipated – all important contingencies (at least where there is the ability and desirability of transferring risk) would have been taken care of in this original contract.
(Stiglitz 2000: 1444)

It can be added that, because assets are becoming more and more information-based and intangible, the characteristics of those information-based assets (in terms of their specificity) strongly determine the existence and the boundaries of the firm. Consequently, the firm can be conceived as a processor of information (Cohendet and Llerena 1999).
Our concern here is neither to present an exhaustive analysis of information-based theories of the firm, nor to discuss further the relevance of their definition. Our point is rather to deal with the following question: if the acquisition, transmission and utilization of information strongly condition the existence and functioning of organizations, what can the emergence of ICT imply for the way organizations function? Even if they do not all yield the same results, the main responses information-based theories of the firm give are the following. First, concerning the evolution of competition, some authors show that there is a redistribution of rents from suppliers to buyers (Bakos 1997; Brynjolfsson and Hitt 2000). Indeed, ICT lowers the cost to the buyer of acquiring information about seller prices as well as product offerings. In fact, ICT reduces the inefficiency inherent in search costs (Bakos and Brynjolfsson 1997). This kind of result is
important inasmuch as the evolution of competition depends on the possibility for consumers to increase their own market power. As a matter of fact, if the redistribution of rents is really buyer oriented, then the introduction of ICT places firms in a more competitive environment. As a consequence, the number of firms on the market has to increase and prices have to decrease. However, on the basis of different assumptions (constant returns to scale), Janssen and Moraga (2000) show that the evolution of prices as well as their dispersion is strongly sensitive to the maturity of markets (the number of informed consumers relative to less-informed ones) and to the cost of price search. In other words, if, first, the market is not mature, that is to say the ratio of the number of informed consumers to the sum of the numbers of informed and less-informed consumers is low and if, second, the cost of price search is high, there is no tendency for competition to increase – quite the opposite. Second, the costs of external coordination are decreasing. In fact, “it is widely believed that IT [Information Technology] lowers the costs of inter-firm coordination” (Bakos and Brynjolfsson 1997: 3). Such an evolution of coordination costs is attributable to the fact that “investments in IT are less likely to be relationship specific than other investments designed to reduce coordination costs between firms” (Bakos and Brynjolfsson 1997: 4), reducing the risks of opportunism and holdup. This kind of evolution is interesting because it opens new possibilities in terms of de-integration and partnership. In fact, such an evolution can explain both the tendency for new firms to emerge and for firms to develop cooperation. Those two phenomena are due to the fact that, if coordination costs are decreasing and if the possibilities for opportunism to be effective are reduced, it is more beneficial to cooperate than to merge.
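The role played by market maturity in this argument can be made concrete with a deliberately naive simulation. The sketch below is not the Janssen–Moraga model itself (firms here do not set prices strategically, and all parameters are invented); it only illustrates why the share of informed consumers governs how strongly price comparison disciplines the average price paid:

```python
import random

def avg_price_paid(share_informed, n_firms=5, n_consumers=2000, n_rounds=100, seed=0):
    """Average transaction price when a fraction `share_informed` of
    consumers compares all posted prices and the rest buy from one firm
    picked at random. Prices are drawn uniformly on [0, 1] each round,
    as a crude stand-in for equilibrium price dispersion."""
    rng = random.Random(seed)
    total, count = 0.0, 0
    for _ in range(n_rounds):
        prices = [rng.random() for _ in range(n_firms)]
        best = min(prices)
        for _ in range(n_consumers):
            if rng.random() < share_informed:
                total += best                # informed: buy at the lowest price
            else:
                total += rng.choice(prices)  # less informed: one random quote
            count += 1
    return total / count

# A more "mature" market (a higher share of informed consumers)
# pushes the average price paid toward the competitive minimum.
for share in (0.1, 0.5, 0.9):
    print(share, round(avg_price_paid(share), 3))
```

A fuller treatment would let firms choose prices in equilibrium, in which case, as Janssen and Moraga show, low maturity combined with high search costs can keep prices high despite ICT.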
The problem is that it is also possible to show that, if coordination costs are decreasing, it is less necessary for firms to cooperate, because opportunism as well as asset specificity are decreasing. The tendency would then be for those hybrid forms to disappear. More problematically, the merger process cannot be explained, since the reduction in transaction costs (due to the decrease in asset specificity) normally provokes a de-integration process. Third, there is a tendency for firms to develop organizational innovations. “A significant component of the value of IT is its ability to enable complementary organizational investments such as business processes and work practices” (Brynjolfsson and Hitt 2000: 4). In fact, this kind of evolution is explained by the effect on the whole organization of the firm of the acceleration, due to ICT, of the circulation of information inside the firm. This technological determinism is, however, problematic (see above). It is, in fact, unreasonable to assume that ICT originates outside the economic system and that its introduction inside the firm obliges the firm to adapt its organizational structure. It seems more relevant to consider that there is some co-evolution of technology and organization. Fourth, the productivity of firms implementing ICT is increasing. This result is very controversial; still, many works (Black and Lynch 1996; Greenan and Mairesse 1996) show that at the firm level productivity increases when ICT is introduced. One of the theoretical bases for this result is that internal coordination costs are decreasing
because information is at the root of communication inside the firm and consequently, since information is unambiguous, this communication is less costly. The quantity of information circulating in the firm is thus increasing, its cost is decreasing, or both. All those results are based on the following idea: even if information is not a public good à la Arrow (1962), ICT makes information less firm-specific and reduces imperfections in the distribution of information. It then becomes possible to explain, as we have seen, some important things, such as the tendency for coordination costs to decrease, for the productivity of firms to increase, for firms to develop partnerships, and finally for competition on the market to increase. However, in light of the underlying assumptions, those explanations do not seem able to capture the full evolution of coordination inside and outside firms. The point is that knowledge is here conceived as not being different in its very nature from information. Even if knowledge is distinguished from information, the idea is that the “natural” evolution of knowledge is to become less informal and then more codified. This process makes knowledge less specific and endows it with all the characteristics of information; in particular, it becomes unambiguous and non-rival. In this way it is natural, for example, to consider that information asymmetries are decreasing and that the efficiency of markets and organizations is increasing. Our aim is now to show that an Austrian perspective on the firm is able to propose a complementary and interesting framework.
Information and investment in the new economy

As we have previously explained, the distinction between information and knowledge refers to the difference between a flow of data and a structure that organizes the data and reorganizes itself. One consequence of such a distinction is that the transfer, diffusion, and appropriation of knowledge cannot be reduced to simple exchanges of informational signals (Johnson and Lundvall 1992). Another consequence is that the process of generating knowledge is part of the whole production activity. In other words, it is difficult, if not impossible, to identify the specificity of the knowledge base of any firm without taking into account its production structure. Lundvall (1998) shows, in a different way, that learning in connection with production is fundamental for success in process and product innovation. Any analysis of the behavior of the innovative firm in the new economy environment must consequently articulate, at the firm level, the two dimensions of the innovative process, the productive one and the learning one. But if it is one thing to recognize the necessity of this articulation, it is quite another to have available the analytical structures that yield an understanding of the complex relationship existing between information, production and knowledge. It is precisely the aim of the remainder of this chapter to propose an analytical framework dedicated to this objective.
Information, production and knowledge: lessons from the Austrian theory of the firm

We argue elsewhere that the generic object of the Austrian theory of the firm consists in studying the nature of the relationship between the structure of capital and the structure of knowledge in an ever-changing world (Dulbecco and Garrouste 1999). In this framework, the concept of capabilities can account for subjects as diverse as the configuration of productive activities, the formation of expectations and the transmission of knowledge. The idea of capabilities was introduced by Richardson (1972)15 to designate the set of abilities, knowledge and experience available to each enterprise at any time, and which allows it to undertake a number of activities.16 Unlike resources, defined as transferable input factors, and thus likely to be freely obtainable on markets, capabilities represent firm-specific tangible or intangible assets, because they are created over time through the combination, within the production process, of the resources available to the firm. The capabilities of any enterprise thus result from the configuration, at any point of time, of its own structure of production. The great achievement of the capabilities-based approach is hence to represent “a real-time account of production costs in which knowledge and organization have as important a role as technology” (Langlois and Robertson 1995). The firm is then quite logically understood not only as the place where factors of production are combined, but as the place where capabilities are built and modified. These capabilities, viewed as knowledge and know-how more or less incorporated into the equipment, thus become the support for collective routines and serve as the basis for a learning process that is itself collective. Such learning is then oriented and supervised by these capabilities.
This results in a better adjustment of the firm to its environment, but at the same time creates difficulties in facing radically new changes in this very environment. On the one hand, the properties of the knowledge and capabilities of the firm enable collective learning; on the other hand, they limit its very scope by defining its orientations and its content. However, it is important here to notice that, at any moment in time, capabilities arise from investment decisions taken by the firm, while the problem of coordination depends mainly on the latter (Dulbecco and Garrouste 1999). Any understanding of the role played by capabilities in solving the coordination issue must consequently integrate an analysis of the investment decision. The point is then to consider that any investment decision implemented by any enterprise that chooses to modify its production process is not only constrained by the composition of the stock of capital that prevails in the economy, but will also affect the whole complex network of relations that connects firms to each other (Richardson 1990).17 In Richardson’s words, this means that any firm that modifies its capacity of production, be it in quantitative or qualitative terms, must take into account the fact that this investment is at the same time complementary to and in competition with other investments that have been, are, or will be made within the economy by other enterprises.18 Hence the challenge is to determine
what Lachmann calls “the best mode of complementarity” (Lachmann 1978: 6), i.e. the optimal combination that enables the different plans implemented by enterprises with interrelated activities to be reconciled.

So, the “best” mode of complementarity is not a “datum.” It is in no way “given” to the entrepreneur who, on the contrary, as a rule has to spend a good deal of time and effort in finding out what it is. Even where he succeeds quickly he will not enjoy his achievement for long, as sooner or later circumstances will begin to change again.
(Lachmann 1978: 3)

At this point, the question of the coordination of investment plans takes an informational turn, or more exactly an informational and cognitive one, and this for two main reasons. First, the objective becomes to bring out the modes of access to information that are likely to improve the entrepreneur’s expectations. Naturally, the type of information looked for and exchanged is different, depending on whether one is interested in the competitive or the complementary aspect of investments. In the first case, the aim is to obtain market information, i.e. information relative to activities planned by others within the system – mainly consumers and competitors – whereas in the second case the point is to acquire technical information relative to the feasibility of projects implemented by enterprises linked together through technological complementarities (Richardson 1990).19 But the transmission of both types of information “is often delayed and sometimes faulty” (Lachmann 1978: 22). When the transmission delays are too long, and differ from one market to another, and when the economy is subject to a large number of simultaneous changes, it becomes difficult to reconstitute the chronology of events that gave rise to the obtained information, so that the latter is of relatively little use to underpin action.
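Lachmann’s point that the best mode of complementarity is not a datum can be illustrated by making the entrepreneur’s search explicit. In the hypothetical sketch below, the stand-alone profits and the pairwise interaction terms (positive for complementary investments, negative for competitive ones) are invented numbers; discovering which combination is “best” requires enumerating and comparing plans, and the answer changes as soon as the interaction terms do:

```python
from itertools import combinations

# Invented figures: stand-alone profit of each investment project, plus
# pairwise interaction terms (positive = complementary, negative = competitive).
standalone = {"A": 3.0, "B": 2.0, "C": 2.5}
interaction = {("A", "B"): 1.5, ("A", "C"): -4.0, ("B", "C"): 0.5}

def joint_profit(plan):
    """Profit of a set of investments, including interaction effects."""
    profit = sum(standalone[i] for i in plan)
    profit += sum(v for (i, j), v in interaction.items()
                  if i in plan and j in plan)
    return profit

# The entrepreneur must search over combinations; the best one is not given.
plans = [frozenset(c) for r in range(1, 4) for c in combinations("ABC", r)]
best = max(plans, key=joint_profit)
print(sorted(best), joint_profit(best))
```

With these numbers the best plan excludes the competitive pairing even though every project is profitable in isolation; and as soon as circumstances shift the interaction terms, the whole search has to be redone, which is precisely the transience Lachmann stresses.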
The second reason is that the mere ability to capture information in a situation of uncertainty does not solve the problem faced by the firm. Indeed, it is possible to show, in a subjectivist way, that it is necessary, on the one hand, to distinguish between knowledge and information,20 and, on the other hand, not to limit this distinction to an opposition between a stock and a flow. Knowledge is understood not as a receptacle, which would refer to the problem of being able to capture information, but rather as a structure (Langlois and Garrouste 1997), which renders the ability to process information essential. Besides, it is important here to stress how much the problem of knowledge, as it has just been formulated, has its true meaning only when referring to a diachronic analysis of production: delays in transmitting and processing information are accounted for in conjunction with delays in construction. It is because production processes are subject to complementarities over time that the question of knowledge becomes so important. Firms are basically unaware of the evolution of their own production process, because the latter itself depends on the evolution of those implemented by others facing the same difficulties. Because production
is subject to delays, and because these delays are specific to each firm, the harmonization of plans assumes a complex form: the time of production generates ignorance. The risk for the enterprise is that the continuity of construction and of use might be impeded or even arrested before the process enters the phase of use. The objective of any enterprise thus becomes to develop its technological and informational abilities, i.e. its capabilities, in order to face such ignorance; success depends directly on its ability to secure an informational advantage in its field of production with a view to rendering its production plans consistent over time vis-à-vis those of other firms.

The new economy and the organization of industry

Let us return to the analysis of the behavior of the firm in the context of the new economy, that is, to the issue of the interaction between information, production and knowledge in an ever-changing environment. The first element which must be taken into account when studying the behavior of any firm is that it more or less deliberately confronts economic uncertainty by implementing investments that deal with previously unknown options. Uncertainty is two-sided: it is first technical/technological, in the sense that nothing guarantees ex ante that the firm will be in a position to develop and/or to acquire the technical and human factors required by the new activity. Such factors are indeed generally non-contestable in the sense of Langlois (1993) and, as such, impose high dynamic transaction costs during the period of development of the new activity.21 The firm’s main problem hence becomes to find complementary assets that permit it to organize a process of creation of new knowledge when the speed of renewal of those assets is very high. But this uncertainty also concerns competition. Competitive uncertainty deals with relations between the firm and its market environment.
It is due, first, to the inability of the firm to forecast the evolution of supply and demand on its new markets and, second, to its vulnerability, during the period of development of the new activity, relative to competing firms that have chosen a “passive” strategy of adapting to the environment (i.e. a strategy delimited by the configuration of their respective routines). Inasmuch as uncertainty is increasing in the new economy, its consequences for the firm in terms of internal as well as external organization become more and more important. The second element refers to a fundamentally temporal conception of innovation and change. Indeed, as we previously mentioned, innovation is not an instantaneous process; rather, it is subject to delays. First, a delay in the construction of the production capacity before it can actually be used for the production of a new output. Second, a delay in the acquisition and/or development of the abilities required by the new activity, i.e. by the construction of the new capacity (Foss 1993). Third, a delay in the transmission of the information to the set of actors involved in the implementation of the new production process (Langlois and Robertson 1995; Malmgren 1961). The economic output of the firm quite naturally depends on its ability to manage the constraints imposed by these delays. The problem posed to the firm by the new economy is that the gap between the time available
for capacities to be built up and the period during which those capacities generate useful output is becoming more important. This phenomenon makes the management of the specificity/adaptability dilemma very difficult (see below). Moreover, two a priori contradictory time horizons must here be reconciled: that of exploitation, or the optimal mobilization of existing resources, and that of exploration, or the development of new production capacities (Cohendet et al. 1996). Managing this problem is essential inasmuch as there is a tendency in the new economy for investments in R&D to have decreasing returns (Foray 2000). The third element that we have retained as a main attribute of the behavior of the firm in the new economy context relates to the irreversible nature of its commitments – that is, the idiosyncratic character of the assets sought after by the firm. This feature originates in the accumulation over time of specific assets dedicated to the construction of new production capacity. Specialization hence carries irreversibility in so far as it calls for a specific articulation over time of investments that are themselves specific. The external resources integrated into the firm also acquire an idiosyncratic dimension since they partake in a collective and cumulative learning; the firm then becomes a place where knowledge is generated, maintained, replicated and modified (Cohendet et al. 1996). But if specialization leads to comparative advantage and high profits, its main drawback is the sacrifice of adaptability. This point is crucial; it alone crystallizes an important dimension of the problem faced by the firm in the new economy environment.
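The temporal squeeze just described can be given a stylized arithmetic form. In the sketch below every number, including the 8 percent discount rate, is invented for illustration: the same specific investment that pays off when construction is short relative to the period of use can be destroyed by a longer build-up combined with faster obsolescence of the capacity:

```python
def investment_value(build_cost, build_years, useful_years, annual_cashflow, r=0.08):
    """Discounted value of a capacity investment: the cost is sunk in equal
    installments during construction, while cash flows arrive only once
    the capacity enters its phase of use."""
    cost = sum((build_cost / build_years) / (1 + r) ** t
               for t in range(build_years))
    gains = sum(annual_cashflow / (1 + r) ** t
                for t in range(build_years, build_years + useful_years))
    return gains - cost

# Same outlay and same annual cash flow: a short build with a long useful
# life pays off; a long build with a short useful life does not.
print(round(investment_value(100, build_years=2, useful_years=8, annual_cashflow=25), 1))
print(round(investment_value(100, build_years=4, useful_years=4, annual_cashflow=25), 1))
```

The point of the exercise is only that the ratio of construction time to useful life, not the size of the outlay, is what tightens the specificity/adaptability dilemma.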
Indeed, the firm is basically subject to what may be called the specialization–adaptability dilemma: specialization is represented by the firm’s involvement in the construction of new and specific production processes carrying profit opportunities, whereas adaptability is required by the uncertainty and risk intrinsic to the involvement in innovative investments.22 In such a context, the choice of a coordination mode naturally takes on a crucial dimension. The selected mode of coordination must indeed enable firms to transform information into relevant knowledge. We mean by relevant knowledge that knowledge which permits, on the one hand, access to the information that is relevant and necessary to the development of specific production processes, i.e. the coordination of investment over time, and, on the other hand, the justification of revisions and redeployments in an economic world characterized by continuous change.23 Thus, our consideration of knowledge leads to a wider definition of the organization of industry as a system for coordinating ever-changing production processes. The trade-off between these different modes of coordination depends basically on: (1) the nature of the investments brought into play, (2) the learning capacities of firms and markets, and (3) the degree of instability of final demand. The integration/internalization thesis was recently the object of a number of exemplary studies, in particular the theory of the innovative firm (Lazonick 1992), the analysis in terms of dynamic transaction costs (Langlois 1992), and the approach to the firm in terms of competence (Foss 1993). The merger wave that seems to go with the development of the new economy contributes to rendering this thesis attractive. Indeed, integration is supposed to allow the development of
the capabilities required by the new activity. The collective, cumulative, tacit and idiosyncratic nature of the knowledge involved in innovative activities makes it impossible to resort to the market, unless one is ready to bear prohibitive dynamic transaction costs.24 Vertical integration may also help to achieve some kind of cost rationalization. The problem here is to transform high fixed costs during the period of development into low unit costs when the new process enters the phase of utilization; first by expanding the scale of production and second by integrating suppliers and distributors, in order to avoid any disruption in the production process and to take advantage of synchronization economies (Lazonick 1992). In other words, integration represents a mechanism for coordinating innovative investments over time. By providing tools for the management of technical and competitive uncertainty, both of which are based on an internalization of resources and an increase in the scale of production, integration secures the development of new production capacity and the transition from development to utilization. In passing, we obtain one of the results concerning the boundaries of the firm, i.e. the fact that the number of mergers is increasing. The main problem is that the price paid for integration is likely to be prohibitive for a firm that has chosen to confront the innovative constraints of the new economy. This price is the sacrifice of adaptability. The logic that prevails here, in view of the search for economic efficiency, indeed requires that cumulative processes be initiated in which the organizational capabilities of the enterprise never stop improving. Yet, because of the internal accumulation of knowledge, competencies and capabilities, the firm runs a risk of inertia and a lack of organizational flexibility.
This is partly due to the existence of a close relationship between the range of productive opportunities perceived by enterprises and the origin of the investments undertaken: organizational capabilities are more likely to allow for a number of options when their implementation comes from various sources, reflecting a variety of opinions concerning coming threats and opportunities (Loasby 1996). One of the important consequences of the logic of integration is then that the enterprise becomes less “responsive” to its markets (Teece 1992). Moreover, one can note that the type of integration proposed in this set of analyses assumes relatively high costs of capital redeployment (sunk costs), which thwart the firm’s potential for flexibility and adaptability all the more. It then becomes possible to explain why the number of firms in the new economy is increasing. This phenomenon is due to the lack of adaptability of the big firm and its inability to react rapidly to new opportunities. It is the fact that new knowledge needs to be organized in a new way that drives the creation of new small firms. As an example, the fact that IBM was initially unable to develop the personal computer, whereas Apple was, is not only due to mistaken expectations concerning the evolution of demand (an explanation that an entrepreneurship-based conception of the firm can provide) but also to the difficulty of reorganizing the production process, and hence the firm, in order to react to new opportunities. New knowledge sometimes needs new methods of organizing it. Indeed, we are able to explain why competition is increasing in the new economy without making reference to the idea that, if information is becoming less costly, this
reduces the possibility for opportunism and the need to integrate. It is the fact that new knowledge needs to be organized differently from old knowledge that produces the necessity to create new firms. The emergence of new firms is also linked to new activities that arise and are undertaken by new firms (Bailey and Bakos 1997) before a possible process of merging takes place. But as we have already mentioned, the development of the new economy is also associated with the development of interfirm cooperative agreements. Cooperation between firms, understood as the coordinated behavior of partner-enterprises over time in view of common tasks and long-run objectives, is also said to provide an advantageous solution to the problem of implementing the innovative investments undertaken by firms in the new economy context (Warden 2001). First, because cooperation allows an efficient management of technological and competitive uncertainty, thanks to a permanent flow of technical and market information between cooperating firms. Cooperation agreements concluded between vertically related, but also competing, firms are one of the possible embodiments of the market connections implemented by firms. Second, because cooperation represents an efficient form of managing the specialization–adaptability dilemma (Dulbecco 1998): it is efficient in the sense that it allows the coordination of specific capabilities without bringing the disadvantages of integration, especially when it comes to the management of sunk costs. It is also efficient in the sense that it creates information, and therefore new productive opportunities, as a consequence of the diversity of investment sources (Loasby 1994) and of the permanent interaction between the production decisions of the different firms grouped together within a cooperative process (Imai and Baba 1994).
The division of knowledge is necessary for the growth of knowledge (Loasby 1996); and the coordination of the growth of knowledge and the coordination of the specialized activities that result from the division of knowledge both require the development of the firm’s external organization (Loasby 1993). Such a process is reinforced in the new economy because in-house and external knowledge need to be more and more connected in order to be efficient, and because new relational activities are emerging (Bailey and Bakos 1997).
Conclusion

In this chapter we show that the new economy is modifying the organization of industries as well as of firms. The commonly accepted idea that information and knowledge have the same nature, which is at the basis of the information-based theory of the firm, does not seem able to capture the most interesting characteristics of the new economy. Conversely, a theory of the firm based on the idea that knowledge is not a stock of information, and on the importance given to the production structure, yields some interesting consequences of ICT developments for the firm. The make-or-buy choice, and the development of hybrid forms of coordination (the partnership), can in particular be explained more satisfactorily within this Austrian framework.
Notes

Thanks to Cristiano Antonelli, Jack Birner and Nicolai Foss for helpful comments and to the International Center for Economic Research (Torino) for its financial support. The usual caveats apply.

1 The ICT sector is one of the primary, if not the primary, innovative sectors; for example, ICT accounted for 31 percent of the incremental patents issued in the United States between 1992 and 1999.
2 One sometimes speaks of a "Third Industrial Revolution" (Cohen and Debonneuil 1998).
3 Even if this evolution has been particularly slow, giving rise to the famous "productivity paradox."
4 Except for Machlup's work (1984).
5 Hayek was one of the first to show that information is not given by the external world and that the mind organizes and reorganizes information in specific ways depending on the whole history of the individual. The specificity of information is thus the consequence of a path dependency that makes the process non-ergodic, and knowledge is at the very origin of this process. We find something similar in Feyerabend and Lakatos, who show that information (facts or data) is produced by theories and is thus "theory-laden." The specificity (and idiosyncratic character) of information is then the consequence of its embeddedness in specific theories. It is not sufficient, however, to say that information is specific: one needs to explain this phenomenon, and the distinction between information and knowledge is a possible means of doing so.
6 Kendrick's work established that the intangible capital stock (education, training, R&D, health) became equal to the tangible capital stock in 1973, and is now dominant.
7 The contribution of these industries to GNP surpassed 50 percent for the OECD countries as a whole in 1985; knowledge-based industries contributed, in 1997, 35 percent of firms' total value-added.
8 Including, of course, the implementation of new work practices. See, for example, Osterman (2000), OECD (1999), Greenan and Mairesse (1999), Caroli and Van Reenen (1999). A survey of the numerous recent studies on this subject is provided by Askenazy (1999).
9 According to a recent study, a quarter of the US enterprises consulted have implemented organizational transformations in order to adapt to the changes induced by the Internet (OECD 2000).
10 Following Ménard, we mean by coordination "the procedures which render compatible the plans decided by elementary economic units, or which oblige these units to modify their plans over time" (Ménard 1990: 120).
11 Rising from 85 to 538 billion dollars (OECD 2000).
12 In 1999 the increase in the number of cooperative agreements was around 40 percent (OECD 2000).
13 "The recognition that information is imperfect, that obtaining information can be costly, that there are important asymmetries of information, and that the extent of information asymmetries is affected by actions of firms and individuals, has had profound implications for the wisdom inherited from the past and has provided explanations of economic and social phenomena that otherwise would be hard to understand" (Stiglitz 2000: 1441).
14 That is, without cost.
15 The author says he drew his inspiration from Penrose (1959).
16 In Richardson, activities refer to different stages of an elementary production process.
17 The application, in what follows, of Richardson's analytical categories is entirely our own.
18 Investments are said to be competitive when "the profitability of investment made by one producer will be reduced by the implementation of the investment of others" (Richardson 1990: 3). Investments are said to be complementary when "their combined profitability, when taken simultaneously, exceeds the sum of profits to be obtained from each of them, if taken by itself" (ibid.: 7).
19 This distinction between two types of information is taken up again by Malmgrem through the concepts of "controlled information" and "secondary information" (1961: 408).
20 "Many economists have professed to analyse information, relatively few have considered carefully the problem of knowledge. Among those who have, Hayek is preeminent" (Loasby 1986: 38).
21 A non-contestable factor is a factor that one cannot obtain on a market unless one is ready to spend more than the cost of its internal development.
22 Dynamic transaction costs cover the prospecting, persuasion, training and coordination costs that arise from resorting to an external supplier.
23 The formulation of this dilemma was inspired by Richardson's work (1990). Lachmann explains in a similar perspective that: "The results of past mistakes are there not merely to provide lessons, but to provide resources. In revising our expectations we not only have the knowledge, often dearly bought, of past mistakes (our own and others) to learn from, but also their physical counterpart, malinvested capital. Malinvested capital is still capital that can be adapted to other uses" (Lachmann 1978: 25).
24 Moreover, the delays and costs of information transmission are minimized.
References

Amendola, M. and Gaffard, J.-L. (1988) The Innovative Choice: An Economic Analysis of the Dynamics of Technology, Oxford: Basil Blackwell.
Antonelli, C. (1999) "The evolution of industrial organization of the production of knowledge," Cambridge Journal of Economics 23.
Askenazy, P. (1999) "Innovations technologiques et organisationnelles, internationalisation et inégalités," unpublished thesis, EHESS, Paris.
Bailey, P.B. and Bakos, Y. (1997) "An exploratory study of the emerging role of electronic intermediaries," International Journal of Electronic Commerce 1(3): 7–20.
Bakos, J.Y. (1997) "Reducing buyer search costs: Implications for electronic marketplaces," Management Science 43: 12.
Bakos, J.Y. and Brynjolfsson, E. (1997) "Organizational partnerships and the virtual corporation," in C.F. Kemerer (ed.) Information Technology and Industrial Competitiveness: How Information Technology Shapes Competition, Dordrecht: Kluwer Academic Publishers.
Barzel, Y. (2001) "The role of measurement, guarantee capital and enforcement in the formation of firms and other organizations," mimeo, April.
Benghozi, P.-J. and Cohendet, P. (1999) "L'organisation de la production et de la décision face aux TIC," in E. Brousseau and A. Rallet (eds) Technologies de l'information, organisation et performances économiques, Commissariat Général du Plan, May.
Black, S.E. and Lynch, L.M. (1996) How to Compete: The Impact of Workplace Practice and IT on Productivity, Cambridge, MA: Harvard University Press.
Brousseau, E. and Rallet, A. (1999) "Synthèse des travaux du groupe," in E. Brousseau and A. Rallet (eds) Technologies de l'information, organisation et performances économiques, Commissariat Général du Plan, May.
Brynjolfsson, E. and Hitt, L.M. (2000) "Beyond computation: Information technology, organizational transformation and business performance," mimeo.
Caroli, E. and Van Reenen, J. (1999) "Skill biased organizational change? Evidence from a panel of British and French establishments," Couverture Orange, Cepremap 9917.
Carter, A.P. (1994) "Change as economic activity," working paper 333, Brandeis University, Department of Economics.
Cohen, D. and Debonneuil, M. (1998) Nouvelle économie, Rapport du Conseil d'Analyse Economique 28, La Documentation française.
Cohendet, P. and Llerena, P. (1999) "La conception de la firme comme processus de connaissance," Revue d'Economie Industrielle 88(2).
Cohendet, P., Llerena, P. and Marengo, L. (1996) "Learning and organizational structure in evolutionary models of the firm," mimeo, BETA, Université Louis Pasteur, Strasbourg.
Dibiaggio, L. (1999) "Apprentissage, coordination et organisation de l'industrie – une perspective cognitive," Revue d'Economie Industrielle 88(2).
Dulbecco, P. (1998) "Inter-firms cooperative agreements," in R. Arena and C. Longhi (eds) Markets and Organization, New York: Springer.
Dulbecco, P. and Garrouste, P. (1999) "Towards an Austrian theory of the firm," Review of Austrian Economics 12.
Dulbecco, P. et al. (1995) "Un modèle dynamique de comportement d'une entreprise," Rapport de recherche 2660 de l'Institut National de Recherche en Informatique et en Automatique, September.
Foray, D. (2000) L'économie de la connaissance, Paris: Repères, La Découverte.
Foray, D. and Lundvall, B.-A. (1996) "The knowledge-based economy: From the economics of knowledge to the learning economy," in D. Foray and B.-A. Lundvall (eds) Employment and Growth in the Knowledge-based Economy, Paris: OECD.
Foss, N.J. (1993) "Theories of the firm: Contractual and competence perspectives," Journal of Evolutionary Economics 3.
Freeman, C. (1987) Technology Policy and Economic Performance: Lessons from Japan, London and New York: Pinter Publishers.
Gaffard, J.-L. (1995) "De la substitution à la complémentarité: propositions pour un réexamen de la théorie de la firme et des marchés," Revue d'Economie Industrielle, numéro exceptionnel: Economie industrielle, développements récents.
Greenan, N. and Mairesse, J. (1999) "Organizational change in French manufacturing: What do we learn from firm representatives and from their employees?," NBER Working Paper 7285.
Holmström, B. (1999) "The firm as a subeconomy," Journal of Law, Economics and Organization 15: 74–102.
Imaï, K. and Baba, Y. (1989) "Systemic innovation and cross-border networks," paper presented at the Séminaire International sur la Science, la Technologie et la Croissance Economique, OECD, Paris, 5–8 June.
Janssen, M. and Moraga, J.L. (2000) "Pricing, consumer search and the size of internet markets," mimeo.
Johnson, B. and Lundvall, B.-A. (1992) "The learning economy," Journal of Industry Studies 1(2).
Kendrick, J.W. (1994) "The evolution of modern economic accounts: Review article," Review of Income and Wealth 40(4): 457–59.
Lachmann, L. (1978) Capital and its Structure, Kansas City: Sheed Andrews and McMeel.
Langlois, R.N. (1992) "Transaction-cost economics in real time," Industrial and Corporate Change 1(1).
Langlois, R.N. (1993) "Capabilities and coherence in firms and markets," mimeo, Department of Economics, The University of Connecticut.
Langlois, R. and Garrouste, P. (1997) "Cognition, redundancy and learning in organizations," Economics of Innovation and New Technology 4.
Langlois, R. and Robertson, P. (1995) Firms, Markets and Economic Change: A Dynamic Theory of Business Institutions, London: Routledge.
Lazonick, W. (1992) The Innovative Business Organization and the Myth of the Market Economy, Cambridge: Cambridge University Press.
Loasby, B.J. (1986) The Mind and Method of the Economists, Cheltenham: Edward Elgar.
Loasby, B.J. (1994) "Organizational capabilities and interfirm relations," Metroeconomica 45(3).
Loasby, B.J. (1996) "The organization of industry," in N.J. Foss and C. Knudsen (eds) Towards a Competence Theory of the Firm, London: Routledge.
Lundvall, B.-A. (1988) "Innovation as an interactive process – from user-producer interaction to the national system of innovation," in G. Dosi et al. (eds) Technical Change and Economic Theory, London: Pinter Publishers.
Lundvall, B.-A. and Nielsen, P. (1999) "Competition and transformation in the learning economy – illustrated by the Danish case," Revue d'Economie Industrielle 88(2).
Machlup, F. (1984) Knowledge: Its Creation, Distribution and Economic Significance, vol. III, Princeton, NJ: Princeton University Press.
Malmgrem, H. (1961) "Information, expectations and the theory of the firm," Quarterly Journal of Economics 75.
Ménard, C. (1990) L'économie des organisations, Paris: Repères, La Découverte.
OECD (1996) Technology, Productivity and Job Creation, vol. 2, Analytical Report, Paris.
OECD (1998) Perspectives de la science, de la technologie et de l'industrie, Paris.
OECD (1999) L'économie fondée sur le savoir: des faits et des chiffres, Paris.
OECD (2000) Science, technologie et innovation dans la nouvelle économie, L'Observateur OECD, Synthèses, October, Paris.
Osterman, P. (2000) "Work reorganization in an era of restructuring: Trends in diffusion and effects on employee welfare," Industrial and Labor Relations Review 47(2), January.
Penrose, E. (1959) The Theory of the Growth of the Firm, Oxford: Oxford University Press.
Perez, C. (1985) "Microelectronics, long waves and world structural change: New perspectives for developing countries," World Development 13(3).
Richardson, G.B. (1972) "The organisation of industry," The Economic Journal, September.
Richardson, G.B. (1990) Information and Investment, 2nd edn, Oxford: Clarendon Press.
Simon, H. (1999) "The many shapes of knowledge," Revue d'Economie Industrielle 88(2).
Steinmuller, W.E. (1999) Networked Knowledge and Knowledge-based Economics, Delft: Telematica Instituut.
Stiglitz, J.E. (1985) "Information and economic analysis: A perspective," Economic Journal 95: 21–41.
Stiglitz, J.E. (2000) "The contributions of the economics of information to twentieth century economics," Quarterly Journal of Economics 115: 1441–78.
Teece, D.J. (1992) "Competition, cooperation, and innovation," Journal of Economic Behavior and Organization 18.
Warden, E.A. (2001) "Organizational form, incentives and the management of information technology: Opening the black box of outsourcing," mimeo, Carlson School of Management.
Zuscovitch, E. (1983) "Informatisation, flexibilité et division du travail," Revue d'Economie Industrielle 25.
Part IV
Networks and communication
8
The small world of business relationships Guido Fioretti
Introduction

Curiously, the theories and models produced within economics generally assume that any agent may interact with any other. Interactions are thought to take place in a homogeneous space where different agents are uniformly distributed, so in the end the outcome of interactions can be averaged and complex economies populated by multitudes of different agents can be subsumed under the behaviour of a few representative individuals, if not a single one. In this way, the existence of structures is assumed away. Ultimately, this approach assumes institutions away (Birner 1999). Modes of interaction, historical accidents that shaped the habits of peoples along centuries of agreements and quarrels, organisms for collective governance: all this is neglected by a fantastic jump from the microeconomics of an isolated utility maximizer to the macroeconomics of a single representative utility maximizer. I, on the contrary, claim that structures do matter for the generation of collective behaviour, and that a few economists who pointed to concrete cases actually highlighted some of the most important issues ever raised in this discipline:

• Keynes' discovery of the possibility of underemployment equilibria, due to insufficient effective demand (Keynes 1936). According to Keynes, equilibrium involuntary unemployment arises when investments are low, because demand is low, because unemployment is high, because investments are low, and so on. In systems-theoretic terms, this chicken-and-egg situation can be characterized as an information circuit, a loop where information can circulate forever, causing a series of economic agents endlessly to repeat the same sequence of actions.
• Chandler's discovery of the organizational shift from the multifunctional to the multidivisional form that took place in some large American companies in the 1930s and subsequently diffused over most large companies in the world (Chandler 1962). According to Chandler, organizational arrangements that gather command lines according to functions (e.g. production technologies) are not viable for large companies that produce a number of differentiated goods for a number of different markets. Rather, organizational arrangements that gather command lines according to divisions (e.g. sectorally or geographically distinct markets) are more apt to channel relevant information to management. In organizational terms, Chandler was discovering information sinks, i.e. structures that aggregate information for particular decision-makers.
• Nelson and Winter's work on routines, modes of behaviour that are peculiar to specific firms and that eventually reproduce and propagate in the process of business replication, mutation and selection (Nelson and Winter 1982). Routines can be seen as sequences of operations carried out by individual workers, who may be unaware of being part of a sequence that endlessly repeats itself, as well as of its effects on the firm as a whole. Just as in the case of Keynes' 'effective demand', we are dealing with an information circuit.
• Axelrod's discovery of the feasibility of islands of cooperation in a sea of competition, if the prisoner's dilemma can be repeated by a population of individuals (Axelrod 1984). Among the many aspects of Axelrod's research, I would like to draw attention to the fact that he was highlighting information clusters, a very important and very ubiquitous kind of information structure.
• Kirman's investigation of Marseille's wholesale fish market, which is possibly the most detailed investigation of transactions in a non-financial market (Weisbuch et al. 2000). Over years of observation, the Marseille fish market displayed a certain degree of stability of customers around vendors. In structural terms, we could say that vendors act as information stars in the net of relationships that take place. In other words, vendors are nodes where information converges and from which it radiates.
Among theoretical viewpoints, Austrian economics distinguishes itself by highlighting the need to take account of communication, information structures, knowledge formation and cognition. Friedrich von Hayek was a forerunner in this field, writing a treatise that anticipated modern connectionist models (Hayek 1952) and calling for consideration of information flows in economic theory (Hayek 1937, 1945). The present contribution suggests the possibility that a widespread structure of interactions among the components of distributed systems (including human societies, neuronal cells, the internet and many others), namely the small-world topology, regulates business relationships as well. The chapter is organized as follows. The next section explains the basics of small-world structures. The third section highlights the conditions that make a small-world topology arise in a distributed system. The fourth section proposes methodologies for detecting a small-world topology in the structure of business relationships and speculates about what would follow if this structure were actually found; the final section concludes.
A small world

During the 1960s, the American psychologist Stanley Milgram discovered the surprising ability of distant people to connect to one another (Milgram 1967).
Milgram assigned individuals living in Kansas and Nebraska the task of making an envelope reach an individual located in Massachusetts with whom they were not acquainted, by means of personal contacts. At each passage, information concerning the person who was receiving the envelope and sending it on had to be inserted in the envelope itself. In this way, Milgram could track the geographical and social milieus that were crossed. To his surprise, completed chains clustered around quite a small number of steps, namely six. Later investigations, aimed at evaluating the effect of changing physical distance and racial groups, highlighted the same clustering feature, around seven rather than six (Korte and Milgram 1970). Since this human ability to find connections with distant people is reminiscent of anecdotes of people finding unexpected common acquaintances, Milgram called it the 'small world' phenomenon. Clearly, a 'small' world does not mean that connections between any two individuals have the same length, irrespective of geographical and social distances. Rather, it means that the net of human acquaintances entails long-distance shortcuts. Sociologist Mark Granovetter (1973) was the first to understand the topology underlying the small-world phenomenon. According to Granovetter, the net of human acquaintances is such that clusters of localized connections (strong ties, because they describe friends who are all acquainted with one another) are linked by a few long-distance connections (weak ties, because they originate from occasional acquaintances) that bridge between cliques. Notably, since radical change often originates from unexpected connections, weak ties are generally responsible for important breakthroughs in individuals' lives. More specifically, weak ties can easily explain the occurrence of the small-world phenomenon.
In fact, although any single individual is unlikely to have the proper long-distance connection in order to reach any other individual by means of a single jump, he can ask people in his clique whether any of them has it. Figure 8.1 illustrates an example of a small-world topology.
Figure 8.1 A graph connected according to a small-world topology. Nodes are arranged in tightly connected clusters that are linked to one another by a few long-distance edges.
Formalization of the notion of a small world was not provided until recently, mainly by physicists (Marchiori and Latora 2000; Newman 2000; Strogatz 2001; Watts 1999; Watts and Strogatz 1998). In its simplest formulation, it relies on two magnitudes:

• Characteristic path length L, defined as the average number of edges that must be traversed in the shortest path between any pair of vertices.
• Clustering coefficient C, defined as the average, over all vertices, of the ratio of actual links among a vertex's immediate neighbours to the maximum possible number of such links.
The intuition behind these magnitudes is that the characteristic path length measures the ability of a node to link to a distant one, whereas the clustering coefficient measures the amount of local structure in the network. Together, this pair of magnitudes characterizes small-world networks. Clustered networks having only local connections will exhibit a high clustering coefficient and a high characteristic path length. On the contrary, random networks (i.e. networks whose connections have been drawn at random) will exhibit a low clustering coefficient and a low characteristic path length. Small-world networks, however, will exhibit a high clustering coefficient and a low characteristic path length. Thus, the simultaneous occurrence of low L and high C identifies a small-world topology. Since tools for identifying small-world networks are available, researchers are looking for small-world structures in the most diverse settings. Interestingly, small worlds seem to be ubiquitous in distributed systems. Small-world topologies have been found in the collaboration graph of feature films extracted from the Internet Movie Database (where links have been defined as actors working in the same movie), the Western Power Grid of the US, the neural network of the nematode worm C. elegans, the Massachusetts Bay underground transportation system, the English language (where links have been defined as co-occurrence of words) and the structure of hyperlinks connecting Internet sites (Adamic 1999; Ferrer i Cancho and Solé 2001; Marchiori and Latora 2000; Watts and Strogatz 1998). Thus, small-world topologies seem to be a general property of the structure of connections between a large number of interacting, autonomous and (to some extent) intelligent agents. Since market economies are precisely like that, one may expect business relationships to be organized according to the same principle as well.
However, before examining the possibility that small-world structures regulate economic life, it is sensible to ask under what conditions small-world topologies arise, and whether these conditions are likely to hold within economic systems.
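To make the joint L–C criterion concrete, the following is a minimal, illustrative Python sketch (all names and parameters are my own, not from the literature): it builds a ring lattice, rewires a fraction of its edges in the spirit of Watts and Strogatz (1998), and computes both magnitudes for each graph.

```python
import random
from collections import deque

def ring_lattice(n, m):
    """Ring of n nodes, each linked to its m nearest neighbours on each side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, m + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

def rewire(adj, p, rng):
    """Move each edge to a random new endpoint with probability p,
    creating the long-distance shortcuts typical of a small world."""
    n = len(adj)
    for i, j in [(a, b) for a in adj for b in adj[a] if a < b]:
        if rng.random() < p and j in adj[i]:
            t = rng.randrange(n)
            if t != i and t not in adj[i]:
                adj[i].discard(j); adj[j].discard(i)
                adj[i].add(t); adj[t].add(i)
    return adj

def clustering(adj):
    """C: average fraction of a node's neighbour pairs that are themselves linked."""
    acc = 0.0
    for v, nbrs in adj.items():
        ns = list(nbrs)
        if len(ns) < 2:
            continue
        links = sum(1 for a in range(len(ns)) for b in range(a + 1, len(ns))
                    if ns[b] in adj[ns[a]])
        acc += 2.0 * links / (len(ns) * (len(ns) - 1))
    return acc / len(adj)

def path_length(adj):
    """L: average shortest-path length over all reachable pairs (BFS per node)."""
    total = pairs = 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

rng = random.Random(42)
lattice = ring_lattice(200, 4)
C0, L0 = clustering(lattice), path_length(lattice)
small = rewire(ring_lattice(200, 4), 0.1, rng)
C1, L1 = clustering(small), path_length(small)
print(f"lattice:     C = {C0:.2f}, L = {L0:.2f}")
print(f"small world: C = {C1:.2f}, L = {L1:.2f}")
```

With these settings the rewired graph keeps most of the lattice's clustering while its characteristic path length collapses towards that of a random graph: high C together with low L, the signature of a small world described above.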
An instance of bounded rationality

Small-world topologies are so widespread because they arise for simple reasons. These are, essentially: (1) a generic tendency for each node to establish connections with any other, which is balanced by (2) a constraint on the number of connections that can be entertained, whose localization arises out of (3) the greater ease of establishing links with nodes that can already be reached through indirect paths. Point (1) is an obvious feature of human societies, and a valuable ability in the world of business (Burt 1992, 1997). Point (3), meaning that friends of our friends are likely to be our friends as well, is equally obvious (Granovetter 1973). On the contrary, point (2) is more problematic and will be the subject of this section. Since Simon's pioneering work on bounded rationality (Simon 1982), economists have had a conceptual alternative to the idea that economic agents are able to make use of all the information they get. Models of bounded rationality assume satisficing, rather than optimizing, behaviour. As a rule, satisficing behaviour is modelled by assuming that decision-makers are content to attain certain levels of performance, rather than striving for the best possible performance. The idea is that decision-makers face cognitive limits to their information-processing abilities. Possibly, a limitation of models of bounded rationality is that no general rule is available to calculate the threshold where cognitive limits put a halt to optimization. In other words, models of bounded rationality do not supply a ready-made decision rule, whereas utility maximization does. However, if optimizing behaviour is simple but false while satisficing behaviour is correct but difficult, one should look for regularities in satisficing behaviour that would make it easy to employ, rather than sticking to wrong optimization assumptions. It has long been suspected that cognitive limitations set an upper bound to human circles of acquaintances.
Derek De Solla Price, who inspired the creation of the Science Citation Index, deemed that scientists gather in informal groups of about 100 people, which he called invisible colleges (De Solla Price 1965). Torsten Hägerstrand, a leading figure in cultural and economic geography who carried out extensive analyses of circles of acquaintances, attempted unsuccessfully to identify such circles by counting the number of references in commemoration books of prominent people (Hägerstrand 2001, unpublished letter). Apparently, the problem lies in separating stable acquaintances from occasional ones. However, this problem does not arise if we begin with primitive, simple societies. Starting from the assumption that intelligence developed in order to keep groups of hominids together, evolutionary psychologist Robin Dunbar looked for a correlation between neocortex size and group size across various species of primates (Dunbar 1996). In order to avoid a spurious correlation due to larger animals needing larger brains simply to control a larger number of muscles, the relevant variable was actually the ratio of neocortex volume to total brain volume. The correlation between neocortex ratio and group size turned out to be high, as illustrated in Figure 8.2. Most interestingly, this correlation allowed an inference to be made about the size of prehistoric human groups. According to Dunbar's calculations, human groups must have numbered approximately 150 individuals.
Figure 8.2 Dunbar’s finding of a correlation between neocortex ratio (the ratio of neocortex volume to total brain volume) and mean group size. By courtesy of Robin Dunbar ©.
Dunbar supported his findings with many examples taken from observation of actual human societies. He found out that clans of contemporary primitive societies average almost exactly 150, and that these clans are much less variable in size than any other grouping. Furthermore, he reports that archaeologists have suggested that the villages of the earliest farmers of the Middle East (5000 BC) typically numbered 150 people, just like today’s horticultural villages in Indonesia, Latin America and the Philippines. Possibly, the most interesting evidence collected by Dunbar concerns religious communities in North America. Hutterites live in groups whose mean size is a little over 100. This is because they always split as soon as they reach a size of 150. In fact, the elders claim that once a community exceeds 150 people, it becomes increasingly difficult to control its members by peer pressure alone! Another example is provided by the Mormons. When Mormon leader Brigham Young led his followers out of Illinois into Utah, he chose groups of 150 people as the ideal size. However, all these examples regard simple societies, where individuals only interact with the members of the group to which they belong. How is it in modern societies, where people typically entertain relationships with many more than 150 fellows? If one counts the number of people with whom each individual interacts, one finds numbers that vary greatly according to profession and can be up to the order of the thousands (De Sola Pool and Kochen 1978). Thus, Dunbar’s anthropological constraint eventually holds only for a core of stable acquaintances, or, with a more precise definition, only for the number of people with whom an individual, at any given point in time, cares to keep in touch. In its turn this might
be a problematic concept, since one could speculate that in modern societies human relationships may take any degree of depth, eventually blurring any distinction between stable acquaintances and occasional ones. None the less, there exists some anecdotal evidence suggesting that even in modern societies and businesses, humans are subject to constraints on the number of people with whom they can interact. Psychologist Nigel Nicholson reports cellular organizational forms in which a large number of semi-autonomous units are kept at an average of fifty employees each (Nicholson 1998). Economists Franco Malerba and Francesco Lissoni, while carrying out research on the structure of co-authored patents, discovered that, apart from researchers working for large firms, inventors form clusters of a nearly constant size of eighty people (Lissoni, personal communication). These numbers are much lower than 150, and even quite different from one another. Possibly, 150 should be discounted for friends and relatives before yielding an upper limit to the number of business relationships that one can entertain. Furthermore, this figure is likely to differ across kinds of people, according to profession and inclinations. Nevertheless, if bounded rationality means – inter alia – that businessmen cannot entertain relationships with infinitely many people, then even a global new economy can be expected to retain certain structural features of the old one.
The global network

Admittedly, the little evidence presented in the previous section cannot be deemed conclusive in any sense. However, it is interesting to speculate about what would happen if business relationships really did conform to a small-world topology. If business relationships are arranged in a small-world structure because of businessmen's bounded rationality, then this structure should be invariant with respect to technological paradigms and to the availability of information and communication technologies. Thus, we should expect structural invariance of business relationships across time and space, in the 'old' as well as in the 'new' economy. However, we should not expect this structural invariance to be in any way related to physical location. On the contrary, the spreading of business relationships all over the globe is a salient feature of the 'new economy'. In many cases, it is not even necessary to resort to improving information technologies and falling transportation costs in order to explain this development, since physical distance may bear advantages of its own (Felsenstein 2001; Kilkenny 2000). Rather, we should expect the clusters of the supposedly small-world network of business relationships to become ever less dependent on physical distance, spreading over continental areas of free trade and eventually, at a later stage, all over the globe. Namely, independence of information clusters from physical distance would be the hallmark of globalization. Thus, a research agenda could be set out. One could reasonably think of gathering data on business relationships with respect to geographical location,
looking for: (1) the existence of a small-world structure, and (2) the changing relation of this structure to physical space. In order to do this, one would need extensive interviews with managers across industries, space and time. Clearly, a panel of this kind is very unlikely to be realized, particularly because of the requirement to span time and space, besides industries. Possibly, one may try to use a series or set of regional input–output tables, in the hope that the structure of business relationships does not get blurred in the process of aggregation from managers to firms and from firms to industries. In this case, input–output tables should be discretized, translating their entries into zeros if they fall below a certain threshold and into ones if they are above it. Threshold values need not be fixed arbitrarily, since they can be chosen to maximize information entropy (Schnabl 1994). Note that if a small-world structure were found, and if this structure were found to be invariant across technological regimes, then the path of technical progress could no longer be conceived as exogenous. In fact, one could state that the net of inter-firm connections evolves according to precise psychological laws that inhibit the combination of too many technologies at a time. Weird as it might seem at first sight, this is precisely the way natural evolution proceeds. In fact, the overall fitness of an organism generally does not result from a simple summation of the fitness of its genes, but rather depends on the extent of interactions between genes as well. In general, mutation of a gene affects overall fitness to a greater extent the farther-reaching its interactions with other genes are. Up to a certain threshold, greater interaction means that a favourable mutation of a gene increases overall fitness to a greater extent.
However, beyond that threshold a favourable mutation of a gene causes overall fitness to fall because of the negative influence that it exerts upon other genes. Thus, there exists an optimal level of gene interaction or, in broader terms, an optimal level of interaction between the components of an evolving system (Kauffman 1993). Stuart Kauffman proposed to extend these concepts to the economic system, where technologies would take the role of genes and products would take the role of organisms (Kauffman 1988). According to this scheme, innovations would arise out of the mutation and recombination of existing technologies (Schumpeter 1911) and, if this metaphor makes sense, one could claim that there should exist a limit to the number of technologies that can be recombined at any given point in time. Possibly, this limit lies in the cognitive inability to handle infinitely many business relationships at the same time. Bounded rationality, understood as the existence of a limit to the number of relationships that can be entertained with the repositories of particular technologies, might shape the set of innovations that can be carried out at any given point in time.
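The entropy-based discretization of input–output tables suggested above can be sketched in a few lines. This is only an illustrative stand-in for Schnabl's (1994) procedure, whose actual multi-layer method differs: here the threshold is simply chosen among the observed entries so that the resulting share of ones maximizes the Shannon entropy of the binary matrix. Function names are our own.

```python
import math

def binary_entropy(p):
    """Shannon entropy (bits) of a binary variable with P(1) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def discretize(table):
    """Map an input-output table to a 0/1 matrix of inter-industry links.

    The threshold is picked among the observed entries so that the
    resulting share of ones maximizes information entropy -- a simplified
    stand-in for Schnabl's (1994) filtering procedure.
    """
    n = sum(len(row) for row in table)
    best_t, best_h = None, -1.0
    for t in sorted({v for row in table for v in row}):
        p_one = sum(v >= t for row in table for v in row) / n
        h = binary_entropy(p_one)
        if h > best_h:
            best_t, best_h = t, h
    return [[1 if v >= best_t else 0 for v in row] for row in table]
```

For a toy 2×2 table such as [[0.1, 0.9], [0.8, 0.2]], the selected threshold keeps only the two strongest flows, yielding [[0, 1], [1, 0]], which could then be analyzed as an adjacency matrix for small-world statistics.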
The small world of business relationships 197
Concluding remarks

Although this short chapter does not present definitive results but rather hints and suggestions for future research, its leading theme was that, beyond all possible differences between the 'old' and the 'new' economy, certain structural invariants are likely to persist. In particular, this contribution focused on a possible psychological invariant, namely a constraint on the number of stable relationships that humans can entertain. Generally speaking, psychology is seldom accepted in economics. In particular, it is never accepted when one deals with 'hard' issues, like technology (Sen 1989). Yet the main message of this contribution is that, in spite of this prejudice, psychology might command variables that most economic theories take as exogenous. If this turned out to be true, then the chain of causal links to which economists are accustomed, namely from exogenous technologies towards production and consumption according to exogenous tastes, would close into a double ring where both producers and consumers innovate technologies and habits along anthropological and psychological guidelines, influencing one another in a never-ending evolutionary spiral.
Acknowledgments

I wish to thank Jack Birner, Eugene Garfield, Pierre Garrouste, Torsten Hägerstrand, Francesco Lissoni, Cesare Marchetti, Bart Nooteboom and an anonymous referee for comments and suggestions, and Robin Dunbar for permission to reproduce a picture from his book.
References

Adamic, L.A. (1999) 'The small world Web', paper presented at the Third European Conference on Research and Advanced Technology for Digital Libraries, Paris, September.
Axelrod, R. (1984) The Evolution of Cooperation, New York: Basic Books.
Birner, J. (1999) 'Making markets', in S.C. Dow and P.E. Earl (eds) Economic Organisation and Economic Knowledge: Essays in Honour of Brian J. Loasby, Cheltenham: Edward Elgar.
Burt, R.S. (1992) 'The social structure of competition', in N. Nohria and R.G. Eccles (eds) Networks and Organizations: Structure, Form, and Action, Boston: Harvard Business School Press.
Burt, R.S. (1997) 'The contingent value of social capital', Administrative Science Quarterly 42: 339–65.
Chandler, A.D. (1962) Strategy and Structure, Cambridge, MA: MIT Press.
De Sola Pool, I. and Kochen, M. (1978) 'Contacts and influence', Social Networks 1: 5–51.
De Solla Price, D.J. (1965) Little Science, Big Science, New York: Columbia University Press.
Dunbar, R. (1996) Grooming, Gossip and the Evolution of Language, London: Faber and Faber.
Felsenstein, D. (2001) 'New spatial agglomerations of technological activity – anchors or enclaves? Some evidence from Tel Aviv', paper presented at the IGU Conference on Local Development, Turin, July.
Ferrer i Cancho, R. and Solé, R.V. (2001) 'The small world of human language', Santa Fe Institute working paper 01-03-016. Online. Available HTTP: (accessed 4 April 2002).
Granovetter, M.S. (1973) 'The strength of weak ties', The American Journal of Sociology 78: 1360–80.
Hayek, F.A. (1937) 'Economics and knowledge', Economica 4: 33–54.
Hayek, F.A. (1945) 'The use of knowledge in society', The American Economic Review 35: 519–30.
Hayek, F.A. (1952) The Sensory Order, London: Routledge and Kegan Paul.
Kauffman, S.A. (1988) 'The evolution of economic webs', in P.W. Anderson, K.J. Arrow and D. Pines (eds) The Economy as an Evolving Complex System, Redwood City: Addison-Wesley.
Kauffman, S.A. (1993) The Origins of Order, Oxford: Oxford University Press.
Keynes, J.M. (1936) The General Theory of Employment, Interest and Money, London: Macmillan.
Kilkenny, M. (2000) 'Community networks for industrial recruiting', paper presented at the conference Entrepreneurship, ICT and Local Policy Initiatives: Comparative Analyses and Lessons, Amsterdam, The Tinbergen Institute, June.
Korte, C. and Milgram, S. (1970) 'Acquaintance networks between racial groups: Application of the small world method', Journal of Personality and Social Psychology 15: 101–8.
Marchiori, M. and Latora, V. (2000) 'Harmony in the small world', Physica A 285: 539–46.
Milgram, S. (1967) 'The small-world problem', Psychology Today 1: 60–7.
Nelson, R.R. and Winter, S.G. (1982) An Evolutionary Theory of Economic Change, Cambridge, MA: The Belknap Press of Harvard University Press.
Newman, M.E.J. (2000) 'Models of the small world', Journal of Statistical Physics 101: 819–41.
Nicholson, N. (1998) 'How hardwired is human behaviour?', Harvard Business Review Jul–Aug: 134–47.
Schnabl, H. (1994) 'The evolution of production structures, analyzed by a multi-layer procedure', Economic Systems Research 6: 51–68.
Schumpeter, J.A. (1911) Theorie der wirtschaftlichen Entwicklung, Berlin: Duncker and Humblot.
Sen, A. (1989) 'Economic methodology: Heterogeneity and relevance', Social Research 56: 299–329.
Simon, H.A. (1982) Models of Bounded Rationality, C.B. Radner and R. Radner (eds), Cambridge, MA: MIT Press.
Strogatz, S.H. (2001) 'Exploring complex networks', Nature 410: 268–76.
Watts, D.J. (1999) 'Networks, dynamics, and the small-world phenomenon', The American Journal of Sociology 105: 493–527.
Watts, D.J. and Strogatz, S.H. (1998) 'Collective dynamics of "small-world" networks', Nature 393: 440–2.
Weisbuch, G., Kirman, A.P. and Herreiner, D.K. (2000) 'Market organization and trading relationships', The Economic Journal 110: 411–36.
Part V
Markets and market failure
9
Some specific Austrian insights on markets and the "new economy"

Richard Aréna and Agnès Festré
Introduction

Economists disagree today on the elaboration of an adequate analysis of the markets which emerged with the development of the "new economy" (NE). Two main interpretations, however, seem to prevail.

According to a first view, the usual tools of marginal analysis remain suitable for the study of the new realities which have to be considered and coped with. This does not mean that the traditional theory of pure competition in a private-good economy provides the best analytical framework. The NE would imply the necessity of abandoning this theory and replacing it with the so-called "new microeconomics." This view has been developed by various authors, even if they do not always agree on the specific tools which are the most relevant to cope with markets in the NE. Some of them, especially Shapiro and Varian (1998), stressed the interest of using the concepts and mechanisms provided by the modern theory of information and tried to derive from them guidelines for firms or policy-makers. They indeed considered that markets for information goods cannot be analyzed with the usual tools of microeconomic analysis, which require the assumptions of perfect information and perfect competition. On the one hand, firms are assumed to adopt price-differentiation strategies. On the other hand, the existence of network externalities, of uncertain quality and of different types of increasing returns implies the existence of substantial information asymmetries. Finally, the probable emergence of quasi-monopolies or oligopolies also calls for adequate anti-trust laws and policies.
Other authors (see, for example, Gensollen 2001), who focus specifically on the nature and regulation of markets, are more eager to combine the analytical tools of public economics with those of the "new microeconomics." On the one hand, what is argued is that information technologies (IT) and the Internet permit an increasing independence of information from its usual means of storage and transportation. On the other hand, information appears to be a non-rival good: consumers can use a good or a service without any possibility of excluding other consumers from this use. Finally, information generates important externalities and therefore a substantial decrease in the marginal cost of consumption: let us think of the
possibility for individual consumers to obtain copies of an electronic text, to scan images or to reproduce video tapes with rather cheap equipment. All these phenomena are combined with drastic changes on the supply side, like the development of large economies of scale in production or the increase in the share of information in production costs. They imply, therefore, economic mechanisms which are close to those analyzed in public economics. Moreover, they also require new types of market regulation.

A second view concerning the NE rather stresses the idea of a revival of market coordination. It is often associated with the view of the emergence of a third period in the history of market economies. Market coordination was supposed to prevail in the "competitive capitalism" of the nineteenth century. Hierarchical coordination is assumed to have become progressively predominant in the "trustified capitalism" of the late nineteenth and of the twentieth centuries.1 The NE would mean the end of the prevalence of the modern managerial firm and the return of market coordination. The empirical support of this view – either explicit or implicit – is generally based on the increasing importance of electronic commerce among producers or between producers and consumers, seen as "the leading edge of the digital economy" (see US Department of Commerce 2000: 7). This importance is frequently associated with two consequences. On the one hand, the development of business-to-business (B2B) e-trade would foreshadow a future general substitution of market coordination for hierarchical coordination among firms. On the other hand, the development of business-to-consumer (B2C) e-trade is supposed to generate a global process of disintermediation and, therefore, the future prevalence of direct relations between suppliers and demanders, often described as the modern materialization of the model of pure competition in the real world.
This interpretation can be found, for instance, in the following text, written by Pam Woodall, the economics editor of The Economist:

Economic theory has never described the real world completely; it probably never will. Perfect competition does not exist, and many question marks remain over the precise role of technology and human capital in growth. In both these areas, economists have been doing some serious rethinking in recent years. However, neither IT nor globalization overturns the basic rules of economics. Indeed, IT does the opposite, by making economies work rather more as the textbooks say they should. The theory of perfect competition, a basic building-block of conventional textbook economics, optimistically assumes abundant information, zero transaction costs and no barriers to entry. Computers and advanced telecommunications help to make these assumptions less far-fetched. IT, and the Internet in particular, makes information on prices, product, and profit opportunities abundant, serving it up faster and reducing its cost. This, in turn, makes markets more transparent, allowing buyers and sellers to compare prices more easily. At the same time, advances in telecommunications have brought down transaction costs by slashing communications costs
between far-flung parts of the globe and by allowing direct contact between buyers and sellers, cutting out the middleman. IT has also lowered barriers to entry by improving the economics of smaller units. In other words, the basic assumptions of perfect competition are starting to become true: better information, low transaction costs, and lower barriers to entry all add up to a more efficient and competitive market.
(The Economist 1999: 93)

This view of the markets of the NE implies that the schemes used by conventional "old" microeconomic theory to characterize a competitive economic equilibrium (CEE) are more and more relevant for analyzing the set of transactions daily taking place in modern market economies. It is predominant today in the mass media and in part of the empirical economic literature, and this is the reason why we will focus on it in the present chapter, even if some of the developments which follow are also relevant for an assessment of the first view.

In the first section we will recall the usual arguments which are put forward to analyze the NE as the empirical realization of the CEE model. After having stressed the limits of these arguments, we will try to show that some Austrian concepts and developments are better suited to explain the economic impact of IT on present markets. This is the purpose of the second section. In the final section we will refer to some concrete examples taken from the reality of electronic markets, in order to emphasize how these concepts and developments allow a better understanding of the working of markets in the NE, even if they exhibit some limitations.
The CEE model view of the "new economy"

Three arguments are generally put forward to support an interpretation of the impact of IT on e-markets based on the concept of CEE. On the one side, the very notion of economic equilibrium – either partial or general – presupposes the existence of a distinction between "given" and "unknown" economic magnitudes. As we know, in a CEE model, "given" magnitudes correspond to "fundamentals," i.e. initial individual endowments, consumers' preferences and the "blueprint" of techniques. Now, what is generally argued is that these "fundamentals" are more easily identified in the NE. Thus, through the use of the Internet, consumers are supposed to be able to better express their preferences, while producers can take them clearly into account thanks to the interaction implied by IT. Consumers' tastes are explicitly revealed in accordance with the requirements of the CEE model. On the other side, the usual neoclassical assumptions seem to apply in our modern real world. Information is less and less costly and more and more widely shared by the participants in markets: information asymmetries seem to decrease thanks to the generalization of electronic transactions. Transaction and search costs are also supposed to diminish with the reduction of intermediaries on markets. Spatial distortions also tend to disappear since any agent can buy or sell from
any location in the world through the use of the Internet. The generalization of auction markets is also put forward in order to show that tâtonnement processes are no longer theoretical devices but tend to emerge within the real world. The supply side is also affected by the use of IT. Inventories can be reduced through the electronic management of firms' supply chains: therefore, the assumption of full utilization of productive capacities appears to become a better approximation of economic reality. The use of electronic data interchange (EDI) and the Internet has also contributed to decreasing the importance of transaction and search costs for suppliers. Some empirical studies seem to confirm this general picture (see Smith et al. 2000), even if some unsolved difficulties remain in explaining the persistence of price dispersion on e-markets (ibid.: 104–5).

A more careful investigation of the markets of the NE, referring to Austrian concepts, however, leads to some substantial objections. The first objection we will raise is "Hayekian." Hayek indeed always insisted on the importance of individuals' heterogeneity on markets. As we know, his approach is based on differences between individual preferences as well as differences in subjective perceptions of the environment. According to Hayek, agents' heterogeneity does not exclude some mechanism of economic coordination. However, at least two conditions are required in order to obtain this result. On the one hand, the "external events" on which individual agents ground their perceptions, expectations and decisions must belong to the "same set" (Hayek 1937: 37). Therefore, interpretations of the real world might differ, but agents have to refer to a unique and common real world. On the other hand, agents cannot base their plans on purely external and objective facts or information, such as those which are assumed to appear on the Internet.
They must also include among their decision parameters some forecast of the future behaviors of other agents. This situation, therefore, involves the existence of heterogeneous subjective plans as well as strategic uncertainty. Now, if these two conditions are fulfilled, it means that the data on which agents base their decisions are no longer limited to the "fundamentals" of the CEE model. They also include, indeed, the "subjective" data which are related to their own specific positions within the mechanisms of social interaction of a market economy.

The socialist calculation debate, in which Hayek participated, confirms this view. Disputing the choice of a general equilibrium framework as a guide for taking rational decisions in a socialist planned economy, Hayek contested the possibility for a central decision-maker to have an explicit and codified knowledge of the parameters of calculation. He indeed argued that part of the information on the blueprint of productive techniques in the economy was only available in the form of tacit knowledge related to "circumstances" (Hayek 1935/1949: 155). In 1940, Hayek gave a convincing example of this problem, showing that it was hardly possible to have a codified and explicit knowledge of real markets and activities. Under these circumstances, he could not conceive of a central planner able to define, a priori, a list of standardized commodities as well as a list of suppliers and demanders that would qualify as a sufficient characterization of any given market. From here, the apparatus of the general
equilibrium (GE) theory could not be relied upon if one is looking for a satisfactory explanation of the mechanisms of demand and supply that prevail in real markets (Hayek 1940/1949: 188–9). Obviously, the objections put forward by Hayek are also valid for a market economy: it is impossible to define an objective list of "fundamentals" independently of the subjective perceptions of agents.

This type of objection is all the more relevant for the markets of the NE, where the life cycle of goods tends to be shorter and shorter and new commodities and services appear and disappear at a quicker pace. Moreover, within the NE, it becomes more and more difficult to define precisely what a good is and what a market is. Commodities are often supplied in a "bundling" context, in which it is difficult to distinguish the commodity itself from the bundle of services that are related to it. Indeed, the use-value of a good is often imprecise a priori, since it emerges from the interaction of consumers and producers. Thus, a computer or a computer system has no a priori use-value until the consumer interacts with the producer or intermediaries to define it a posteriori. This feature of IT goods has often been stressed in the literature (especially by Shapiro and Varian 1998, for instance), by arguing that information is an experience good, so that its utility cannot be appraised ex ante. The definition of the use-value of commodities (or services) is rendered even more complicated by the fact that, in the field of e-commerce, the final transaction is often only indirectly linked with the electronic connection between one demander and one supplier (or intermediary). For instance, some of the services associated with electronic markets are free, although they might contribute to the realization of final transactions.
It is also difficult to conceive of static "fundamentals" in a world in which changes affect, every day, techniques, preferences and goods themselves. Indeed, in the NE, the interactions emerging today between suppliers, on one side, and between consumers, on the other, are constantly adapting to the changing environment implied by the adoption and diffusion of IT. This illustrates why, as early as 1928, Hayek did not accept the static framework of GE theory but preferred to replace it with what he called an "intertemporal price equilibrium" framework. Within this concept, time is described as a sequence of "flows" of "individual processes." They form the "economic period" (the "year"), which constitutes the horizon of agents' decisions (Hayek 1928/1984: 72). Each temporal "flow" corresponds to a subdivision of the "economic period," called a "day" or "season." When a new "day" or a new "season" begins, it includes a flow of new economic "processes" of production. Now, in each sub-period, according to Hayek, permanent changes affect what the CEE model calls "fundamentals," namely, production techniques as well as consumers' preferences (ibid.: 73). Therefore, in his conception of intertemporal equilibrium, Hayek accepts the possibility of real disequilibria related to persistent changes of techniques and preferences (see the "voluntary saving" case in Hayek's business cycle theory), which contrast with the virtual disequilibria of the Walrasian and neo-Walrasian theories of tâtonnement.

Finally, the conception of market coordination implied by the CEE interpretation of the NE clearly contradicts Hayek's conception of the working
of a market economy. According to the former interpretation, a market economy is indeed the outcome of voluntary behaviors, the intended consequences of which confirm the a priori individual objectives resulting from optimizing behavior. Now, for Hayek, a market economy does not refer to a "taxis" but rather to a "kosmos," namely, a self-organized order that results, on the contrary, from the unintended consequences of individual subjective plans. In such an economy, agents, again, are heterogeneous. Their knowledge of economic activities is not entirely codified and explicit, as it is in a CEE framework characterized by formal utility, production, demand and supply functions. Part of their knowledge is indeed tacit, or related to specific "places" or "circumstances." The situation of agents in their relation to knowledge might be described as one of "division" (Hayek 1937/1949: 50) or "fragmentation of knowledge" (Hayek 1973/1980, vol. I: 16): each member of society knows only a very limited part of "global" knowledge, and each of them is ignorant of most of the facts on which the working of the economic system rests. This kind of approach fits better with the realities of the NE, where self-emerging markets are the rule and their characteristics prevent agents from understanding what is going on at the level of society as a whole. This is why the image of the "discovery process" describes the NE better than the abstraction of a complete set of interdependent markets related to objective mechanisms and purely codified information.
Some specific Austrian insights on the "new economy"

We will now try to develop a different conception of market economies, rooted in the Austrian tradition and able to provide a better framework for understanding market realities in the NE. Austrian economists of the past and the present offer, however, divergent views on the theory of market processes. Gloria-Palermo (1999) has convincingly detailed the analytical origins of this divergence, showing the existence of a major difference between a "Kirzner–Hayek conception," which assumes "(without really demonstrating) that disequilibria signals are sufficient to move the system towards equilibrium" and "[derives] from this assumption the conclusion that the market process constitutes an efficient coordinating device" (ibid.: 125), and Lachmann's view, according to which "the possibility of inconsistency of plans challenges the traditional view of a tendency towards equilibrium" (ibid.: 126).

Our basic idea, here, is indeed to give up the view of a universal model of the market which would express the essential features of any kind of market, and in which market failures or disequilibria might be characterized as simple frictions, imperfections, undiscovered profit opportunities or individual misperceptions. Therefore, our viewpoint excludes not only the framework of the CEE model but also, to some extent, the "Kirzner–Hayek conception." It is closer to Lachmann's treatment of markets as "institutions" (Lachmann 1986: xi) and therefore implies the analysis of the institutional features of the various types of markets. In this framework, different markets indeed imply various market processes, which allows the dismissal of what O'Driscoll and Rizzo (1985) called Newtonian time, namely, Hayek's analytical time (ibid.: 81–2). Our
preference for a Lachmann-type conception does not exclude, however, the utilization of Hayek's contribution to the analysis of the relations between social rules and individual behaviors; our dissatisfaction with Hayek's theory of markets mainly concerns, in fact, his belief in the existence of an "empirical" tendency of market economies toward equilibrium.

The origin of our view is to be found in Menger's works. For Menger, it is clear that a market economy is not a universal and unchanging system of agent coordination. In accordance with Menger's evolutionary approach to the emergence of institutions (Menger 1871/1976: 232–86, 1883/1963: 127–61), this kind of economy is the result of a slow process of self-organization and self-reinforcement. This process can take on the most various forms and can, therefore, explain the diversity and specificity of types of markets and market organization. For Menger, the origin of this process is located in the existence of a production economy which ignores market mechanisms (Menger 1871/1976: 236). In these "isolated domestic economies," production is not directed towards exchange transactions between anonymous agents.2 The technical division of labor is present but "self-sufficiency" prevails. The second stage of the process consists in the introduction of a craftsman system, where producers use inputs belonging to consumers in order to provide them with outputs in return for a material levy. In a third stage, production on order is introduced. However, its inefficiency prevents its generalization: temporal distortions indeed appear in supply as well as in delivery. This failure of the production-on-order system then paves the way for what Menger calls the "institutional arrangements" (ibid.: 238) of market economies, characterized by organized markets, intermediaries between producers and consumers and monetary institutions (see Aréna 1999: 24).
The self-organized and self-reinforcing aspect of this evolutionary process explains why initial conditions and the paths followed differ according to the country or culture considered. The variety of the forms of market organization thus precludes the existence of a unique model of market economy. These forms, however, cannot be assimilated to competitive imperfections, as they are in the CEE model. They correspond to different degrees of exchangeability (Menger 1871/1976: 241), which depend on four main factors. The first factor is related to the various forms of trade organization. These forms differ according to the size of supply and demand (ibid.: 243), the means of information circulation, the accessibility of markets and their working mechanisms, as well as the prevailing legislation (ibid.: 248–9). The second factor is connected to the location of agents and the spatial constraints of transactions (ibid.: 251). The third factor concerns the mechanisms of auction and the habits and methods of bargaining. These mechanisms do not always imply flexible prices: Menger does not exclude the possibility of sticky or rigid prices (ibid.: 251–2). The fourth factor refers to the length of the period during which transactions are allowed, to their periodicity and to the purchase rate. These elements correspond to the temporal constraints of transactions (ibid.: 252–3). These constraints do not mean that Menger always assumes market clearing: he explicitly considers the occurrence of inventories, a possibility which obviously depends on the flexibility of prices.
In Menger's approach, market diversity is therefore assumed. The four factors we have just mentioned are related to the degree of exchangeability of commodities. However, in his explanation of market diversity, Menger combines them with natural rules or habits. The mention of these rules or habits is important, since it shows that, for Menger, market transactions are "embedded" in a specific social and cultural context and have to be studied taking this context into account.

Wieser took up Menger's criteria of market diversity and used them to define a genuinely institutional typology of markets. Some of these markets are "organized" according to some permanent and specific rules (Wieser 1927/1967: 173–6). Others are "disorganized" (ibid.: 175) and represent a kind of economic pathology, which might explain the existence of panics, for instance. If Menger's criteria play a major role in Wieser's typology of markets, Wieser however took a step further, emphasizing what he called "exchange institutions," defined as a supplementary element of market diversity. These "institutions" first refer to the various forms of property and contract rights. These rights are essential since, in each specific market, they determine what is and what is not allowed. Therefore, they shape the nature of market transactions in accordance with law and, beyond it, "social institutions" (ibid.: 172). Exchange institutions also refer to agents' "customs" (ibid.: 179). Wieser characterizes "customs" as permanent social rules which agents follow when they make transactions on markets. They are similar to what modern economic analysis would characterize as "routines." Routines or customs are especially significant for understanding why market conditions always change gradually (ibid.).

Lachmann inherited this type of approach from the Menger–Wieser Austrian tradition.
He also emphasized the fundamental diversity of markets (Lachmann 1983: 3), and his criteria are close to Menger's. The first one refers to the organizational and spatial specificity of markets (ibid.: 3), which Menger also identified. The second criterion is related to the forms of auctions and bargaining methods, as in Menger. Combining two elements already mentioned by Menger, namely sticky prices and inventories, Lachmann opposes fix-price markets to flexible-price markets. Far from conceiving the first type of market as an anomaly, Lachmann rather considers that it results from the process of commodity standardization, which permits producers to impose supply prices on consumers. The third criterion put forward by Lachmann is related to the nature of market intermediaries: for instance, the presence of arbitrageurs or speculators on a specific market implies very different market mechanisms (Lachmann 1986: 125). The fourth criterion derives from Menger's seminal distinction between exchange economies and production economies. Lachmann indeed opposes consumers' markets and producers' markets. Thus, for instance, substantial productive capacities and strong technological complementarities tend to imply sticky prices, but the latter can be rendered more flexible by the introduction of an intermediary.

These preceding developments show how Menger, Wieser and Lachmann all contributed to putting an institutionalist typology of markets in the place of the idea of a universal model of the market. Within this typology, decentralized social
Austrian insights on markets and the “new economy” 209

interactions between agents play a major role. Hayek strongly contributed to stressing this aspect when referring to the social division of knowledge: although we disagree with Hayek’s belief in the existence of a tendency of market economies towards equilibrium, we welcome his conception of “dispersed information.” Now, one of the major consequences of this “division of knowledge” is the fact that a substantial part of agents’ knowledge is strictly tacit and private (see on this point the influence of Polanyi (1966) on Hayek). Consequently, this part of individual knowledge cannot be transferred to another agent. Agents are not able to acquire a complete knowledge of the past actions of other market participants, nor are they able to forecast their future actions. Market coordination then requires an indirect way of knowing and understanding the various strategies of other individuals. This way belongs to the realm of what Hayek called “unorganized knowledge” or “knowledge of the particular circumstances of the moment and the place” (Hayek 1949: 80). Agents accumulate this type of knowledge through the use of persistent behavioral rules. It is not worth analyzing the forms of these rules here. What matters is that they always suppose some type of social interaction.

According to Hayek, two main types might be distinguished. The first is mimetic. It consists of imitating other individuals’ observed actions. This imitating behavior is then revised according to how the chosen rule actually performs. If the rule makes the agent better off, it is repeated. The mimetic attitude thus gradually endogenizes the rules governing the behavior of observed and imitated agents. The second type of social interaction corresponds to the innovative attitude.
In this case, the agent tries to imagine and to introduce a new kind of behavior and, here again, he observes how it performs. If the agent realizes that the innovative behavior makes him better off, he reiterates this conduct and, little by little, assimilates the rule(s) that govern(s) it. Innovative attitudes are not always successful, however. A process of “trial and error” is often necessary in order to find the behavioral rules that fit the social context. Rule following does not always derive from mimetic or innovative attitudes. Agents also unconsciously adopt some social rules. Some are inherited conventions. Others are the legacy of culture. Others, finally, are imposed by law (see Hayek 1973/1980: 52). Beyond this diversity, the existence of rules shows how important decentralized social interaction is for understanding the working of markets.3 It is now time to test this view against the market realities of the NE.
The markets of the “new economy” in the Austrian perspective

In this last part of our chapter, in accordance with Wieser’s and Lachmann’s approaches, we will define a typology of the markets of the NE which combines Menger’s criteria of market diversity with Hayek’s focus on decentralized social interactions. The empirical foundation of this typology is rooted in the two main types of markets which appeared with the emergence of the NE, namely, electronic markets (see, for instance, Burton-Jones 1999 and Currie 2000) and
technological markets (see Guilhon 2001). These two empirical types of markets give birth, in our typology, to the following four kinds of markets.

B2C direct e-markets

These markets are those to which commentators refer when they interpret the NE as the achievement of the type of market coordination analyzed in the CEE model. Strong objections can, however, be raised against the idea of a tendency to disintermediation on these markets. Quite the reverse: a process of reintermediation, or of substantial change in intermediation, is presently occurring on these markets. Usual intermediaries are indeed replaced with “infomediaries.” These new forms of intermediation are especially useful when consumers are confronted with complex digital products and numerous web sites. In this case, their decisions are indeed particularly difficult. From this angle, it can be seen that new intermediaries introduce a kind of ancillary market, which offers consumers a bundle of services and information dedicated to helping them make choices. They also try to take into account problems related to the safety of transactions, namely, secure payments, product quality or delivery guarantees. This new form of intermediation also appears in the case of portals. Portals are not only entry points for purchases. Very often, they combine a search engine, an organization of available information, a means of interactivity between intermediaries and demanders, and possibilities for personalizing individual choices. All these aspects stress how right the Austrian tradition is when it emphasizes the importance of intermediaries and their impact on the formation of consumers’ subjective preferences. Moreover, it is striking to note that today intermediaries tend to become information providers, in accordance with Hayek’s view of the division of knowledge.
E-market bundling indeed confirms the impossibility of defining an a priori list of identified consumer preferences and standardized goods. Generally, on B2C markets, the nature of goods and the formation of preferences emerge from the interactivity between intermediaries and consumers, and social uses are created through the use of the Internet. Moreover, consumers also interact with each other through “peer-to-peer” communities (such as Napster or Gnutella) in order to exchange digital goods, for instance music, or information on the prices or quality of new products. Thus, they do influence the determination of the use-value of goods.

B2C direct e-markets are characterized not only by the existence of “infomediaries” but also by their bargaining modes. Various auction systems are used on e-markets and most of them are not Walrasian. First, they concern not only prices but also delivery or payment dates and sometimes even the very nature of the goods (Raisch 2001: 23). Second, the variety of auction systems is substantial.4 Some offer fixed prices associated with a catalog; others, sticky prices that are revised from time to time. Producer and consumer auctions also exist. Finally, quick auctions can also occur, limiting the time of bargaining to an a priori fixed period (Raisch 2001: 136–7). Here again, the Austrian view appears to be
very useful since it distinguishes markets according to the method of bargaining and the system of auctions. By contrast, the idea of a generalized Walrasian auction price system is clearly disputable.

Spatial and temporal constraints are also important on B2C direct e-markets. Direct e-markets are indeed defined as those where order and delivery are both electronic and where the products themselves are also digital. These features are specific to this type of market and imply that spatial and temporal constraints do not play a major role in its working. Here again, Menger’s typology appears to be particularly relevant.

Wieser’s “exchange institutions” are also present on e-markets. A significant example of these institutions or rules is given by standards. Standards such as html (hypertext markup language), for instance, play an essential role in the NE. In the “old economy,” the definition of standards also plays an important role; the difference, however, lies in the fact that old standards are essentially physical and characterize tangible goods, while new standards are more related to information and collective conventions. Therefore, new standards require a social agreement among the participants in the market in order to establish a situation of common knowledge. A consensus must thus emerge in order to allow e-consumers and e-intermediaries to communicate. Now, what is striking from an Austrian perspective is that standards are never defined a priori by hierarchies; rather, they emerge from self-organized processes that market participants can only approve or disapprove (Picot 2001: 8). These processes are the result of innovative and imitative behaviors, in accordance with Hayek’s approach to the emergence of rules.
For instance, some “dot-com” firms such as Amazon.com soon realized, before others that imitated them afterwards, that free information or services through dynamic hypertext or discussion forums allowing Internet users to interact with each other could be profitable. This illustrates the role of learning, imitation and innovation in the development of electronic markets.

The reference to exchange institutions also allows us to analyze electronic markets as typical self-emerging Austrian markets. E-markets are indeed the result of the spontaneous behaviors of different types of actors who affect various aspects of market transactions. Production-firm managers and intermediaries are exemplary Austrian entrepreneurs. They look for profit opportunities and contribute to the discovery of new products or new markets through a process of trial and error. They are thus shaping market processes on the supply side, and these processes emerge as typical unintended consequences of entrepreneurs’ decisions. However, entrepreneurs are not the only relevant actors. A second type of agent corresponds to market organizers, i.e. the private institutions or professional associations which contribute to defining and introducing new standards of communication and information transfer. A third type of actor consists of consumer communities, which try to influence the first two types of actors in order to convince them to build responses in accordance with their requirements and which, therefore, also contribute to the emergence of new social-use values.

It is clear, therefore, that e-markets cannot be analyzed with the usual tools
of the CEE model. They are subject to self-organizing processes and continuous change, which allows us to define them as typical institutional arrangements. Social interaction is therefore clearly important on e-markets. The emergence of standards is not, however, the only example. Another important form of social interaction is related to the role of trust on e-markets. The use of the Internet for market transactions indeed entails problems related to the anonymous nature of consumers, products and services. On the one hand, suppliers have important means of identifying consumer communities, but the importance of consumer learning and preference change implies the adoption of a systematic “alertness.” On the other hand, consumers buy new goods and services which are often virtual. They are therefore confronted with permanent uncertainty regarding the nature of the transacted goods. This is why firms and intermediaries spend substantial resources on acquiring a reputation in order to win consumers’ trust. Now, the existence of trust seems to contradict economic rationality in some cases. For instance, the price dispersion empirically observed on e-markets is not necessarily a market imperfection or a sign of “market immaturity.” It can derive both from a bundling strategy of intermediaries and from the existence of a hierarchy of trust relations in the minds of consumers.

B2C indirect e-markets

The preceding remarks largely apply to indirect e-markets too. The only difference that appears when we consider indirect e-markets derives from the fact that transactions concern tangible goods even if orders are electronic. Within this framework, intermediaries cannot limit their role to the management of information flows. They are confronted with additional problems which were usual in the “old economy”: for instance, inventory management or financing optimization. In this framework, R&D strategic choices are crucial.
They indeed tend to become a deciding factor in the context of the NE. It is easy to see that these new constraints exert immediate effects on trade organization and explain why new types of intermediaries emerge, such as aggregators, which try to combine the electronic flexibility of transactions with an inventory management policy, or e-marketplaces, specialized in specific groups of products tied to some precise area of human interest (sport, scientific topics, cultural and social subjects, etc.). These new forms of organization are mainly dedicated to reducing the spatial and temporal constraints related to the delivery of goods and to technological innovation. Part of the organizational change taking place in the NE is interpreted as a reaction to the increasing tension inside the value chain (Gensollen 2001: 7). While increasing returns in the upper stages of production tend to favor cooperation between firms, the increasing need for “one-to-one” marketing downstream seems better suited to competitive strategies.
B2B e-markets

Very often, the growing role of B2B e-markets is explained by the reduction of transaction and search costs (see Aréna 2001: 17–18). This factor is certainly essential but does not, however, imply a tendency to perfect competition. Seen from a Hayekian angle, we might indeed note that B2B e-markets allow a more efficient “discovery process” on markets since they contribute to improving information flows between firms. A good example of this improvement is given by the changes occurring in marketing activities within e-trade. In usual markets, firms were forced to employ many people to answer their customers’ queries and improve their knowledge of consumers’ preferences. The use of the Internet substantially changed this situation. Through the Internet, firms or intermediaries can easily acquire numerous and varied pieces of information about consumers. These new possibilities allow them to develop a much more efficient marketing policy. For instance, they are now able to aim at precise targets corresponding to specific communities of consumers. These advantages also prevail in the realm of interfirm relations, for supply as well as delivery. Discovery processes might also be improved by the reduction of strategic uncertainty. New forms of marketplaces indeed imply the utilization of “hubs,” either vertical or horizontal, or e-procurement marketplaces (Raisch 2001: 211–14). These new types of trade organization allow firms to replace hierarchies with efficient producer markets dedicated to specific firm needs and requirements. Finally, B2B e-markets also help to develop social interactions which reduce the degree of knowledge dispersion.
A significant example is given by subcontracting relations, where the generalization of e-trade sometimes amounts to a selection process among small and medium-sized firms aimed at finding the most reliable ones. Another example is again the emergence of standards. This emergence is comparable to what is happening on B2C e-markets. However, standardization on B2B e-markets also implies a separation between two kinds of markets. When firm needs can be easily defined and give birth to standardized products, B2B e-markets easily replace interfirm relations. When needs are more complex, these relations cannot be obtained through usual market coordination. From this perspective, it is then necessary to come back to interfirm agreements or to introduce “technological markets.”

Technological markets

Technological markets concern transactions related to scientific and intangible assets. These assets are protected by intellectual property rights in the form of patents, copyrights, licenses and patterns. These markets are likely to transfer knowledge already established or on the way to be. To some extent, they shape relationships between instrumental knowledge and activities that
represent the firm’s value chain: research, development, conception, production, marketing. (Guilhon 2001: 11)

These products first impose strong temporal constraints: most of the time, producers who buy or sell in technological markets have to build long-term relations based on trust and mutual knowledge. They sometimes also imply geographical constraints when, for instance, suppliers and buyers belong to a network, which entails externalities and proximity effects. This is for instance the case when the market is related to an industrial district or, to some extent, to a national system of innovation.

Technological markets also provide a significant example of the limits of the CEE model’s definition of fundamentals. In these markets, it is indeed impossible to define, a priori, sets of preferences or catalogs of techniques. Firms are looking for instrumental knowledge whose use is partly unknown to them and which they contribute to creating during both the conception and production stages of goods. Technological markets also exist because of the existence of a Hayekian division of knowledge in the economy. They indeed allow transactions of codified and explicit knowledge between firms. However, firms do not acquire this knowledge for its own sake but rather to complement the tacit and private knowledge that forms the basic resource of firms. Finally, technological markets imply relations of trust and mutual knowledge, namely, forms of social interaction that cannot be reduced to price coordination. This is why Lundvall labelled these markets “rather organized markets,” while Guilhon referred to them as “quasi-markets.” This mixture of electronic hierarchies and e-market coordination does not fit so well with Hayek’s view of the market order. In our view, however, it is perfectly compatible with the Menger–Wieser–Lachmann line of interpretation.
Some concluding remarks and limitations

The rise of direct B2B e-trade during the last decade led some observers to interpret it as a confirmation of the relevance of the CEE model’s view of market coordination, even if the statistical importance of this trade still remains very modest. However, the introduction of a typology which includes direct and indirect B2C and B2B e-trade as well as technological markets substantially weakens this interpretation. The observed diversity of forms of trade organization, bargaining and auction systems, and spatial and temporal constraints rather contributes to reviving the old Austrian tradition, all the more so since an essential role is played by social coordination mechanisms on markets. This reference to the Austrian tradition does not mean, however, that the NE confirms Hayek’s conception of the market order. Quite the contrary: the NE exhibits the emergence of a multiplicity of market types, which does not contradict the idea of a spontaneous order but excludes a deterministic tendency towards a market order characterized a priori. This is why our Austrian reference only concerns its Mengerian origin and
its post-Mengerian developments. Neither does it mean that Austrian theory is the only one able to offer a proper theoretical framework for understanding the emergence of new markets in the NE. On the one hand, this theory does not seem to provide a ready-made analysis of the supply phenomena which appear on the markets of the NE. Kirznerian entrepreneurship is certainly not sufficient from this standpoint. We are here confronted with a familiar shortcoming of the Austrian tradition, namely, its neglect of organization and planned institutions. Now, one of the problems of the NE is indeed, in some cases, the absence of control of supply by demand. The defenders of the CEE view often argue that IT makes markets more transparent. This assertion is highly dubious. Internet markets are not more transparent than traditional markets (see Brynjolfsson and Smith 1999). Quite the contrary, they are organized in a way which allows tighter control by producers over demand. This is permitted by the introduction of new forms of marketing and advertising, by the emergence of new standards and institutional arrangements, and by the ability of firms to foster the creation of consumer communities that individuals are induced to join. From this standpoint, other contributions might be utilized, such as the Marshallian one (see Aréna 1999, 2001 and 2002). On the other hand, if the Menger–Wieser–Lachmann line of interpretation provides a broad framework for the analysis of various types of price formation mechanisms, it does not, however, go very far. Hayek’s “empirical” tendency towards equilibrium cannot offer a solution. Here again, it is necessary to draw on other lines of contribution which have little in common with the Austrian tradition. We could refer, for instance, to “cognitive economics,” which focuses on the role of social interactions in the working of markets.
What our contribution suggests, therefore, is that the Austrian approach is useful for explaining some important features of the markets of the NE, but also that it must be complemented and embedded in the broader perspective of institutionalist economics.
Notes

Preliminary drafts of this contribution were presented at the third conference of the Association of Historians of the Austrian Tradition in Economic Thought (held in Pisa, 24–26 May 2001) and at the annual conference of the European Association for Evolutionary Political Economy on “Comparing Economic Institutions” (held in Siena, 8–11 November 2001). Useful comments on these drafts by F. Belussi, N. Foss, P. Garrouste, P. Gunning, N. Jansson and H. Schlör are gratefully acknowledged. The usual disclaimer applies.
1 The expressions “competitive” and “trustified capitalism” are due to Schumpeter (1939, vol. I).
2 This analysis shows how superficial is the assumption that the Austrian School dealt only with pure exchange economies.
3 The role of rule emergence and rule following on Hayekian markets has been especially stressed in the literature in recent years: see Birner (1999), Garrouste (1999a, 1999b), Schmidt and Versailles (1999), Ioannides (1999), Rizzello (1999).
4 The commonly used auction types are open-outcry (or English) auctions, single and multiple round sealed-bid auctions, and Dutch auctions. See Kumar and Feldman (1999).
References

Aréna, R. (1999) “Austrians and Marshallians on markets: Historical origins and compatible views,” in S.C. Dow and P.E. Earl (eds) Economic Organization and Economic Knowledge: Essays in Honour of Brian J. Loasby, Vol. 1, London: Routledge.
Aréna, R. (2001) “A propos de la place de l’organisation et des institutions dans l’analyse économique d’Alfred Marshall: Une interprétation évolutionniste,” Revue d’Economie Industrielle 97(4).
Aréna, R. (2003) “Marchés électroniques, marchés technologiques: Nouvelles réalités et nouveaux outils d’analyse,” in B. Bellon, A. Ben Youssef and A. Rallet (eds) La Nouvelle Economie en perspective, Paris: Economica.
Birner, J. (1999) “Making markets,” in S.C. Dow and P.E. Earl (eds) Economic Organisation and Economic Knowledge: Essays in Honour of Brian J. Loasby, Vol. 1, London: Routledge.
Brynjolfsson, E. and Smith, M. (1999) “Frictionless commerce? A comparison of Internet and conventional retailers,” mimeo, MIT, http://ebusiness.mit.edu/erik/.
Burton-Jones, A. (1999) Knowledge Capitalism, New York: Oxford University Press.
Currie, W. (2000) The Global Information Society, Chichester: John Wiley and Sons.
The Economist (1999) “Economics – making sense of the modern economy,” London: The Economist Newspaper Ltd. and Profile Books Ltd.
Garrouste, P. (1999a) “Is the Hayekian evolutionism coherent?,” History of Economic Ideas 7(1–2).
Garrouste, P. (1999b) “La firme ‘hayekienne’ entre institutions et organisation,” Revue d’Economie Politique 6.
Gensollen, M. (2001) “L’avenir des marchés: Planification ou écosystèmes,” contribution to the conference Nouvelle Economie: Théories et Evidences, University of Paris-Sud, 17–18 May 2001.
Gloria-Palermo, S. (1999) The Evolution of Austrian Economics – From Menger to Lachmann, London and New York: Routledge.
Guilhon, B. (2001) Technology and Markets for Knowledge, Dordrecht: Kluwer Academic Publishers.
Hayek, F.A.
(1928) “Intertemporal price equilibrium and movement in the value of money,” in F.A. Hayek (1984) F.A. Hayek: Money, Capital and Fluctuations: Early Essays, edited by R. McCloughry, London: Routledge and Kegan Paul.
Hayek, F.A. (1935) “Socialist calculation: The present state of the debate,” in F.A. Hayek (1949) Individualism and Economic Order, London: Routledge and Kegan Paul.
Hayek, F.A. (1937) “Economics and knowledge,” in F.A. Hayek (1949) Individualism and Economic Order, London: Routledge and Kegan Paul, pp. 33–56.
Hayek, F.A. (1940) “Socialist calculation: The competitive solution,” in F.A. Hayek (1949) Individualism and Economic Order, London: Routledge and Kegan Paul.
Hayek, F.A. (1949) Individualism and Economic Order, London: Routledge and Kegan Paul.
Hayek, F.A. (1973/1980) Law, Legislation and Liberty, Vol. I: Rules and Order, London: Routledge and Kegan Paul (French translation, PUF, Paris, 1980).
Hayek, F.A. (1984) F.A. Hayek: Money, Capital and Fluctuations: Early Essays, edited by R. McCloughry, London: Routledge and Kegan Paul.
Ioannides, I. (1999) “Marché, firme et direction d’entreprise: Une perspective hayekienne,” Revue d’Economie Politique 6.
Kumar, M. and Feldman, S. (1999) “Internet auctions,” PDF.
Lachmann, L.M. (1986) The Market as an Economic Process, Oxford: Basil Blackwell.
Menger, C. (1871/1976) Principles of Economics, New York: New York University Press (first German edition: 1871; English translation: 1976).
Menger, C. (1883/1963) Problems of Economics and Sociology, Urbana: University of Illinois Press (first German edition: 1883; English translation: 1963).
O’Driscoll, G.P. and Rizzo, M.J. (1985) The Economics of Time and Ignorance, Oxford and New York: Basil Blackwell.
Picot, A. (2001) “Un facteur-clé, les standards,” Problèmes Economiques 2697.
Polanyi, M. (1967) The Tacit Dimension, London: Routledge and Kegan Paul.
Raisch, W. (2001) The E-marketplace: Strategies for Success in B2B E-commerce, New York: McGraw-Hill.
Rizzello, S. (1999) The Economics of the Mind, London: Edward Elgar.
Schmidt, C. and Versailles, D. (1999) “Une théorie hayekienne de la connaissance économique?,” Revue d’Economie Politique 6.
Schumpeter, J.A. (1939) Business Cycles (2 vols), New York: McGraw-Hill.
Shapiro, C. and Varian, H.R. (1998) Information Rules: A Strategic Guide to the Network Economy, Boston, MA: Harvard Business School Press.
Smith, M., Bailey, J. and Brynjolfsson, E. (2000) “Understanding digital markets: Review and assessment,” in E. Brynjolfsson and B. Kahin (eds) Understanding the Digital Economy: Data, Tools and Research, Cambridge, MA: MIT Press.
US Department of Commerce (2000) Digital Economy 2000, Economics and Statistics Administration – Office of Policy Development, Washington, DC.
Wieser, F. (1927/1967) Social Economics, New York: Augustus M. Kelley (first English translation: 1927).
10 Turning lemons into lemonade
Entrepreneurial solutions to adverse selection problems in e-commerce
Mark Steckbeck and Peter Boettke

Akerlof (1984) postulates the failure of markets in which asymmetric information between buyers and sellers leads to adverse selection in the quality of goods sold. In the absence of an intervening mechanism to correct information problems, sellers of relatively low-quality goods have an incentive to promote their wares as being of higher quality; the gains to unscrupulous sellers from misrepresenting their products are externalized onto other sellers in the market in the form of lower subsequent prices. Adverse selection, therefore, leads to declining average quality of goods in markets rife with asymmetric information. Prospective buyers accordingly lower their expectations of average quality and, consequently, the prices they are willing to pay. These lower prices force sellers of higher-quality goods – unable to credibly signal the superior quality of their goods in order to exact a price above their reservation price – to exit the market. As this cycle is repeated, the market becomes increasingly saturated with lower-quality goods and the market disappears, if it materializes at all.

Although he cites “private institutions” as a means of ameliorating the problems resulting from uncertainty, Akerlof offers a subsequent caveat stating “these institutions are nonatomistic, and therefore concentrations of power – with ill consequences of their own – can develop.”1 In other words, markets fail due to asymmetric information and private solutions themselves lead to further failure in the market system. Akerlof inevitably concludes that since “[I]n these markets social and private returns differ . . . 
governmental intervention may increase the welfare of all parties.”2

In this chapter we challenge Akerlof’s inference that market solutions to problems stemming from asymmetric information lead to further market failures, his otherwise insightful analysis of the information problems inherent in markets notwithstanding. Far from these private means of ameliorating information asymmetries being nonatomistic, and thus leading to market power and a further misallocation of resources, entrepreneurs vie to minimize measurement costs and to allocate these and other verification or social costs efficiently across buyers and sellers, a function that markets have proven more efficient at accomplishing than regulation (Coase 1960). More precisely, Akerlof’s devotion to the neoclassical theory of competition, especially the assumption of product homogeneity, leads
him to overlook the robustness of markets. His response, therefore, is to give short shrift to private institutions’ capacity to abate this failure.

In the first part we summarize Akerlof’s lemons model and describe his proposed solutions for abating market failure and the subsequent market power. In the second part we describe the competitive solution to market failure resulting from asymmetric information as posited by Hayek (1948). In the last section we consider Akerlof’s problem as it pertains to electronic commerce, the quintessential forum for such problems to arise, and illustrate the Hayekian solutions private firms have devised to prevent lemons markets from evolving on the Internet.
The Akerlof problem

Akerlof’s model identifies potentially serious problems in some markets: asymmetric information creates an incentive for individuals to conceal private information when they interact in markets, a problem assumed to be more common among suppliers than among consumers. Such problems, if left unabated, theoretically cause both sellers and potential buyers of high-quality goods to leave the market, leading to its demise. Markets for used cars (Akerlof’s favorite example) thrive, however, notwithstanding the pervasiveness of asymmetric information in this market. We also witness successful exchanges in markets such as automotive repair, software, and home remodeling and construction – markets consisting of experience goods in which asymmetric information is prevalent. Attributing the success of these markets to government intervention and regulation is dubious.

Perhaps an even more challenging empirical anomaly for Akerlof’s “lemons theory” is the overwhelming success of electronic commerce and Internet markets. Despite the widespread media attention to “dot-com” failures, commerce conducted over the Internet continues to grow. The ubiquity of asymmetric information and the ineffectiveness of formal legal institutions to compel cooperation in these markets have not proven to be obstacles to the growth of electronic commerce. We contend that, as in most markets, it is the entrepreneurial innovations employed to ameliorate potential “lemons” problems in Internet markets – markets where exchange participants are largely unconstrained by formal legal institutions – that explain the success of electronic commerce. It is precisely the self-governing mechanisms developed by private firms, and the rules or norms that have evolved, that promote trust among strangers interacting in cyberspace.
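The unraveling cycle at the heart of Akerlof’s model lends itself to a small numerical illustration. The sketch below is our own toy rendering, not Akerlof’s formalization; the quality grid and the buyers’ markup of 1.5 over average quality are assumed parameters chosen purely for illustration.

```python
# Toy sketch of Akerlof's adverse-selection spiral (our illustration, not
# Akerlof's own model). Buyers cannot observe individual quality, so they
# bid a markup over the AVERAGE quality of goods still on the market;
# sellers whose quality exceeds that uniform price cannot recoup their
# reservation price and exit, dragging average quality down each round.

def lemons_spiral(qualities, buyer_markup=1.5, rounds=10):
    market = sorted(qualities)
    for _ in range(rounds):
        if not market:
            break
        price = buyer_markup * sum(market) / len(market)
        # high-quality sellers cannot credibly signal, so every good
        # worth more than the uniform price leaves the market
        market = [q for q in market if q <= price]
    return market

# 100 goods with qualities 0.01 .. 1.00: each round the price falls,
# the best remaining goods exit, and the market shrinks toward the lemons
remaining = lemons_spiral([i / 100 for i in range(1, 101)])
```

With these assumed numbers the market does not vanish entirely, but after ten rounds only the lowest-quality goods remain, which is the qualitative point of the lemons argument.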
This success can be attributed to the private sector, not the public sector: the incentive to facilitate exchange in a safe and secure environment leads entrepreneurs to discover and experiment with new technologies and ideas in their quest to provide such a marketplace. What Akerlof perceives as problems in market systems has actually given rise to Hayekian solutions, as private actors situated in particular time and place have adjusted their behavior to realize the mutual gains from exchange. As Hayek (1948: 103–4) argued:
220 Mark Steckbeck and Peter Boettke

The confusion between the objective facts of the situation [e.g., Akerlof asymmetries] and the character of human responses to it tends to conceal from us the important fact that competition is the more important the more complex or "imperfect" are the objective conditions in which it has to operate. Indeed, far from competition being beneficial only when it is "perfect," I am inclined to argue that the need for competition is nowhere greater than in fields in which the nature of the commodities or services makes it impossible that it ever should create a perfect market in the theoretical sense.

The success of e-commerce illustrates Hayek's notion of competition. Internet commerce has grown, and continues to grow, rapidly. The number of retailers selling over the Internet grew by one-third, to 550,000 online businesses, by mid-2000. Average revenue for these retailers increased by 130 percent from the previous year, exceeding $132 billion for 2000, up from $58 billion in 1999.3 Revenues of business-to-consumer (B2C) Internet retailers – roughly 20 percent of e-commerce – are projected to exceed $125 billion by 2004,4 increasing to $1.1 trillion by 2010.5 More intriguing is the success of Internet auction markets – the online equivalent of flea markets, where individuals unknown to each other exchange from around the world. Aggregate Internet auction sales grew from about $7.3 million at the end of 1996 to $6.5 billion at the end of 2000, an average annual increase of 446 percent. And eBay, the market leader with an estimated 76 percent of online auction traffic and over 85 percent of online auction sales, has a current membership of over 22.5 million users listing 700,000 items daily on its web site. The Internet is seemingly an ideal marketplace in which to test Akerlof's lemons theory.
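The 446 percent figure cited above can be checked with a few lines of arithmetic: it is the compound annual growth rate implied by growing from $7.3 million (end of 1996) to $6.5 billion (end of 2000) over four years. The code below is only a sketch of that calculation, using the figures from the text:

```python
# Compound annual growth rate (CAGR) implied by the auction-sales
# figures in the text: $7.3 million (end 1996) -> $6.5 billion (end 2000).
start, end, years = 7.3e6, 6.5e9, 4

# CAGR: the constant yearly growth rate that takes `start` to `end`.
cagr = (end / start) ** (1 / years) - 1
print(f"average annual growth: {cagr:.0%}")  # roughly 446%
```

The calculation confirms the chapter's figure: a roughly 890-fold increase over four years corresponds to about 446 percent growth per year.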
It consists of virtual markets in which goods are sold sight unseen; exchange participants interact from distant geographic locations, most often from different states or even countries; payment is remitted prior to shipment of the exchange good; the probability of conducting subsequent exchanges is quite low, especially for auction market participants; and formal legal enforcement is weak and impractical for policing and mediating Internet transactions. Under such conditions, adverse selection and moral hazard ought to prevail, thwarting the development of markets in cyberspace. The empirical observation of thriving markets against such a difficult backdrop should direct our attention to the multitude of ways in which market participants gain assurance and earn trust from one another (see Klein 1997 and 2000).
Akerlof lemons and Hayek entrepreneurs

The canonical example of Akerlof's lemons model is the used car market. Determining whether a car is a lemon (poor quality) or a cream puff (high quality) requires experiencing the good: driving it for a period of time in order to ascertain defects in the vehicle. Owners of used cars have this experience and, therefore, have better knowledge of the car's quality than potential buyers. They know whether it has been properly maintained, whether it has been in an accident,
whether the brakes are safe, or whether there are hidden defects that appear intermittently, like electrical shorts or vibrations. It is certainly conceivable that defects such as these could be concealed from potential buyers who are able to inspect the vehicle in person. It is, on the other hand, certain that they could be concealed from a potential purchaser shopping for cars online, whose only information about a particular car's quality is gleaned from the seller's description as posted on the Internet. From an Akerlof lemons perspective, the tendency would be for Internet markets to be inundated with malfeasants attempting to defraud purchasers outright by taking their money and neglecting to ship the ordered goods, a potential problem more severe than misrepresented product quality. Given that goods are almost universally shipped only after the purchaser remits payment, and that individuals can effortlessly remain anonymous while interacting on the Internet, it is an ideal environment for such miscreants to flourish. Offering to sell goods on eBay with no intention of shipping the expected good is easily achieved: after payment is remitted the miscreant neglects to deliver the good(s), then changes his or her identity and begins anew. Yet this is a rarely observed occurrence, even on online auction sites. Akerlof's premise is correct in that, absent an effective and efficient mechanism compelling cooperation, human nature elicits opportunistic behavior in markets beset with asymmetric information. Sellers of poor quality goods can internalize the benefits of providing false or misleading information, imposing direct and indirect costs on others. What Akerlof's model tends to ignore, we contend, is the dynamism of markets and the incentive mechanism driving entrepreneurs to discover ways to ameliorate problems associated with market exchange.
Akerlof (1984: 16) does address the role of the entrepreneur in abating problems with asymmetric information, stating "[T]hose who can identify used cars in our example and can guarantee the quality may profit by as much as the difference between type two traders' buying price and type one traders' selling price," a prescient reference to Carmax, a company founded 25 years after publication of this seminal piece. But he dismisses such entrepreneurial endeavor as a waste of resources: in the neoclassical world of completely homogeneous goods there would be no variations in quality, hence the use of labor for anything other than production of these homogeneous goods is a wasted resource. Entrepreneurship, in other words, may work to minimize the problems of quality assurance in marketing, but in the process resources are diverted from production activities. Markets with product variation, and the activities associated with assuring quality in such markets, are impediments to, rather than sources of, economic development in the Akerlof lemons model. Honesty and truth telling possess public goods characteristics in that individuals need not expend resources verifying the accuracy and completeness of transactions when market participants act with the interests of others in the community in mind (Hollis 1998).6 Economic goods are produced and sold in various forms and qualities, and disagreements between exchange partners often emerge as a result of miscommunication or dissimilar characterizations regarding the quality or quantity
attributes of these goods or services, not of intentional deceit. Lack of knowledge on the part of both seller and buyer is as common in transactions as asymmetric information. It is this knowledge deficiency on the part of both buyer and seller that makes middlemen (who are generally lower-cost producers of information than consumers) a vital resource in market exchanges.7 Indeed, once the diffusion of knowledge in society is recognized, the various roles that entrepreneurs take on within the economic system can be seen as providing wealth-maximizing services rather than as detractions from other productive activities.8 Real estate, for example, is a heterogeneous good and the costliest asset in which most people ever invest. Additionally, most people engage in real estate transactions infrequently, leaving them with little experience in buying and selling a home. Real estate transactors, therefore, must rely on the recommendations of a local real estate agent who, by law, owes a fiduciary duty to the seller. In response, relocation services have emerged, establishing relationships with real estate agencies to provide broker services to both buyers and sellers in exchange for providing them with referrals.9 These relocation companies act as monitors in a principal–agent relationship, providing assurances to buyers and sellers of real estate. To be considered for referrals, agents and their brokers agree to meet specific requirements, provide enumerated services, provide full disclosure statements, and generally act in the best interest of the referred client. In return for the referral, the relocation service is remunerated for its matching and monitoring service through the commission received by the selling agent. Akerlof himself points to private institutional forces as a means of assuring quality, citing as examples: (1) guarantees, (2) brand-name goods, (3) chains, and (4) licensing.
That Akerlof alludes to the role private institutions and entrepreneurship play in markets is laudable. That he gives such short shrift to market forces for self-regulation is not. Contrary to Akerlof's premise, markets serve to facilitate exchange between two or more individuals in an orderly process of human interaction; they are how and where relevant information is made known. With the wants, behavior, and knowledge of market participants in constant flux, markets are more robust than the way they are modeled in Akerlof's lemons model. Markets embody a dynamic and evolutionary process whereby the standards, practices, and tools deemed the most efficient means of facilitating exchange today are deemed inefficient tomorrow, and newer, more efficient standards, practices, and tools replace them. What is deemed efficient today becomes obsolete tomorrow, including the means of maintaining social control. Social ordering, which encompasses a never-ending chain of human actions, can never be molded at a specific point in time and projected to carry forth into the future. Social interaction, including markets, entails an infinite number of individual actions based on individual knowledge of time and place. Consequently, markets are not creatures of conscious human planning of their entire structure. Market processes emerge from a series of trial-and-error experiments, derived from a progression of finding ever more efficient means of facilitating exchange. The role of the entrepreneur is, by continually updating
information, to discover more efficient means of promoting human interaction, thus facilitating exchange. No single individual can acquire the knowledge of time and place known only to each and every individual transacting in the market in order to enumerate specific rules and procedures of market behavior. The market and all of its rules are products of "human action, not of human design" (Hayek 1967). To illustrate the robust nature of markets with regard to ameliorating potential problems in realizing the gains from exchange, we look at the market for used and out of print books that has emerged on the Internet.
Applications to e-commerce

The Internet market for used books

It has been estimated that domestic sales of new books will reach $38 billion by 2004, up from $23 billion in 1994.10 The used and out of print book market is a specialty market within this broader market reflecting our literary culture. The used, out of print, and antiquarian book market provides an appropriate example of the dynamism of Internet markets. There are approximately 10,000 used and out of print booksellers from around the world belonging to one or more of the three major used book networks operating on the Internet. Abebooks,11 Bibliofind,12 and Alibris13 each network thousands of booksellers, with networked booksellers sharing common pooled resources, most notably a search engine. Acting independently, these booksellers would not be able to take advantage of the economies of scale in the production of search and, to a lesser extent, payment services (Abebooks and Alibris only), nor would they be able to develop the credibility these networks are able to assure. The three networks provide the same basic matching service, pairing sellers of used and out of print books with prospective purchasers. Each, however, employs a distinctly different means of assuring the outcome of the transaction. Consumers vary in their preferences, especially as they pertain to Internet markets. Fears about stolen credit cards or other personal information abound, many of them unfounded;14 consumers seek different assurances about product quality and merchant credibility; time preferences vary; and Internet consumers are price conscious. Balancing consumer wants to achieve efficiency can be accomplished only through a trial-and-error process. The entrepreneur's job is to use local knowledge to determine how best to meet the needs of book consumers and merchants. Alibris provides the greatest level of direct assurance, fully intervening in the exchange process.
Alibris purchases books sought by its customers outright from individual merchants, taking possession of each book, verifying its quality relative to the merchant's description, handling all transaction processes, and then providing a satisfaction guarantee. Abebooks, the largest network in terms of number of member merchants and sales, involves itself less in the exchange process than does Alibris. Abebooks
provides primarily matching services through its search engine, while also offering payment remittance services for member booksellers. It does not involve itself in mediating disputes, offering only to guarantee satisfaction with a book if it was ordered directly through Abebooks using its payment acceptance service. Bibliofind offers matching services only, bringing sellers and potential buyers of used and out of print books together to negotiate prices and shipping. Should a dispute arise, the buyer and seller are left to mediate it on their own. Bibliofind no longer accepts payments, leaving the buyer and member bookseller to negotiate payment on their own. Finally, Bibliofind offers no assurances of the quality of books sold by member booksellers through its web site, although it will take action to remove a bookseller if it receives an unspecified number of complaints or if the bookseller fails to adhere to Bibliofind's enumerated policies. These three used book networks have one objective: to facilitate the exchange of used, rare, and out of print books. Each, however, employs a different strategy to foster reputable dealings by its individual members and provide quality assurances, thus eliciting trust among its customers. By being part of such a network, members share a common good that would be prohibitively costly and less effective to produce acting independently. The value of the network to member booksellers depends on customers' trust in dealing with members of the network. These used and out of print bookseller networks regulate members' behavior by enumerating specific rules, monitoring compliance with those rules, and sanctioning defectors. Each network employs different practices for assuring the reputation and credibility of both the network and its individual members, differentiated by their levels of intervention.
Bibliofind, the network intervening the least in the exchange process, has merged its services with Amazon.com, the largest Internet seller of new books. Alibris, which intervenes the most in the exchange process, has fewer sales and fewer member booksellers than Abebooks or Bibliofind, a result certainly attributable to higher book prices and lower fees paid to booksellers. What has emerged illustrates the fascinating aspect of network markets. No formal planning specified the safest and most efficient means of matching used and out of print booksellers with potential book buyers. In fact, the three networks have all devised different means of effectuating transactions, based on their level of intermediation in the exchange process. The incentive to bring booksellers and buyers together exists, but it is also imperative to elicit cooperation between sellers and buyers in order to achieve success. This incentive invokes creativity and entrepreneurship in devising a marketplace where such exchanges can be consummated.15 As more information is acquired and processed, the incentive is to improve the exchange process continually, enhancing successful practices and scrapping unsuccessful ones, an outcome highly unlikely were the process regulated by actors outside the context of particular time and place. Given the recent innovation of the Internet as an exchange medium, there exists little historical knowledge specifying which practices lead to the most effective and most efficient means of facilitating exchange. It is, of course, impossible to
make such determinations ex ante. It is only through the trial-and-error, experimental practices entrepreneurs undertake in their quest for the greatest return on the private capital they risk that such determinations can be made. Those skillful or fortunate enough to make the correct decisions are rewarded through increased profits; incorrect or inefficient decisions result in losses and being forced from the market (Alchian 1950).

Internet auction sites

eBay, the dominant Internet auction site, performs a service similar to that of the used bookseller networks. eBay matches buyers and sellers of new and used merchandise, maintaining a hands-off approach in mediating the exchange process. Sellers list their wares on eBay's web site, including descriptions of their goods in terms of quantity and quality, their terms of trade, and their shipping policies. Prospective buyers use this information, as well as feedback information about the seller's credibility provided by the seller's previous exchange partners, to assess their offer price for the seller's exchange good(s). Having determined their willingness to pay, buyers bid on desired goods through eBay's web site in predominantly second-price auctions. The information sharing employed by eBay enables sellers and potential buyers to assess each other's integrity based on their exchange histories. eBay takes no role in assuring the satisfaction of an exchange, settling transactions, resolving disputes, or remedying inequities. This does not imply that all exchanges, consummated or not, result in completely satisfied participants, or that deception does not occur on eBay, but exchange participants are provided relevant information allowing them to discount, if needed, the product information provided by individual sellers.
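The mechanism just described, feedback-informed bidding in a second-price auction, can be sketched in a few lines. This is an illustrative toy model, not eBay's actual system; the discounting rule (shading a bid by the seller's share of positive feedback) is a hypothetical simplification of the behavior the text describes:

```python
# Toy model: buyers shade their willingness to pay by a seller's feedback
# record, then compete in a second-price (Vickrey-style) auction.

def discounted_bid(willingness_to_pay, positive, negative):
    """Shade a bid by the seller's share of positive feedback (hypothetical rule)."""
    total = positive + negative
    trust = positive / total if total else 0.5  # unknown sellers get a neutral 0.5
    return willingness_to_pay * trust

def second_price_auction(bids):
    """Highest bidder wins but pays only the second-highest bid."""
    ranked = sorted(bids, key=lambda b: b[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# Two buyers bidding on goods from sellers with different feedback records.
bids = [("alice", discounted_bid(100, positive=95, negative=5)),   # shaded to 95
        ("bob", discounted_bid(90, positive=100, negative=0))]     # unshaded 90
winner, price = second_price_auction(bids)
```

The sketch captures both points in the text: bids are discounted by the seller's record, and the winner pays the second-highest bid, so sellers with negative feedback face systematically lower prices.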
As Steckbeck (2001) finds, sellers who have acquired negative feedback in prior transactions receive lower prices for the goods they subsequently sell on eBay, a cost compelling both buyers and sellers to cooperate.

Credit card use on the Internet

Another characteristic of electronic commerce underscoring trust and assurance problems is the security of credit card information when transactions are consummated over the Internet. As previously stated, consumers have two general fears about submitting credit card information over the Internet: an unfounded fear that credit card information will be stolen in transit, and a well-founded fear that computer hackers will unlawfully access credit card information stored on a web merchant's database.16 Although transactions conducted over the Internet make up just 2 percent of all purchases made by credit card, they are estimated to account for about 50 percent of all credit card fraud.17 Using a credit card to make purchases generally, and over the Internet in particular, adds a layer of security for consumers. A consumer encountering fraud or deceit can dispute the related charges with the issuing bank, protecting him or her from charges paid to unscrupulous merchants. If the exchange good fails to
be delivered, or if the quality is less than what was conveyed at the time of purchase, the purchaser can simply refuse to make payment on the disputed charge(s). Disputed charges are then arbitrated between the card-issuing bank and the merchant's bank. Paying for goods and services over the Internet by credit card also reduces transaction costs and may eliminate delays in the exchange process, especially delays in shipping. Should credit card usage pose a security problem to either consumers or merchants, this added layer of security and convenience would be lost, resulting in a financial loss to credit card companies. Credit card companies have responded to online credit card fraud both as a means of assuaging consumers' and merchants' fears – thus promoting credit card usage – and to protect themselves from loss. Visa and Mastercard, for example, have jointly developed Secure Electronic Transaction (SET), employing digital certificates to accurately identify all parties to an exchange. Additionally, the two companies have devised a proprietary encryption technology for transferring credit card data over the Internet. As another example, American Express provides its cardholders the ability to obtain randomly generated disposable account numbers that expire after a single use.18 Credit card information left unsecured on a merchant's web site is therefore rendered useless. Visa, responding to the theft of cardholder information from merchants' web sites, is requiring merchants to meet security guidelines to maintain the privilege of accepting Visa credit cards.19 Consumer demands for assurances protecting their privacy and security are the impetus behind these programs developed by credit card companies.
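The disposable-number scheme described above can be illustrated with a toy sketch. The class and logic below are hypothetical, not American Express's implementation; the point is only that a number valid for a single charge is worthless once stolen from a merchant's database:

```python
# Toy sketch of a one-time (disposable) account number scheme: each
# issued number authorizes exactly one charge, then expires, so a number
# later stolen from a merchant's records cannot be reused.
import secrets

class CardAccount:
    def __init__(self):
        self._active = set()  # numbers that have been issued but not yet used

    def issue_number(self):
        """Generate a random 16-digit one-time account number."""
        number = "".join(str(secrets.randbelow(10)) for _ in range(16))
        self._active.add(number)
        return number

    def charge(self, number):
        """Authorize at most one charge per number, then expire it."""
        if number in self._active:
            self._active.discard(number)
            return True
        return False  # expired, reused, or never issued

account = CardAccount()
num = account.issue_number()
assert account.charge(num) is True    # the cardholder's purchase succeeds
assert account.charge(num) is False   # a stolen or replayed number is useless
```

The design choice mirrors the text: the security gain comes not from hiding the number in transit but from making any stored copy worthless after its single use.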
As credit card companies discover inadequacies in their security measures, they devise and implement new practices to protect consumers and merchants from fraud. Credit card companies have strong incentives to provide online security, as they gain from the use of credit cards as a means of payment for Internet purchases. There is no single entity responsible for the practices implemented to abate credit card fraud, either online or off. Various firms, attempting to use their specific knowledge of time and place, experiment with diverse measures in an effort to discover the least-cost means of providing the most security for transacting over the Internet. Credit card companies devise and implement newer and more efficient security measures to secure existing profits or acquire additional ones; they are the parties most adversely affected if credit card fraud reduces the use of credit cards for transacting online.
Conclusion

Individuals respond to incentives, and different situations provide different incentives. Developing precise rules or programs a priori to rectify all problems occurring in all situations is not feasible. Responses are situation dependent and most
effective when left to the people who possess the knowledge of time and place to adjust on the margin to changes in the situation. In addition, especially with respect to the Internet, formal legal institutions are largely unavailable for monitoring behavior, mediating disputes, and effectuating remedies. Rather than observing the collapse of markets under such circumstances, we see profit incentives directing entrepreneurs toward devising solutions to overcome such problems, including assuring cooperation in Internet markets. Akerlof's lemons model suggests the fragility of market arrangements in the face of informational asymmetries, and thus the problems identified should be prevalent in Internet markets. But as we have shown, the apparent problems in these markets today are simply tomorrow's profit opportunities, as actors possess strong pecuniary incentives to adjust their behavior to ameliorate these problems and realize the gains from exchange. Contrary to Akerlof and others, the argument for free competition in economic affairs does not rest on the conditions defining perfect competition. As Hayek (1948: 104) taught, perfect competition may indeed imply allocative efficiency, but it does not follow that competition fails to bring about the most effective adjustment of plans and use of resources where conditions are imperfect. Competition from this Hayekian perspective is an activity, not a state of affairs. Actors compete on a multitude of margins to satisfy the demands of consumers in order to secure profits. "In actual life," Hayek argued,

the fact that our inadequate knowledge of the available commodities or services is made up for by our experience with the persons or firms supplying them – that competition is in a large measure competition for reputation or good will – is one of the most important facts which enables us to solve our daily problems.
The function of competition is here precisely to teach us who will serve us well: which grocer or travel agency, which department store or hotel, which doctor or solicitor, we can expect to provide the most satisfactory solution for whatever particular personal problem we have to face.
(1948: 97)

The market for assurance and trust has existed as long as markets themselves; throughout the history of commerce, businessmen have advertised their trustworthiness as much as their products to potential customers (see Mueller 1999: 21–56, 72–98). Decentralized exchange provides incentives and generates the knowledge required for economic actors to coordinate the plans of the most willing suppliers and most willing demanders so that the mutual gains from exchange are realized. But underlying this process of coordination are also the various ways in which people elicit promise keeping from one another (see Klein 1997 and 2000). We have argued that Akerlof's "lemons" model has indeed identified some of the ways in which markets may stumble on informational grounds, but that Hayek's theory of competitive processes highlights the resilience of the spontaneous institutions of a voluntary society that assure trust within complex and uncertain conditions. In particular, we have seen how various
practices and institutions can serve to ameliorate the problems of asymmetric information about product quality and discipline the behavior of sellers to minimize their ability to cheat their customers. By focusing on the Internet market, we have taken a market which on the face of it seems to be ripe for Akerlof problems, yet we have provided suggestive evidence that an array of Hayek-type solutions emerge to self-police market participants and spread trust and civility in economic affairs.
Notes

Financial assistance from the Oloffson/Weaver Fellowship at George Mason University is gratefully acknowledged.
1 Akerlof (1970: 488).
2 Ibid.
3 See, for example, ActivMedia Research, available at: http://www.activmediaresearch.com/body_free_newsroom.html. eMarketer has also published electronic commerce data, both historical and projections, available at: http://www.individual.com/frames/story.shtml?story=b1031090.801&level3=2848&date=20001101&inIssue=TRUE.
4 "eAuctions bid up online revenue," eMarketer, 17 October 2000. Available at: http://www.emarketer.com/estats/dailyestats/b2c/20001016_auctions.html.
5 ActivMedia Research, op. cit.
6 Hollis contends that since trust promotes market exchange, and since individuals realize that without trust markets inevitably collapse, individuals tend to cooperate for communal reasons, regardless of the costs they internalize from cooperating. It could be argued that Hollis' premise describes "altruistic" behavior, but that would rely on a very myopic view of "self-interest." Ludwig von Mises (1966: 143–76) argued that social cooperation emerges through competitive processes through which men learned how to act in their enlightened self-interest. Adam Smith's discussion of the doux commerce thesis postulates a reputational mechanism that brings self-interest in line with social cooperation. See Shearmur and Klein (1997).
7 On the role of middlemen and speculators within a market economy see Heyne (1994: 173–92). Also see Landa (1994) for an exploration of the role of traders in regimes of contract uncertainty.
8 Kirzner (1973) is the classic reference on the positive "internalist" role of entrepreneurship in the economic system. Schumpeter's focus on the "creative destruction" role of entrepreneurship in terms of innovation within the economic system is "externalist." On these two perspectives on entrepreneurship see Kirzner (1999).
9 Examples include Cendant, the largest relocation firm in the US; USAA, a homeowners' insurance company with military affiliations; and the Navy Federal Credit Union, the latter two providing such relocation services to their policyholders or account holders.
10 See the Book Industry Study Group press release on trends within the industry at http://www.bisg.org/pressrelease_aug21_2000.html. This figure does not include the used, out of print and antiquarian book market.
11 http://www.abebooks.com. Abebooks began as a brick and mortar store in Victoria, BC, specializing in used, out of print, and antiquarian books. The problem the company faced was the same all independent booksellers face in this market: if a customer seeks a certain book and the owner does not have it in stock, the exchange goes unrealized. Consumers face high search costs, and if a bookseller is able to lower those costs of search by matching buyers with sellers, they could realize gains from the resulting exchange. Abebooks created a website where customers can search the inventory of thousands of booksellers, effectuating their transactions online. Abebooks relies on various techniques to signal quality to potential consumers, including the posting of an endorsement by Forbes as one of the best sites to buy on the internet.
12 http://www.bibliofind.com. Now a subsidiary of Amazon.com, Bibliofind has ceased accepting payments from customers after a computer hacker compromised its security system. Customers must now arrange for payment directly with the sellers with whom they were matched through the Bibliofind search. See Abreu (2001) for the story on Bibliofind.
13 http://www.alibris.com. On Alibris's ability to raise $30 million in venture capital funds in 2000 see Landry (2000).
14 Credit card fraud is by far a greater problem for merchants than for consumers (see Bicknell, C., "Credit card fraud bedevils Web," Wired, 2 April 1999; available at: http://www.wired.com/news/print/0,1294,18904,00.html). Credit card information is rarely gleaned while in transit in cyberspace. Rather, it is generally taken from merchants' servers that have failed to adequately protect their sites from hacking (see Gold, S., "Visa invests in online battle against fraud," Newsbytes, 1 March 2001; available at: http://www.newsbytes.com/news/01/162583.html).
15 The strategies invoked involve branding (Abebooks' use of the Forbes endorsement and Bibliofind joining forces with Amazon.com) and guarantees (privacy and consumer satisfaction guarantees are explicit on Alibris).
16 Two cases have received much publicity during the past year. First, a hacker gained access to the web site of CDNow (http://www.cdnow.com), stealing credit card and address information of over 100,000 customers. The hacker attempted to extort money from the web merchant and has been able to remain anonymous to US authorities, finding refuge in another country (assumed to be an Eastern European country, rumored to be a former Soviet bloc country); see Markoff (2000). The second was the breach of Bibliofind's web site already discussed, where hackers had been lurking undetected for about 5 months, appearing to steal credit card and other personal information from the customer database.
17 See Gold, S., op. cit. note 14.
18 Trombly, M. "American express offers disposable credit card numbers for online shopping," Computerworld, 7 September 2000. Available at: http://www.computerworld.com/cwi/Printer_Friendly_Version/0,1212,NAV47_STO49788-,00.html.
19 Delio, M. "Visa to require e-security rules," Wired, 17 November 2000. Available at: http://www.wired.com/news/business/0,1367,38655,00.html.
References

Abreu, E. (2001) "Bibliofind closes its books after hack," The Industry Standard (March 6). Online. Available www.thestandard.com.
Akerlof, G. (1984) An Economic Theorist's Book of Tales, New York: Cambridge University Press.
Alchian, A. (1950) "Uncertainty, evolution, and economic theory," Journal of Political Economy 58(3): 211–21.
Coase, R.H. (1960) "The problem of social cost," Journal of Law and Economics 3: 1–44.
Hayek, F.A. (1948) Individualism and Economic Order, Chicago: University of Chicago Press.
Hayek, F.A. (1967) "The results of human action but not of human design," in F.A. Hayek, Studies in Philosophy, Politics and Economics, Chicago: University of Chicago Press, pp. 96–105.
Heyne, P. (1994) The Economic Way of Thinking, Englewood Cliffs, NJ: Prentice Hall.
Hollis, M. (1998) Trust Within Reason, New York: Cambridge University Press.
Kirzner, I. (1973) Competition and Entrepreneurship, Chicago: University of Chicago Press.
Kirzner, I. (1999) "Creativity and/or alertness: A reconsideration of the Schumpeterian entrepreneur," Review of Austrian Economics 11(1–2): 5–18.
Klein, D. (1997) "Knowledge, reputation and trust, by voluntary means," in Daniel Klein (ed.) Reputation: Studies in the Voluntary Elicitation of Good Conduct, Ann Arbor, MI: University of Michigan Press, pp. 1–14.
Klein, D. (2000) Assurance and Trust in a Great Society, Irvington-on-Hudson, NY: The Foundation for Economic Education.
Landa, J. (1994) Trust, Ethnicity, and Identity: Beyond the New Institutional Economics of Ethnic Trading Networks, Contract Law, and Gift-Exchange, Ann Arbor, MI: University of Michigan Press.
Landry, J. (2000) "Alibris books $30 million for literature," Red Herring (April 5). Online. Available www.redherring.com.
Markoff, J. (2000) "An online extortion plot results in release of credit card data," The New York Times (January 14). Online. Available www.nytimes.com/library/tech/00/01/biztech/articles/10hack.html.
Mises, L. (1966) Human Action: A Treatise on Economics, Chicago: Henry Regnery.
Mueller, J. (1999) Capitalism, Democracy and Ralph's Pretty Good Grocery, Princeton, NJ: Princeton University Press.
Steckbeck, M. (2001) "The spontaneous emergence of cooperation: An empirical analysis of eBay," Ph.D. thesis, Department of Economics, George Mason University.
11 Big players in the ‘new economy’

Roger Koppl and Ivo Sarjanovic
Introduction

The ‘new economy’ has drawn popular attention to the vital role of knowledge in economic affairs. The Austrian tradition of economic thought has long recognized the crucial role that knowledge production and distribution play in economic events. Its founder, Carl Menger, placed human knowledge and its growth at the centre of his theory. In the oral tradition of the Austrian school, political activism tends to frustrate and corrupt the market processes producing and distributing knowledge. Thus, as knowledge grows in importance, societies pay an ever-higher price for activist government policies. The Austrian oral tradition warns us that the new economy needs a new commitment from governments to choose rules over discretion. Austrians should translate their oral tradition into testable theories about the use of knowledge in society. The theory of ‘big players’ is an example. The theory of big players clarifies one mechanism behind the bad consequences of government activism. Big players are privileged actors who employ discretion in the exercise of their power. Because big players are hard to predict, they create ignorance and uncertainty in the markets they affect. In financial markets, this ignorance and uncertainty encourages herding; the theory of big players thus helps to explain herding in financial markets. The theory has not yet been tested in commodities markets. Using R/S analysis, we conduct such tests on wheat futures. The test results fit the predictions of the big players theory. (Popperian methodologists would say the theory is ‘not falsified’.) Our study thus gives theoretical and empirical support to Austrian doubts about activism in government policy. As we explain below, it gives some empirical support to the Austrian suggestion that activist government policies may extract a higher price in the new economy than in the old economy.
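The R/S (rescaled-range) tests referred to above can be sketched in a few lines. The following is a minimal illustration of the classical rescaled-range statistic and the Hurst exponent estimated from it, under our own simplifying choices (dyadic window sizes, non-overlapping windows); the function names are ours, and this is not the authors' actual test code.

```python
import numpy as np

def rescaled_range(x):
    """Classical R/S statistic for a 1-D series: range of the
    mean-adjusted cumulative sum, divided by the standard deviation."""
    x = np.asarray(x, dtype=float)
    z = np.cumsum(x - x.mean())   # cumulative deviations from the mean
    r = z.max() - z.min()         # range of the cumulative deviations
    return r / x.std()

def hurst_exponent(x, min_window=16):
    """Estimate the Hurst exponent H as the slope of log(R/S) on log(n),
    averaging R/S over non-overlapping windows of each length n."""
    x = np.asarray(x, dtype=float)
    sizes, avg_rs = [], []
    n = min_window
    while n <= len(x) // 2:
        chunks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        avg_rs.append(np.mean([rescaled_range(c) for c in chunks]))
        sizes.append(n)
        n *= 2                    # dyadic window sizes
    slope, _ = np.polyfit(np.log(sizes), np.log(avg_rs), 1)
    return slope

rng = np.random.default_rng(0)
returns = rng.standard_normal(2048)      # stand-in for futures returns
print(round(hurst_exponent(returns), 2)) # white noise: H close to 0.5
```

A Hurst estimate near 0.5 indicates no long memory; persistently higher estimates in big-player periods would be the pattern such tests look for.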
The new economy

Transaction costs are lower in the new economy

The most characteristic feature of the new economy may be its low transaction costs. Advances in information technology are an important cause of reduced
transaction costs. Lower transaction costs have led to an increase in the relative value of knowledge. The facts are sometimes exaggerated. In the past, we are told, people produced goods; they organized matter. Now, people produce knowledge; they organize information. We believe this distinction is mistaken. As Say argued, ‘Production is the creation, not of matter, but of utility’ (1971: 62). In the ‘old economy’, the salient features of production may have been physical exertions and transformations of matter. But the intellectual dimension was always present. Without a plan to guide production, no increase in utilities can be expected. In the ‘new economy’, the salient features of production may be the organization of information. But the material dimension is still present. The new economy has merely shifted the balance a bit more toward knowledge. This shift is consistent with the reduced costs of information. It has caused an increase in the average ratio of intangible assets to tangible assets. A firm’s know-how and its place in an industry network may account for a larger fraction of its value than was typical in the past. A noteworthy shift has occurred. It is not, however, something radically new.

The new economy was familiar to Menger and Smith

If the new economy is not new, discussions of it might be found in the history of economic thought. Carl Menger and Adam Smith wrote about the central role of knowledge in social and economic life. For Carl Menger, founder of the Austrian school, ‘Nothing is more certain than that the degree of economic progress of mankind will still, in future epochs, be commensurate with the degree of progress of human knowledge’ (1981: 74). Menger’s four ‘prerequisites’ of ‘goods-character’ include ‘Human knowledge of [the] causal connection’ between a thing and ‘a human need’ (1981: 52).
He criticized ‘the materialistic bias of [his] time which regards only materials and forces (tangible objects and labor services) as things and, therefore, also as goods’ (1981: 54). For Menger, ‘monopolies, copyrights, customer good-will and the like’ are certainly goods (1981: 54). ‘Even relationships of friendship and love, religious fellowship, and the like’ can be classed as ‘goods’ in economic theory if they ‘are of such a kind that we can dispose of them’ (1981: 55). Knowledge is the key for Menger; even the division of labour must take a back seat to progress in knowledge (1981: 73). Knowledge is important in the new economy. It was also important in the old economy. Before Menger was Smith. Adam Smith listed three reasons for the increase of wealth produced by the division of labour. The third was that ‘labour is facilitated and abridged by the application of proper machinery.’ He elaborates on the point by noting that the invention of useful machines ‘seems to have been originally owing to the division of labour.’ When workers specialize, they ‘are much more likely to discover easier and readier methods of attaining any object’. As the division of labour progresses, the system ramifies. Speculation becomes a specialism. Like every other employment too, it is subdivided into a great number of different branches, each of which affords occupation to a peculiar tribe or class
of philosophers; and this subdivision of employment in philosophy, as well as in every other business, improves dexterity, and saves time. Each individual becomes more expert in his own peculiar branch, more work is done upon the whole, and the quantity of science is considerably increased by it. (Smith 1937: 9)

The division of labour creates a division of knowledge. The growth of knowledge is a necessary part of economic progress.

What difference does the new economy make?

Transaction costs are low in the new economy. This fact produces two competing tendencies. Assuming away innovation, low transaction costs should lead to a better approximation to general economic equilibrium. They also encourage innovation. The tendency toward general equilibrium follows from the Coase theorem.1 If transaction costs were zero, resources would be allocated efficiently, regardless of the distribution of property rights. Lower transaction costs imply fewer idle resources. Widespread and persistent unemployment is less likely. Prudent inventory levels are lower. Because financial intermediation is cheaper, more of it occurs. Partly, this increase takes the form of financial innovations such as derivatives. It leads to greater participation in financial markets by ordinary households. Reduced inventories also encourage financial intermediation. Inventories tax our patience; they tie up value over time. The value previously tied up in inventories can be redirected to the purchase of financial assets, or to direct investment. The second tendency is somewhat inconsistent with the first. Lower transaction costs encourage innovation. Lower information costs make it easier for entrepreneurs to receive and process information. Searching more of the market and searching known parts more minutely increases the chance of seeing an opportunity. It is easier to coordinate the actions of many dispersed actors whose cooperation is needed to launch a new idea.
Advertising is cheaper; thus, it is easier to get the word out. The more sophisticated methods of calculation made possible by lower costs of calculation allow entrepreneurs to make more precise and reasonable estimates of profitability. These superior estimates reduce the uncertainty entrepreneurs must bear and thus encourage enterprise. The overall implication seems to be that the system will be more dynamic. The new economy is more dynamic than the old economy. It probably cannot be decided, however, whether the new economy will have more coordination or less coordination than the old economy. If our accounting is about right, two implications seem to follow. First, in the new economy, the value of the innovations lost to discretionary policy will be greater. If the new economy is more dynamic, then there will be more innovations to be discouraged by big player activity. Second, the value lost to discoordination of plans caused by discretionary policy will be less. Discretionary policies will throw fewer people out of work, but they will strangle more useful innovations.
Our conjectures about the new economy and discretionary policy are probably not subject to direct test. We cannot observe the innovations that do not occur because of big player influence. Nor do we have a useful measure of our distance from general equilibrium.2 But we can make an indirect test. We can test the claim that the discretionary policies of ‘big players’ encourage herding in financial markets by creating ignorance and uncertainty. If empirical studies give strength to the idea that big players create ignorance and uncertainty, then the general considerations we offered above would suggest that discretionary policy does indeed extract a higher price in the new economy than in the old economy. This approach gives us a kind of proxy for a more direct test. We turn, therefore, to the theory of big players.
The theory of big players

A big player is defined by Koppl and Yeager (1996) as ‘anyone who habitually exercises discretionary power to influence the market while himself remaining wholly or largely immune from the discipline of profit and loss’. Big players can disrupt order in the market process. In the short run, a private actor (such as a fund) may show big player features. In the long run, however, it is difficult for a private actor to challenge the profit and loss screening mechanism. When they use discretion, government entities are typical big players. A paradigmatic case is a central bank (even if independent of political influences) not bound by a general rule. An orthodox currency board, on the other hand, would be an example of a ‘ruled’ body; it operates without discretion. In the absence of big players, and given the appropriate institutional framework, markets would be expected to exhibit tendencies towards coordinated outcomes. Well-defined and enforceable property rights, stable constitutional rules and competition are filtering conditions that promote the efficient production and distribution of knowledge. Markets then would generate prices and other signals that tend to reflect underlying economic fundamentals. If the filter works badly, however, prices may wander freely from fundamental values. Whether the filter works well or not is an empirical question. We do not need to assume the attainment of fully coordinated outcomes, but the market process under the conditions described would be orderly and error correcting. In the presence of a big player the market process changes. Big players are actors who combine three features. (1) Big players are big because they have the power to influence the market(s) where they operate. (2) Big players are insensitive to profit and loss. They are funded by a pool of fiscal resources. (3) Big players are discretionary. They do not follow rules, nor do they follow any strategy of profit maximization.
A big player’s actions will affect market signals. The filter of profit and loss is corrupted when big players derange markets. Hence, prices will no longer accurately reflect market fundamentals, as they did when they were generated by decentralized, self-interested entrepreneurs seeking to discover unexploited arbitrage and speculative opportunities. The big player’s actions short-circuit, at
least partially, the advantages derived from the division of knowledge. Now prices will also reflect the big player’s influence. Sometimes these cases are empirically evident, as when a central bank uses its discretion to move short-term interest rates or when it tries to defend an exchange rate. Other cases are more difficult to isolate empirically, such as the impact of certain fiscal decisions on interest rates. What matters in all cases is not only that the big player’s action will affect in some direct way an economic variable, but that, in addition, the quality of the affected signals (e.g. a price) itself becomes distorted. Thus, the epistemic value of market signals is diminished. The big players theory has already been applied successfully to financial markets. Koppl and Yeager (1996) study an important episode of Russian nineteenth-century monetary history using data gathered by Yeager. Koppl and Nardone (2001) and Broussard and Koppl (1999) apply different statistical techniques to Yeager’s data. Ahmed et al. (1997) have studied the behaviour of closed-end country funds in the late 1980s. Koppl and Mramor (2003) study a recent episode in Slovenian monetary history. Gilanshah and Koppl (forthcoming) study US money demand from 1950 to 1990. Finally, in the rest of this chapter, we apply the theory for the first time to commodity futures prices, in this case wheat, a highly politicized market.
Herding and contra-herding as error duration

Koppl (2002: 135–8) argues that Parke’s ‘error duration’ model (1999) gives us a simple model of the herding and contra-herding produced by big players in asset markets.3 In Parke’s error duration model, ‘the basic mechanism . . . is a sequence of shocks of stochastic magnitude and stochastic duration. The variable observed in a given period is the sum of those shocks that survive to that point’ (1999: 632). The model will produce long memory if the probability of an error enduring \(k\) periods, \(p_k\), declines slowly as \(k\) grows. The values \(p_k\) are called ‘survival probabilities’. When autocorrelations exist, their rate of decline is determined by partial sums of the survival probabilities. Koppl models bubbles as error duration processes (2002: 135). Big players increase the survival probabilities of the errors and decrease their rate of decline. With big players, Koppl explains, noise stays in the system longer (2002: 135). Thus, big players increase volatility and herding. Big players may also induce or increase long memory. Contra-herding is consistent with this error duration model. The price of an asset can be described by
\[ P_t = F_t + B_t + \varepsilon_t, \]

where \(P_t\) is the asset’s price, \(F_t\) is its fundamental value, \(B_t\) is a bubble and \(\varepsilon_t\) is an i.i.d. error term. The bubble can be modelled as an error duration process. Following Parke, let \(\{\epsilon_t;\ t = 1, 2, \ldots\}\) be a series of i.i.d. shocks having mean zero and common variance \(\sigma^2\). Each error is a price movement induced by news that does
not convey new information about the asset’s fundamental value. These errors are distinct from the \(\varepsilon_t\). Koppl posits two distinct sets of traders in the market, namely fundamentalists and noise traders (2002: 136). (This common assumption is used, for instance, by Day and Huang 1990.) The fundamentalists correctly estimate \(F_t\) up to an i.i.d. error, \(\varepsilon_t\). Noise traders behave like fundamentalists except that they respond to false signals ignored by fundamentalists. Through its effect on noise traders, each false signal induces a price change of \(\epsilon_t\). Some of these may be movements in excess of the level properly implied by the news to which fundamentalists respond. Each error \(\epsilon_s\) lasts \(n_s\) periods beyond \(s\), where \(n_s\) is a random variable. The error disappears when noise traders revise their initial interpretation of the false signal. Following Parke, let \(g_{s,t}\) denote ‘an indicator function for the event that error \(\epsilon_s\) survives to period \(t\)’ (1999: 632). The indicator, \(g_{s,t}\), switches from one to zero after period \(s + n_s\) and stays at zero thereafter. The indicator is one as long as the error endures. Assume \(\epsilon_s\) and \(g_{s,t}\) are independent for \(t \ge s\). The probability of an error enduring at least \(k\) periods is \(p_k\). Thus, \(p_k = P(g_{s,s+k} = 1)\). Clearly, \(p_0 = 1\) and the series \(\{p_0, p_1, p_2, \ldots\}\) is monotone non-increasing. The realization of the process is just the sum of the errors. Thus,
\[ B_t = \sum_{s=-\infty}^{t} g_{s,t}\,\epsilon_s. \]
As Parke notes, the survival probabilities ‘are the fundamental parameters’ of the model (1999: 632). He shows (Parke 1999: 632–3) that, if the autocovariances exist, they are given by

\[ \gamma_k = \sigma^2 \sum_{j=k}^{\infty} p_j. \]

If \(\lambda\) exists, the variance is \(\sigma^2(1 + \lambda)\), where

\[ \lambda = \sum_{i=1}^{\infty} p_i. \]

In this case, the first-order autocorrelation is \(\rho_1 = \lambda/(1 + \lambda)\). The process has a long memory if \(\lim_{n \to \infty} \sum_{k=1}^{n} k\,p_k\) is infinite (Parke 1999: 633). Assume the survival probabilities have the form \(p_k = k^{-2+2d}\) with \(d < 1/2\). In that case \(\lambda\) and the \(\gamma_k\) exist. The process has a long memory for \(0 < d < 1/2\), and a short memory if \(d \le 0\). (If \(1/2 \le d \le 1\), \(\lambda\) and the \(\gamma_k\) do not exist. If \(d > 1\), \(p_k > 1\).) In this model, a sufficiently small (i.e. large, negative) value for \(d\) will make any bubbles difficult to detect; \(p_k\) will be close to zero even for small \(k\). As \(d\) grows,
bubbles are increasingly large and persistent. If \(0 < d < 1/2\), the bubble process is stationary with a long memory. In Koppl’s view, many of the effects of big players can be represented by an increase in \(d\) (2002: 137). A larger \(d\) means larger survival probabilities. The increase of \(d\) results from the greater ignorance of traders regarding the meaning of any piece of news. Noise traders will thus take longer, on average, to revise their interpretations of false signals. The increase in \(d\) raises the autocorrelations of the bubble process. If the increase is from a value below 0 to one above 0, then the big player will produce long memory. If long memory is already present, he will cause it to increase. It is perfectly possible, however, that the big player will cause \(d\) to increase from one value below 0 to another value that is still below 0. Big players cause or increase herding, but they need not cause long memory. The increase in \(d\) will cause an increase in \(\lambda\) and thus in \(\sigma^2(1 + \lambda)\), the variance of the bubble process. Thus, big players cause an increase in volatility. Big players may also cause an increase in \(\sigma^2\), the variance of the \(\epsilon_t\). Assume \(\epsilon_t\) is the sum of several independent random variables, each corresponding to a process unrelated to the asset’s fundamental value. Big players may increase the number of unrelated processes noise traders respond to. In that case, the variance of the \(\epsilon_t\) will grow.4 This representation of the effects of big players is consistent with contra-herding, according to Koppl (2002: 137–8). One of the false signals followed by noise traders may be last period’s movement, \(u_{t-1}\), in the non-bubble part of the asset’s price. Some noise traders will see a trend. Others may expect a ‘correction’. If the sheep prevail, \(\epsilon_t = u_{t-1}\). If the contrarians prevail, \(\epsilon_t = -u_{t-1}\). In either event, \(|\epsilon_t| = |u_{t-1}|\).
(More generally, the error \(u_{t-1}\) will be added to or subtracted from \(\epsilon_t\) according to whether sheep or contrarians prevail.) The winning interpretation will then endure for \(n_t\) periods beyond \(t\).
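The error duration mechanism is easy to simulate. The sketch below is our own illustration, not the authors' code: it draws shock durations from the survival probabilities p_k = k^(-2+2d) and sums the surviving shocks to form the bubble, so that a larger d yields longer-lived errors and a more volatile, more persistent bubble.

```python
import numpy as np

def draw_duration(u, d):
    """Duration n of a shock, for d < 1/2: the shock survives at least k
    periods with probability p_k = k**(-2 + 2*d), so n = max{k : p_k >= u}."""
    u = max(u, 1e-12)  # guard against u == 0
    # p_k >= u  <=>  k <= u**(1/(2d - 2)); the exponent is negative, so n >= 1
    return int(u ** (1.0 / (2.0 * d - 2.0)))

def simulate_bubble(T, d, sigma=1.0, seed=0):
    """B_t = sum of the shocks eps_s still surviving at period t."""
    rng = np.random.default_rng(seed)
    bubble = np.zeros(T)
    alive = []  # pairs (shock value, last period it survives)
    for t in range(T):
        eps = rng.normal(0.0, sigma)
        n = draw_duration(rng.uniform(), d)
        alive.append((eps, t + n))
        alive = [(e, last) for (e, last) in alive if last >= t]
        bubble[t] = sum(e for e, _ in alive)
    return bubble

# A big player raises d: errors linger, so the bubble is more persistent.
calm = simulate_bubble(4000, d=-1.0)   # survival probabilities die off fast
big_player = simulate_bubble(4000, d=0.4)
print(calm.var(), big_player.var())    # variance rises with d
```

With the parametric form above, the theoretical variance of the bubble is \(\sigma^2(1+\lambda)\), so the sample variance of the high-\(d\) run should exceed that of the low-\(d\) run, matching the claim that big players increase volatility.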
The wheat market

Our hypothesis is that the world wheat market was and is inhabited by big players of different sizes, but that the big player influence fell after 1990. Practically no agricultural market in the world is totally free of government intervention. As an example, OECD countries spent during the year 2000 US$ 370 billion on different kinds of subsidies (minimum prices for farmers, export subsidies, soft credits, etc.). There are also regulations such as export and import licenses, tariffs and quotas affecting the commercial flows. The wheat market is the one most affected by the presence of players who are more or less immune to the tight mechanism of profit and loss. Wheat, as bread or pasta, is one of the main sources of carbohydrates in the world population’s diet. In some places this role is also performed by rice. Wheat is the ‘energy’ input that our body needs to work properly. Not all the wheat, but most of it, is used as a food grain. However, a share of the world’s production is used as a feed grain. As a basic food ingredient, wheat is a very political animal. Therefore, governments around the globe have been, and are, very involved in guaranteeing its availability on the supply side, mainly via minimum prices, and on the demand side, in trying to make it available to the main sectors of the population as cheaply as possible, via subsidies.

The world wheat production of about 500 million metric tons (denoted myn mt) is widely spread among different nations of the world. Today, the main producers are China (100 myn mt), the European Union (90 myn mt), India (70 myn mt) and the USA (60 myn mt). The world trade flow (exports and imports net of domestic utilization) is about 90 myn mt. The main exporters are the USA (30 myn mt), Australia (16 myn mt), Canada (14 myn mt), Argentina (10 myn mt) and the European Union (10 myn mt). Other than Argentina, which is a real free market, all the other countries suffer different degrees of intervention in their markets. Among the main importers we can list Brazil (7.5 myn mt), Iran (7 myn mt), Egypt (6.2 myn mt), Japan (6 myn mt) and Indonesia (3.5 myn mt). Most of the big producers used to subsidize production heavily via minimum prices. These subsidies generate excessive supplies that are sold into world markets at parities that reflect a discount relative to domestic prices. In the last decade, thanks mainly to the Uruguay Round of the GATT, subsidies have been decreasing and therefore the involvement of governments in export markets is less than before. The 1980s saw a heavy subsidy war to gain market share, especially between the USA (through the USDA, the United States Department of Agriculture) and the European Union (through the Agriculture Commission in Brussels). Among the different export support programs promoted by the US government (PL-480, GSM, donations, etc.), the Export Enhancement Program (EEP) was by far the most actively utilized and most popular one. It was available from 1985 and was used for the last time in 1994. The mechanism of the EEP was the following. An international buyer checks prices with different US exporters and one day decides to give a bid for a US commodity under EEP.
The exporters estimate the subsidy needed to make the sale abroad possible. The subsidy is the price difference between the domestic price and the export price. The USDA then decides whether to grant the subsidy and, if so, for what volume. This decision is absolutely discretionary, and it is what makes the USDA a big player in the wheat market. The USDA decided how to allocate these subsidies, especially monitoring the actions of its rival, the European Commission, which still follows a very similar approach (but less actively than in previous years). Every Thursday, the commission in Brussels grants to exporters a subsidy called a restitution to cover the price difference between local and international prices. Trading houses bid for restitutions, which, once granted, are used to make exports of European origin to world markets viable. The USA and Europe allocate these funds from their budgets to those sellers who request the smallest subsidy to make the operations feasible. Let us assume that the free market price of US wheat delivered to Morocco was US$ 150/mt on a cnf basis, but Morocco was buying French wheat (subsidized by the European Union) at US$ 100/mt. Then if a Moroccan buyer decided to shop in the USA, they would give a bid to a private US exporter, who would ask the USDA for a subsidy of US$ 50/mt to make it competitive with the European origin. But you do not know in advance whether the USDA will decide to approve the subsidy in order to compete with the European wheat, how big the subsidy is going
to be, for how much volume they are ready to do it, or for how long they will subsidize sales to that destination. Thus, there is a considerable degree of discretion. Through the amount of subsidies the USDA can decide how much is going to be exported, when, and at what final price, drastically affecting the final stocks situation, which is a fundamental element in determining price direction. The impact on the futures markets is the following. One morning you arrive at the office and the USDA announces that they have decided to accept bids for 1 million metric tons of wheat to Egypt at a US$ 40/mt subsidy (in order to make it competitive versus French wheat). You know shippers who sold that amount, subject to the EEP allocation, will need to hedge their sales by buying futures in Chicago. But the final say on the sale is in USDA hands, not the sellers’. On the other hand, that same morning you could find the USDA declining bids for Brazil at a US$ 20/mt subsidy because, for political reasons, they want to avoid problems with the Argentine government. (Argentina is the traditional wheat supplier of Brazil.) And surely, on the same date you will have some buyers from Central America buying wheat at free market prices because, unfortunately for them, none of the other big players in the market showed any interest in selling cheap wheat to those destinations. The European Union has full discretion to decide every Thursday whether they are going to grant restitutions, and if so for what amount and for what volume. Some days nothing is allocated. Other days they accept bids for, say, 100,000 mt at a US$ 20/mt subsidy. On still other days, if they decide to get rid of a good proportion of their stocks, they can accept 2 myn mt with subsidies ranging from US$ 20/mt to US$ 30/mt.
The European Union also has the power to apply a ‘negative restitution’, which is in fact an export tax, if the world markets are quoting at higher values than the local European markets, in order to ‘protect’ local availability and to ‘avoid inflationary pressures’. To draw a parallel with financial markets, it would be similar to having an official bank managed by politicians who can decide at their discretion (although within the limit of a budget allocation) whether they will lend money more cheaply than the market, which sectors of the economy they will stimulate, for what amount, for how long, etc. In the case of the USA, the subsidy received needs to be used to sell to a specific destination; in Europe this is not the case, and exporters can decide how to use them. Canada and Australia, the other major wheat exporters, have their exports monopolized by grain boards that buy all the national production from the farmers and sell it into the world market. These entities behave as price discriminators because they sell to different destinations at different levels, following how the USA and Europe price their sales to those places. If the Canadian and Australian boards lose money, they receive direct support from their treasuries. Such occasions, however, have been rare in recent history. We can then say that, with the sole exception of Argentina (whose grain industry was also nationalized until 1976), most of the wheat supply in the world is subject to political forces and therefore not sensitive to tight economic feedback. Nevertheless, the reduced role of government subsidies since about 1990 has marked a significant decrease in big player influence on wheat futures.
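The subsidy mechanics described above reduce to simple arithmetic: the requested subsidy bridges the gap between the free market price and the subsidized competing offer, and the granting authority fills the cheapest requests first. The sketch below is our own illustration of that logic; the function names and the tender figures are hypothetical, with the US$ 150/mt and US$ 100/mt prices taken from the Morocco example.

```python
def eep_subsidy(free_market_price, competing_price):
    """Per-ton subsidy (US$/mt) an exporter would request to match a
    subsidized competing origin; zero if already competitive."""
    return max(0.0, free_market_price - competing_price)

def allocate_restitutions(bids, volume_cap):
    """Grant the bids requesting the smallest subsidy first, until the
    tendered volume cap is exhausted. bids: list of (subsidy, volume)."""
    granted, remaining = [], volume_cap
    for subsidy, volume in sorted(bids):
        if remaining <= 0:
            break
        take = min(volume, remaining)
        granted.append((subsidy, take))
        remaining -= take
    return granted

# Morocco example: US wheat at US$ 150/mt cnf against subsidized French
# wheat at US$ 100/mt implies a requested subsidy of US$ 50/mt.
print(eep_subsidy(150.0, 100.0))  # 50.0

# A Thursday tender (hypothetical figures): 100,000 mt available.
print(allocate_restitutions([(25, 50_000), (20, 80_000), (30, 40_000)], 100_000))
```

Whether the authority actually grants the request is, as the chapter stresses, discretionary; no formula captures that step, which is precisely what makes the USDA and the European Commission big players.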
A similar reduction in big player influence has occurred on the demand side. Until the early 1990s, before the privatization wave, governments were mainly in charge of buying wheat. Private flour millers were rarely involved in international transactions. A quick summary of the 1970s and 1980s shows that government boards made about 80 per cent of the purchases. Among the biggest were the USSR, Brazil, China, Egypt, Pakistan, Iran and Indonesia. Buying was mainly a ‘physical’ decision, not an economic choice. There were needs and they had to be filled to preserve social peace, certainly as cheaply as possible so as not to waste public funds, but without any relation to industry profitability, margins, risk management criteria, exposure, etc. Today less than 50 per cent of imports are still handled by public officials. Big markets like Brazil, Indonesia and partially Egypt are already in private hands. These changes have had a big impact on buying decisions. Purchases are much more price conscious. Volume is not the only driver. Traded lots are smaller because buyers are more atomized. The timing of purchases is very different; it is linked more with the margin prospects of the industry, which pays closer attention to risk management. Purchases are spread more smoothly over time. The trader has a different risk perception of the different members of the industry instead of a common approach to country risk. And so on. The 1990s show a big difference from the market of the 1970s and 1980s. The demand side is more private, and the presence of big public buyers more limited. The USDA finished their EEP programme in 1994, and Europe intervenes less because, with lower production subsidies, they have lower export surpluses.
This does not mean at all that agricultural markets are not distorted by subsidies (which are still very heavy and widely applied), but the presence of big players is less evident.5 Figure 11.1 plots wheat prices from 1970 to 2000. (A value of 1000 corresponds to $1.00 per bushel. All prices are for contracts traded on the Chicago Board of Trade.)

Figure 11.1 Wheat prices from 5 January 1970 to 22 June 2000. Note: Vertical bars indicate the beginning and end of 1990, whose data were omitted from the study.

Big players in the new economy 241

What hints do market players use to figure out the potential activities of the big players in the wheat market? Among others: their fiscal capacity to grant subsidies; how the agricultural budget is discussed; how the budget is allocated among different programmes, and by whom and when those decisions are made; the domestic political situation of different countries, in an attempt to predict their priorities and urgencies (guns or butter?); the political relationships between producer and consumer countries (via purchases, importers could decide to strengthen relationships with a specific exporter regardless of relative prices); the availability of soft credits; public statements by the officials in charge of export and import programmes; trips to key exporting nations hinting that some deal will be negotiated; who is in charge of the grain boards; the political willingness of big producers to grant food donations; embargo decisions such as the US embargo of the USSR (the biggest wheat buyer in the world at that moment) in the early 1980s following the invasion of Afghanistan; and so on. Dealing with big players is much more complex than trying to figure out when a price will be considered high enough to induce sellers to market their crops or make buyers reluctant to move, or when a price will be perceived as cheap enough to induce farmers to hold stocks and buyers to look for coverage.

Not all futures markets have had the big player influence of the wheat market. The soybean market is very different from the wheat market in terms of political involvement. Most of the beans grown in the world are crushed to convert them into by-products: they contain about 17 per cent crude oil and 80 per cent meal. Only a very minor proportion of these oilseeds is used directly as human food, mainly in the Far East to prepare tofu and tempeh. Oil, once refined, is consumed directly by people, and the meal is used to feed animals (mainly hogs and poultry) as a source of protein.
The relative contributions of oil and meal to the seeds' value vary over time, but in general more than 60 per cent of the soybean's value is 'imputed' from the meal leg. Given that meal must be fed to animals first, and only after a certain period of time are the animals ready to be consumed as food by humans, beans stand one stage higher than wheat in a Hayekian structure of production. Moreover, oil and meat are consumed mainly by people of relatively high income, while bread is vital for people of low income. Wheat is therefore a deeper political issue than soybeans. Wheat availability, perceived by governments as a problem, requires in the minds of politicians a more centralized solution: they cannot afford to trust the invisible hand to guarantee its supply. One big difference between these two markets is that the world production of wheat is much more decentralized geographically. Moreover, wheat has two crops a year, a winter crop and a spring crop. These facts explain why weather risks differ for wheat and soy. World soybean production was basically located in the USA until the early 1980s, and it is still concentrated in the Americas: the USA and South America. The main consequence of this fact is that weather concerns in the soybean market affect the CBOT futures market more directly. In other words, the soybean market is much more volatile than the wheat market with respect to weather. The relatively low importance of weather in governing world wheat prices gives us a greater chance of isolating the effects of politics on the wheat market.
The picture we have painted is one of a substantial big player influence in the wheat market. But it is also a picture of reduced big player influence in the 1990s. The theory of big players, then, predicts a reduction in the amount of herding after 1990. Our statistical test follows.
Statistical methods and results

Our review of the wheat market suggests that there was a decrease in big player influence after 1990. The theory of big players therefore predicts a decrease in herding. To test for herding we ran a rescaled range (R/S) analysis on wheat data from the Chicago Board of Trade. The method of R/S analysis has been explained in a number of big player studies, including Ahmed et al. (1997); Gilanshah and Koppl (forthcoming); Koppl and Mramor (2003); and Koppl and Yeager (1996). The essence of the technique is to see whether the accumulations of the series under study wander around too much to have been produced by a random walk. Rescaled range analysis tests for 'persistent dependence' in time-series data. The concept was invented by H.E. Hurst (Hurst et al. 1965). The use of R/S analysis as a test was developed by Mandelbrot and others (Mandelbrot 1971, 1972; Mandelbrot and Wallis 1968, 1969a, 1969b). Persistent dependence in time-series data creates 'aperiodic cycles', irregular ups and downs in the data that cannot be attributed to 'short-period' autocorrelation. Statistically, persistent dependence is the failure of the error terms to die off geometrically or faster. In the case of interest to us, positive persistence, the series moves up and down in long waves that are inconsistent with short-memory processes. Long-memory processes are 'nonergodic'; short-memory processes are 'ergodic'. The condition that the error terms die off at least geometrically is expressed in the following equation.
ρ(k) ≤ Cr^k,

where C is positive and r is between zero and 1 (Brockwell and Davis 1991: 520). R/S analysis tests for persistence assuming the series is 'stationary' in the sense that the expected value of any function of {X_t} is the same as for {X_{t−k}}. Let us consider such a series {X_t} for which we have a sample of size T. The cumulative sum of the series, X*(t), is just the sum of the values up to t:

X*(t) = Σ_{u=1}^{t} X_u,

where t ≥ 1. For t = 0 we define X*(t) = 0. For any interval of length s beginning at t, the range of the interval is
R(t, s) = max_{0≤u≤s} { X*(t + u) − X*(t) − (u/s)[X*(t + s) − X*(t)] }
        − min_{0≤u≤s} { X*(t + u) − X*(t) − (u/s)[X*(t + s) − X*(t)] }.
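As a minimal numerical sketch of the formula above (the function and variable names are our own, not the authors'; NumPy is assumed), R(t, s) is the spread of the cumulative series' deviations from the straight line joining X*(t) to X*(t + s):

```python
import numpy as np

def interval_range(x_star, t, s):
    """R(t, s): range of deviations of the cumulative series x_star
    from the chord joining x_star[t] to x_star[t + s]."""
    u = np.arange(s + 1)
    dev = x_star[t + u] - x_star[t] - (u / s) * (x_star[t + s] - x_star[t])
    return dev.max() - dev.min()

# Build X*(t) for a simulated stationary series {X_t}, with X*(0) = 0.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
x_star = np.concatenate(([0.0], np.cumsum(x)))

r = interval_range(x_star, t=0, s=100)
```

Because the deviations are zero at u = 0 and u = s, the range is always non-negative; it is zero only if the cumulative series lies exactly on the chord.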
To rescale the range, divide through by the sample standard deviation, S(t,s), of the original series {Xt}. The ratio R(t,s)/S(t,s) is called the ‘rescaled range’. This ratio grows with interval length, s. Mandelbrot and Wallis (1969b) report that, in the limit, the rescaled range is proportional to a power of s. In other words,
R(t, s)/S(t, s) ~ Cs^h,

where C > 0 and 0 < h < 1. The Hurst coefficient, h, is a measure of persistence. If the series is ergodic, h = 1/2. If the series has 'negative persistence', h < 1/2. Finally, if it has 'positive persistence', h > 1/2. Figure 11.2 illustrates R/S analysis. In the figure, X* is plotted on the vertical axis and time on the horizontal axis. The line showing the cumulative values of X_t goes up and down because {X_t} is stationary, with some values positive and some negative. The range, R(t, s), of an interval is the vertical distance between two straight lines tangent to the cumulative series and parallel to the straight line connecting X*(t) and X*(t + s). The range just measures how much the cumulative series deviates from trend over an interval. The rescaled range adjusts this figure to correct for the size of the variance of the original series over the interval. Herding would create positive persistence; the rescaled range would grow rapidly with interval length, s.

Figure 11.2 Rescaled range analysis.

Using the 'classic Hurst', we estimated the persistence in our wheat data. Our reasons for preferring the classic Hurst to Lo's technique are the same as those given in Ahmed et al. (1997). Our data were prices of wheat as set on the Chicago Board of Trade. Given the importance of the USA as a producer and exporter in the world wheat market, the Chicago prices tend to be considered good proxies for world prices. We started with each day's closing price, beginning with 5 January 1970 and ending with 22 June 2000. We then calculated the return for each day except the first. The return is the change from the previous day's price divided by the previous day's price. We tested for persistent dependence in the return series created in this way. Recall that 1990 was the transition year from a regime of relatively large big player influence to a regime of relatively small big player influence. We chose, therefore, to omit 1990 from our sample and to look only at the period from 1970 to 1989 and the period from 1991 to 2000. The results of our test are reported in Table 11.1. Our result is consistent with the theory of big players.

Table 11.1 Estimated values of the Hurst coefficient (standard errors are in parentheses)

          1970–89    1991–2000
GH(10)    0.596      0.558
          (0.008)    (0.008)
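The procedure just described — daily returns, rescaled ranges over intervals of varying length, and a log–log fit for h — can be sketched as follows. This is an illustrative reconstruction under our own assumptions (simulated i.i.d. data, non-overlapping windows, an ordinary least-squares fit), not the GH(10) estimator actually used for Table 11.1:

```python
import numpy as np

def daily_returns(prices):
    """Return series: (p_t - p_{t-1}) / p_{t-1}, one value per day after the first."""
    p = np.asarray(prices, dtype=float)
    return np.diff(p) / p[:-1]

def rescaled_range(x):
    """R/S statistic for one interval of a stationary series x."""
    z = np.cumsum(x - x.mean())        # cumulative deviations from the interval mean
    return (z.max() - z.min()) / x.std()

def hurst(x, window_sizes):
    """Classic Hurst estimate: slope of log(mean R/S) against log(s),
    averaging over non-overlapping intervals of each length s."""
    log_s, log_rs = [], []
    for s in window_sizes:
        chunks = [x[i:i + s] for i in range(0, len(x) - s + 1, s)]
        log_s.append(np.log(s))
        log_rs.append(np.log(np.mean([rescaled_range(c) for c in chunks])))
    slope, _intercept = np.polyfit(log_s, log_rs, 1)
    return slope

rng = np.random.default_rng(42)
noise = rng.standard_normal(4000)      # short-memory benchmark series
h = hurst(noise, window_sizes=[16, 32, 64, 128, 256])
```

For i.i.d. noise the estimate should come out near 1/2 (small samples bias it somewhat upward), while a persistent series would give h clearly above 1/2, as the wheat returns do in Table 11.1.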
Conclusion Our results give empirical support to the idea that discretionary policy does induce ignorance and uncertainty and, therefore, herding in financial markets. Previous studies have tested this claim in the foreign exchange market of Czarist Russia, closed-end country funds in the period 1989–90, US money demand in the postwar period, and the Slovenian stock market of the transition period. As in these other studies, our results are consistent with the theory of big players (see Ahmed et al. 1997; Koppl 2002; Koppl and Mramor 2003). These results are consistent with our conjecture that big players frustrate innovation in the new economy. Thus, they give some empirical support to the Austrian suggestion that activist government policies may extract a higher price in the new economy than in the old economy.
Notes

We thank Jack Birner, Klaus Schredelseker and two anonymous referees for comments.

1 There are many causes preventing convergence to general equilibrium. The Coase theorem deals with only one type, transaction costs, although one might lump them all into this type by definition. Transaction costs are, however, important. Their reduction tends to bring us closer to the imaginary state of perfect coordination described in general equilibrium theory.
2 It is not even clear that the economy is further from equilibrium in the bust than in the boom if the boom was a false one induced by monetary growth, as in the Austrian theory of the trade cycle.
3 Parke uses his model to represent persistence in the volatility of asset prices.
4 In this case, the several errors of a period would all endure for the same number of periods. Dropping this assumption would complicate Parke's error-duration model. Koppl conjectures that it would not alter any of the conclusions about big player effects (2002: 220).
5 In the USA the farmer still has a guaranteed minimum price (the loan level) that may be higher than the free market value. If the market does not reach the target level, the government reimburses the farmer directly for the difference between the free market price (obtained from an exporter or a processor) and the target level, so we can say that these days, instead of an EEP allocation, farmers receive the subsidy directly. If the 'free market' price stays below the target level, the price no longer acts as a signal for the farmer, because he will get the target price anyway; the selling decision thus becomes somewhat disconnected from the final income he will receive. In other words, low prices no longer constrain supply.
References

Ahmed, E., Koppl, R., Rosser, J. and White, M. (1997) 'Complex bubble persistence in closed-end country funds', Journal of Economic Behavior and Organization 32: 19–37.
Brockwell, P.J. and Davis, R.A. (1991) Time Series: Theory and Methods, 2nd edn, New York and Berlin: Springer-Verlag.
Broussard, J. and Koppl, R. (1999) 'Big players and the Russian ruble: Explaining volatility dynamics', Managerial Finance 25: 49–63.
Day, R.H. and Huang, W. (1990) 'Bulls, bears, and market sheep', Journal of Economic Behavior and Organization 14: 299–329.
Gilanshah, C.B. and Koppl, R. (forthcoming) 'Big players and money demand', in J. Backhouse (ed.) Modern Applications of Austrian Economics, New York: Routledge.
Hurst, H.E., Black, R. and Sinaika, Y.M. (1965) Long-Term Storage, an Experimental Study, London: Constable Publishing Company.
Koppl, R. (2002) Big Players and the Economic Theory of Expectations, New York and London: Palgrave Macmillan.
Koppl, R. and Mramor, D. (2003) 'Big players in Slovenia', Review of Austrian Economics 16: 253–69.
Koppl, R. and Nardone, C. (2001) 'The angular distribution of asset returns in delay space', Discrete Dynamics in Nature and Society 6: 101–20.
Koppl, R. and Yeager, L.B. (1996) 'Big players and herding in asset markets: The case of the Russian ruble', Explorations in Economic History 33: 367–83.
Mandelbrot, B. (1971) 'When can price be arbitraged efficiently? A limit to the validity of the Random Walk and Martingale models', Review of Economics and Statistics 53: 225–36.
Mandelbrot, B. (1972) 'Statistical methodology for nonperiodic cycles: From the covariance to R/S analysis', Annals of Economic and Social Measurement 1: 259–90.
Mandelbrot, B. and Wallis, J.R. (1968) 'Noah, Joseph, and operational hydrology', Water Resources Research 4: 909–17.
Mandelbrot, B. and Wallis, J.R. (1969a) 'Some long-run properties of geophysical records', Water Resources Research 5: 321–40.
Mandelbrot, B. and Wallis, J.R. (1969b) 'Robustness of the rescaled range R/S in the measurement of noncyclic long run statistical dependence', Water Resources Research 5: 967–88.
Menger, C. (1981) Principles of Economics, trans. J. Dingwell and B.F. Hoselitz, New York: New York University Press.
Parke, W.R. (1999) 'What is fractional integration?', Review of Economics and Statistics 8: 632–8.
Say, J.B. (1971) A Treatise on Political Economy, or the Production, Distribution, and Consumption of Wealth, trans. C.R. Prinsep, New York: Augustus M. Kelley.
Smith, A. (1937) An Inquiry into the Nature and Causes of the Wealth of Nations, New York: Modern Library.
Part VI
The monetary sector in the Internet
12 Bubble or new era? Monetary aspects of the new economy Antony P. Mueller
Every business cycle is the same with the exception of some fundamental difference that characterizes that particular cycle, and rarely, if ever, is in evidence in other cycles. The crucial issue is to identify what that particular phenomenon is. (Alan Greenspan1)
The new economy in a Misesian framework

Conventional views of the American boom during the 1990s may hold that it was primarily a technological phenomenon characterized by the breakthrough of innovations and their rapid dissemination. In a Schumpeterian perspective, the boom may be interpreted as the occurrence of one of the seminal clusters of inventions and innovations that put the economy on higher levels of productivity and growth, while interpretations along the lines of real business cycle models may hold that the booming economy was brought about by major technological advances that spread throughout the entire economy. All of these interpretations assume an impact that will transform the economy as a whole and make economic activity more efficient and ultimately more beneficial for consumers. Methodologically, they concentrate on the "real economy," largely ignoring financial and monetary factors. But while non-monetary explanations do reveal some aspects of economic behavior, ignoring monetary conditions makes the analysis incomplete and often quite misleading. In a monetary economy, economic calculation is done in monetary terms, based on profit and loss as they appear in accounting. The central categories of a monetary analysis of this kind are asset valuations, prices, profit and loss, credit and the interest rate. Such an analysis, based on a Misesian framework (Mises 1998: 535), detects various elements in the boom which suggest that a monetary cycle has taken place. The starting point of Mises' monetary theory is the proposition that the monetary rate of interest may deviate from the natural rate due to money creation (or its contraction) in the credit markets.2 If the monetary rate of interest falls below the natural rate, it deviates from the original valuation between present and future goods, and, as future goods have become relatively cheaper, demand for them increases. In the Misesian perspective money is not neutral; changes in the supply
of money are accompanied by shifts in individual wealth positions. By going beyond the Wicksellian theory (Wicksell 1898) in two important respects, Mises, first, applies a strictly subjective valuation approach by introducing the "originary rate of interest" as the point of reference; and, second, applies "sequential analysis" in contrast to the "all-at-once adaptation" that has characterized most monetary models since David Hume and is still the trademark of models in the tradition of the quantity theory of money. In its originary form, the interest rate is the discount that human action must give to later available goods compared to earlier available goods which may render the same service. Human action categorically implies a preference for the more immediate over the more remote in time; otherwise, in an imaginary world without an originary interest rate, saving would become infinite.3 Sequential analysis of monetary effects shows that changes in the money supply affect economic agents heterogeneously. In a prolonged process of credit expansion4 fundamental differences occur in relative wealth positions depending on who gets the newly created money first. Money cannot be neutral because it enters the economy neither at once, nor at the same time, nor in the same quantities for all economic agents alike. While money creation may or may not change the overall price level, it will always change relative prices and with them the relative fortunes of individual economic agents. Even if the change in the quantity of money could be known in advance, and if it were known for which kinds of activities it enters the economy, it is impossible to know ex ante how this will affect the different prices later on. Only perfect foresight could transform the monetary rate of interest into a neutral rate.
But it is in principle impossible to foresee how, when and to what degree individual valuations will change, and thus the formation of expectations about a certain direction of prices is disparate and must remain uncertain. This monetary theory based on individual valuation and sequential analysis leads to a theory of the business cycle which holds that the expansion of circulating credit brings about deviations of valuations through the transmission of false signals as to the availability of real funding in terms of sustainable savings, thereby causing a misallocation between the production of immediate and future goods. Easy money creates an illusion of wealth and thus instigates the enlargement of the production process, while consumers may aspire to acquire goods that rank higher in their preference scale and that previously were out of reach. While such an artificial boom may go on for a prolonged period of time and thereby irreversibly transform the economy, it cannot be maintained in the long run5 because it is based on money creation and the accumulation of debt in the presence of insufficient savings. Disproportionalities6 then occur within the economy, which later on require reversals brought about by a recession or a severe slump, depending on the extent of the bubble. In the meantime, apparent wealth and economic and technological progress may have been created temporarily. But given that real savings are lacking, more pressing needs will emerge sooner or later that force the abandonment of the ambitious economic projects.7 The monetary trade cycle, which begins with credit expansion of fiduciary money within the fractional reserve bank system at home or abroad, or by outright money creation or external debt accumulation by the government, changes the
pattern of investment and consumption in an economy. It is characteristic of a monetary business cycle that growth is concentrated in specific sectors of the economy where clustered investments occur. This primary effect of a monetary business cycle tends to become more obscure in the course of time, when secondary effects take place in the form of a spillover to other economic activities. It is the primary effect of money creation that invites the interpretation of a modernization of the economy, of an innovation boom or of "economic progress" in general; and it is the secondary effects that usually come to be regarded as "the economic boom." But real funding has been overextended in both cases, and the new capital structure and consumption patterns cannot be maintained in the long run. Economic actors have overestimated the permanence of their wealth based on financial illusion. When this framework is applied to the American boom of the 1990s, several factors that otherwise receive little attention move to the center of the analysis. First among these ranks the phenomenon that the take-off took place within an environment of ample financial liquidity, brought about by loose monetary conditions in combination with the provision of a monetary safety net encompassing domestic institutions and the international financial arena, while all of this was accompanied by widening financial imbalances, particularly in terms of falling private savings. With ample financial funding available and a lowered perception of risk, highly concentrated investment activity resulted, predominantly in information technology. The apparent strength of the US economy in these areas must be contrasted with the fact of increasing debt levels, particularly in the corporate and foreign sectors of the US economy.
A debt-driven boom

High real economic growth rates along with very low levels of unemployment and inflation had been thought of as incompatible, based on concepts such as the NAIRU. The apparently miraculous performance of the US economy during the 1990s instigated the search for alternative models of explanation, with the "new economy" paradigm becoming the most prominent. According to this model, the world economy, led by the United States, has entered a new development stage characterized by high productivity due to technological innovation. For monetary policy, this view implied that the US economy would be capable of maintaining much higher levels of real growth and employment than previously without risking inflation. Productivity growth, even though it might not clearly show up in all statistics, came to be seen – particularly within the Federal Reserve System and on Wall Street – as the prime mover of the economic transformation and as the explanation of the performance of the US economy in the second half of the 1990s, when high real growth occurred in the absence of significant inflation rates (cf. Oliner and Sichel 2000). For US monetary policy such a hypothesis opened the prospect of lowering interest rates below levels that were previously thought of as a sure way to cause inflation. According to the new paradigm, the presumed existence of a new economy represented the pursuit of a beneficial circle, where fast
technological advances drive productivity enhancements that account for a noninflationary growth potential. Low interest rates then appear as a precondition for high new capital investment, which in turn brings forth productivity growth. Paradoxically, the conclusion was that the ample availability of financial funding is causal in containing inflationary risks. A change of the monetary policy bias towards restriction would abort this beneficial circle: higher interest rates would reduce capital formation, and lower capital investment would reduce productivity growth. In this case, the economy would be at risk of inflationary tendencies. The new guidelines for monetary policy suggested that money and credit must be made amply available if the full fruits of the technological boom were to be harvested. For this interpretation, however, the problem remained that the new economy did not show up convincingly in the statistics, and the technological innovations appeared rather small compared to the great inventions of the past (Gordon 2000). Only creative interpretations made it seem possible to diagnose a productivity jump (Nordhaus 2001). Based on standard economic indicators and official figures, the thesis of a new economy seemed hard to justify (see Table 12.1). While US real growth rates have been quite high (but not exceptionally so) and inflation moderate (but in no way absent), remarkable productivity growth does not show up in the official statistics despite exceptionally high investment rates. Rather, the foremost difference of the 1990s' cycle from the earlier ones appears to be – besides a high level of equipment investment – the exorbitant increase of debt levels in households, businesses, and in the external accounts. An analysis of the flow of funds accounts (Godley and Izurieta 2001) supports the diagnosis that unsustainability has been building up.
Additional figures show that in the 1990s, US credit markets expanded at an unprecedented pace (for data see BIS 2000: 111). In the private sector, annual net issues of international and domestic debt securities increased sharply, amounting to around 500 billion US dollars in 1995 and rising to 1.2 trillion US dollars in 1999, dwarfing the respective figures for Europe and Japan and historical standards.

Table 12.1 US economic expansion 1991–9 in historical perspective

                                  1991Q2–   1995Q4–   1983Q2–   1961Q1–
                                  1999Q4    1999Q4    1990Q4    1969Q3
GDP volume^a                      3.6       4.4       4.3       5.0
Productivity^a                    2.1       2.6       1.7       2.9
GDP deflator^a                    1.9       1.6       3.3       2.6
Household debt/income             89.6      94.2      74.4      63.4
Business debt/output^b,c          75.3      76.3      70.6      54.9
Current account/GDP^b             –1.7      –2.3      –2.4      0.5
Equipment investment/GDP^b,d      8.3       9.7       6.0       3.4

Source: National Data, Bank for International Settlements, 70th Annual Report, 2000: 14.
Notes
a Annual percentage changes.
b Average of period.
c Non-financial corporate sector.
d In volume terms.

While the surplus of the central government budget received high publicity, it has been largely ignored that the other public sector, i.e. independent and quasi-public entities of the United States, issued debt instruments rising from around 300 billion US dollars in 1995 to almost 800 billion in 1999. Consolidating these figures for 1999, new net debt issues by the private and public sectors together reached two trillion US dollars in that year. The trend continued in 2000 and 2001, when in the first half of 2001 government-sponsored enterprises like Fannie Mae and Freddie Mac8 issued coupon securities and bills amounting to 590 billion US dollars (Federal Reserve Board 2001: 15). Due to a level of internal absorption surpassing domestic production, the US trade deficit widened during the second half of the 1990s, and at the beginning of the new decade it is widely recognized that this trend is unsustainable. The overall debt accumulation in the United States, which has occurred in the private and external sectors of the economy and by the semigovernmental agencies, is particularly disturbing when taking into account that this explosion of credit was accompanied by a virtual breakdown of household savings, whose rates turned negative in 2000 and 2001. A further element of doubt as to the sustainability of the boom arises because high-tech goods, which currently represent less than 8 percent of total manufacturing output, contributed two-thirds of the increase in manufacturing output between 1995 and 2000 (Greenspan 2001b). Finally, figures on productivity, prices, and growth may be biased in the official statistics due to the application of the so-called hedonic technique, which is now applied to about 18 percent of the gross domestic product (Aizcorbe et al.
2000), thereby boosting “real growth” and productivity figures while at the same time reducing inflation rate figures.
Monetary policy Shortly after announcing his concerns about “irrational exuberance” in financial markets, the chairman of the Federal Reserve changed stance, and from 1997 onwards the Federal Reserve System seems to have become more convinced each year that the new economy is not a chimera. In December 1996, Alan Greenspan was still concerned about the potential effect of high equity prices for economic stability when he said: But what about futures prices, or more importantly, prices of claims on future goods and services, like equities, real estate, or other earning assets? Is stability of these prices essential to the stability of the economy? Clearly, sustained low inflation implies less uncertainty about the future, and lower risk premiums imply higher prices of stock or other earning assets. We can see that in the inverse relationship exhibited by price-earning ratios and the rate of inflation in the past. But how do we know when irrational exuberance has unduly escalated asset values, which then become subject to unexpected and prolonged contractions as they have in Japan over the past decade? And how do we factor that assessment into monetary policy? We as central bankers need
not be concerned if a collapsing financial asset bubble does not threaten to impair the real economy, its production, jobs, and price stability. Indeed the sharp stock market break of 1987 had few negative consequences for the economy. But we should not underestimate or become complacent about the complexity of the interaction of asset markets and the economy. Thus, evaluating shifts in balance sheets generally, and in asset prices particularly, must be an integral part of the development of monetary policy. (Greenspan 1996)

After that declaration, and particularly from 1997 onwards, with the United States announced as the leader in the creation of the new economy, the expectation seems to have gained hold more strongly within the Federal Reserve Board that tolerance towards credit growth, despite falling private saving rates and rising external debt, would pay off. Monetary and other policy-makers increasingly began to ignore monetary and credit growth rates along with the accelerating build-up of foreign and domestic debt levels, and instead turned to the growth rate, employment, and "price stability" as guideposts. With very few exceptions, the caveat could rarely be heard that excessive internal and external credit growth, as measured in relation to nominal gross domestic product and relative to the savings rate, along with excessive valuations of the stock market relative to profits, causes transformations in the real economy that go beyond mere aggregate level effects. With disequilibria between demand and sustainable production already established, instigating further consumer demand may then disrupt the intertemporal allocation structure further. Easy monetary conditions are imperative according to the "Greenspan model,"9 which says that productivity growth keeps inflation down. In order for that to continue, high capital investment is warranted, which in turn requires an accommodating monetary policy.
In early 2001, the Federal Reserve lived up to this paradigm by initiating massive cuts in the Federal Funds Rate – irrespective of the consideration that the money supply had risen drastically since 1995, reaching a growth rate of 13 percent for M3 in the first half of 2001 (Federal Reserve Board 2001: 32). The rise of the new economy was accompanied by formidable advances in stock and housing market valuations, raising the question of the causal link between these phenomena. In accordance with the theory of financial market efficiency, the conventional view would hold that stock market prices reflect the superior future profit potential of the new companies. In this view, technological progress forms the basis for higher productivity, which in turn results in higher profits and thus justifies stock market valuations that may seem excessive by historical standards but are not out of line when evaluated in terms of the new economy. As the new technologies drastically facilitated the globalization of business activities, the new economy paradigm gained a wider meaning. With the emergence of new companies that became global players almost overnight, the existence of a new economy came to be seen as a phenomenon of global reach in which companies would be left behind should they not join the bandwagon. The
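A growth rate quoted for a half-year, like the 13 percent M3 figure, is conventionally annualized by compounding. A minimal sketch of that arithmetic, using placeholder M3 levels chosen only to reproduce roughly 13 percent (they are not the actual series):

```python
# Annualizing a half-year money-stock increase by compounding.
# The levels below are placeholders, not actual M3 data.

m3_start = 7000.0  # hypothetical M3 at the start of the half-year ($bn)
m3_end = 7441.0    # hypothetical M3 six months later ($bn)

half_year_growth = m3_end / m3_start - 1      # ~6.3% over six months
annualized = (1 + half_year_growth) ** 2 - 1  # ~13.0% at an annual rate
print(f"half-year: {half_year_growth:.1%}, annualized: {annualized:.1%}")
```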
wider perspective of a global new economy meant a new imperative for US monetary policy: to extend the boom beyond the domestic economy and spread it abroad by maintaining its underlying momentum. This seemed to justify the United States becoming the “consumer of last resort” for the rest of the world, thereby further diminishing the importance of increasing current account deficits as a warning light. In the second half of the 1990s, US monetary policy began to assume center stage as the guardian of the new technologies and their globalization.
Institutionalization of moral hazard

Making the boom continue at home and abroad became the prime focus of US monetary policy in the second half of the 1990s. But among the unintended consequences of the new paradigm was a broadly based lowering of perceived risk levels in financial markets and a global spread of careless investment activities. The series of international financial crises that occurred towards the end of the 1990s and continues into the new decade is not a new phenomenon if each case is taken in isolation; it is new, however, in so far as it forms a single theme, in which the individual cases represent connected parts in a chain of events. In this perspective, it is not the lack of control over financial markets but the institutionalized system of bailouts, and the credit creation that comes along with it, which causes the spread of financial fragility. The logic of this argument says that the bailout of Mexico laid the foundation for the crisis in Asia, which formed the groundwork for the financial and monetary crises in Russia and Brazil, which in turn brought forth the Argentinian and Turkish crises in 2001. In the wake of the Mexican bailout, and reinforced by the series of IMF-sponsored financial packages for the crisis countries, lenders and borrowers felt ever more confident in increasing their exposures. During the 1990s,10 the presumption of the existence of an effective lender of last resort became a common learning experience for financial market actors. As with the debt-driven expansion strategies of many new economy enterprises, risk considerations increasingly came to be supplanted by the relentless aim of expansion. The practice of bailouts on a national and international scale has established moral hazard as a paramount effect and works towards undermining the international financial and economic structure. Bailout expectations cause overextension by financial actors as considerations of the risk levels of debt become diminished.
Excessive liquidity creation – as part of any bailout – lays the foundation for malinvestments, which remain disguised as long as the boom continues.11 Central banks or other institutions, as the sources of excessive liquidity, then create the very externalities that they are presumed to contain, when their policies mislead individual investors into erroneous behavior by exposing them to wrong price and interest rate signals. Moral hazard in financial markets is broader in its ramifications than the traditional concept of moral hazard as developed for insurance markets.
While moral hazard in insurance markets refers to the incentives that coverage provides for the insured, moral hazard resulting from bailouts affects the whole monetary system: the provision of additional liquidity also has non-specific consequences, because it implies additional liquidity creation that potentially affects the economic system as a whole. In the modern environment, these effects are not confined to just one economy, but have ramifications for the international financial system. Bailouts change the expectations of market participants and they transform financial data. The lowering of interest rates allows the continuation and resumption of investment projects and facilitates the beginning of new projects. With resources already under stress, this artificial continuation of the boom tends to make existing maladjustments more rigid and later on more difficult to unravel. While the supervisory bodies and intervening institutions may well be quite aware of this build-up of capital misallocation, there is a trade-off involved which encourages them to forgo restrictive measures and interventionist restraint: while non-bailouts and monetary restriction would almost certainly cause a prompt contraction, the continuation of the boom could still end in a “soft landing.” Together with the new technologies, the new economy as a phenomenon also relates to so-called globalization. International financial markets appear to be fundamentally transformed by a number of factors, which can be summarized in terms of the volume, volatility, velocity, and virtuality that characterize current international financial market operations (Mueller 2001a).
This process has been accompanied by rapid financial innovation and liberalization, leading prominent observers to the conclusion that “today’s international financial system is sufficiently different in so many respects from its predecessors that it can reasonably be characterized as new, as distinct from being merely a continuing evolution from the past” (Greenspan 1998). Modern communication technologies help to overcome the barriers of distance and time, and facilitate international transactions by strongly diminishing transaction costs. Financial flows have become more massive and more abrupt, making the distinctions that form the basis of modern financial market theory – such as those between market risk, credit risk, and liquidity risk, as well as the effects of diversification – seem obsolete. Risk models based on these theories may actually have contributed to false assessments of risk, as they usually fail to incorporate distributions other than the normal and base their risk standards on limited historical data.12 Taking into account that modern means of communication allow for very short reaction times, and given the dimension of the assets involved along with the application of leverage, markets are exposed to higher degrees of volatility. Interrelationships emerge which are not only unpredictable or beyond the realm of standard economic modeling, but also tend to produce extremes of volatility and deviations from otherwise reasonable valuations. In contrast to traditional banking, ever more international transactions take place in a sphere of virtuality, without personal knowledge of the counterparties, and are quite frequently conducted by actors who, due to the other specific requirements of their job, do not possess a deeper understanding of the economic and socio-political
specifics of the countries whose debts and currencies they deal with. Short-term volatility in stock and bond markets and in exchange rates goes hand in hand with more profound deviations from fundamentals, and triggers various forms of contagion among different asset classes and regions when perceptions are not sufficiently differentiated. However, pointing to increased speculation or irrationalism to explain apparent excessiveness misses the point, because excessive capital flows and tendencies towards asset price inflation must have a footing in a loose monetary base, and there must be sufficient credit creation by financial institutions. Observing recent trends, the pattern emerges that the futile attempt of the Japanese central bank to prop up the economy by monetizing government debt and by reducing its call money rate step by step over the past years to a rate that effectively reached zero has been a constant source of ample liquidity in international financial markets. With Japanese bonds moving towards historically low yields, incentives have been created for capital exports in search of higher yields abroad, opening additional profit opportunities for private financial institutions through the so-called yen-carry trade. But while both the Japanese monetary policy and the provision of additional liquidity by the US central bank had their own rationale in each specific case, they also produced the side-effect that domestic and international investors were taught to trust these institutions to guarantee ample liquidity, with the IMF as the third party providing loans in cases of emergency where emerging economies were concerned.
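The mechanics of the yen-carry trade mentioned above reduce to simple interest arithmetic: borrow at the near-zero yen rate, invest the proceeds at a higher foreign yield, and bear the exchange-rate risk on the way back. A minimal sketch, with all rates and the exchange-rate move invented for illustration:

```python
# Yen-carry trade arithmetic (all rates and moves are hypothetical).
# Borrow yen at ~0%, convert to dollars, invest at a higher US yield,
# convert back after a year. The profit depends on the yield gap AND
# on how far the yen appreciates in the meantime.

def carry_trade_return(yen_rate, usd_rate, yen_move):
    """One-year return per unit of borrowed yen.

    yen_rate: yen borrowing cost (e.g. 0.005 = 0.5%)
    usd_rate: dollar investment yield
    yen_move: fraction by which a dollar buys fewer yen at the end
              (positive = yen appreciation, which hurts the trade)
    """
    gross = (1 + usd_rate) * (1 - yen_move)  # dollar proceeds back in yen
    cost = 1 + yen_rate                      # yen owed on the loan
    return gross - cost

# A yield gap with a stable yen is profitable (+5.5% here).
print(f"stable yen: {carry_trade_return(0.005, 0.06, 0.00):+.1%}")
# A 10% yen appreciation more than wipes the gain out (-5.1% here).
print(f"yen +10%:   {carry_trade_return(0.005, 0.06, 0.10):+.1%}")
```

The second case is why the trade amplifies volatility: a small currency move can force many leveraged positions to unwind at once.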
End of the miracle?

In the US during the late 1990s, when unlimited prosperity seemed feasible on the basis of intelligent monetary management, the economic and financial boom was helped to continue: monetary policy was guided by the model of a new economy, and additional expansionary biases were introduced by events in the international financial environment or by special circumstances such as the expectation of a “year 2000 problem.” Monetary policy ignored the build-up of debt and left aside considerations of massive disequilibria such as the current account deficits and the disappearance of a financial surplus in the private sector. While the deficit reduction of the central government was widely noticed, the debt inflation in the semi-governmental sector hardly attracted attention – despite the speed of its accumulation and its dimension. As long as expectations prevailed that financial wealth creation by the stock markets could go on indefinitely, disequilibria hardly mattered, as the economy appeared to have access to sufficient funding. US monetary policy under Alan Greenspan’s chairmanship of the Board of Governors of the Federal Reserve System fits none of the dominant economic models. It defies monetarism as much as Keynesianism or its variants, and it works counter to the assumptions of rational expectations theory. By abandoning theoretical models, American monetary policy has become increasingly discretionary and highly dependent on the chairman’s views about the state of the economy.
Particularly during the second half of the 1990s, when the “Greenspan model” began to dominate US monetary policy, some of the drastic policy moves, particularly the aggressive rate cuts in 1998 and 2001, were motivated as much by international as by domestic concerns, and reflect the new role of the American central bank: to act – in conjunction with international monetary institutions, particularly the International Monetary Fund – as the global lender of last resort. For the domestic economy, the new imperative was that the boom at home must be kept going as an essential part of the global requirements, and that this was also highly beneficial for the domestic economy. Given the relatively stable inflation rates at home, a strong currency and high real growth, this monetary policy stance seemed justified. However, the arguments for this justification are deeply flawed when growth is debt-driven; when the dollar exchange rate is strongly influenced by capital inflows from abroad; and when the moderate inflation rates – besides statistical bias – are also the result of cheap imports. The readiness of the monetary authorities to provide ample liquidity, in conjunction with the provision of so-called rescue packages to a series of emerging economies by the International Monetary Fund, has created a dramatic trade-off: while a halt to these measures would reveal the distortions immediately, the preference has shifted towards continuing the policy as long as new liquidity and further credit expansion may be expected to do their job. During the 1990s, debt accumulation became so massive that almost any measure seemed justified to maintain its structure; but with each measure to safeguard the system, temporary respite was bought at the cost of more systemic fragility.
By ignoring that monetary and fiscal impulses change relative prices and the capital structure of an economy, policy formulations based on the analysis of aggregates leave aside the effects that take place at the micro-level and emerge symptomatically as macroeconomic disequilibria. Moreover, when measures to “keep the boom going” are applied repeatedly, more profound transformations of the capital structure will occur. This will make the economy less and less efficient, leading to a bust and finally to economic paralysis. The emergence of a recession signals a misdirection of investment due to de-coordinated economic plans that need correction. Plans and economic actions must undergo revision in order to establish a new tendency towards equilibrium. But with new liquidity or active demand management, the process of reassessment and correction is postponed and the trend towards a widening of disequilibria is made to continue (cf. Mueller 2001b).
Is there a future for the new economy?

The emergence of the new economy, in terms of the rapidity of its spread, is closely linked to the specific monetary conditions and the international environment of the 1990s. It is hardly conceivable that the new technologies could have spread so rapidly without loose credit conditions, while the policies of easy money must also be seen as reflections of the conditions for economic policy when financial markets are globally connected. What has been so impressive about the emergence
of the new economy – most notably in the Internet area – was the speed of its global dissemination. Many of the companies in this area appeared almost from the start as instant global players. Technology, without doubt, is part of this performance. But it has become evident as well that many of these new companies not only expanded too fast and too far; they also did not properly take into account their customers’ willingness and ability to pay for the service. The neglect of generating profits cannot be explained by technology. That companies could expand in the absence of profits is a monetary and financial phenomenon. It suggests the existence of a bubble economy brought about by exceptionally loose monetary and credit conditions. Will the end of the boom signify the end of the new economy? Not necessarily. Economies are evolving systems, and as such they do not simply return to initial conditions. Many of the structures that were built in the past decade will survive. But valuations must adjust. This means that while market prices for physical investment and specific know-how will fall, debt positions on the balance sheets constitute a future burden. Isolated malinvestments, and the ensuing problems for individual companies in meeting debt obligations, are a common feature of a market economy. What makes the situation different for a financial bubble is the dimension of the required adaptation. The credit expansion has brought forth erroneous economic activities which potentially affect the entire economy. At the root of investment decisions lie errors due to misleading signals from a monetary rate of interest that has deceived economic actors about the sustainability of the funding of their various projects.
The value of money – a function that modern economic systems entrust to a central planning committee – has sent false signals to the whole economy, and as with the “planning errors” familiar from the centrally planned economies of the past, wrong investments have taken place on a formidable scale. The existence of many projects that do not generate noticeable profits implies that capital has been squandered. Working off that burden is a time-consuming process accompanied by lower levels of consumption. Trying to boost economic activity again once recession has struck will make the situation worse, as the necessary adaptation process gets postponed.
Notes

1 Alan Greenspan quoted in Woodward (2000: 180).
2 A somewhat different starting point is given by Hayek (1941), as his theory also contains elements of the “real business cycle,” making his approach in this regard somewhat “un-Austrian”; for Mises, in contrast, the central point is prolonged growth of circulating credit, which may also be the result of reduced risk perception when government or central bank bailout guarantees are presumed to exist.
3 The difference between the originary rate and the monetary rate of interest becomes obvious under the hypothesis of a complete elimination of interest income (by expropriation or taxation): under such a condition, saving would stop in favor of the consumption of accumulated capital, precisely because the originary rate of interest cannot be removed from human valuation – whatever the monetary rate of interest might be (Mises 1998: 523).
4 The term “credit expansion” as applied here refers to circulating credit, i.e. the creation of money and money substitutes (such as shares and stock options as means of payment) that is not backed by savings.
5 It is not possible to time a boom and bust sequence or to measure its extension in terms of years. While it is usually relatively easy to ascertain the occurrence of a bust, as this is in most cases a singular event, the formation of the bubble may stretch back considerably in time and constitute a prolonged learning process by economic actors as to the assessment of risks and price expectations.
6 It is widely presumed that as long as inflation is tame, liquidity growth must not be diagnosed as excessive; but this may be deceiving, because inflation numbers represent averages and do not adequately reflect specific price movements, in addition to being exposed to various distortions ranging from special statistical devices (such as the application of hedonic calculations) to the impact of the exchange rate.
7 Even the finest electronic equipment will lose its immediate value when, for example, energy supplies become insufficient; and the most luxurious and modern houses and cars lose their utility and turn into a financial burden when the supply of complementary goods is lacking. It is disproportionalities of this kind that bring about the abortion of the artificial boom.
8 Although these agencies are not officially counted as part of the government sector, they receive their financial privileges due to an implicit bailout guarantee by the government, and it is the common assumption among the investment community that they represent government institutions.
9 See various Greenspan testimonies, particularly those given in early 2001 (Greenspan 2001a, 2001b).
10 The first stage of this learning experience may even go back to 1987, when the US Federal Reserve staged the rescue of the US stock market and – together with the government – urged global monetary expansion. The Japanese authorities, by exuberantly following this advice, triggered an asset bubble whose fall-out continues to plague the Japanese economy up to the present day.
11 For a modern analysis of the capital effects based on the Mises–Hayek theory of the business cycle, see Garrison (2001).
12 Compare the conclusions put forth in the Bank for International Settlements’ 69th Annual Report, op. cit., p. 149 and passim: “Through various channels, financial institutions including banks are becoming exposed to higher levels of market risk. Moreover, market risk is more highly correlated with credit risk than previously thought, since market exposures are often built on leverage, and credit risk is also more highly correlated with liquidity risk than earlier realized. Furthermore, it is now evident that risk models can also offer a false sense of security because they may lose their predictive powers in extreme market conditions. Indeed, their mechanical use may actually contribute to market turbulence.”
References

Aizcorbe, A., Corrado, C. and Doms, M. (2000) “Constructing price and quantity indexes for high technology goods, industrial output section,” Division of Research and Statistics, Board of Governors of the Federal Reserve System, 26 July.
Bank for International Settlements (2000) Annual Report, Basel.
Federal Reserve Board (2001) Monetary Policy Report, submitted to the Congress on 18 July.
Garrison, R. (2001) Time and Money: The Macroeconomics of Capital Structure, London and New York: Routledge.
Godley, W. and Izurieta, A. (2001) “As the implosion begins? Prospects and policies for the US economy: A strategic view,” Strategic Analysis, Jerome Levy Economics Institute, New York, June.
Gordon, R.J. (2000) “Does the ‘new economy’ measure up to the great inventions of the past?,” draft of a paper for the Journal of Economic Perspectives, May 2000, Northwestern University.
Greenspan, A. (1996) “The challenge of central banking in a democratic society,” remarks at the annual dinner and Francis Boyer Lecture of the American Enterprise Institute for Public Policy Research, Washington, DC, 5 December.
Greenspan, A. (1998) “Speech at the 34th Annual Conference on Bank Structure and Competition,” Federal Reserve Bank of Chicago, 7 May.
Greenspan, A. (2001a) “Current fiscal issues,” testimony of Chairman Alan Greenspan before the Committee on the Budget, US House of Representatives, 2 March.
Greenspan, A. (2001b) “The challenge of measuring and modelling a dynamic economy,” remarks by Chairman Alan Greenspan at the Washington Economic Policy Conference of the National Association for Business Economics, Washington, DC, 27 March.
Hayek, F.A. (1941) The Pure Theory of Capital, London: Routledge.
Mises, L. von (1998) Human Action: A Treatise on Economics, The Scholar’s Edition, Auburn, AL: The Mises Institute.
Mueller, A.P. (2001a) “Reforming the world financial order: Institutional and theoretical aspects,” Zeitschrift für Wirtschaftspolitik 50: 15–34.
Mueller, A.P. (2001b) “Financial cycles, business activity, and the stock market,” Quarterly Journal of Austrian Economics 4: 1–19.
Nordhaus, W.D. (2001) “Productivity growth and the new economy,” NBER Working Paper W8096, January.
Oliner, S.D. and Sichel, D.E. (2000) “The resurgence of growth in the late 1990s: Is information technology the story?,” Federal Reserve Board, Washington, DC, May.
Wicksell, K. (1898) Geldzins und Güterpreise: Eine Studie über die den Tauschwert des Geldes bestimmenden Ursachen, Jena: Fischer.
Woodward, B. (2000) Maestro: Greenspan’s Fed and the American Boom, New York: Simon and Schuster.
13 Possible economic consequences of electronic money

Jean-Pierre Centi and Gilbert Bougi
What is now urgently required is not the construction of a new system but the prompt removal of all the legal obstacles which have for two thousand years blocked the way for an evolution which is bound to throw up beneficial results which we cannot now foresee. (Hayek 1978: 130)
Introduction

It is frequently said that money is what money does! Gold was once the dominant medium of exchange; then came paper money, and now electronic money is a reality. This chapter examines the economic consequences of changing the product money in the information age. What are the economic consequences of electronic money? What are its implications from the viewpoint of economics? The change in payment practices that we are observing today, owing to progress in electronics, is significant enough to raise questions about the future of the monetary economy. Entrepreneurial actions are at the origin of electronic moneys. Undoubtedly, this is an innovation. Is there something really new about such an innovation? To answer this question, we need to place money in the context of a procedural theory of money (first section) that has unfortunately been neglected (if not thrown out) by mainstream monetary theory. Our contention is twofold: on the one hand, the emergence of electronic money is explained by the procedural theory in exactly the same way as previous innovations; on the other hand, its productivity within the information network not only entails changes in the mechanisms of the money supply, but also a change towards full laissez-faire in money and banking that makes the dominant monetary theory itself obsolete, based as it is on the existence of a monopolistic high-powered money controlled by the State. The characteristics of electronic money foreshadow the cashless society and, by the same token, what could become a monetary system of private and competitive fiat moneys with different brand names. Yet many people seem to be frightened by the harms caused by the new payments technology. These are of course the costs of electronic money, but these kinds of costs have always existed in the past, and we need to take into account that even though they are positive (or even increasing in absolute value, which remains to be proved), they are diminishing
in relative value before the widening freedom released by electronic money (i.e. because the subjective value that people place on the services provided by electronic money increases considerably). In other words, the subjective opportunity costs of the services provided by electronic money are lower than those related to the services of traditional means of payment (second section). The substantial changes in the demand for and supply of money, and the loss of effectiveness of the usual transmission mechanisms, make monetary policy inefficient or outright useless (third section). Therefore, a new monetary order is being created, in which money will be managed according to the rules of laissez-faire banking systems and the discipline of monetary competition (final section). This evolution will change the rules of the monetary game in such a way that they will belong entirely to the market process, in accordance with the real nature of money as an institution.
An institutional approach to electronic money

Money as a procedure and money as a product

Monetary theory has focused on money as a product: initially a material product, then an immaterial one. In concentrating on the notion of product, economists were induced to include in their analyses the objective notion of production cost and to make of it the dividing line between material and immaterial money. The whole of neoclassical theory faced the difficulty of integrating such a view of money as a product with value theory. Even the most recent models that take into account transaction costs and/or uncertainty, and attempt to give logical explanations of the role of money (compared with barter), are always partial and more often than not biased explanations as regards the nature of money. Implicitly or explicitly, it is said that immaterial money owes its existence to some deus ex machina and that the value of money could not be preserved without some exogenous constraining force. In the standard micro-monetary theory, money is viewed as a special device of exchange determined by reason, whose value should be controlled by the State. According to the general equilibrium approach, once inefficiencies are taken into account, the only suggestion from comparative statics experiments is that it is possible to jump from one equilibrium to another (Pareto-superior) one – that is to say, from some closed universe defined by some institutions to another closed universe associated with better institutions – if and only if the latter is exogenously created in order to achieve a well-defined optimal social state of the economy. Such an approach discovers nothing really new regarding the benefits that money brings, and it is either silent on or unable to deal with the issue of the emergence and genesis of money. Some recent attempts within micro-monetary theory have been made, notably by Kiyotaki and Wright (1989 and 1993), in the so-called search models.
This kind of explanation of the use of money is based on non-cooperative games; however, it focuses on the acceptability of money rather than its emergence, and gaining some appreciation of the general acceptability of fiat money (Kiyotaki and Wright 1989) does not amount to an explanation of the origin
and the emergence of money. Others, such as Duffy and Ochs (1999), attempt, through an experimental study using game theory, to deal more explicitly with the issue of emergence. Our main skepticism here concerns the use of game theory, because the latter, although widely used in evolutionary economics, is only able to capture parametric uncertainty, not structural uncertainty. Such an approach notably abstracts from the question of how rules (i.e. institutions) evolve outside static contexts, within the dynamics of real time that pave the way to entrepreneurial discoveries, the unintended outcomes of human action. By cutting off the real nature of money, this approach dares to push forward questionable and hazardous proposals regarding the supply and the management of the product money. It is Carl Menger’s (1892) contention that moneys emerge from subjective entrepreneurial perceptions. These, following an ongoing and competitive process, give rise to repeated practices whose regularity fully reflects the existence of an organic social institution. Menger (1871) emphasized the notion of the saleability of goods in order to show how, through an ongoing market process, the most easily saleable goods become, “under the influence of custom,” widely acceptable in trade. At least three points are to be derived from the Mengerian theory of money. First, money in its medium-of-exchange function is intrinsically involved in the market process, which is viewed, as Hayek explained, as an exchange process of services among individuals who are ignorant of their own ignorance. Second, interpersonal trust in trade is part and parcel of the knowledge problem, that is, the problem of ignorance (Hayek 1945). Thus money is a procedure for achieving individual exchanges when a general lack of mutual confidence prevails, which is typical of the open society in comparison with the primitive society.
This is not to say that money is irrelevant in reducing transaction costs, but that such reduction is related to an institutional or procedural aspect that cannot be crowded out and should be addressed by starting from the lack of confidence among individuals. Third, regularity is far more important than the physical attributes of the goods. Therefore to trust in money is to trust in a general rule that emerges from below as a regularity that people observe but are themselves unable to rationalize because of their ignorance. According to Austrian theory, since money is a general rule resulting from a self-imposed procedure spontaneously applicable to all people, no one invented it, exactly as no one invented the market; nor is it the explicit outcome of a rational agreement. Thus, money as an undesigned social institution is a basic principle of the Austrian theory, and this is how we may understand the following comment of Hayek (1978: 52): “it has been rather a misfortune that we describe money by a noun, and . . . it would be more helpful for the explanation of monetary phenomena if money were an adjective.” More than a theory of the product money, the Austrians developed a theory of the monetary procedure: it is a procedural theory, and in focusing on the product we may forget the procedure.
Economic consequences of electronic money 265

New concepts for monetary theory and new rules for monetary discipline

To give precedence to the institutional approach to money does not mean, of course, giving up either the functions or the forms of money. It only means that there is a kind of hierarchy. The first and basic approach is the one that introduces money as a social institution. Second comes the functional approach to money, in connection with which we should agree that the medium of exchange is the major role of money. Third comes the formal approach. The form of money may be constituted by commodity moneys (as media of exchange and stores of value) or immaterial moneys (such as currency notes, pen moneys, and electronic moneys). It is a great mistake to argue that money as an institution was invented by the state and imposed on the population by fiat. Nevertheless, monetary innovations do exist, although they do not cancel the basic principle of money as a social institution. These innovations present themselves as changes which are not the result of human design. That happened when we passed from commodity moneys to bank notes, then to scriptural money, and it is happening today as we pass to electronic money. If policy-makers aim at controlling these spontaneous changes, they will also undermine the nature of money and produce unintended side-effects which will weaken money as an institution. The demand for some form of money or another is nothing else than the demand for liquidity in a changing world, across places and in real time. These various forms of money as a product are due to endogenous innovations, but by no means should the form take precedence over money as an institution. Although the form of money may change according to places and times, the various forms are like various shapes of the basic institution, all conveying the same message.
As was expressed above, money as an institution is capitalized trust reducing complexity in social relations: it is a knowledge and information repository. We should not confuse the information conveyed by the product money with its medium; the medium is being changed once again with the discovery of electronic money, but the message being conveyed is (or should be) always the same, that is, the continuation of the market process and price mechanism abiding by private property and commutative justice.1 As a result, a mismanaged money produces wrong information; considerations of political expediency overturn monetary and economic soundness, and go in the opposite direction. If money is mismanaged as a product, the institution will not produce the best monetary services. Once the State assigns itself the exclusive power of producing and managing the product that should be the certainty equivalent of money as a social institution, it does nothing other than attempt to separate money from the market process, which generates pernicious effects, which in turn entail further public regulations that economists attempt to justify through frantic rationalism. In the past, innovations in the product money have always been intermingled with the market process and, in accordance with the procedural theory of money, came from spontaneous changes.2 Most of the time, such innovations of the product money were born through diversion of the public regulations in force, but they
have always, more or less swiftly, been seized by the State. The concepts of monetary theory have almost always (at any rate in the mainstream) leant on this principle of public control over money, and the latter has also been justified in a rationalistic way. Consider the following comment of Vera Smith (1936/1990: 195):

But whatever may be our verdict as to the comparative outcome of free banking and the system of central control in terms of stability it is unlikely that the choice can ever again become a practical one. To the vast majority of people government interference in matters of banking have become so much an integral part of the accepted institutions that to suggest its abandonment is to invite ridicule.

The advent of electronic money is particularly interesting because it practically leads to a refutation of the general assertion expressed in Vera Smith's quotation. We would like to lay stress on two points that seem to characterize this change. On the one hand, however important its potential for changing the payments system and extending exchanges, this monetary innovation is one among others that took place in past centuries as the unintended outcome of the market process. The procedural theory of money does not teach anything really new when applied to electronic money: money as a product should be the certainty equivalent of money as an institution. On the other hand, the information flow and the way information is processed electronically allow electronic money to reinforce the unfolding of the market process as based on the rules of private property, voluntary exchange and relative prices. Electronic money provides evidence of the epistemological superiority of the market over the political decision-making process in processing information, and therefore comes to the point of undermining the monetary power of the State.
This entails two consequences: first, the economic distortions induced by the statist management of money tend to vanish, and second, the basic concepts on which mainstream monetary theory was built are themselves set aside by monetary and financial innovations. The expansion of electronic money entails a radical change in monetary theory together with a substantial change in monetary mechanisms. The monetary theory that is taking shape seems more and more to be based on a full integration of the money supply into the private sector of economic activities (i.e. into the market process). This new theory is necessarily built on the basis of laissez-faire banking and monetary competition, two proposals that fit the evolutionary approach to money and stand opposed to the constructivist approach to money that predominated over the twentieth century.
An economic overview of electronic money

What is electronic money?

Electronic money is broadly defined as an electronic store of monetary value based on a technical device that may be widely used for making payments without
necessarily involving traditional bank accounts in the transaction, but acting as a prepaid bearer instrument. In other words, electronic money, e-money, is money that moves along multiple channels, i.e. the Internet,3 partly outside the established network of banks and the paper currency overseen by central banks. The advent of Internet commerce permitted a much more widespread use of e-cash transactions. In response, new forms of electronic money began to appear in early 1995, some aimed at assisting Internet commerce, others at making all traditional forms of currency electronic. There are two different types of electronic money: prepaid cards and prepaid software products (virtual money). With prepaid cards, the electronic value is stored on a computer chip (or integrated circuit) embedded in the card, and value is typically transferred by inserting the card into a card reader (smart cards and stored-value cards). With software products, the electronic value may be stored on the hard disk of a computer and transferred over communication networks such as the Internet when payments are made. The smart card has the appearance of a credit card, with an embedded microchip that can be loaded with e-money. A number of smart cards are on the market today, and these are used in a wide range of applications. Today, Mondex is a leading company in bank-issued smart cards and has received much recognition in the financial press. It works primarily on smart cards, which store electronic value on a chip on the card itself. A Mondex card can only transfer funds to another Mondex card, but users can transfer funds among themselves. The company plans to create a card reader that attaches to a computer so that individuals may use these cards online.
Virtual money is actually a piece of information with intrinsic value that is stored on an individual's personal computer's hard drive or in an electronic wallet and can be used to make online purchases. It can also be transferred from a personal computer to a smart card for use offline. Many companies have developed their own forms of virtual money. The most revolutionary products developed in financial and Internet circles in decades have been the true "DigiCash" products (Chaum 1992). DigiCash's system was designed by Chaum, and its experimental system started in 1994. Virtual money by DigiCash is a new concept in payment systems. It combines computerized convenience with security and privacy that improve on paper cash. It adds value to any service involving payment, and its versatility opens up a host of new markets and applications. Another form of virtual money is the one issued by NetCash, the first functional form of virtual currency. NetCash works very much like physical cash but without the physical symbols. Rather than a wallet filled with bills, we have a virtual wallet4 filled with virtual bills. These are nothing more than information, specifically a serial number identifying each bill and the monetary value it represents. Electronic money products differ in their technical implementations. To store the prepaid value, card-based schemes involve a specialized and portable computer hardware device, typically a microprocessor chip embedded in a plastic card, while software-based schemes use specialized software installed on a standard personal
computer. How does this work? An Internet user opens an account with real money at an Internet-based bank. (Other methods, such as obtaining digital cash through transfers of financial assets, are currently being experimented with.) The customer asks the bank to issue a certain amount of digital cash for use on the Internet. The bank issues this digital cash using encryption and deducts the funds from the established account.5 The digital cash is a combination of two huge integers which stand in a special mathematical relation to each other. No other person or institution can imitate this relation. Typically, four types of service providers will be involved in the operation of an e-money scheme: the issuers of the e-money value, the network operators, the vendors of specialized hardware and software, and the clearers of e-money transactions.6 Let us suppose that a customer (X) wants to purchase goods or services through the Internet. The unique data that define the digital cash are given to the merchant the customer has selected. The merchant in turn sends these data to his bank for confirmation, and the merchant's bank contacts the customer's bank. If the customer's bank confirms the digital cash, it credits the merchant's bank account by that amount. Only the bank can confirm that this digital cash is legitimate and has not been used elsewhere. The bank cannot know who used the digital cash, as long as its customers do not use it twice.

Characteristics

Security: the use of encryption

If the transmitted information is of a sensitive nature (i.e. financial data), then it needs to be protected so that only those authorized to read it may do so.
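The confirmation flow just described — the bank issues uniquely numbered digital cash and later confirms that a given note has not been used twice — can be sketched as a toy serial-number ledger. All class and method names here are hypothetical illustrations; a real scheme would add blind signatures and encryption, which this sketch omits.

```python
# Toy model of issuance and double-spend detection at the issuing bank.
import secrets


class ToyBank:
    """Issues digital 'bills' and detects double spending via serial numbers."""

    def __init__(self):
        self.accounts = {}   # account holder -> balance
        self.issued = {}     # serial number -> face value, not yet redeemed
        self.spent = set()   # serial numbers already redeemed

    def deposit(self, customer, amount):
        self.accounts[customer] = self.accounts.get(customer, 0) + amount

    def issue_digital_cash(self, customer, amount):
        """Deduct funds from the account and issue a uniquely numbered bill."""
        if self.accounts.get(customer, 0) < amount:
            raise ValueError("insufficient funds")
        self.accounts[customer] -= amount
        serial = secrets.token_hex(16)  # stands in for the signed big integers
        self.issued[serial] = amount
        return serial, amount

    def redeem(self, merchant, serial):
        """Confirmation step: credit the merchant only if the bill is
        legitimate and has not been spent elsewhere."""
        if serial in self.spent or serial not in self.issued:
            return False  # double spend or counterfeit
        self.spent.add(serial)
        value = self.issued.pop(serial)
        self.accounts[merchant] = self.accounts.get(merchant, 0) + value
        return True


bank = ToyBank()
bank.deposit("X", 100)
serial, value = bank.issue_digital_cash("X", 40)
print(bank.redeem("merchant", serial))  # first use: accepted (True)
print(bank.redeem("merchant", serial))  # second use: rejected (False)
```

Note how the bank only tracks serial numbers, not who spent them — consistent with the text's observation that the bank cannot know who used the digital cash unless a customer spends it twice.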
The science of cryptography, the science of keeping digital data secure, makes this possible.7 Encryption is the process of scrambling data into ciphers or code so that they can only be unscrambled (decrypted) by individuals who hold the key essential to this task. There are two kinds of cryptography: symmetric key and public key. The key is what we use to "unlock" a message. In symmetric key cryptography, the sender and receiver share the same key. In public key cryptography, there is a public key for sending and a private key for receiving. As noted, public key technology is used in combination with symmetric algorithms to protect messages in transit and authenticate their origins. The public key algorithm is RSA8 and the symmetric algorithm is DES.9

Negligible transaction costs and interest earnings

The high-speed communications network benefits the users of e-money and, of course, makes goods and services instantaneously available to anyone in the whole world. Moreover, banks themselves incur lower processing costs as people switch from currency notes and checks to e-money for making payments. On the other hand,
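To make the public/private key idea concrete, here is textbook RSA with deliberately tiny primes — a classic worked example, not a secure implementation (real keys are hundreds of digits long, and production systems combine RSA with a symmetric cipher, as the text notes).

```python
# Textbook RSA with small primes, purely to illustrate how a public key
# encrypts and only the matching private key decrypts. NOT secure.
from math import gcd

p, q = 61, 53                # two (far too small) primes
n = p * q                    # public modulus: 3233
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent, must be coprime with phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)          # private exponent: modular inverse (Python 3.8+)


def encrypt(m, public_key=(e, n)):
    exp, mod = public_key
    return pow(m, exp, mod)  # anyone can do this with the public key


def decrypt(c, private_key=(d, n)):
    exp, mod = private_key
    return pow(c, exp, mod)  # only the private-key holder can undo it


message = 65
cipher = encrypt(message)
print(cipher)                       # 2790
print(decrypt(cipher) == message)   # True
```

In practice the expensive public-key step is used only to exchange a symmetric session key; the bulk of the traffic is then protected by the faster symmetric algorithm, which matches the hybrid arrangement described above.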
as has just been described, it is technically feasible to supply prepaid cards that can be charged against an interest-bearing account. Under competitive pressure, banks themselves will induce people to hold and use more electronic money by offering attractive interest rates on electronic balances.

Transnationality

E-money activities are based on technology that by its very nature is designed to extend the geographic reach of banks and customers. E-money is not constrained by national borders. Those using electronic money can purchase services and goods from any site anywhere on the Internet. Such market expansion is not without certain risks. Although banks currently face similar types of risks in international banking, it is important to note that these risks are also relevant to the cross-border conduct of electronic money10 (Solomon 1997). As the evolving Internet market is international in scope, banks issuing electronic money will be induced to extend their competition into cyberspace. Accordingly, each bank will have cybercustomers everywhere in the world, and this will make the denationalization of money a reality, all the more so as geographic boundaries will, from now on, be artificial constraints. The banks will have tremendous incentives to promote their own moneys among cybercustomers: electronically initiated debits and credits, made more and more attractive through competition in banking, will tend to generate competitive private moneys and to abolish the monopolistic supply of government money. It is often argued that the monopolistic supply of money provides economies of scale at the national level. Yet the potential for bad government money is also widely proven, in practice and in theory as well.
The argument goes like this: since good money and bad money are equally probable, and since a good money at the national level concerns all the people while private competing moneys each concern only fractions of the same people, the social benefit from the national provision of money by the State is higher than that which would result from a competitive supply of money. But it is clear that if a money is bad, the extent of its use does matter, as its ill effects (inflation, unemployment, stagflation) are imposed on the whole nation. In contrast, the ill effects of a bad money in a competitive framework are more limited, while a good competitive money tends to be emulated and, owing to the freedom to choose, its use will extend. Perhaps, according to some scholars, the monopolistic supply of money is desirable for the purpose of transferring wealth (as Keynesians, for instance, would maintain) and increasing social welfare; yet it is clear that a necessary prerequisite for such money is strong barriers to exit – i.e. to choosing alternative moneys – for all those who expect to lose wealth through transfers.11 The development of e-moneys, together with competition among the issuing banks, paves the way for people to choose among alternative moneys. This possibility allows people to evaluate the soundness of the money they use and to move from one money to another. The vast potential of cyberspace and the correlative transnationality of e-moneys tend to prove that there is no reason to believe that any
particular national boundaries constitute optimum currency areas. In fact, transnationality also means that there is no special problem regarding the size of the area of a given competitive money: competition makes it an endogenous variable.

Untraceability and anonymity

Electronic money transactions are untraceable and anonymous. To achieve untraceability and anonymity on the Internet, encryption has to be fully employed. Chaum (1992) proposed untraceable electronic payment systems using advanced encryption technology. This technique offers the potential for a greater degree of privacy in financial transactions. E-money systems allow the parties to a transaction to deal with each other directly, without the assistance of a regulated financial institution. The use of e-money systems will mean fewer face-to-face financial transactions. The anonymity of electronic money will make knowing your customer much more difficult. The origins of funds are relatively opaque and the identity of the individual or entity transferring them difficult to determine. In fact, payer anonymity12 is a central characteristic of some proposed systems. For virtual money transfers, transaction anonymity could allow law enforcement to be evaded.

Failures of market self-regulation?

A further aspect for consideration is that electronic money issuers may not have sufficient market incentives to implement policies aimed at promoting their financial integrity. It is a commonly held view that issuers have a clear commercial interest in avoiding failures, but at the same time they might be subject to a number of constraints, such as pressure from shareholders to obtain high returns and reduce costs, which might induce them to implement inadequate investment policies and security measures. Therefore, regulation is seen as necessary to cushion the possible failure of market incentives.
Potential risks for transactions

Two main types of cross-border use of electronic money can be foreseen. First, the customer using e-money and the issuer may be located in one country, while the merchant is located abroad. Second, issuers established in one country may implement electronic money schemes by which they offer electronic money in another country, presumably in the customer's home currency. It is important to note that this question also concerns the status of the issuer of electronic money. The main difference among countries lies in which firms are allowed (or will be allowed) to issue e-money (European Central Bank 1998). There are several possible types of issuers: banks,13 other regulated non-bank financial firms, and non-financial firms. For instance, the latter two categories of firms are allowed to issue e-money in some states in the United States. The identity of the issuer has implications for the security of the electronic money system. In the meantime, regulations are only the response of governments.14 If
issuance of e-money is limited to banks, the regulatory framework already in place can be extended to cover the new products, but competition and innovation might be more limited. In contrast, if a greater variety of institutions can be issuers, a greater degree of competition could yield commensurate benefits. Electronic money systems face the risk of being poorly designed or implemented. For example, an electronic bank is exposed to the risk of an interruption or slow-down of its existing network system if the electronic money system it chooses is not compatible with user requirements. Electronic money products could suffer from accidental corruption or loss of data stored on a device, malfunction of an application (such as accounting or security functions), or failures in the transmission of messages. External attacks on electronic money products themselves include counterfeiting, fraud, and disruption of the system.15 An unsophisticated method of attack would be to steal consumer or merchant devices and fraudulently use the balances recorded on them. Data stored on devices could also be stolen via unauthorized copying16 (Committee on Banking Supervision 1998: 5–8). Fraud could also be attempted through repudiation of transactions made with an electronic money payment. In fact, there are various security measures in electronic money systems. They are designed to safeguard integrity, authenticity and confidentiality, and to protect against losses due to fraudulent duplication or repudiation of transactions. Of course, measures to detect and contain fraud may also have an important deterrent function and thus serve to prevent it as well. Detection measures are those taken to alert the issuer or system operator to an occurrence of fraud and to identify its source. Containment measures are intended to limit the extent of any fraud once it has been committed.
Potential exploitation for criminal activities

The criminal activities associated with electronic money schemes are tax evasion and money laundering. Should electronic money schemes offer the possibility of executing anonymous transfers of large amounts of money, they could increasingly be used for such criminal purposes. The emergence of e-money has sounded an alarm in government, especially concerning traceless electronic money systems that provide payers and payees more anonymity than they would have with paper currency. Tax evasion can be accomplished with a few strokes of the keyboard. In a world of anonymity, e-money can be transferred around the globe at the speed of light. The more government tries to tax, regulate, control, and confiscate, the greater the incentive for business and investors to leave. In an electronic payments system, we cannot both keep the present income tax system and enforce it, and at the same time keep our liberty and privacy. What are the economic consequences of tax evasion? An argument against financial privacy is that it will make the collection of some types of taxes more difficult for governments. The reduction in the tax base and in tax rates that will be required in the digital age will make the relative size of government smaller. It was
only with the rise of Keynesian political economy in the twentieth century that an ever-growing government sector came to be viewed as desirable. In the Keynesian macroeconomic tradition,17 it is recommended that the government adopt fiscal policies to manage demand; supply will take care of itself. To avoid depressions, Keynesians recommended a large government sector. Today, do we need a smaller or bigger government sector? Remember that, in the past, government spending was a relatively small share of GDP in most countries.18 Since electronic money is untraceable, leaving no well-defined records for a tax authority to follow, taxation will not be easy.19 This problem may need to be solved by a whole new view of international taxation. Reasons given by governments for increasing regulation include the need to monitor the soundness and safety of financial institutions. The cost of trying to enforce taxation may well exceed the revenue collected, and it will certainly exact a price in lost economic efficiency and lost privacy rights that exceeds the benefits of continued taxation. Government authorities cannot stop this change, because it is a worldwide change with too many people having the relevant knowledge (Hayek 1945). The rapid expansion of unregulated and untaxed activities in cyberspace is indeed seen as a threat to state power. Regulation will not be effective because progress in developing the means of evasion will always be far ahead of those trying to restrict it. In the same way, governments have largely given up trying to control the flow of information, because technology has made that an impossible task. Electronic money technologies have the potential to make money laundering much more widespread than traditional methods,20 as well as complicating efforts to fight it.
First, electronic money laundering has the potential to undermine the financial community because of the sheer magnitude of the sums involved. Perceived ease of entry to one country attracts an undesirable element across borders, degrading the quality of life and raising concerns about national security. Second, it makes crime pay. It allows drug traffickers, smugglers and other criminals to expand their operations. This drives up the costs of law enforcement and health care. Finally, laundering diminishes government tax revenue and therefore indirectly harms honest taxpayers (a kind of prisoner's dilemma). At present, most stored-value-card and electronic-purse pilot projects have established limits for consumer cards ranging up to the equivalent of 1,000 US dollars. It is too early to determine whether market pressures will cause products to evolve in such a manner as to become more or less attractive for money laundering. Moreover, the combat against money laundering by governments leads to the destruction of financial privacy. Note that countries like Switzerland, which have maintained financial privacy for their citizens, have lower crime rates than the United States, and Swiss citizens are no more subject to terrorism than Americans (Rahn 1999: 109–12). The fact is that the fears of both sides are correct. This does not justify imposing further restrictions on legitimate money transfers by honest citizens to the detriment of our liberties.
Implications for the conduct of monetary policy

Monetary policy is defined, in broad terms, as the apparatus according to which a central bank decides how to act in order to achieve its final objectives (sustainable growth, a high level of employment and price stability). As for the implications of electronic money for the conduct of monetary policy, it is necessary to take into account that the development of electronic money will reduce the use of central bank notes and coins (Bank for International Settlements 1996).

Effects of replacing central bank currency

We will examine electronic money's potential to replace central bank currency. For this, we will show how a representative agent would choose among various payment instruments once they exist. The following discussion is based on a model by Santomero and Seater (1996), used here to show the conditions and implications of an increasing use of electronic moneys.

A reduction in the demand for bank notes

Santomero and Seater study the behavior of a representative agent facing the opportunity to choose among different generally accepted payment instruments. Attention is focused on how the characteristics of these moneys affect the representative agent's choice of transaction vehicle, transaction frequency, and average balances in the various media. We present a simplified version of their model, focusing on the demand for electronic money. The representative agent receives his income Y at the beginning of a period of fixed length. During this period, he spends the entire income by buying and consuming G different commodities g, with g = 1, 2, . . . , G. There are K moneys, Mi, with i = 1, 2, . . . , K. The agent can use any or all of these moneys to buy each type of good. During the period, he makes Zgi shopping trips to buy commodity g with money Mi. Each shopping trip has a cost Bgi. The representative household spends only a fraction of his income during one shopping trip.
Unspent income is held in a single savings asset, S, and in money balances. The savings asset earns the rate of return rs, and the various kinds of money earn rMi. The return on the savings asset is larger than the return on any of the monetary assets.21 There are Ti conversion trips to obtain money Mi, and each such trip has associated with it the conversion cost ai. The household seeks to maximize the profit from managing its assets over the payment period. Because all conversion and shopping trips are evenly spaced and consumption proceeds at a constant rate, the profit function of the representative agent can be written in terms of average values of the respective assets:

$$\Pi = r_s S + \sum_{i=1}^{K} r_{M_i} M_i + \sum_{g=1}^{G} r_{X_g} X_g - \sum_{i=1}^{K} T_i a_i - \sum_{i=1}^{K} \sum_{g=1}^{G} Z_{gi} B_{gi} \qquad (13.1)$$
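The profit function (13.1) can be evaluated directly: interest earned on savings, money balances and goods, minus conversion and shopping costs. The sketch below uses purely illustrative parameter values (none of the numbers come from the original model).

```python
# Profit from managing assets over the payment period, as in eq. (13.1).
# r_M, M, r_X, X, T, a are lists indexed by money i or good g;
# Z and B are K x G matrices of shopping trips and per-trip costs.

def profit(r_s, S, r_M, M, r_X, X, T, a, Z, B):
    interest = r_s * S
    interest += sum(r * m for r, m in zip(r_M, M))     # returns on moneys
    interest += sum(r * x for r, x in zip(r_X, X))     # returns on goods
    conversion_costs = sum(t * cost for t, cost in zip(T, a))
    shopping_costs = sum(Z[i][g] * B[i][g]
                         for i in range(len(Z)) for g in range(len(Z[0])))
    return interest - conversion_costs - shopping_costs


# Two moneys, two goods, hypothetical numbers:
pi = profit(r_s=0.05, S=1000,
            r_M=[0.0, 0.02], M=[200, 300],   # cash earns nothing
            r_X=[0.0, 0.0], X=[100, 100],
            T=[4, 2], a=[1.0, 0.5],
            Z=[[4, 4], [2, 2]], B=[[0.1, 0.1], [0.0, 0.0]])
print(round(pi, 2))  # 50.2
```

The household's problem is then to choose the trip frequencies and purchase amounts that maximize this expression, which is what the first-order conditions below exploit.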
We suppose that Xgi is the amount of commodity g that is bought during a shopping trip with money Mi. Suppose that $A = \sum_{g=1}^{G} X_{gEM}$ is the total amount of electronic money spent.
Substituting appropriately into the profit equation, one can show how the household chooses Ti, Zgi and Xgi. From the first-order conditions, we obtain the expression for the demand for electronic money:22
$$M_{EM} = \left[\frac{a_{EM} A}{2(r_s - r_{M_{EM}})}\right]^{1/2} - \sum_{g=1}^{G} \left[\frac{B_{gEM} X_{gEM}}{2(r_{M_{EM}} - r_{X_g})}\right]^{1/2} \qquad (13.2)$$
According to (13.2), the demand for electronic money depends on several parameters:

• the cost of transferring electronic money onto a smart card or a computer hard drive, aEM;
• the cost per shopping trip, BgEM;
• total electronic money spending, A;
• the interest differential between the savings asset and electronic money balances, rs − rMEM.
The cost of using electronic money to buy commodity g, i.e. the cost per shopping trip BgEM, consists mainly of potential fees per purchase charged by the shop owner or the issuer of the smart card.23 It is likely that the use of electronic money will be free for consumers, because competition between electronic money issuers, and between electronic money and costless paper money, will prevent issuers from raising fees on electronic money transactions. If the cost per shopping trip is zero, i.e. BgEM = 0, the equation for the demand for electronic money MEM reduces to the standard Baumol (1952) and Tobin (1956) square-root formula:
$$M_{EM} = \left[\frac{a_{EM} A}{2(r_s - r_{M_{EM}})}\right]^{1/2} \qquad (13.3)$$
It is important to note that the conversion cost aEM is likely to be small and will decrease further as technology improves and smart card readers become more widely distributed.24 Competition among issuers of electronic money makes it likely that consumers will pay no per-transaction fees and only negligible costs. The demand for electronic money also depends on the total amount of electronic money A spent each period; A will grow because of the characteristics and low fees of electronic money. What remains to be considered is the effect of the interest differential rs − rMEM. Competitive pressure could force electronic money issuers to pay interest on electronic money balances. The smaller this differential, the larger the demand for digital money. In fact, given the low cost of transferring value onto the card, electronic money may dominate cash in the near future. So what are the consequences for the central bank?
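The comparative statics just described — lower conversion costs or a narrower interest differential raising e-money demand — follow directly from the square-root formula (13.3). A minimal sketch, with illustrative numbers:

```python
# Demand for electronic money under eq. (13.3): the Baumol-Tobin square-root
# formula that (13.2) reduces to when per-trip fees B_gEM are zero.
from math import sqrt


def demand_emoney(a_EM, A, r_s, r_EM):
    """Average e-money balance: a_EM = conversion cost per trip,
    A = total e-money spending, r_s - r_EM = interest differential."""
    return sqrt(a_EM * A / (2 * (r_s - r_EM)))


# Illustrative numbers: a smaller interest differential (issuers paying
# interest on e-money balances) raises the demand for electronic money.
print(demand_emoney(a_EM=1.0, A=800, r_s=0.05, r_EM=0.01))  # roughly 100
print(demand_emoney(a_EM=1.0, A=800, r_s=0.05, r_EM=0.03))  # larger balance
```

Conversely, as technology drives a_EM toward zero, average balances held in e-money shrink for any given spending level, since the household can convert more often at negligible cost.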
The loss of central bank seigniorage revenue

The replacement of banknotes by privately issued electronic money in retail payments implies a reduction of base money and a shrinking of the central bank's balance sheet. The substitution means a reduction in the demand for real balances in base money and, unless this kind of money is imposed by coercion (like reserve requirements), one issue is how to meet desired balances without calling into question the monopolistic power of the central bank. The replacement will likely make it more difficult for a central bank to absorb a liquidity shock flexibly. The effects of electronic money on the implementation of monetary policy will depend upon whether its primary impact is on the demand for bank reserves or on the central bank's capacity to supply these reserves. Monetary policy is based on the ability of central banks to determine the conditions that equilibrate demand and supply in the market for bank reserves. The effect on demand would result from the substitution of electronic money for reservable deposits: as e-money replaces such deposits, there would be a substantial reduction in banks' demand for settlement balances in base money. It is conceivable that a very extensive substitution could complicate the operating procedures used by central banks to set money market interest rates. Substitution of e-money for central bank currency would increase the cash holdings of banks. The banks would observe that their cash holdings exceeded the optimal amount and would return cash to the central bank, thereby increasing their reserves on the books of the central bank. Consequently, substitution for central bank currency would increase the total supply of reserves, which is equivalent to an expansionary open market operation providing additional reserves to the banking system. Central banks could be forced to step in and absorb these reserves by selling central bank assets.
However, special circumstances could arise in which the central bank might not be able to implement reserve-absorbing operations swiftly on a large enough scale25 because it would lack sufficient assets on its balance sheet. Because central bank currency is by far the largest liability of central banks,26 an extensive substitution could reduce the monetary base to the extent that it would adversely affect monetary policy implementation and eventually eliminate it altogether. Since banknotes in circulation represent non-interest-bearing central bank liabilities, a substitution of electronic money for cash would lead to a corresponding decline in central bank asset holdings and in the interest earned on these assets, which constitutes central bank seigniorage revenue. This revenue has traditionally covered central bank operating costs; it could fall substantially and become too small to cover the cost of central bank operations. If the spread of electronic money were extensive enough, the loss of seigniorage could thus become a concern to central banks, which might in consequence become more dependent on other sources of revenue, i.e. on income such as government subsidies (themselves financed by taxes).27 Such a contingency would be nonsensical and could not last long. That means that either central banking would lose any reason to exist, and the government would have nothing to do with a central bank, or the government would lose its monopoly status in supplying high-powered money and would have to compete with private money suppliers. Both cases boil down to the denationalization of money and
276 Jean-Pierre Centi and Gilbert Bougi

the substitution of the micro-profit motive for any macro-objective function defined in terms of social welfare in present as well as future periods.

Ineffective monetary policy
We will consider the potential effects of electronic money on the monetary transmission mechanism. As the whole monetary base would tend to disappear, monetary policy itself would be jeopardized.

The velocity of money
How does electronic money affect the income velocity of base money? Here, we study the implications of electronic money for the level and the stability of the income velocity of the monetary base. The income velocity of money is of interest to central bankers who rely on monetary aggregates either as indicators or as ultimate targets. A stable velocity of money is traditionally crucial to them. Although recent models of monetary policy are based on a game theoretic framework and are oriented rather towards the time-inconsistency issue, the stability of the money demand function remains, at least implicitly, a sine qua non for determining the monetary aggregate. The theoretical background underlying the use of monetary aggregates is provided by the quantity theory. If velocity and real GDP are known in advance, the central bank can control the price level by choosing the appropriate level for the money stock. To do so, however, two requisites must be satisfied: first, the velocity of money must be predictable and stable; second, the central bank must be able to determine the money stock. As we suggest that electronic money could substantially replace central bank currency, this would, assuming central bank reserve-absorbing operations, reduce the monetary base. Consequently, the income velocity of base money would increase.28 A large increase in velocity is troublesome, even if measured correctly.
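As a purely numerical illustration (not taken from the chapter), the quantity equation MV = PY shows why a shrinking base drives velocity up without bound:

```python
def velocity(nominal_gdp, base_money):
    """Income velocity of base money implied by the quantity equation MV = PY."""
    return nominal_gdp / base_money

nominal_gdp = 10_000  # PY, in arbitrary units
# As e-money substitution shrinks the monetary base, velocity explodes.
for base in (1_000, 100, 10):
    print(base, velocity(nominal_gdp, base))
```

With nominal income unchanged, each tenfold contraction of the base multiplies velocity tenfold, which is the sense in which, as Jordan and Stevens suggest, the velocity of central bank money "might approach infinity."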
As the velocity of money increased, its variability would make it more difficult to maintain financial stability. Failures to achieve monetary targets would have larger unwanted effects on nominal income, as the quantity equation suggests.

From the money multiplier to mutual fund banking
The public, by substituting electronic money for central bank currency, gives central bankers less freedom. The emergence of electronic money seems to strengthen the case for a strict monetary base rule (Selgin 1997). Yet such a prediction is not so simple. Our monetary system depends on the money multiplier, whose formula is:
M = kB, with k = 1 / (b + r − br)   (13.4)

where B is the monetary base, b the public's desired currency ratio and r the banks' desired reserve ratio (including required reserves).
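A quick numerical check of equation (13.4), with illustrative ratios:

```python
def multiplier(b, r):
    """Money multiplier k = 1 / (b + r - b*r) from equation (13.4),
    where b is the public's desired currency ratio and r the banks'
    desired reserve ratio."""
    return 1.0 / (b + r - b * r)

print(multiplier(0.10, 0.10))  # ~5.26 with both ratios at 10 percent
print(multiplier(0.0, 0.10))   # 10.0: with b = 0, k collapses to 1/r
```

With the currency ratio b driven to zero by electronic money, the multiplier reduces to the reciprocal of the reserve ratio, the simplification the chapter goes on to discuss.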
With any sort of rule (including a zero inflation rule), the central bank could always appeal to unforeseen changes in factors beyond its control. A long-standing argument against a monetary base rule is that such a rule would not allow the central bank to adjust the base in response to unforeseen changes in the public's desired currency ratio. We can conceive another argument. Electronic money seems to strengthen the case for a monetary base rule by helping to eliminate the public's desired currency ratio as a factor influencing the money multiplier. The multiplier would then simply be the reciprocal of the banking system reserve ratio. The challenge of monetary control would be simplified accordingly: with one less variable to worry about, the central bank would not need so much freedom to improvise. Yet we have to take into account two other real possibilities. First, even the emergence of electronic money is not enough to make a strict monetary base rule work perfectly. Undesired fluctuations in nominal incomes and prices could still occur as a result of unforeseen changes in the demand for money or in bank reserve ratios. Second, electronic money also contributes to reducing r in various ways. It seems clear that if currency notes become increasingly rare among the public, commercial banks will themselves be induced to economize on the use of unproductive central bank reserves. Moreover, electronic payments technology today allows real time gross settlement (RTGS) systems to operate in many countries: such a system means that settlement is completed at the time of the transaction, comparable to a cash payment. What happens at the level of ordinary transactors is also happening at the central clearing level of interbank payments.
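The contrast between deferred net settlement and gross settlement can be sketched with a toy example (the banks and amounts are invented for illustration, not drawn from the chapter):

```python
from collections import defaultdict

# Hypothetical interbank payments during one day: (payer, payee, amount).
payments = [("A", "B", 100), ("B", "C", 80), ("C", "A", 90), ("B", "A", 30)]

# Deferred net settlement: obligations accumulate and only net positions
# settle at end of day -- cheap on liquidity, but exposures build up intraday.
net = defaultdict(int)
for payer, payee, amount in payments:
    net[payer] -= amount
    net[payee] += amount

# Real time gross settlement: every payment settles individually and
# immediately, so the liquidity turnover equals the full gross payment flow.
gross_turnover = sum(amount for _, _, amount in payments)

print(dict(net))       # net positions always sum to zero
print(gross_turnover)  # total value settled payment by payment
```

RTGS trades higher intraday liquidity needs for the elimination of the end-of-day exposures that netting creates, which is why it removes the risk of a large, contagious settlement failure.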
With electronic money, all retail payments are transacted in real time, in which case there will be – due to the new monetary technology, which saves a tremendous amount of time – no need for a whole interbank clearing system (whose final purpose is precisely to save time).29 Settlements are effected continuously throughout the day, with no need for extensions of credit to the payments system by the central bank. A major step was taken by Switzerland in 1987, when it introduced an RTGS system that eliminates the risk of a large, contagious end-of-day settlement failure. More recently, RTGS has been adopted by the European Union countries.30 Real time gross settlement is probably a source of decline in reserve holding. Indeed, it can offer a powerful mechanism for limiting (reducing) risk in the interbank settlement process, because it can effect final settlement of individual funds transfers on a continuous basis during the processing day. The implication of real time settlement, in conjunction with various forms of electronic money, is that systemic risks to the payments system can be controlled without resorting to lender of last resort facilities. Moreover, banks may choose to lend to each other during the day, employing the same prudent standards by which they lend to each other overnight. In such a system, the holding of bank reserves would be useless, or at least inefficient. This information technology sheds light on the real nature of banking. One can observe that capital markets become more and more liquid and that mutual funds gain direct access to the payments system. Banking will no longer be limited to financial
institutions that undertake to redeem their deposit liabilities at par. The value of the liabilities of a mutual fund bank will, as with any mutual fund, always reflect the current value of its assets. More significantly, mutual funds are not subject to runs: there is no incentive for mutual fund depositors to form a queue to redeem investments whose value is continuously marked to market. In this system, to pay for goods or services, the buyer offers electronic money issued by a mutual fund of his choice. The seller's point of sale terminal immediately communicates through the clearing system with the buyer's mutual fund, confirming the buyer's payment instruction and the seller's instruction to transfer funds to the mutual fund of his choice. The creditworthiness of the buyer can be verified instantaneously by the seller through immediate electronic access to the buyer's wealth account. There is no longer any need to hold bank reserves. Together with electronic payments, developments in technology allow people to collect information about the creditworthiness of the business sector (the non-financial spending units). Hence, there are more and more purely financial transactions concerned with the direct transmission of saving from consumers to investment in business. The famous "new view" of the bank as a financial intermediary is becoming outdated as securitization and mutual fund shares develop (Fama 1980, 1985). By backing transaction media with equity claims, wealth can be securitized to form liquid wealth accounts. This change is also giving banks other kinds of reserve and clearing media that are more profitable than central bank deposits, owing to the transfer of mutual fund shares.
Finally, as the transnationality of e-money enhances competition in banking, competition among banks belonging to different jurisdictions with different minimum reserve requirements (even zero requirements) intensifies the pressure to reduce holdings of base money. As a result, the money multiplier will no longer make sense, government money will tend to be abolished, and outside money will disappear, together with its theoretical implications, from the lender of last resort function of the central bank to the quantity theory of money.
Towards a new monetary order
From the foregoing points it seems clear that central banks are no longer able to make the issue of electronic money exclusively a central bank activity or to give e-money the status of legal tender. The more plausible scenarios are either the disappearance of the very principle of centralized money issuing, or the immersion of the central bank within the new payments technology, so that it would become one money issuer among others, without any privileged status. An intermediate scenario would be oversight and some kind of regulation exercised by monetary authorities. In any case, base money and outside money would become ineffective concepts, as would the exogenous money supply, since the central bank's leverage over the commercial banks would disappear. By the same token, its influence over interest rates and exchange rates would be ineffective. A new monetary order would then take form.
Increased efficiency of exchanges
Transnationality provided by the use of electronic money makes international transactions more efficient in several ways (Table 13.1). To transfer money in the traditional way, conventional banks maintain many branches, clerks, automatic teller machines, and specific electronic transaction systems. The cost of all this bureaucracy is covered in part by fees for money transfers and credit card payments. Since electronic money uses the existing Internet network and the computers of its users, the cost of digital money transfer is much lower, close to zero.31

Table 13.1 Features of electronic money, currency, checks, and debit cards32

Characteristics                        Electronic money  Currency    Check       Debit card
Legal tender                           No                Yes         No          No
Acceptability                          ?                 Widespread  Restricted  Restricted
Marginal cost per transaction          Low               Medium      High        Medium
Payment finality, face-to-face         Yes               Yes         No          No
Payment finality, non-face-to-face     Yes               No          No          No
User anonymity                         Yes               Yes         No          No
To understand the importance of transnationality, let us assume that electronic money is completely domestic. That is, only a bank in a given state can issue electronic money in that state's currency. Only the citizens of the state can use this electronic money, and only with merchants, for products and services within the state. Now, with transactions completed over the Internet, exchanges become more open and less expensive. The cost of a transfer within a state is almost equal to the cost of a transfer across different states. The cost of international money transfers, now much higher than transfers within a given state, will be reduced. This lower cost makes micro-payments, of say 10 or 50 cents, possible, which in turn may encourage a new distribution system and fee structure for music, video, and computer software. This ability finally to handle micro-payments might also provide a solution for the payment of fees to authors and publishers for the use of copyrighted materials in electronic form. Electronic money payments can potentially be used by anyone with access to the Internet and an Internet-based bank. While credit card payments are limited to authorized stores, digital money makes person-to-person payments possible. The cash can be transferred through computer networks and off the computer network into other storage devices. Electronic wealth should not be restricted to a unique, proprietary computer network. An electronic money token in a given amount can
be subdivided into smaller pieces of cash in smaller amounts. Moreover, digital money does not expire. It maintains its value and even yields interest until lost or destroyed, provided that the issuer has not debased the unit to nothing or gone out of business. Thus, even very small businesses and individuals can use digital money for all sorts of transactions. Multinational small businesses will become a dynamic new force in local and regional economies. New technologies will enable people to acquire the goods they want without holding or handling money,33 which is a troublesome, non-earning asset. With electronic money, trade will be executed by the instantaneous and simultaneous debiting and crediting of liquid wealth accounts, held by both banking and non-banking institutions. The new electronic digital payments technology will enable property rights claims on real assets, such as stock and bond funds, or gold, to be utilized as the medium of exchange for virtually all transactions. In any market economy, property rights perform the crucial economic function of facilitating decentralized decision-making. Property rights enable each person, family, or firm to make decisions about the things most important to them, based on their own diverse plans and desires. The new technologies dramatically reduce foreign-exchange transaction costs. A resident of a country with a chronically weak currency could easily shift his savings into stronger ones. Indeed, there could easily be a massive flight of electronic financial assets from weak currencies to stronger ones, effectively driving the weaker, less significant currency out of existence. Is it not a sort of Gresham's law in reverse?

The future of currency competition
The arguments in favor of competing moneys are the usual arguments in favor of competition generally.
Hayek (1978) notably argued that competing suppliers of currencies would be required to provide market participants with moneys exhibiting the characteristics that are most widely desired. Users of moneys are generally assumed to prefer moneys that provide a stable expected (or at least predictable) value (Klein 1974). What Hayek did not foresee in 1976, nor did anyone else, is how a combination of technologies twenty years later would enable an end run around the forces of monopoly and inertia. Again, to review, the technologies that will make Hayek's ideal no longer a utopia but a reality are:
• the Internet: a communication system that can be accessed by all on a global basis;
• public key encryption: which enables anyone to communicate with anyone else on the globe with a degree of privacy;
• global commodities futures markets: which enable anyone on the globe to know the price of any freely traded currency or any specific commodity;
• asset securitization: enabling many assets to serve as the backing for private moneys;
• smart cards: which provide an easy and convenient way to store, dispense and collect electronic money (i.e. to buy and sell).
These new technologies protect each individual's right to privacy and provide sound money. As the current trend on the Internet demonstrates, robust economic commerce depends on a flexible, responsive monetary system, which can best be provided by unbridled market competition. Since we live today in a completely artificial monetary system, the main problem facing any producer of (base) money is to produce trust in money. Trust in money today depends above all on controlling the power of the State34 (Selgin and White 1994). The electronic money system seems poised to change the way monetary trust is created and maintained. As transnationality and competition among banks across cyberspace would induce people everywhere to choose among alternative moneys, there would be various groups of cybercustomers, each group using one money. The competing cyberbanks would have tremendous incentives to form transnational groups by establishing their own moneys and to preserve the qualities of those moneys. Money quality is mainly observed in the markets and, as Klein (1974) argued, in order to guarantee money quality the money issuer has to invest in brand name capital. Thus, for private fiat competitive moneys, brand name differentiation is essential, both for money producers and for money users. On the one hand, investment in brand name is a way to produce trust in money and to build reputation. On the other hand, the falling costs of information and communication technologies make brand identification pervasive, so that it becomes increasingly easy to compare the qualities of moneys; this would encourage the issuing firms to stabilize the expected values of their respective moneys.
As Klein (1974) and Hayek (1978) maintained, flexibility of exchange rates among competing moneys is a condition to dissuade banks from overissuing, because overissue by any single issuer would involve a disinvestment in its brand name capital and a loss of customers, as the latter would opt out of this money into a competing currency with a more stable expected value. Accordingly, adverse clearings at the level of interbank payments would no longer be necessary. Membership in a group using the same money serves as a bond assuring others that the issuing bank is reliable, while repeated and voluntary dealings are conducive to the development of trust. Therefore a good (high quality) money can spread as one group emulates the money that proves effective in another group, and as individuals can "exit" (i.e. can move from one group to another) or even belong to more than one group (i.e. use more than one money). Some important features of this possible future monetary order deserve attention.
1 The monetary area, defined as individual membership in a group using the same money, has nothing to do with the national territory. This is indeed denationalized money.
2 The size of any monetary area is endogenously determined.
3 The monetary area is not necessarily defined in terms of a specific territory or geographical area.
4 One individual may belong to more than one group, i.e. may use more than one money.
5 Many groups, each using one money, may geographically coexist.
6 No money is imposed from above.
As is the case for customary norms, monetary discipline would evolve spontaneously from the bottom up rather than being intentionally imposed by legislation. Money is as much a social norm as an object or technology. Its use value in a given form depends critically upon widespread acceptance of that form. Electronic value exchange is mainly an institutional issue, not a technical one. It is about developing a set of relationships among people, each of whom possesses only partial knowledge, and creating guarantees of general trust.
Conclusion
The issuance of electronic money is likely to have significant implications for monetary policy in the future. Many points could not be developed in our study. It must be ensured that price stability and the unit of account function of money are not endangered. A significant development of electronic money could also have implications for monetary policy strategy and control of the operational target. Electronic money could severely affect the central bank's position as the sole supplier of currency. Market forces are on the side of private suppliers of electronic money, and those forces will make a system of private competing currencies inherently stable. This monetary innovation could allow individuals to hold all their assets in highly liquid and divisible mutual-fund shares that reflect current market values and, consequently, would represent economically viable exchange media. What is money will be determined by what buyers and sellers accept and use as money rather than by government definitions. Many scholars believe that there should be a single centralized legal money within a national jurisdiction, that is, a monopolistic supply of money. Why is a monopolistic money seen as desirable? Many arguments have been introduced in monetary theory and, although they have been taken for granted in the mainstream, the necessity of governmental control over money is not theoretically proven. The emergence and unfolding of electronic money today also seems to provide practical contemporary evidence that government monopoly of money is unnecessary (Hayek 1978: 107) and that money will be more and more self-regulated through the market process. The challenge is to develop an institutional framework that provides transparent rules for the electronic payments system, safeguards the value of money, and protects individual freedom.
Notes
The authors would like to thank the referee. Acknowledgements are also addressed to Stefan Schmitz and George Selgin for helpful comments. The authors are responsible for any remaining errors.
1 In the Eastern European socialist countries, before the Berlin Wall fell, there were forms of money, but they had absolutely no importance (no value) because they had no message to convey from any monetary institution: no "money as an institution" existed whatsoever.
2 Bills of exchange and bank notes were the creation of merchants and bankers within the private sector. A related issue is whether legislation (regulation) was a source of monetary innovation; of course, historically this has often been the case, but first, innovations were made by the private sector to escape regulation (for instance, look at the expansion of scriptural money after the enactment of Peel's Act in 1844); second, one cannot argue that the innovative process is accelerated owing to legislation rather than without it.
3 The Internet is a data infrastructure that connects computers via telecommunication networks. It originated in the 1960s and 1970s, when the United States Department of Defense Advanced Research Projects Agency (ARPA) funded a small group of computer programmers and electronic engineers to redesign the way computers were operated. This resulted in the creation of the first network of computers. Since 1990, the number of users on the Internet has grown more rapidly as a result of recent developments. The Internet is merely a medium through which people transmit information to one another. It enables the formation of virtual communities and virtual corporations, including shops and banks. Cyberspace is a non-physical "space" created by networked computers. The most commonly discussed part of cyberspace is the Internet, a combination of high-speed communications lines and computer networks.
4 It is a file in our computer.
5 This new payment system deserves the name of digital cash because it is almost equal to a cash payment in terms of security, fee, peer-to-peer payment, and untraceability. DigiCash's system has been realized with the establishment of the Mark Twain Bank, which can be visited at http://www.marktwain.com/.
6 The network operators and vendors only supply technical services, while clearing institutions are typically banks or specialized bank-owned companies that provide a service no different from that provided for other cashless payment instruments. From a policy point of view, the most important providers are the issuers, since e-money is a balance-sheet liability of these institutions.
7 Cryptography was originally developed by the military for sending secret messages past the prying eyes of enemy forces.
8 There are many secure modern public key algorithms. RSA, named after its inventors Rivest, Shamir, and Adleman, is the most widely used, primarily because of its simplicity and because it works well.
9 Data Encryption Standard (DES) remains the most frequently used.
10 Banks may face different legal and regulatory requirements when they deal with customers across national borders.
11 The argument here is drawn from the comparison made by Osterfeld (1989) between "good" and "bad" legislation at the national and local levels, respectively.
12 The identity of the party initiating a cyberpayment value transfer.
13 Credit or deposit-taking firms are defined differently in different countries.
14 As we have said, the issuer of e-money could be a non-bank financial firm as well as a non-financial firm. Governments therefore feel a need to regulate not just who can issue e-money but also the types of e-money product that can be offered. For example, restrictions might be placed on the maximum value that consumers and retailers are
allowed to hold or on user-to-user transactions, or scheme operators might be required to monitor transactions.
15 For example, credit cards are subject to fraud and counterfeiting losses that are estimated to be well over one billion US dollars each year.
16 For example, in a note-based system, an attacker could intercept messages between a genuine user and an issuer, or insert an unauthorized software program into a user's personal computer that enables the attacker to copy electronic notes stored or in transmission, and then use the notes to perform transactions.
17 According to Keynesian theory, business cycles are directly related to weaknesses in the demand side of the economy. The theory is that since government expenditure is a significant accounting component of demand, it is possible for the government to stimulate demand by increasing spending.
18 They show that, in OECD countries, total government spending as a percentage of GDP rose from an average of 8.3 percent in 1870 to 47.2 percent in 1995.
19 Suppose an American software developer uses a server in France to sell his software, say to a customer in Prague. Which sales tax rate should be applied, and by whom? Which country should benefit from the tax?
20 According to the Financial Action Task Force, an inter-governmental body created by the G-7 countries in 1989, estimates of the amount of money laundered annually worldwide from the illicit drug trade alone range between 300 and 500 billion US dollars.
21 We suppose that r_s > r_Mi > r_Xg, where r_Xg is the rate of return on goods.
22 For more details regarding the mathematical demonstration, the reader may contact http://junon.u-3mrs.fr/afa10w21/.
23 It also includes the expected cost of loss and theft, and the expected cost of a card becoming unusable. Expected costs of loss and theft are similar to those for paper currency.
24 The cost of conversion includes the opportunity cost of time spent on this activity (for example, walking to a smart card reader) and per transaction fees charged by the issuer of electronic money and/or the provider of the telecommunication service.
25 For example, to sterilize the effects of large purchases in the foreign exchange markets.
26 Cash is a large or the largest component of central bank liabilities in many countries, so that a very extensive spread of electronic money could shrink central bank balance sheets significantly.
27 Moreover, even a moderate loss of seigniorage could be of concern to some governments, particularly in countries with large budget deficits.
28 Jordan and Stevens (1997). The authors suggest that the income velocity of base money could approach infinity: "what may be new and different about the 21st century is the possibility that the velocity of central bank money might approach infinity." That does not exclude, as we have previously written, the constitution of bank reserves in electronic units.
29 Rahn (1999: 156–7).
30 The Group of Ten is now using RTGS systems. The use of RTGS systems is also growing outside the Group of Ten and the European Union. For example, RTGS systems are already in operation in the Czech Republic, Hong Kong and Thailand, and it is reported that, among others, Australia, China and Saudi Arabia will introduce RTGS systems in the near future (see Bank for International Settlements 1997, "Real Time Gross Settlement Systems," March).
31 If the cost of connecting to the Internet and personal computers is taken into account, the cost of electronic money is high. But with the recent explosion of the Internet and its attraction for businesses, banks and individuals, the actual cost of electronic money transfers will be recognized as negligibly small.
32 It is a summary table (European Central Bank 1998).
33 The estimated annual costs of handling central bank currency by US retailers and banks are 60 billion US dollars, which includes the costs of processing and accounting of money, storage, transport, and security.
34 Historical monetary research shows that banking regulations have increased the frequency of recent bank failures. In the United States, federal deposit guarantees have undermined depositor discipline and encouraged banks to take greater risks.
References
Baumol, W. (1952) "The transactions demand for cash: An inventory theoretic approach," Quarterly Journal of Economics 66: 545–56.
Chaum, D. (1992) "Achieving electronic privacy," Scientific American, August, 96–101.
Duffy, J. and Ochs, J. (1999) "Emergence of money as a medium of exchange: An experimental study," American Economic Review 89(4): 847–77.
Fama, E. (1980) "Banking in the theory of finance," Journal of Monetary Economics 6: 39–57.
Fama, E. (1985) "What's different about banks?," Journal of Monetary Economics 15: 29–39.
Hayek, F.A. (1945) "The use of knowledge in society," American Economic Review 35(4): 519–30.
Hayek, F.A. (1978) Denationalisation of Money: The Argument Refined, Hobart Paper no. 70, London: Institute of Economic Affairs (1st edition 1976).
Jordan, J. and Stevens, E. (1997) "Money in the 21st century," in J. Dorn (ed.) The Future of Money in the Information Age, Washington: The Cato Institute, pp. 115–25.
Kiyotaki, N. and Wright, R. (1989) "A contribution to the pure theory of money," Federal Reserve Bank of Minneapolis Staff Report 123: 37.
Kiyotaki, N. and Wright, R. (1993) "A search-theoretic approach to monetary economics," American Economic Review 83(1): 63–77.
Klein, B. (1974) "The competitive supply of money," Journal of Money, Credit and Banking 6(4): 423–54.
Menger, C. (1871) Principles of Economics, New York: New York University Press.
Menger, C. (1892) "On the origin of money," The Economic Journal II(6): 239–55.
Osterfeld, D. (1989) "Radical federalism: Responsiveness, conflict, and efficiency," in G. Brennan and L. Lomasky (eds) Politics and Process: New Essays in Democratic Thought, New York: Cambridge University Press, ch. 6, pp. 149–73.
Rahn, R. (1999) The End of Money and the Struggle for Financial Privacy, Washington: Discovery Institute.
Santomero, A. and Seater, J. (1996) "Alternative monies and the demand for media of exchange," Journal of Money, Credit, and Banking 28(4).
Selgin, G. (1997) "E-money: Friend or foe of monetarism?," in J. Dorn (ed.) The Future of Money in the Information Age, Washington: The Cato Institute, pp. 97–100.
Selgin, G. and White, L. (1994) "How would the invisible hand handle money?," Journal of Economic Literature 32: 1718–49.
Smith, V. (1936/1990) The Rationale of Central Banking and the Free Banking Alternative, Indianapolis: Liberty Press.
Solomon, E.H. (1997) Virtual Money: Understanding the Power and Risks of Money's High-Speed Journey into Electronic Space, New York: Oxford University Press.
Tobin, J. (1956) "The interest-elasticity of transactions demand for cash," Review of Economics and Statistics 38(3): 241–47.
Other reading
Bank for International Settlements (1996) Implications for central banks of the development of electronic money, Basel.
Basel Committee on Banking Supervision (1998) Risk management for electronic banking and electronic money activities, Basel.
European Central Bank (1998) Report on electronic money, Frankfurt.
Part VII
The legal framework
14 The emergence of private lawmaking on the Internet
Implications for the economic analysis of law
Elisabeth Krecké
Introduction
Most lawyers share the prevailing legal conception in which command, obedience and repression appear as essential attributes of law, and according to which positive law is the only possible form of public order. As a consequence, they are generally reluctant to consider private, alternative means of overcoming legal problems as being part of what they call law. The Internet, however, poses a direct and obvious challenge to the tradition of legal positivism. Not only does the new economy make apparent the limits of traditional, centralized forms of lawmaking but, more fundamentally, it allows for the emergence of a new, decentralized process through which individual actors themselves develop and experiment with rules, practices and solutions that help them face the problems posed by electronic transactions. This chapter discusses the legal nature of these emerging, self-regulating market practices. Can they produce anything that might be labeled law? Can they engender legal rules, procedures or institutions that compete with the traditional, centralized systems of Internet governance? These questions lead us to more general issues, such as who drives the evolution of the legal system, and by which forces the law is shaped and adapted to change. We will therefore have to confront the problem of the definition of law, an issue which has always been a fundamental and controversial preoccupation of legal theory, but which has become newly topical in recent years with the development of the Internet. The spontaneous emergence of a private, self-governing process within cyberspace, and its astounding ability to address major problems that previously might have been thought to require a centralized form of law production, indeed deeply challenges our common understanding of law.
Emphasizing the decentralized and private nature of the demand for legal change that accompanies the development of the new economy, this chapter argues that the shaping of legal evolution sometimes emanates from sources outside the rationale and the doctrines that have hitherto governed the law. In other words, the legal system may, to an important extent, advance through events and decisions that are extra-legal. While social sciences such as sociology, history, anthropology and of course legal philosophy have been invoked to study the relationships between social, economic and technological changes and the extent
to which these changes may engender legal change, the role of economics has been largely underestimated. Nevertheless, the methodology of economic analysis brings to the debates of legal theory a new perspective which could be particularly enriching in fields often described as legal voids, such as the seemingly ungovernable cyberspace. My purpose is to explore some of the ways in which economic analysis can contribute to an understanding of the relationship between the economic order and the evolution of legal systems. To go a step further, I will try to discern the implications of economic theory, and in particular of the interdisciplinary movement of law and economics, for legal theory, and in this sense to consider the role economics might play in driving legal change.
Competing governance systems for the Internet
Shortcomings of traditional forms of lawmaking in the context of the new economy
The Internet continually raises a range of new and complex legal problems which, if they are not solved rapidly and efficiently, risk impeding the development of the enormous potential that the new economy is likely to offer our societies. Yet the non-geographical and decentralized character of the Internet undermines many traditional, territorially based legal institutions, which in many situations prove largely unsuited to the task of governing such a borderless, changeable and rapidly growing entity. Because the Internet is not a conventional territorial entity, it is difficult to apply current, territorially based legal rules in this context. Strengthening the role of international law, or even establishing a supraterritorial, internationally based model of governance, are repeatedly discussed options, yet they would pose considerable problems (Gould 1997). Given the complex nature of the Internet, the idea that a centralized institutional structure could span the impressive range of issues raised by the World Wide Web seems rather dubious and even undesirable. Such an organization would have to cope not only with problems of jurisdiction, questions of the content of policies and regulations, major issues such as domain names, and countless minor, concrete issues at the same time, but also with far-reaching questions such as the balancing of powers within the new organization, the legitimacy and enforcement of its decisions, the risk of capture of the organization by particular governments or industrial rent-seeking interests, the protection of unpopular minorities, and so on (Johnson and Post 1997). Considerable informational as well as motivational obstacles cast serious doubt on the efficiency of such a model of governance for the Internet.
National legal systems are not really suited to the task either, because they can hardly control activities whose physical location cannot be established with certainty. Interventions by a national authority in an international context could have only a partial impact, unless the authority seriously transgresses its legitimate power. Legitimating national policies that are likely to directly or indirectly affect the jurisdictions of other countries would be a highly problematic enterprise. The problem of the relationships between different national legal regimes, as well as the
conflicts that may arise among governments and already existing network authorities, international treaties, organizations and the like, are important obstacles standing in the way of traditional, centralized forms of Internet governance. More importantly, unforeseeable side-effects risk transforming well-intended efforts to regulate cyberspace into a subtle threat to the development of the new economy emerging within cyberspace.1 The difficulty of governing the Internet in a centralized way furthermore engenders a vagueness which inevitably leads to a dilution of responsibility among the actors involved. Indeed, it is not easy to identify unambiguously the legal liability of multiple actors such as domain name registries, sysops and users, nor is it easy to define in a satisfactory manner offenses with respect to online activities (for instance in the fields of financial criminality, infringement of privacy, or violation of property rights). Complexity is enhanced by the fact that the Internet involves various, highly different branches of law. The governance of the Internet indeed has a national and at the same time an international dimension, and most of the time national and international concerns overlap. Furthermore, it involves constitutional, public and political characteristics, as well as private law features. The biggest challenge for any centralized model of Internet governance is, however, the speed of technological evolution, which sometimes renders laws obsolete as soon as they emerge, because market conditions have changed between the conception of the laws and their application. Permanent and rapid changes in technology put unprecedented pressure on the law. Regulating the new economy would involve long and difficult processes of learning and adaptation in an entirely new, unknown and changeable context, for which history offers no reference model.
Legislators and regulators are often hopelessly overtaken by events in cyberspace and have serious difficulty not only in reaching their stated objectives, but even in defining adequate objectives. Attempts to anticipate future changes in technology have in many cases led to the creation of highly complex, sometimes unreadable and thus unworkable pieces of legislation.2 The Internet is also throwing up technological developments that traditional lawmaking was not prepared, for political reasons, to deal with, such as private forms of encryption. So far, encryption technologies, considered a military weapon, have been monopolized by public authorities. For reasons of national security, but certainly also because encryption provided them with a significant instrument of power, governments were generally reluctant to open this market to competition. Yet, under the strong pressure of technological evolution within cyberspace, a complex movement towards the liberalization of cryptographic policies has appeared over the past few years in most industrial countries, leading to the emergence of a private market for encryption products in cyberspace.3 The encryption example illustrates particularly well the fact that legislators have to deal with the often conflicting demands of politics, as well as with the consequences of the interaction of legislation with existing economic or legal policies, in particular international policies. Legislators further have to face the strategic behavior of giant, conglomerate trusts which largely dominate the fields of media and communication,
and which use their extensive market power to lobby for laws and policies that support their industrial interests.
The comparative virtues of Internet self-governance
It is precisely the shortcomings of centralized, top-down schemes of governance that may explain why the new economy could flourish the way it did. Because it could largely escape authoritative intervention, the Internet community had the chance to develop and continually test its own devices for coping with new, emerging problems. Domain name and IP address registries, for instance, have been setting up rules regarding the conditions to impose on the possession of an online address (Johnson and Post 1997: 67). Sysops are adopting rules determining which users to allow to sign on, which filters to install, with which other systems to connect, which local rules to adopt, and so forth (Johnson and Post 1997: 67–8). Users are developing genuine standards of behavior to be applied, for instance, in newsgroups or chat rooms. Diverse Internet charters and codes of ethics recapitulate consensual, more or less informal and constraining rules of conduct prevailing within cyber-communities with regard to issues like politeness, privacy or copyright. Concerning electronic transactions, users have started to elaborate and customize sets of commercial norms and rules which, with reference to the medieval "lex mercatoria," have been summed up under the label "lex electronica."4 As the building of reputation plays a crucial role in the context of Internet transactions, in which anonymity undeniably creates a climate of mistrust and uncertainty, more and more sites, providers and organizations grant seals to traders who have proved their integrity and trustworthiness over longer periods of time. These seals are progressively accepted within cyber-communities as useful signals of reliability. In cyberspace, contract has naturally imposed itself as a primary element of self-regulation.
Contracts can to a large extent avoid conflict by specifying the national laws that govern the deal. More importantly, they may allow parties to modify the legal rules that apply to their transactions, in order to adjust them to their needs. As a consequence, new contractual arrangements, adapted to the specificity of dematerialized business, are constantly developed and spread throughout the Internet. They are designed to cope with the particular problems posed by commerce in a virtual world, where consequential new needs appear, such as the necessity to protect the integrity and confidentiality of messages. Encryption is an additional method of protecting privacy and property in the context of digital business. The legal problems raised by the Internet, in particular with respect to copyright, create among economic actors an unprecedented interest in encryption technologies. As mentioned earlier, a new, private market for encryption products is emerging within cyberspace, rendering obsolete most of the strict legislation that has prevailed so far in this field. The Internet permanently gives rise to new issues of protecting intellectual property rights that are unique to the technology, and for which the established legal system is largely disarmed. Based on the protection of tangible goods, the traditional legal system is not really fitted to deal with forms of property "that can appear with the speed of light"
(Schwartzstein 1994: 1068). Solutions, however, are coming from the persons concerned themselves, who have started to develop new forms of encryption. Allowing for a considerable reduction of transaction costs, encryption technologies developed on the Internet soon appeared to be indispensable factors in the development of the new economy. Keys and passwords not only permit traders to select those they want to deal with; more fundamentally, encryption produces trust and confidentiality among contractors, insofar as it provides the means to create a high degree of assurance of authenticity (Benson 2000: 26). As violators can be expelled instantaneously, it also allows easy enforcement (Benson 2000: 26). By assuring innovators control over new, digital objects (new software, for instance), encryption technologies furthermore function as what Mackaay calls virtual fences: extensions of property rights to new objects, extensions which emanate not from legislation or judicial decision, but from those who commercialize the new objects (Mackaay 1997, 2000). Similarly, specific marketing techniques, such as regular updates, or tying arrangements like online assistance for legitimate customers of software, can be viewed as fencing techniques adapted to cyberspace. Here again, the efforts to protect legitimate property emanate from the interested persons themselves (Mackaay 1997, 2000). In the absence of efficient traditional copyright protection in the field of digital property, property owners indeed had no alternative but to elaborate for themselves new, workable ways to protect their online activities. The development of private means of property protection becomes an important variable in economic choice when protection from the traditional legal system largely fails. Software producers, for instance, could not afford to rely on their governments to identify and sanction the numerous users without licenses.
For a long time they have been exploring their own technical solutions allowing for relatively efficient protection of their property interests. Similarly, the music industry, seriously concerned about the impact of the increasingly effortless technological possibilities for downloading music from the Internet illegally, decided not to wait for legislative, regulatory or judicial solutions. Instead, it either promotes the development of protection technologies itself, or it tries to adapt the commercialization of music to the new conditions of the market, thus contributing to a large-scale process of discovery and experimentation with new fencing techniques on the Internet. The Internet not only provides private means to foresee and prevent, as much as possible, disputes in electronic transactions. It is also developing devices which help cybernauts cope with disputes once they have occurred. In this perspective, services for settling disputes online have multiplied over the past few years. Private and semi-private initiatives such as "e-resolution," "Online Ombuds Office," "Virtual Magistrate," "CyberSettle" or "ClickNSettle" are examples of online arbitration and mediation which have had considerable success within cybercommunities. Thus, a growing number of successful rules, settlement systems and fencing techniques created by individuals or small groups seem to be increasingly approved by large communities of cybernauts as alternative means of dispute prevention and resolution. Mechanisms that reveal themselves as profitable for many users
(allowing them to efficiently protect their interests in online transactions by reducing the costs linked to problems of trust, credibility and uncertainty), rules that everyone has an obvious interest in following, progressively expand to larger communities and finally become generally accepted. The search for profit appears to be the driving force of this process. As Mackaay points out, private means of dispute prevention and resolution, in particular fencing techniques, are themselves scarce goods, like the objects which they fence in. Just like any other good, they are themselves subject to property rights (Mackaay 2000). The inventor of a new fencing technique may not only use this technique to protect his own property interests, but may sell it to other property owners. This is a way to recoup at least part of his investment in the production of the technique. The shortcomings of traditional law and the existence of profit opportunities linked to the production of fencing techniques create incentives for entrepreneurs to become builders and inventors of fences, even without themselves having a property interest to protect. Likewise, online services for settling disputes have become flourishing enterprises for those who provide them. The same applies to the emergence of online firms offering services that favor the quick development of communities of Internet traders (so-called "cyberclubs" or "islands of trust"), within which trade takes place under more trustworthy and hence safer conditions. Entrepreneurial decisions thus initiate most developments that take place in cyberspace, including the production of devices to cope with legal problems.
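The "assurance of authenticity" that the chapter attributes to cryptographic keys can be sketched in a few lines of Python using the standard-library hmac module. This is a purely hypothetical illustration under simplifying assumptions: the shared key, function names and message are invented, and a shared-secret message authentication code stands in for the richer public-key schemes actually used in electronic commerce.

```python
import hmac
import hashlib

# Hypothetical shared secret between two trading partners.
KEY = b"shared-secret-between-traders"

def sign(message: bytes) -> str:
    """Compute a message authentication code (MAC) for a message."""
    return hmac.new(KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Return True only if the message is authentic and untampered."""
    return hmac.compare_digest(sign(message), tag)

order = b"buy 100 units at 5.00"
tag = sign(order)

print(verify(order, tag))                     # True: authentic order
print(verify(b"buy 100 units at 0.05", tag))  # False: tampered order
```

The point of the sketch is economic rather than technical: a holder of the key can cheaply prove that an order really originates from a trading partner and has not been altered in transit, which is exactly the trust-producing function that makes such techniques marketable "fences" in their own right.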
The dual nature of law production
The mechanisms developed by Internet users to help them overcome the obstacles that arise with the growth of new technologies are in no way tied to the law of any particular nation, nor do they require approval by any government. They work to a large extent without lawyers, court decisions, or the intervention of any governmental policy-making apparatus. A question of importance for the orientation of legal policy for the Internet is to determine whether these rules and devices are merely extra-legal means allowing individuals to deal with legal problems, or whether they can truly be understood as legal means. When the individuals concerned, in a process of private, entrepreneurial innovation, are continually experimenting with, developing and exchanging new technological and contractual devices that allow them to enhance the security and profitability of their transactions on the Net, do they produce anything that might be called law?
Problems of definition
On what grounds could encryption technologies or marketing techniques effectively be enforced by the legal system as a form of new, experimental5 property rights? Can private services for settling disputes online be compared with courts producing law? Can cyberclubs realistically produce trust and security in online transactions in the absence of the support of traditional enforcement mechanisms
by the established legal system? What is the legal validity of contractual norms that are enforceable only within particular, private communities? How could private codes of conduct, which at best may induce voluntary commitments to respect given ethical rules, be considered lawful, when in fact they are not binding at all? There is no way to sanction their violation systematically, as it is very problematic to deploy physical force against lawbreakers on the Internet. Furthermore, how are we to deal with the fundamental legal notion of equality before the law, which does not make any sense in cyberspace? Built on a decentralized architecture, the Internet has generated many different systems, each with its own set of rules governing the behavior of users. This plurality of rule-systems in principle allows users to move among different spaces, choosing the system they want and deserting the one whose rules they find repressive. Different rule-systems may apply to the same individual, and applicable rules are far from being the same for all users, because they depend on where the users go on the Internet and on how their providers deal with incoming messages. As a result, the decentralized systems of rules coexisting on the Internet are largely inconsistent. The adoption of consensual rules of conduct on the Internet would thus eliminate features of the law which are traditionally considered essential. Should they therefore not merely be considered a matter of custom or ethics (Johnson and Post 1997: 79)? What are the criteria which allow us to accept certain arrangements as lawful, while ruling others out as unlawful? What are the minimal requirements by which a set of social norms and rules can be deemed a legal system? The answers to these questions largely depend on the meaning given to the notion of law. There is indeed no consensus among writers about a precise definition of law.
The term is used in so many different senses, and in every language it has various connotations. Writers from different disciplines give it very different meanings. Legal scholars have different definitions from sociologists, psychologists, philosophers, anthropologists, historians or economists, and even among scholars of a given discipline there is rarely a consensus as to what law is.6 Nevertheless, in spite of numerous intrinsic differences, most contemporary legal scholars continue working in the tradition of mainstream thinkers7 and rest their visions on the conventional foundations on which western legal thinking has been built, thus assigning the concept of law a clear, objective and universal meaning. Closely related to the historical predominance of legal positivism, the mainstream legal conception largely tends to identify law with command and obedience. In this perspective, law is defined as the coercive command of a sovereign, backed up by the threat of force through sanctions. Sovereignty and law thus appear as correlative notions (D'Entrèves 1963: 66).
Internal and external sources of law
The notion of law embodied in contemporary legal systems is largely the outcome of the tradition of legal positivism. Legal positivism has engendered the currently prevailing forms of territorially based and centralized lawmaking, in which the legislator originates the formal statement of the law. More importantly, the positivist
vision provided the legal system, as it presently exists, with strong arguments to claim superiority over any other type of production of rules and norms. If sovereignty is considered the essential condition of legal experience, then of course the previously described norms and practices emerging on the Internet cannot accurately be viewed as law. If command is the essence of law, then it is not possible to consider private, alternative means of dispute resolution and prevention as lawful. Yet their emergence can be understood as a very efficient response to the shortcomings of the traditional legal system in the radically new and changing context of the virtual economy. Providing users with clear information regarding the behavior required in any particular online space, the various systems of rules coexisting on the Internet do not lead to chaos and anarchy; they undeniably allow for the formation of a kind of online order (Johnson and Post 1997: 79). They undoubtedly provide operative, widely accepted and applied means of solving interpersonal disputes. Is this not what we expect from the legal system? Is this not what makes law intelligible? Is this not its purpose in society? If we understand law this way, then the decentralized process of rule creation on the Internet might well be considered part of law. The Internet thus fundamentally challenges the prevailing conception of law. More than any other contemporary phenomenon, the emergence of private mechanisms to overcome legal problems raised by the Internet emphasizes the fact that there are two aspects to law production: besides the traditional centralized forms of lawmaking, there have always existed in society manifestations of spontaneous, decentralized forces producing norms and rules of conduct in reaction to economic and social change.
The law as it presently exists is, in its origin, not merely the outcome of the deliberate will of public forces, but is to a large extent the result of such alternative, private forces, which over time have been integrated into the official legal framework (Hayek 1973). In this sense, both types of forces, forces from inside and outside of the existing legal framework, can be seen as equally sound sources of law.
The impact of economics on legal theory
Understanding and explaining this dual nature of law production is of crucial importance for legal philosophy, but amazingly the study of decentralized, private sources of law has traditionally been the object of economic rather than legal theory. As a matter of fact, economists are faced with much the same questions as legal philosophers. Studying human action, or rather interaction, they have an obvious concern for the nature, form and structure of the mechanisms that regulate individual behavior and promote social coordination. In this respect, they are naturally concerned with law, and in particular with its relationship to alternative, private means of conflict prevention and resolution.
Economic perspectives on legal phenomena
Austrian economists in particular very early sought to understand such "spontaneous" structures, which emerge within a market economy as the result of "human action, not human design" (Hayek 1960: 35). In order to grasp the complexity of the legal relations governing society, as well as the interrelations and mutual interaction between legal, political, economic and social phenomena, the Austrian approach reduces these phenomena to their modus operandi. The Austrians' major concern is to explain what makes the law understandable, why it exists, how it works and what purpose it serves in society. This perspective makes clear that the nature of law cannot be fully understood unless we go beyond the restrictive view according to which law is simply command or prohibition. The Austrian enquiry into the raison d'être of law emphasizes the fact that there is more to law than just a compelling aspect. Law is broadly understood as a conceptual framework for preventing and resolving disputes, its primary function being to teach,8 to qualify,9 to inform10 and to communicate what is just and what is unjust conduct.11 In this perspective, a law which lacks authoritative sanction may be considered law, whereas there may be rules which take the form of commands and yet cannot accurately be called laws. If we define law as a mechanism that allows us to solve interpersonal conflicts in an efficient way, and hence contributes to producing a form of order, we may consider lawful rules and practices that have no textual support in any constitution. Insisting on the relationship between social relations and legal institutions, the Austrian interpretation of legal phenomena offers a very particular perspective for understanding the law.
On the one hand, it refers to the function of law (in terms of information and coordination of individual actions); on the other, it emphasizes factors such as the passage of time, change, error and learning, thus providing a vision of law as an experimental, open-ended, and in this sense purposeless process,12 by which individual entrepreneurs (legislators, judges, lawyers, private market participants . . . ) seek to find solutions adapted to their needs. Austrians describe legal rules as standards of conduct which, in their origin, often appeared as responses to specific and individual problems, but over time revealed themselves as particularly successful in the resolution of recurrent interaction problems of similarly situated parties, therefore spread within larger communities, became generalized and adapted to change, and, after numerous iterations, finally became institutions which provide information and certainty to all members of society (Hayek 1960: 58). Understanding law as a social institution allows Austrians to consider as legal the mechanisms developed by private market participants in order to overcome problems of coordination. Already at the end of the nineteenth century, Menger (1883) referred to these phenomena as the unintended outcomes of individual actions undertaken in order to satisfy individual and social needs, calling them "organic" or "rational" law. This form of lawmaking is strictly distinguished from the one that is the outcome of a deliberate act of creation. Organic law, according to Menger, is the law made by individuals, not legislators. It emerges
as a response to a demand for legal change emanating from the individuals concerned themselves and is, because of its peculiar evolution, particularly adapted to people's economic and social needs, as well as to changes in the conditions of life in society. Hayek later described this decentralized, bottom-up form of lawmaking (in the sense of coming from the individuals) as spontaneously evolved law.13 Here again, the relevant criterion of the lawfulness of a rule is its performance in the resolution of large-scale, recurrent problems of interaction. A similar kind of argument has been endorsed by the contemporary mainstream economist Oliver Williamson, who on many occasions refers to the efficacy of various forms of spontaneous governance, such as private mechanisms for resolving contractual disputes, insofar as they allow for the emergence of what he calls a private ordering among contracting parties.14 The parties concerned, rather than remote judges, are supposed to have better knowledge of the specific problems they have to deal with in their respective industries, and hence to be better suited to develop adequate responses to these problems. Private responses to legal problems, such as mediation and arbitration, are supposed to reflect the needs of the parties concerned more directly and accurately than does traditional lawmaking. Insisting on their economic rationale, Williamson (1979) clearly regards private institutions of dispute settling as an important complement to contract law.15 Other studies have attempted to extend the argument of the relative efficiency of self-enforced as opposed to court-enforced commercial dispute resolution systems to a number of other areas of law. Ellickson (1991), for instance, envisages the possibility of an "order without law" in the field of property rights.
It can be argued that interested persons themselves “may find creative ways to make de facto property rights more effective and credible” (Riker and Weimer 1993: 100). Insisting on the social norms that appear and operate outside the existing legal system and that do not depend on any governmental enforcement of property and contract rules, Ellickson’s work in particular has provided a basis for a new line of enquiry within the law and economics movement which, in a variety of social settings, seeks to identify the mechanisms that explain the emergence of what Lemley (1999) calls a “true private ordering.”16 Many of these law and economics studies are essentially descriptive.17 Their purpose is limited to understanding how social structures and informal rules emerge and develop at the margin of law. In recent years, however, prescriptive theories have multiplied, suggesting that the existing legal framework should, at least to some extent, take into account and enforce spontaneously emerged social norms.18 Others advocate a legal system built exclusively on private means of dispensing justice (Benson 1989). Could we consider the private practices emerging on the Internet as an illustration of the phenomena that economists have described as private or spontaneous orders? While most Austrian writings in particular remain essentially abstract, some authors have recently applied these insights to the concrete process of private rule creation on the Internet. Taking his cue from Hayek, Bell (1997) for instance calls this process “polycentric law.”19 Johnson and Post (1997) qualify it as “decentralized, emergent law.” For Benson (2000), it is a form of “customary law,” comparable to the law that arose in the context of international
Private lawmaking on the Internet 299
trade. Referring to Mises (1957: 192), the author argues that a norm or a rule of obligation may well start out as a simple convention or contractual promise for some individual, yet it may progressively spread through the relevant community and become widely recognized and accepted, finally acquiring the status of custom. Inspired by Hayek, Benson (2000: 11) assumes that “customary norms evolve spontaneously from the bottom up rather than being intentionally imposed by a legislator, and they are voluntarily accepted rather than being imposed, even though there may never have been an explicit statement declaring that they are relevant.” In a dynamic system of customary law like the one generated by cyber-merchants, contracting appears as the most important source of new rules. In other words, existing customary rules or norms are not inflexible and slow to change; through negotiation and contractual innovation, they can easily and rapidly be adapted to new circumstances. For Benson (2000: 11), the private contractual and commercial rules emerging on the Internet are “powerful norms with obvious legal character.”

Towards an extension of the notion of law

In contrast to the majority of legal theorists, economists tend to adopt a polycentric vision of law. Economists acknowledge much more openly the fact that law is more than positive jurisprudence, that it has public and private origins, and that it is created as the result of an array of complex interrelations between legal and extra-legal phenomena. It is in particular institutional economics which, from its beginnings in the late nineteenth century, has been directly concerned with the analysis of the mutual interrelations between legal and economic processes. For this line of thought within economic theory, “the law is a function of the economy, and the economy (especially its structure) is a function of the law . . .
[Law and economy] are jointly produced, not independently given and not merely interacting” (Samuels 1989: 1567). It is through the “legal–economic nexus” (Samuels 1989) that the structures of the law and of the economic system develop, each serving as both dependent and independent variable in the construction of legal–economic reality (Mercuro 1998: 39). As described by Samuels (1989: 1578), the legal–economic nexus is “a continuing, explorative, and emergent process, through which are worked out ongoing solutions to legal problems.” Recognizing the evolutionary nature of legal–economic relations, many contemporary economists acknowledge more readily than legal theorists the idea that an important part of law might have an experimental and entrepreneurial20 origin. Indeed, some describe the logic of production and innovation in the context of the new economy as a “decentralized discovery process,”21 in which new ideas are constantly brought up, disseminated, tested, improved, adapted and imitated, within increasingly shorter periods of time; and they apply the same insight to the understanding of the production of technological or contractual devices to cope with legal problems, presupposing that private lawmaking on the Internet is inseparable from entrepreneurial innovation.22 These economists conceive that law, like any idea, emerges on the Internet above all as the fruit of human imagination and creativity, and that the individual quest for profit motivates its
advent. In this respect, law on the Internet is diffused not by decree, but through practice. An economic understanding of law ultimately leads to an important extension of the notion of law, one which takes into account the complexity and heterogeneity of legal sources, as well as the unpredictability of legal change. A perspective which at first merely seeks to reduce the comprehension of law to an understanding of its raison d’être in the end allows for a far-reaching understanding of legal phenomena. Considering private mechanisms of rule creation as a genuine form of lawmaking opens up unexpected perspectives in terms of the democratization and decentralization of the rule-making process. If we admit that the new technologies of information and communication allow individuals and cyber-communities to participate effectively in diverse stages of law production, we imply that clashing rules may coexist within the Internet, with no hierarchy between them, and that a form of competition may arise between different systems of rules. Diversified regimes indeed emerge as a result of the complex interplay between sysops defining the terms of use and users making choices regarding their preferences for systems and services (Elkin-Koren and Salzberger 1999: 255–6). Internet service providers, list moderators or website owners have incentives to constantly adapt their rules to the changing needs and requests of users, and users who do not agree with such rules can leave and look for an alternative that better serves their interests (Elkin-Koren and Salzberger 1999: 256). As Johnson and Post (1997: 79) state, because the Internet allows easy movements among differing spaces, as well as easy (online) separation of people who do not agree on basic ground rules, it can afford not to be consistent. Its inconsistency gives all users an equal right to create or select the rule sets they find most empowering.
This emergent law of the Internet can thus maximize individual, well-informed choice. Whether such competition between diversified rule systems is actually workable depends to a large extent on the effectiveness of their respective mechanisms of enforcement and of identification of wrongdoers,23 which is far from an easy issue. In spite of these problems, however, the phenomenon of competition among different regimes on the Internet allows more and more rule systems to become genuine variables of choice for individual decision-makers, who in an emerging market of rules24 can choose, in accordance with their interests and needs, the laws that apply to them. The argument of jurisdictional choice not only applies to competition between these private, decentralized forms of Internet governance; it may be extended to the established legal system as well, which may be considered as one among a plurality of rule systems. Economic analysis can significantly contribute to the fundamental normative debate about the ways in which the established legal system should deal with this decentralized process of rule creation. Should it oppose, or on the contrary tolerate, or even encourage this competition coming from outside the traditional legal parameters? Should the established legal system let the market experiment with new techniques to resolve legal problems (such as private
mechanisms of protection of property rights, new contractual arrangements, etc.), and then integrate, enforce and eventually codify them (Mackaay 1997, 2000)? More and more economists insist on the fact that “the underlying premise of decentralized Internet governance is that the nations of the world would agree to enforce the rules established by sysop and user interactions, just as they now enforce contracts entered into by private parties” (Johnson and Post 1997: 80). Others take a further important step, suggesting that the emerging private lawmaking on the Internet should replace public lawmaking (Benson 2000). Economic theory may also contribute a great deal to the positive issue, which is concerned with the ways in which the legal system actually reacts to the legal competition emanating from cyberspace. Economic analysis indeed makes us sensitive to the question of the impact these new, competitive forms of law production have on traditional centralized lawmaking. What kinds of incentives do these forces produce for the traditional legal system to adapt to the changing conditions on the Internet? To what extent can the pressure of competition from outside affect what is inside the existing legal framework? Can competition between suppliers of rule systems substantially affect the evolution of law?25 And finally, to what extent can the plurality of competing rule systems allow for a selection of efficient regimes? Arguing that competitive forces are as effective with governments as they are with private markets,26 economic analysis of law provides some interesting insights with respect to these questions – insights that deserve to be developed in more depth with respect to the governance of the Internet. The fact remains that the existing legal system somehow has to come to terms with the changes engendered by the continuous development of new technologies of information and communication.
A major challenge that the Internet poses to the legal system is to find ways to protect legal rights (such as intellectual property and privacy) without hindering the growth of technology, in other words, to achieve a compromise between legal concerns of protection and the promotion of innovation and economic progress. Experience shows that in an increasingly complex economy, where traditional legal systems present major shortcomings, more decentralized forms of lawmaking might be better adapted.27 Whether this implies that public law has no role to play in the regulation of the Internet, as suggested by many of the recent private ordering approaches,28 is an intricate question which is beyond the scope of this chapter. As the current evolution of social norms in cyberspace shows, it seems difficult to conceive that the private, self-ordering regimes thrown up by the market could subsist in the long run without any established background of laws against which to enforce them. Hence, the issue of concern in this chapter is not whether the private forms of Internet governance could operate as a full-scale substitute for traditional, public forms of governance. This chapter merely stresses the fact that in the context of the new economy, economic analysis provides strong arguments for advocating a fruitful cooperation between the market order and the legal order.29 It is concerned with the impact Internet norms may have on the evolution of the existing legal framework. In this context, the significant role economics might play as a primary conveyor of new theories of law can be emphasized.
Economics as a source of legal change

The emphasis on the relationships between law and the market order casts an entirely new light on our initial question of the ways in which the private rule systems emerging on the Internet could be considered as part of law. By presenting central elements of the law such as contract, property, tort, crime or liberty from a completely different angle, an economic interpretation of what is going on on the Internet makes clear that no precise line can be drawn between what is law and what is not, because what is inside and outside the parameters of the law is not necessarily unalterable and established for all time. As has been recognized on many occasions, legal parameters themselves undergo important changes over time. One of the factors which may considerably influence changes in legal parameters is precisely economic theory. The evolution of anti-trust law30 over the past few decades provides an interesting example of a genuine revolution in the content of law, with a shift from very strict rules to a much more tolerant and open-minded case law. This legal revolution was preceded by an important evolution of economic theory with respect to competition and concentration, and may be explained as a reaction or an adaptation of the legal framework to this change in economic theory. Following for many years the main insights of the conventional Harvard school of industrial organization, and in particular the resulting structure–conduct–performance paradigm, anti-trust authorities traditionally considered as anti-competitive, and hence illegal, numerous market practices31 that were attributed to the exercise of horizontal or vertical market power.
Under the influence of contemporary economic theories, such as the Chicago school of law and economics, the Austrian school of economics, institutional and new institutional law and economics, transaction cost economics, contestable markets approaches, and dynamic theories of innovation and of strategic and predatory behavior, anti-trust law has now come to embrace many practices previously deemed anti-competitive as an integral part of the competitive process. The legal system finally came to adopt a more extensive vision of competitive and anti-competitive behavior and to accept as lawful many contractual restraints, forms of horizontal collusion, exchanges of information among competitors, as well as private forms of dispute settling, which formerly had been treated with great suspicion. Could an economically sound extension of the notion of law be imagined with respect to the practices currently emerging on the Internet? Could it be possible that rules which today are still outside or in the shadow of the law might over time be integrated into the legal framework? Could we expect them to become part of the legal system once they have stood the test of time and proved their capacity to solve efficiently the recurrent and large-scale problems of coordination? To what extent is it possible to integrate into the legal system spontaneously generated rules and institutions which evolve at an extremely high speed? Traditionally, indeed, the emergence of legal institutions has been a matter of decades, even centuries, whereas in cyberspace, legal evolution comes in a matter of years. What does this imply in terms of the certainty and stability conveyed by the new, emerging legal institutions in cyberspace? How do people behave when confronted with rapidly
changing sets of legal rules and institutions? Does the fact that the Internet phenomenon considerably speeds up technological and, as a consequence, legal evolution entail that people are also learning and adjusting to change faster? Faced with highly complex questions never encountered before, the production of law in the field of the Internet, which is still at a very early stage of development, is definitely a domain in which economics is likely to influence legal evolution. Providing an external perspective on the legal system, economists can more readily adopt an extensive vision of the law than traditional legal analysts, who generally embrace an exclusively internal viewpoint32 of the law. By emphasizing the economic mechanisms underlying legal phenomena, economists can help to promote change in various legal fields, including the regulation of cyberspace – be it merely by causing legal decision-makers to stand back and adopt a more global perspective on the legal system. In this respect, the so-called law and economics movement could play a central role in translating and adapting the terminology typically used by economists into that currently applied in the legal professions.
Conclusion

The failures of traditional institutional structures to cope with the diverse, intricate and large-scale problems raised by the Internet contrast with the efficacy of a decentralized, self-regulatory process of governance which is starting to emerge within cyberspace and which does not closely resemble the current hierarchical structures of law production. More importantly, the shortcomings of traditional, positive lawmaking have created strong incentives for online traders to establish their own rules. More than any other current social phenomenon, the Internet reminds us that the term law designates in fact two distinct phenomena: on the one hand, it refers to the top-down, centralized lawmaking of a sovereign who monopolizes the authorized use of force; on the other, it names the numerous customs, practices and standards of behavior which are not the result of positive legislation or regulation, but rather the outcome of the complex interplay of a multitude of individual actors seeking to develop adapted answers to the problems posed by life in society. The emergence of remarkably successful private alternatives to traditional, hierarchical lawmaking makes perfectly clear that positive law does not exhaust the whole range of legal experience. It shows that positive law is unable to explain the nature of legal experience in its entirety, and that there may be laws other than the commands of sovereign authorities, laws with a different structure, which are more or less binding and formally perfect, more or less constraining and enforceable through sanctions, and yet produce a form of order. In so far as they allow interpersonal conflicts to be solved in an efficient way, the self-regulatory forces emerging on the Internet may well be considered as a form of law.
Furthermore, they are likely to have an increasing impact on the established legal system, which in fact has little choice but to adapt to and deal with this new legal competition emanating from the market. The Internet appears as a particularly interesting case study of legal evolution. It has indeed created a situation where social and economic changes have started
to engender important legal changes. If we assume that the legal system may advance to a large extent through forces that come from outside existing, traditional legal categories, and in particular from economic forces, then the study of law (at least, the self-generated, endogenous forms of lawmaking and their impact on the existing legal framework) may be considered as an integral part of economics. In this respect, economics may be seen as a science of law.
Notes

I would like to thank Mario Rizzo for helpful comments and suggestions on an earlier draft.
1 Actually, a treaty to enforce laws across borders is being negotiated in The Hague. The purpose of the treaty is to clarify which country’s laws apply in transnational conflicts and to ensure that the resulting judgments are enforced. Nevertheless, in a context of borderless transactions, the powers of such a treaty risk expanding to unexpected proportions and seriously affecting the sovereignty of the signatory countries. Because their websites appear all over the world, e-business companies could under this treaty find themselves liable in a country where their activities are considered illegal, even though they are perfectly legal in their home country. This implies that, for instance, a Chinese court could attach the assets of an American company by arguing that the company’s website is visible in China and therefore has to be governed by Chinese law. Under the treaty, American courts would, in principle, be required to enforce such a judgment. While the treaty contains some exceptions allowing American courts to refuse such a judgment if it violated US public policy, third countries with no public policy interest in the outcome could, however, invoke no argument to escape from enforcing it. Giving every signatory nation’s courts jurisdiction over every event anywhere on the Internet, such a treaty, if it were actually ratified and adopted, could thus seriously endanger the expansion of e-commerce (Waldmeir 2001: 12).
2 As for instance the American Digital Millennium Copyright Act (Mackaay 2000).
3 It should be noted, however, that for the moment there is still a lot of public control over this market, and deregulation primarily takes the form of re-regulation.
4 Or eventually “lex informatica” (Reidenberg 1998) or “lex cybernatoria” (Benson 2000).
5 Term used by Ellickson (1993: 1366) with respect to property in land, and by Mackaay (1996: 292) with respect to encryption techniques.
6 McIntyre (1994), for example, provides a list of eleven different definitions of law solely from a sociological perspective.
7 Such as Hart (1961), Ackerman (1983) and Dworkin (1986), for instance.
8 D’Entrèves (1963: 77).
9 D’Entrèves (1963: 78).
10 Hayek (1973).
11 Barnett (1998).
12 In the sense that there is no common purpose of law; there are only individual objectives (Hayek 1973).
13 As opposed to created law, corresponding to the traditional, top-down mechanisms of law production (Hayek 1973).
14 For instance Williamson (1991).
15 Similar points of view have recently been defended by traditional law and economics advocates such as Cooter (1994) and Shavell (1995), for instance.
16 “True private ordering,” in which norms not only emerge in a decentralized and spontaneous way but are also enforced outside the existing legal framework, is opposed to “quasi private ordering,” in which private norms may spontaneously develop in a given social context, yet their enforcement is assured by the established legal system (Lemley 1999).
17 For instance, Cooter (1994).
18 These approaches suggest, for instance, that tort liability should depend on what is customary or normally done in an industry. They further propose that parties may change the legal rules themselves by contract in order to adapt them to the specificity of their transaction. Law is in this respect considered as a set of default rules provided for the convenience of private traders (Lemley 1999).
19 As opposed to “monocentric law,” corresponding to the centralized, monopolized, top-down forms of lawmaking (see also Benson 1999).
20 Insisting on the role of lawyers (and to a minor degree, the role of judges) as the “prime movers for novel theories of law,” Schwartzstein (1994: 1071) is one of the few economists to argue that even traditional lawmaking proceeds from entrepreneurial decisions. Motivated by the search for new opportunities to advance their clients’ interests, lawyers continually elaborate new winning strategies and new interpretations of the law, thus pushing the law to progress and adapt to new situations. Schwartzstein thinks that courts discipline lawyers in the same way markets discipline entrepreneurs, by rewarding or rejecting their innovations. As a consequence, while lawyers permanently try to push the legal process in new directions and discover new opportunities, the judicial process, with its interest in protecting the certainty and stability of the law, serves to temper these forces toward change (1072).
21 See, for instance, Mackaay (2000).
22 Such as encryption technologies, contractual innovations, arbitration systems, etc.
23 Users may easily escape responsibility for online harm simply by changing their IP address, or by using different addresses for different purposes.
Johnson and Post (1997) suggest the creation of a central control mechanism which would allow service providers to establish a link between online identities and identifiable individuals, a solution which has been criticized, however, for the potential threat it may constitute to users’ privacy and civil liberties (Elkin-Koren and Salzberger 1999: 256).
24 Elkin-Koren and Salzberger (1999: 256) and Johnson and Post (1997: 1389–91).
25 As argued by Mattei and Pulitini (1991).
26 Ogus (1999).
27 As emphasized by Cooter (1994).
28 For instance, Benson (2000), and to some extent Johnson and Post (1997).
29 As emphasized by Krecké (1996).
30 In particular American anti-trust law.
31 Practices such as mergers, joint ventures, exchange of information among competitors, but also the emergence of ever more restrictive contractual arrangements, private settlement systems, etc.
32 The distinction between the external and internal legal perspective has been advanced by Hart (1961: 55, 86–7, 96) and Dworkin (1986: 12–15). While internal theory describes the law from the perspective of what Hart calls the officials of the legal system (judges, lawyers), external theory takes instead the viewpoint of an uninvolved, outside observer.
References

Ackerman, B.A. (1983) Restructuring American Law, New Haven: Yale University Press.
Barnett, R.E. (1998) The Structure of Liberty, Oxford: Clarendon Press.
Bell, T. (1997) “Polycentric law.” Online. Available HTTP: (accessed 3 March 1998).
Benson, B.L. (1989) The Enterprise of Law: Justice Without the State, San Francisco: Pacific Research Institute.
Benson, B.L. (1999) “Polycentric law vs. monocentric law: Implications from international
trade for the potential success of emerging markets,” Journal of Private Enterprise 15: 36–66.
Benson, B.L. (2000) “Jurisdictional choice in international trade: Implications for lex cybernatoria,” Journal des Economistes et des Etudes Humaines 10(1): 3–31.
Cooter, R.D. (1994) “Structural adjudication and the new law merchant: A model of decentralized law,” International Review of Law and Economics 14: 215–31.
D’Entrèves, A.P. (1963) Natural Law: An Introduction to Legal Philosophy, 7th edn, London: Hutchinson University Library.
Dworkin, R.M. (1986) Law’s Empire, Cambridge, MA: Harvard University Press.
Ellickson, R.C. (1991) Order Without Law: How Neighbors Settle Disputes, Cambridge, MA: Harvard University Press.
Ellickson, R.C. (1993) “Property in land,” Yale Law Journal 102: 1315–400.
Elkin-Koren, N. and Salzberger, E. (1999) “The economic analysis of cyberspace: The challenges posed by cyberspace to the economic approach towards the law,” in C. Ott and H.B. Schaefer (eds) New Developments in Law and Economics (papers presented at the Annual Conference 1999 of the Erasmus Programme in Law and Economics), Hamburg: University of Hamburg–University of Ghent, pp. 225–72.
Gould, M. (1997) “Governance of the Internet: A UK perspective,” in B. Kahin and J.H. Keller (eds) Coordinating the Internet, Cambridge, MA and London: MIT Press, pp. 39–61.
Hart, H.L.A. (1961) The Concept of Law, Oxford: Oxford University Press.
Hayek, F.A. (1960) The Constitution of Liberty, London: Routledge and Kegan Paul.
Hayek, F.A. (1973) Law, Legislation and Liberty: Rules and Order, vol. 1, London: Routledge and Kegan Paul.
Johnson, D.R. and Post, D.G. (1997) “And how should the Net be governed? A meditation on the relative virtues of decentralized, emergent law,” in B. Kahin and J.H. Keller (eds) Coordinating the Internet, Cambridge, MA and London: MIT Press, pp. 62–91.
Krecké, E. (1996) “Law and the market order.
An Austrian critique of the economic analysis of law,” Journal des Economistes et des Etudes Humaines 7(1): 19–37.
Lemley, M.A. (1999) “The law and economics of Internet norms.” Online. Available: (accessed 13 May 2001).
McIntyre, L.J. (1994) Law in the Sociological Enterprise: A Reconstruction, Boulder, CO: Westview Press.
Mackaay, E. (1997) “L’économie des droits de propriété sur l’Internet,” Les Cahiers de Propriété Intellectuelle 9(2): 281–300.
Mackaay, E. (2000) “Intellectual property and the Internet: the share of sharing,” paper presented at the Austrian Colloquium, New York University, November.
Mattei, U. and Pulitini, F. (1991) “A competitive model of legal rules,” in A. Breton (ed.) The Competitive State: Villa Colombella Papers on Competitive Politics, London: Kluwer, pp. 207–19.
Menger, C. (1883) “The ‘organic’ origin of law and the exact understanding thereof,” in C. Menger (ed.) Investigations into the Methods of Social Sciences, With Special Reference to Economics, New York: New York University Press (reprinted in 1985).
Mercuro, N. (1998) “American institutional law and economics,” in R.W. McGee (ed.) Commentaries in Law and Economics, 1997 Yearbook, Orange, NJ: The Dumont Institute for Public Policy Research, pp. 28–63.
Mises, L. ([1957] 1985) Theory and History: An Interpretation of Social and Economic Evolution, Auburn, AL: Ludwig von Mises Institute.
Ogus, A. (1999) “Competition between national legal systems: A contribution of economic analysis to comparative law,” The International and Comparative Law Quarterly 48: 405–18.
Reidenberg, J. (1998) “Lex informatica: The formulation of information policy rules through technology,” Texas Law Review 76: 553–82.
Riker, W.H. and Weimer, D.L. (1993) “The economic and political liberalization of socialism: The fundamental problem of property rights,” in E.F. Paul and F.D. Miller (eds) Liberalism and the Economic Order, Cambridge: Cambridge University Press, pp. 79–102.
Samuels, W.J. (1989) “The legal–economic nexus,” George Washington Law Review 57: 1556–78.
Schwartzstein, L.A. (1994) “An Austrian economic view of legal process,” Ohio State Law Journal 55: 1049–78.
Shavell, S. (1995) “Alternative dispute resolution: An economic analysis,” Journal of Legal Studies 26: 1–28.
Waldmeir, P. (2001) “A stealthy threat to e-commerce,” Financial Times, 31 May: 12.
Williamson, O.E. (1979) “Transaction cost economics: The governance of contractual relations,” Journal of Law and Economics 22: 233–61.
Williamson, O.E. (1991) “Economic institutions: Spontaneous and intentional governance,” Journal of Law, Economics and Organization 7: 159–87.
Index
Abebooks 223, 228, 229 Abelson, H. 107, 110, 112, 116 Abreu, E. 229 Ackerman, B. A. 304, 305 Adamic, L. A. 192, 197 adaptation 27, 28, 180, 181 Afghanistan 241 Aghion, P. 154, 158, 164, 165 Agorics Project 91, 94 agreement 7, cooperative 173 Ahmed, E. 235, 242, 244, 245 Aimar, T. 16, 17, 126, 139 Aizcorbe, A. 253, 260 Akerlof, G. 12, 218, 219, 220, 221, 222, 227, 228, 229 Alchian, A. A. 149, 164, 165, 225, 229 algorithm 104, 105; genetic 114 Alibris 223, 224, 229 Allen, P. 115, 116 allocation: resources 104, 144 Alm, R. 47, 60 Almeida-Costa, L. 60, 166 alphabetic 31; ordering 38 Amazon.com 229 Amendola, M. 172, 184 American Information Exchange Corporation 95 Amix 82, 95, 96, 99, 123 Anarchy, State and Utopia 15 Anderson, P. W. 115, 116, 118 Anderson, T. L. 112, 116 Antonelli, C. 53, 54, 57, 60, 171, 183, 184 Apache 47–49, 50 Apple 49, 106; Computer Corporation 96 Aréna, R. 11, 12, 126, 139, 185, 201, 207, 213, 215, 216 Argentina 238, 239, 255 Arno 4 ARPANET 55 Arrow, K. J. 116, 118, 176
Artificial Intelligence (AI) 93, 94, 96, 105, 106, 108, 113; emergent 105; Laboratory 91; literature 106 Askenazy, P. 171, 183, 184 assets: 63, 75; alienable (and non-alienable) 149, 150; earning 253; electronic 79; financial 254, 268; intangible and tangible 232; physical 80; physical and knowledge 152 Association des Historiens de la Tradition Economique Autrichienne (AHTEA) xv, 3, 16, 215 AT&T 111 Aucoin, M. 283 Australia 238, 239 Austria 173 Austrian: approach 3, 215, 297; colloquium 92; economics 3, 96; school 42, 231; theory 3, 12, 245, 265; theory of the firm 169, 170, 177; tradition 214, 215, 231 Austrian Economics in America 42 authority 145, 146, 148–51, 152, 155; supra-national 15 autocorrelation 235, 236 autocovariances 236 Axelrod, R. 109, 110, 114, 115, 116, 190, 197 Baba, Y. 185 Bach, J.-S. 92 Bacon, D. 86 Baetjer, H. xvi, 3, 28, 42, 44, 47, 50, 54, 59, 60, 91, 94, 121, 125 Bailey, P. B. 136, 139, 182, 184, 217 Baker, G. 155, 157, 158, 164, 165 Bakos, J. Y. 174, 175, 182, 184 bank 275, 277; private 15; see also central bank
Index 309 Bank for International Settlements 252, 260, 273, 283 Barnett, R. E. 304, 305 Baroque 92 Barotta, P. xv, 4 Barzel, Y. 174, 184 Baumol, W. 283 Baywatch 70 Beaver, D. 75, 86 Becker, G. 101 Beer, S. 112, 116 behaviour: linguistic 23 Bell, T. 298, 305 Bellon, B. 216 Belussi, F. 215 Ben Youssef, A. 216 Benghozi, P. J. 170, 172, 173, 184 Benjamin, R. I. 61, 119 Benson, B. L. 293, 298, 299, 301, 304, 305, 306 Berners-Lee, T. 43, 44 Bianchi, M. 126, 139 Bibliofind 223, 224, 229 Bicknell, C. 229 big player 12, 13, 14, 16, 17, 231, 234, 235, 237, 241, 242, 244 Bind (Berkeley Internet Name Domain) 50 Birner, J. xv, 1, 16, 17, 86, 139, 183, 189, 197, 215, 216, 246 Bishop, P. 87 Black, R. 245 Black, S. E. 175, 184 Boettke, P. 12, 15, 42, 72, 115, 116, 218 Böhm-Bawerk, E. 42, 44 Boisot, M. 146, 147, 151, 164, 165 Bolton, P. 153, 165 Bond, A. H. 113, 116 Booker, L. B. 114, 116 books: print 223; used 223 booksellers 224 Bougi, G. 8, 14, 15, 262 Boulding, K. 108 Brand, S. 28, 44 Brazil 238, 239, 240, 255 Brennan, G. 115, 116, 283 Bresnahan, T. F. 55, 60 Brewer, M. B. 167 Brockwell, P. J. 242, 245 Brooks, F. P. Jr. 58, 59, 60, 112, 116 Brooks’s Law 59 Broussard, J. 235, 245 Brousseau, E. 130, 134, 138, 139, 172, 184 Brown, J. S. 34, 44, 114, 119 Brynjolfsson, E. 43, 44, 156, 165, 174, 175, 185, 215, 216, 217
bubbles 13, 235, 237, 249, 254 Buchanan, J. M. 53, 60, 108, 115, 116 Burch, G. 86 Burns, D. 94 Burt, R. S. 193, 197 Burton-Jones, A. 209, 216 Bush, V. 36, 37, 44, 112, 116 Business: -to-business (B2B) 202, 213, 214; -to-consumer 202, 210, 212, 213, 214 CAD–CAM 97 Caillaud, B. 136, 139 calculation: economic 26; profit/loss 26 Caldwell, B. 139 California Management Review 43 Campbell, J. 114, 116 Canada 238, 239 capabilities 9, 21, 26, 52, 54, 58, 177; absorptive 52; dynamic 58; hypertext 40, 43; ‘pull’ 37; ‘push’ 37; software 35, 43 capability system 73 capital 27, 42, 63, 75, 76, 77, 97, 98, 121; division of 28; goods 27, 97; heterogeneous 27; private 225; reputation 72; structure 26, 28, 42 Capital and Its Structure 27 Carmax 221 Carnegie-Mellon 51 Caroli, E. 183, 184 Carter, A. P. 170, 185 Cash: Digi 267, 284; Net 267 Casson, M. 153, 165 The Cathedral and the Bazaar 51 Cave, M. 133, 136, 137, 139 Cecatto, H. A. 115, 116 Center for Agent Based Social Simulations 4 Centi, J.-P. 8, 14, 15, 262 central bank 13, 14, 15, 16, 255, 275 central planning 12 CERN 43 Chamlee-Wright, E. 42, 45 Chandler, A. D. 189, 197 Chartier, R. 43, 44 Chaum, D. 267, 270, 283 Cheung, S. N. S. 149, 164, 165 Chile 112 China 238, 240 choice 101 Citibank 67, 70, 71 Clemens, P. C. 62 clubs 53; technological 53 CNN 70
Coase, R. H. 47, 56, 58, 60, 144, 146, 148, 149, 151, 155, 165, 218, 229; theorem 233, 245 Coats, A. W. 112, 116 cognitive 178; approach 49; constraints 10 Cohen, D. 170, 183, 185 Cohen, W. M. 52, 60 Cohendet, P. 170, 172, 173, 174, 180, 184, 185 Colander, D. 112, 116, 125 Coleman, H. J. 167 commerce: electronic 219; see also e-commerce communication 2, 10, 16, 22, 26, 37, 40; modes of 32; structure 16; system 40; see also technology community 63, 81, 82, 221 Compaq 48 competence 52, 58; center 161 competition 14, 174, 220, 227, 269, 274, 278, 280, 281; perfect 202 Competition and Entrepreneurship 22 competitive economic equilibrium (CEE) 203, 206, 210, 212, 214, 215 complementarity(ies) 9, 27, 28, 178; technological 128 complexity 28, 29, 31, 50, 100, 101, 112, 291 complexly structured description 29 computation 102, 107, 114 computers 33, 73, 84, 91, 97, 103, 113; industry 97, 111; programming 92; science 91, 107; scientists 95; simulation 124 conflict 77, 79 connection: causal 131; local 192; social 10 connectivity 133, 134 consumption 47, 58 context 54; shared 54 contextualization 49 contract 11, 71, 73–7, 79, 82, 83; conventional 82; employment 148; enforceable 7; as game 7, 72; incomplete 149, 150; self-enforcing 73; smart 7, 8, 63, 64, 71, 72, 79; split 64, 82; verbal 81; video 7, 64, 70, 81; written 31 contract host 77–9 conversation 30, 32, 40, 41; oral 31 Coombs, R. 147, 165 cooperation 47, 53, 86, 182, 190 coordination 5, 11, 26, 41, 52, 100, 101, 112, 113, 131, 132, 135, 153, 172, 180;
centralized 155; economic 204; entrepreneurial 28; external 175; failure 8, 137; market-type 9, 209; mutual 21; problems 138 Cooter, R. D. 304, 305, 306 Corrado, C. 260 Cosgel, M. M. 57 costs 47; communication 59; fixed 128; homogenization 65, 85; information 233; opportunity 263; social 218; transaction 12, 132, 231, 245, 263, 268; transaction and search 203; transportation 132 Cowan, D. J. 114, 116 Cowen, T. 143, 144, 145, 146, 147, 150, 163, 164, 165 Cox, M. W. 47, 60, 113, 116 credibility 80, 223, 224 credit: card 225, 226, 229, 267, 279; conditions 259; expansion 259 Crémer, J. 134, 137, 139 Crosby, B. 93 cryptography 268, 284 Csontos, L. 42, 44 Cubeddu, R. xv, 4 culture 30, 65; literate 30; oral 30 Cunningham, S. R. 57 Currie, W. 209, 216 cWare 57, 60 cyberspace 290, 291, 292, 295, 303 D.H. Brown Associates, Inc. 49, 60 Dafermos, G. N. 57, 58 Dallas 70 Dascal, M. 114, 116 Dasgupta, P. 116 data 35; processing 35 D’Aveni, R. 143, 165 David, P. 111, 116 Davis, R. 62 Davis, R. A. 242, 245 Day, R. 115, 117 Day, R. H. 236, 245 Debonneuil, M. 170, 183, 185 decentralization 151 decisions: individuals 132 Dedrick, J. 58, 62 de Garis, H. 94, 115, 117 De Jong, K. 114, 117 delegation 151 Delio, M. 229 Dell 48 De Marchi, N. 17 Demsetz, H. 112, 117, 146, 149, 154, 164, 165
Denmark 158, 173 D’Entrèves, A. P. 304, 306 Derby 106 Descartes, René 105 de Sola Pool, I. 83, 86, 111, 120, 194, 197 de Solla Price, D. J. 58, 60, 193, 197 de Soto, H. 6, 7, 63–70, 76, 77, 80, 86, 87 Dibiaggio, L. 171, 185 DiBona, Ch. 48, 57, 58, 60 differentiation 134, 135, 137 digital path 63, 64, 70, 79, 80, 83, 86 Dingwall, J. 140, 246 Dion, D. 114, 116 discourse 112 disproportionalities 250 division: of knowledge 67, 233; of labor 67, 101, 232 Dobrow, D. G. 119 Dolan, K. A. 47, 60 Doms, M. 260 Dorn, J. 283 Dosi, G. 186 Dow, S. C. 197, 216 Drexler, E. 4, 43, 44, 59, 61, 86, 93, 95, 96, 99, 101, 103, 104, 107, 110, 112, 113, 115, 117, 119, 122 Dreyfus, H. L. 45, 105, 106, 114, 117 Duffy, J. 264, 283 Duguid, P. 34, 44 Dulbecco, P. 9, 10, 12, 169, 171, 177, 182, 185 Dunbar, R. 193, 194, 197 Dutch 215 Dutta, S. 166 Dworkin, R. M. 304, 305, 306 Earl, P. E. 197, 216 Eastern Europe 65, 85, 229 eBay 225 Ebert, R. 67 Eccles, R. G. 197 Ecology of Computation 95 Ecology of Decisions 93 e-commerce 218, 223 economics: Austrian 161; cognitive 215; experimental 110, 114–15; information 129; Mengerian 94; organizational 152, 161 Economics of Computation 95 Economides, N. 133, 139 economies of scale 154, 269 The Economist 15, 203, 216 economists: mainstream 28 economy: digital 202; free-market 12; knowledge 8, 21, 29, 143, 144, 145,
147, 151, 163; monetary 13, 249; real 249 Edelman, G. M. 114, 117, 120 EDI 204 Ege, R. 16 Egypt 239, 240 Eiberg, K. 165, 167 Eisenstein, E. 43, 44 Elkin-Koren, N. 300, 304, 305 Ellickson, R. C. 298, 304, 306 Ellig, J. 54, 59, 111, 119, 143, 144, 163, 165 embeddedness 11, 208 emergence 14, 263, 264, 289, 296, 298, 303; spontaneous 289 encryption 291, 292; technologies 293 Engelbart, D. 36, 44 Engines of Creation 96 entrepreneur 5, 12, 14, 23, 25, 27, 144, 148, 153, 156, 211, 223, 233, 234; Austrian 211; Hayek 220; Kirzner’s 25, 42; Mengerian 8; ’s authority 9 entrepreneurial 27; action 262; coordination 28; endeavor 221; invention 80; mind 27; process 41; solutions 218 entrepreneurship 5, 21, 24–6, 30, 41, 163, 221; Kirznerian 215 equilibrium 21; competitive economic 11; underemployment 189 ergodic 242, 243 erights 75, 79 error duration model 235 Eskerod, P. 162, 165 Ethernet 59 Eunice, J. 57, 60 European Association for Evolutionary Political Economy 215 European Union 238, 239 Evans, C. 86 evolution 12, 14; technological 291 evolutionary: analogy 114; biology 114 expectations 5, 11, 14, 27; ‘Austrian’ 13 externalities 133, 137, 201; asymmetric network 136; network 133 Fama, E. 165, 278, 283 family 65 Faraday, Michael 56 Farmer, R. 72, 87 Farrell, J. 153, 165 Federal Reserve 254; Bank 260; Board 13, 253, 254, 260; Rate 254; System 253, 257 Feldsman, S. 215, 216 Felsenstein, D. 195, 197
Ferguson, D. D. 115, 117 Ferrer i Cancho, R. 192, 198 Festré, A. 11, 12, 201 Feyerabend, P. K. 183 Fikes, R. E. 119 Fioretti, G. 10, 16, 189 firm 9, 12, 146, 152, 163; boundaries of the 9, 144, 145, 155, 163; knowledge-based 156 First World 66, 70, 83, 84, 85, 86 Fischetti, M. 44 fixity 38, 40 flexibility 38, 134 Flores, F. 45, 106, 114, 120 Foldvary, F. 93 Folio Views 35 Fondazione Dino Terra xv, 4 Foray, D. 53, 54, 60, 170, 171, 173, 180, 185 Foresight Institute 96 formalization 108 Foss, N. J. 8, 9, 12, 143, 144, 153, 154, 163, 164, 165, 166, 179, 180, 183, 185, 186, 215 Frantz, B. 87 Fred 77, 78, 79, 83 free banking 14 Freeman, C. 172, 185 Friedman, D. 72, 80, 87 Fujitsu Siemens 48 Fukuyama, F. 64, 65, 67, 87 fundamentalist 236 fundamentals 203, 205 Gable, W. 143, 144, 163, 165 Gadamer, H.-G. 42 Gaffard, J. L. 171, 172, 184, 185 Gaggi, S. 39, 44 game(s) 64, 72, 76, 82; non-cooperative 263; options 77; theory 109, 264 Garretsen, R. W. 126, 139 Garrison, R. 42, 260 Garrouste, P. xv, 9, 10, 12, 16, 17, 163, 169, 171, 177, 178, 185, 197, 215, 216 Garud, R. 58, 59, 60 Garzarelli, G. 5, 6, 8, 47, 58, 60 general equilibrium theory 109 Gensollen, M. 201, 212, 216 George Mason University (GMU) 91, 94, 104, 115, 228 Gerrard, R. 86 Ghoshal, S. 55, 60, 143, 144, 147, 161, 163, 164, 166, 167 Gibbons, R. 165
Gilanshah, C. B. 235, 242, 245 Gilbert, N. 124, 125 Gilmore, J. 83, 86 Gittleman, M. 143, 166 Glassner, L. 113, 116 Glenn, M. 86 global transferability 64 globalization 13, 254, 256 Gloria, S. 126, 139, 206, 216 Godley, W. 252, 260 Gold, S. 229 Goldberg, D. E. 114, 116, 117 good(s) 42; consumption 135; experience 130; higher-order 42; homogeneous 221, 222; information 127, 135; search 130; shared Goody, J. 43, 44 Gordon, R. J. 252, 261 Gould, M. 290, 306 Gould, R. M. 164, 166 Gould, S. J. 111, 117 governance 290, 291; self 292 governmental path 64, 69, 80, 82 Grandori, A. 146, 148, 166 Granovetter, M. 2, 4, 10, 17, 67, 87, 191, 193, 198 Grant, K. R. 119 Graubard, S. R. 113, 116, 117 Greenan, N. 175, 183, 185 Greenspan, A. 13, 249, 253, 254, 256, 257, 259, 260, 261; model 254 Gregory, R. 102 Grigg, I. 86 groupware 5, 21, 33–5, 37, 40 Guilhon, B. 210, 214, 216 Gunning, P. 215 Gupta, A. 137, 139 Gurbaxani, V. 59, 61 Gutenberg, Johannes 38, 112 hacker 51 Hägerstrand, T. 193, 197 Hansmann, H. 58, 61 Hanson, R. 86 hardware 55 Hardy, N. 70, 87, 88, 117 Harryson, S. J. 145, 166 Hart, H. L. A. 304, 305, 306 Hart, O. 146, 149, 151, 155, 156, 164, 166 Harvard Business Review 34 Hayek, F. A. 3, 7, 10, 11, 12, 16, 42, 52, 55, 61, 66, 67, 87, 93, 95, 96, 99, 101, 103, 105, 106, 111, 112, 113, 115, 117, 123, 124, 143, 144, 147, 151, 153, 155,
Index 313 159, 160, 163, 166, 183, 190, 198, 204, 205, 206, 209, 211, 214, 215, 216, 219, 220, 223, 227, 228, 229, 260, 261, 262, 264, 272, 280, 281, 282, 283, 296, 297, 298, 299, 304, 306 Hayekian settings 147, 150, 151, 152, 155 hazard 82; moral 129, 255, 256 Hebb, D. O. 105, 117 Heidegger, M. 105 Heiner, R. A. 110, 113, 115, 118 Helper, S. 143, 146, 148, 166 Henderson, R. I. 154, 166 Hermalin, B. 154, 166 hermeneutical orientation 5 heterogeneity 28; individuals’ 204 heterogeneous 27, 97, 113 Hewitt, C. 70, 87, 113, 114, 118 Heyne, P. 228, 229 hierarchy 55, 56, 162 hieroglyphics 31 High, J. 93 High-tech Hayekians xvi, 3, 4, 8, 121, 122, 123 Hill, C. W. L. 144, 167 Hill, P. J. 112, 116 Hillis, D. W. 114, 118 Hitt, L. M. 174, 175, 184 Hodgson, G. M. 126, 139, 143, 146, 147, 166 Hogg, T. 115, 118, 120 Holland, J. 106, 114, 115, 116, 118 Hollis, M. 221, 228, 230 Holmstöm, B. 153, 166, 174, 185 Holyoak, K. J. 118 Horrigan, M. 166 Horwitz, S. 115, 116, 118 Hoselitz, B. F. 140, 246 How Buildings Learn 28 Howard, M.T. 119 HP 48 HTML (hypertext markup language) 35, 43, 123, 211 Huang, W. 236, 245 hub 66, 69, 213; trust 66, 70 Huber, P. 111, 118 Huberman, B. A. 95, 96, 113, 115, 116, 118, 120 Human Action 25 Hume, D. 250 Hurst, H. E. 242, 243, 244, 245 Husserl, Edmund 105 hypertext 1, 5, 21, 33–6, 39, 40, 43, 98–9, 100, 121, 122, 211; organization 43
IBM 48, 128, 181 Ichniowski, C. 143, 166 ignorance 234, 264 Imaï, K. 185 incentives 9, 55, 151, 224, 226, 271 Indonesia 239, 240 industrial district 7 inflation 253, 260 infomediaries 210 informal sector 63 information 2, 3, 28, 29, 33, 35, 36, 129, 130, 171, 173, 174, 176, 177, 182, 189, 201, 203, 224, 226, 232, 265, 266, 268, 278, 302; asymmetric 12, 129, 218, 219, 221, 222, 228; decisive 153; diffusion of 9; electronic 41; goods 130; hiding 50; imperfect 129; local 153; misleading 221; processor 9; raw 35; symmetric 129; technology 8, 9, 33, 41, 98, 111, 112, 251, 277 Information and Investment 9 innovation 13, 14, 49, 171, 249, 251, 262, 265; endogenous 265; entrepreneurial 299; technological 251 institutions 2, 7, 10, 23, 24, 64, 83, 99, 206, 228, 255, 264; domestic 251; exchange 208, 211; financial 14, 272; legal 219; market 31; private 211, 218; social 208, 264, 265 intelligence 105, 113 interaction(s) 3, 10, 36; social 10, 209; structure 10 interconnection 134 International Center for Economic Research 183 International Monetary Fund (IMF) 255, 258 Internet 1–2, 7, 8, 11, 12, 26, 128, 133, 136, 202, 204, 210, 220, 224, 226, 267, 268, 281, 289–93, 295, 296, 299, 300–3; auction 225; economy 2–3, 9, 11, 13, 14, 130; entrepreneur 15; firms 12; markets 12, 223 interoperability 98 interpretation 25, 27, 30; instruments of 25; subjective 27 intervention: outside 8 Ioannides, S. 215, 216 iPlanet 48 Iran 238, 240 IS/LM 109 issuer 75, 77, 84, 270 Izurieta, A. 252, 260 Jacobs, L. 119
Jackson, D. 86 Jacobsen, N. 161 Janssen, M. 175, 185 Jansson, N. 215 Japan 239, 253 Jensen, M. C. 55, 56, 61, 143, 144, 151, 155, 160, 165, 166, 167 Jevons, S. 126 Johnson, B. 176, 185 Johnson, D. R. 70, 84, 87, 290, 292, 295, 296, 298, 300, 301, 305, 306 Jordan, J. 283, 285 Joyce, M. 166 judgement 82, 154 Julien, B. 136, 139 Kaehler, T. 96, 106, 110, 114, 118 Kahin, B. 306 Kahn, K. 86, 113, 118 Kahneman, D. 113, 118 Kapor, M. D. 62 Katz, M. L. 111, 118, 134 Kauffman, S. A. 196, 198 Kay, A. 97, 112, 118 Keller, J. H. 306 Kemerer, C. F. 184 Kendrick, J. W. 172, 183 Kephart, J. O. 115, 118, 120 Keynes, J. M. 189, 198 Kilkenny, M. 195, 198 Kilpatrick, H. E. Jr. 124, 125 Kirman, A. 16, 17, 190, 198 Kirzner, I. 5, 22, 23, 25, 42, 44, 111, 118, 126, 139, 206, 228, 230 Kiyotaki, N. 126, 139, 263, 283 Klein, B. 281, 283 Klein, D. 227, 228, 229 Klein, P. G. 157, 164, 166, 167 Klein, S. 164, 167 Knight, F. H. 154, 167 knowledge 2, 6, 9, 17, 25, 29, 33, 36, 49, 52, 54, 55, 57, 95, 97, 98, 101, 102, 112, 129, 145, 146, 147, 153, 154, 171–3, 178, 181, 182, 214, 222, 232, 234, 264, 265, 272; -based team 36; codified and explicit 204; costly-to-transfer 160; creating process 34; creation and sharing 26; decisive 162; distributed 150; embodied 121; ‘esoteric’ 52; hidden 153, 154; human 92, 232; local 64, 69, 71, 73, 79, 223; management 34, 35, 40, 43; market 25; mutual 214; new 104, 159; objective 25; productive 50; relevant 30; scientific 58, 112; sharing 34, 35; specialized 9,
56; structure 33; tacit 34, 36, 52, 58; technological 58; transmission 97; unorganized 209 Knowledge, Groupware and the Internet 34 “The knowledge-creating company” 34 Knudsen, C. 186 Kochan, T. A. 166 Kochen, M. 194, 197 Kogut, B. 56, 59, 61 Kolind, L. 159, 160, 161, 162, 163, 164, 167 Koppl, R. 12, 13, 57, 231, 234, 235, 236, 237, 242, 244, 245 Kornfeld, W. A. 114, 118 Korte, C. 191, 198 kosmos 206 Krecké, E. 7, 8, 12, 14, 15, 72, 84, 289, 305, 306 Krieger, D. 88 Kumar, M. 215, 216 Kumaraswamy, A. 58, 59, 60 Laboy-Bruno, J. L. 57 Lachmann, L. 5, 11, 12, 21, 24, 25, 27, 28, 42, 44–5, 111, 118, 178, 184, 185, 206, 208, 209, 215, 216 Lacobie, K. 93, 94, 122 laissez-faire 262, 263 Lakatos, I. 183 Lakhani, K. 58, 61 Landa, J. 228, 230 Landow, G. P. 39, 44 Landry, J. 229, 230 Lange, O. 111, 118 Langlois, R. N. 56, 57, 58, 59, 61, 115, 119, 171, 177, 178, 179, 180, 185 Langton, C. G. 114, 119 language 22, 23, 24, 42, 82 Latora, V. 192, 198 Lavoie, D. xv, 3, 4, 21, 42, 43, 45, 47, 59, 86, 91, 112, 114, 115, 116, 119, 121, 122, 123, 125 law 15, 24, 68, 289, 295, 297, 299; anti-trust 302; Chinese 304; customary 298; and economics 14; enforcement 14; formal 85; organic 297; people’s 69; polycentric 298; production 294; rational 297; rules of 65; spontaneously evolved 298; traditional 294 lawmaking 289, 290, 291, 301 Lazonick, W. 181, 186 learning 49 legal 297, 302; context 14; framework 14, 15; institutions 290, 302; nature 289; positivism 289, 295, 296; system 69, 290, 300, 303; theory 289
legality 84 legislation 14 legitimacy 64, 71, 79, 84 Leijonhufvud, A. 58, 59, 61 Lemley, M. A. 298, 305, 306 lemons 218, 221, 227; model 227; theory 220 Lenat, D. B. 114, 119 ‘lengthening’ 27, 28 Lerner, J. 53, 61 Lessig, L. 83, 84, 87 Levine, D. 166 Levinthal, D. A. 52, 60 Lewin, P. 42, 45 lex electronica 292 lex mercatoria 292 Liebeskind, J. P. 144, 145, 167 Liggio, L. P. 111, 119 Liman, Y. R. 57 Linus’s Law 51 Linux 47, 50, 57, 58, 59 Lissoni, F. 195, 197 Litan, R. 128, 139 Llerena, P. 173, 174, 185 Loasby, B. 181, 182, 184, 186 Lomasky, L. 283 Longhi, C. 185 Lotus Institute 34, 37 Lotus Notes Domino 36 Lovas, B. 161, 164, 167 Lucas, H. C. Jr. 34, 45 Lucca xv, 4 Lumer, E. 115, 118 Luna, F. 124, 125 Lundvall, B. A. 170, 171, 176, 185, 186, 214 Lynch, L. M. 175, 184 Lyregaard, P.-E. 159, 160, 162, 167 McClaughry, R. 216 McClelland, J. L. 114, 119 McCloskey, D. 112, 119 MacDuffie, J. P. 166 McGee, R. W. 306 McGrattan, E. 119 Machlup, F. 183, 186 Mackaay, E. 293, 294, 301, 304, 305, 306 McIntyre, L. J. 304, 306 McKenny, J. L. 45 Mackie-Mason, J. K. 136, 139 McKnight, L. W. 136, 139 Mahnke, V. 165 Mairesse, J. 175, 183, 185 Malerba, F. 195 Malmgren, H. B. 52, 61, 180, 184, 186
Malone, T. W. 59, 61, 111, 115, 119 manager 157, 159; board 73 Mandelbrot, B. 242, 243, 245, 246 Mandeville, B. 6 Manzoni, J.-F. 166 Marchetti, C. 197 Marchiori, M. 192, 198 Marengo, L. 185 Marimon, R. 115, 119 market 2, 11, 22, 42, 146, 163, 208, 218, 231, 241, 266; assets 231; bond 257; capital and financial 157; computation 104; e- 210, 211, 213; economy 11, 100; failure 11, 12, 218; financial 13, 231, 239, 255; insurance 256; internet 219, 223; lemons 12; organization 12; principles 106; private 297; process 25, 92, 95, 101, 107, 108; signals 24; stock 257; technological 213; wheat 237 Market Process xvi, 3, 91, 93 Market Process Center 91, 93, 97 Markoff, J. 229, 230 Martin, H.-J. 43, 45 Marxian 17 Mason, R. 133, 136, 137, 139 Massala, A. xv mathematics 108 Mathews, J. A. 167 Mattei, U. 305, 306 Matusik 144, 167 May, T. 70 meaning 22, 24, 25, 29, 39 mechanism: feedback 21; invisible-hand 15; self-governing 219 Meckling, W. H. 55, 56, 61, 143, 144, 151, 155, 160, 166 media: electronic 38, 39, 40 memorization 30 Ménard, C. 183, 186 Mendelson, H. 143, 167 Menger, C. 8, 11, 12, 13, 15, 24, 42, 122, 126, 127, 130, 131, 132, 138, 140, 207, 208, 209, 211, 215, 217, 232, 246, 264, 283, 297, 306 mental 30; experiments 114; scheme 24 Mercatus Center xvi, 110 Mercuro, N. 299, 306 Meredith, R. 47, 60 Metcalfe, B. 59 Metcalfe, J. S. 147, 163 Metcalfe’s law 133 methodological solipsism 42 Mexico 255 Michalski, R. S. 118
Microsoft 2, 16, 48, 49, 57–8, 128; Exchange 36 Miles, G. 167 Miles, R. E. 148, 152, 167 Milgram, S. 190, 191, 198 Milgrom, P. 143, 153, 162, 164, 167 Miller, F. D. 307 Miller, G. 157, 167 Miller, J. H. 114, 119, 120 Miller, M. S. 3–5, 8, 10, 15, 37, 38, 40, 42, 43, 45, 57, 59, 61, 63, 70, 73, 82, 87, 88, 93, 95, 96, 101, 103, 104, 107, 110, 112, 113, 115, 116, 118, 119, 121, 122 mind 22, 105, 106, 113 Mink, P. 111, 119 Minkler, A. P. 152, 154 Minsky, M. 106, 114, 119 Mises, L. 9, 13, 16, 23, 24, 42, 45, 47, 61, 101, 103, 111, 144, 150, 151, 152, 153, 154, 157, 163, 167, 228, 230, 249, 250, 259, 260, 261, 299, 305 MIT 51, 55, 96 modularity 49, 59, 101 module 50 Monceri, F. xv Mondex 267 monetary: aspects 249; conditions 254; creation 13; discipline 265, 282; effect 250; instruments 31; order 263, 278; policy 13, 14, 251, 253, 254, 255, 257, 258, 273, 274; power 266; sector 13, 14; theory 250, 263, 265, 266, 282; trade cycle 250 money 13, 15, 24, 31, 76, 79, 262, 264, 274; bad 269; competitive 262; electronic (or e-) 14, 262–82; good 269; private 262; supply 250; velocity of 276; virtual 267 monitoring 55 Moore, J. 146, 149, 155, 156, 164, 166 Moraga, J. L. 175, 185 Moran, P. 60, 166 Morningstar, C. 87 Morocco 238 Morsing, M. 164, 165, 167 Mozilla 48, 59 Mramor, D. 235, 244, 245 MS-DOS 98 Mueller, A. P. 13, 249, 256, 258, 261 Mueller, J. 227, 230 Murphy, K. J. 165 Myers, P. S. 145, 167 The Mystery of Capital 63, 68 The Mythical Man-Month 59
NAIRU 251 nanotechnology 96 Nardone, C. 235, 245 Nash, H. 106 Nash, R. 118 negotiation 8 Nelson, R. R. 52, 58, 61, 115, 119, 190, 198 Nelson, T. H. 43, 45, 86, 112, 119 Net 64, 66, 70, 83 Netscape 48, 59 network(s) 2, 9, 10, 16, 24, 41, 52, 53, 70, 133, 136, 145, 192, 224, 268, 271, 284; bookseller 225; communication 268; connected 66; of games 77; global 85, 195; high (low)-trust 7; influential 9; local 7; neural 96; products 54; sociological 2; structure 2; of trust 64, 65, 70; virtual 67 new economy 1, 8, 9, 12, 13, 127, 128, 129, 130, 138, 169, 170–3, 176, 179, 195, 201–4, 206, 209, 210, 214, 215, 231, 232, 233, 249, 258, 290 new form of production 127 Newell, A. 113, 119 Newman, M. E. J. 192, 198 New York 70 New York University 92 Nielsen, P. 170, 171, 186 Nisbett, R. E. 118 Nock, J. 140 Nohria, N. 197 noise trader 236, 237 Nonaka, I. 34, 43, 45 nonergodic 242 Nooteboom, B. 197 Nordhaus, W. D. 252, 261 Nozick, R. 15, 17 object-oriented programming 102 Ochs, J. 264, 283 Ockman, S. 60 Odlyzko, A. 137, 140 O’Driscoll, G. P. 147, 167, 206, 217 OECD 143, 167, 171, 172, 173, 183, 186, 237 Ogus, A. 305, 307 O’Hearn, Z. 86 Oliner, S. D. 251, 261 Oliver, A. L. 167 Olson, C. 166 Ong, W. 30, 31, 40, 45 opportunism 55 opportunities 12, 13, 23, 26, 41 option 75, 76; call 75
order 39; meaningful 27; spontaneous 105, 113 O’Reilly, T. 50, 59, 62 organization 2, 12, 24, 43, 47, 53, 55, 56, 58, 95, 100; economic 8, 143, 145, 147, 151, 163; internal 9, 145; Oticon spaghetti 160; self-organizing 56; self-regulating 53; spontaneous 57 Organization Science 43 orientation 23, 24; means of 24; mode of 24; mutual 26, 30, 41; subjective 27 Osterfeld, D. 283, 284 Osterman, P. 143, 167, 183, 186 Ostrom, T. 124, 125 The Other Path 6, 66, 68 Oticon 9, 158, 159, 161, 162, 163, 164 Ott, C. 306 ownership 52, 55, 68, 150, 155, 157 Pakistan 240 Palazzo del Consiglio 4 Palmer, T. 111, 119, 120 Palo Alto 93, 95, 96 Pandya, R. 45 Pareto 263 Parke, W. R. 235, 236, 246 Parker, D. 143, 144, 145, 146, 147, 150, 163, 164, 165 Parnas, D. L. 50, 62 Paul, E. F. 307 Penrose, E. 51, 62, 183, 186 perceptions 5, 23, 82 Perez, C. 172, 186 Pergamit, G. 86 Perl 47, 50 Peru 69 Peterson, C. 86, 96 Picot, A. 59, 62, 211, 217 Pillai, R. R. 143, 167 Pines, D. 116, 118 Pisa xv, 4, 215 Plott, C. 115, 120 Polanyi, M. 52, 58, 62, 114, 120, 209, 217 policy 56; monetary 13, 14 Popper, K. 9 Popperian 16, 17, 231 portals 16 Post, D. G. 70, 84, 87, 290, 292, 295, 296, 298, 300, 301, 305, 306 Poulsen, P. T. 164, 167 Powell, W. 146, 167 Prague 85 price 24, 31; fix 208; flex 208; flexible 207 Prigogine, I. 115, 120
printing 39; press 31, 43; revolution 32 probability(ies) 235, 236; survival 235, 236 Problems of Sociology 127 process 25; competitive 227; coordination 25; discovery 15, 206; disequilibrium 97; economic 43; entrepreneurial 21; evolutionary 222; learning 5; market 30, 222, 264, 266; mechanical social 21, 26; Mengerian 94; orientative social 21; self-organizing 2, 6; self-regulating; short-memory 242; social 21 product: network 57 production 47, 54; process 47 productivity 34, 175; higher 254 profession 52, 53 profitability 233 programming 100; practice 100, 101 The Progress of Subjectivism 42 Projects and Products Committee (PPC) 160, 161, 162, 163 property 292, 293; holder 79; intellectual 301; owner 64, 293; private 165; -system 64 property rights 6, 7, 11, 15, 17, 31, 63, 102, 104, 111, 280, 301; experimental 294 Prusak, L. 145, 167 Prychitko, D. L. 116 Pryor, F. L. 124, 125 psychologism 4 Puccini, G. 4 Pulitini, F. 305, 306 Quah, D. 128, 140 QWERTY 111 Radner, C. B. 198 Radner, R. 198 Rahn, R. 272, 283, 285 Raish, W. 210, 213, 217 Rallet, A. 172, 184, 216 rates: interest 252; investment 252 rating 64, 80 rational expectations 109 rationality 10; bounded 10, 192, 193, 195, 196 Raymond, E. S. 49, 51, 53, 54, 55, 58, 59, 62 real time 49, 58 Rector, R. 92, 93 Reeke, G. N. Jr. 114, 120 Rees, J. 70, 87 refutation 99, 100 regulation 14, 272, 290 regulatory: arbitrage 83; capture 65, 83
Reidenberg, J. 304, 307 relationships 132, 135 reputation 6, 14, 15, 52, 53, 57, 59, 130, 162, 224, 227 resources 103 restructuring 39; of articulations 43 returns: increasing 8 Rey, P. 139 Richardson, G. B. 9, 10, 17, 51, 62, 177, 178, 183, 184, 186 rights 75, 76; base 79; decision 153, 155, 160; decision-making 157, 159; delegated 9, 150; electronic 77; enforceable 7; ownership 9; transfer of 68; see also property rights Riker, W. H. 298, 307 Ripperger, T. 62 Rizzello, S. 215, 217 Rizzo, M. J. 42, 147, 167, 206, 217, 304 Robbins, L. 101 Roberts, J. 143, 153, 164, 167 Robertson, P. L. 56, 58, 59, 60, 177, 179, 185 Rosenberg, D. K. 48, 58, 59, 62 Rosenberg, N. 55, 61 Rosenblatt, F. 105, 120 Rosser, J. 245 routines 52, 58, 190, 208 Rubinstein, A. 136, 140 rules 14, 69, 86, 300; abstract 56; ethical 295; informal 298 Rumelhart, D. E. 114, 119 Russia 71, 255 Rust, J. 120 Ryle, G. 16, 58, 62 Sabel, C. 166 Salin, P. 93, 95, 110 Salzberger, E. 300, 304, 305 Samuels, W. J. 299, 307 Samuelson, P. 59, 62 Santa Fe Institute 115 Santomero, A. 273, 283 Sargent, T. J. 119 Sarjanovic, I. 12, 13, 231 Savage, D. 6, 51, 52, 53, 56, 62 Say, J. B. 232, 246 Schaefer, H. B. 306 Schlör, H. 215 Schmidt, C. 139, 215, 217 Schnabl, H. 196, 198 Schredelseker, K. 245 Schuler, K. 94 Schumpeter, J. A. 196, 198, 215, 217
Schumpeterian perspective 249 Schwartz, J. T. 114, 120 Schwartzstein, L. A. 293, 305, 307 search models 126 Seater, J. 273, 283 Secure Electronic Transaction 226 seigniorage 275 selection: adverse 12, 129, 218 self: -governing 289; -interest 228; -interpretation 97; -organization 52, 56; -organizing 159, 207; -police 228; -regulation 270, 289; -reinforcing 126; -reinforcement 207; -sufficiency 207 Selgin, G. 72, 276, 283, 284 Sen, A. 197 Sendmail 47 The Sensory Order 105, 106 Serino, M. 33, 36, 37, 46 Shapiro, C. 111, 118, 127, 134, 140, 201, 205, 217 Shapiro, J. S. 70, 86, 87 Sharp, D. H. 114, 116 Shavell, S. 304, 307 Shearmur, J. 228 Shriver, B. 113, 120 Sichel, D. E. 251, 261 Siena 215 Silicon Valley 85, 93 Silverberg, G. 115, 120 Simon, H. A. 10, 113, 119, 120, 146, 148, 149, 151, 155, 168, 186, 193, 198 simulation 108; computer 108; Mengerian 109 Sinaika, Y. M. 245 Sloan Management Review 43 Slovenian 235 Slovic, P. 118 small world 189, 190, 192 Smith, A. 67, 228, 232, 233, 246 Smith, D. E. 34 Smith, M. 204, 215, 216, 217 Smith, V. 266, 283 Smith, V. L. 110, 115, 120, 122 Snow, C. C. 167 The Social Life of Information 34 society(ies) 65; Confucian 65 The Society of Mind 106 software 5, 6, 8, 49, 54, 55, 57, 97, 98, 100, 267; capabilities 34; as capital 97, 121; code 51; engineers 43; open source (OSS) 6, 47, 49, 50, 59 Software As Capital 42, 121 Sokolowski, R. 120 Solomon, E. H. 269, 283 Solow, R. 16, 17
source: open 49, 50, 53, 56, 57, 58; shared 49 Soviet Union 123 Spain 173 specialization 180, 181 Spinosa, C. 23, 45 spontaneous: mechanism 14 Stahl, D. O. 139 Stallman, R. 57 standardization 54 Stanford 51; University 96 Stanley, T. 81, 86 Steckbeck, M. 12, 15, 72, 218, 230 Stefanski, P. 94 Stefansson, B. 124, 125 Stefik, M. J. 112, 120 Steinmueller, W. E. 170, 171, 186 Stengers, I. 115, 120 Stevens, E. 283, 285 Stieger, R. 87 Stiegler, M. 4, 8, 10, 15, 45, 63, 91, 96, 123 Stiglitz, J. E. 173, 174, 183, 186 stock 76, 79 Stone, M. 60 Stoneman, P. 116 Stornetta, S. 120 Storr, V. 42 Strauss, G. 166 Streissler, E. W. 140 Streissler, M. 140 Strogatz, S. H. 192, 198 structure 27, 29; hierarchical 38; intertextual 33; knowledge 33; meta- 38; of production 27 Subjectivism 5, 21, 30, 42 Sussman, G. J. 107, 110, 112, 116 Swan, C. xv system: decentralized 113; incentive 154; legal 290; open 113 Szabo, N. 71, 81, 84, 86, 87 Thagard, P. R. 118 taxis 206 technological: convergence 55; progress 254 technology 55; general purpose (GPT) 55, 56; information (IT) 202–5; information and communication (ICT) 55, 57, 111, 169, 170–6, 182; new 57, 301; payments 262; see also communication and information telecommunication 33 Teece, D. J. 181, 186 Terna, P. 4, 8, 91, 123, 124, 125 Tesfatsion, L. 125
text 27–30, 33, 42; electronic 38, 39 theory: Austrian business cycle 13; Austrian capital 42; business cycle 13; of the firm 9; general equilibrium 205; lemons 219; of money 126; neoclassical 263; Walrasian 205 Theory of Political Economy 126 third-party assayable 75 Third World 7, 63, 65, 68, 69, 70, 83, 86 Thomas, D. 113, 120 Thorp, J. 34, 45 ties: strong 191; weak 191 time 75; Newtonian 206; preferences 223 Tirole, J. 53, 61, 139, 154, 158, 164, 165 title 68, 80 Tobin, J. 283 Tomlinson, M. 143, 144, 168 tool: categorizing 40 Torre, D. 8, 126 Torvalds, L. 58 Tosi, E. 8, 126 Trajtenberg, M. 55, 60 transaction: complex 31, 292; electronic 292 transferability: global 64, 69, 74 transformation: media 33 transition 30 transnationality 278, 279, 281 Tribble, E. D. 45, 70, 72, 86, 87, 88, 119, 122 Trombly, M. 229 trust 7, 8, 12, 14, 15, 64, 65, 66, 73, 83, 214; family 65; high 65; low 65, 67 Tullock, G. 112, 120 Tulloh, W. xvi, 3, 42, 86, 91, 92, 94, 125 Turcanu, A. 59, 61 Turkish 255 Turkle, S. 105, 120 Tversky, A. 118 uncertainty 129, 179, 234, 263 Understanding Computers and Cognition 106 United States 2, 13, 17 University of Connecticut 57 University of Pisa xv, 4 untraceability 270 URL 37 US Department of Commerce 202, 217 USA 238, 241, 251, 252, 253, 255, 258 USSR 240, 241 utility 55 ‘utterance’ 30 values 27 Van Reenen, J. 183, 184
van Zijp, R. 139 Varian, A. 136, 139 Varian, H. R. 127, 140, 201, 205, 217 Vaughn, K. I. 42, 45 verbal 32 verbalization 31 Versailles, D. 215, 217 Video Contract 81 VIEWS 121–2 village 70, 80 Vinge, V. 75, 88 Volle, M. 129, 140 volume, volatility, velocity, virtuality 256 von Hippel, E. 58, 61 Waldmeir, P. 304, 307 Waldspurger, C. A. 115, 120 Walker, J. 82, 88 Wall Street 76, 80 Wallis, J. R. 242, 243, 246 Warden, E. A. 182, 186 Washington Evolutionary Systems Society 93 Wachterhauser, B. R. 42, 45 Watts, D. J. 192, 198 wealth 21, 33, 34, 41; real 22 Web 35, 43, 121, 122, 135; browsers 35 Weber, M. 148, 149 Wegner, P. 113, 120 Weimer, D. L. 298, 306 Weisbuch, G. 190, 198 Weiss, D. M. 62 Werin, L. 166 West, The 67, 68, 83, 85, 86 West, J. 58, 62 Whang, S. 59, 61 What Computers Can’t Do 106
Whinston, A. B. 139 White, M. 245 Wicksell, K. 250, 261 Wieser, F. von 11, 12, 42, 208, 209, 211, 215, 217 Wijkander, H. 166 William Demant Holding A/S 158 Williamson, O. E. 59, 62, 146, 153, 157, 158, 164, 168, 298, 304, 307 Wimmer, B. S. 136, 140 Winograd, T. 106, 114, 120 Winter, S. G. 52, 58, 61, 115, 119, 190, 198 Wittgenstein, Ludwig 105 Wolff, B. 62 Wolinsky, A. 136, 140 Woodwall, P. 202 Woodward, B. 259, 261 Wool, A. 75, 86 Wright, R. 139, 263, 283 writing 31, 34, 39, 43; /reading 31 Wruck, K. 143, 144, 166 Xanadu 96, 99, 102, 110, 121, 122, 123; Operating Company 95 Xerox Palo Alto Research Center 96 Yates, J. 61, 119 Yeager, L. B. 234, 235, 242, 245 Yee, K.-P. 86 Zack, M. H. 33, 34, 36, 37, 45–6 Zander, U. 56, 61 Zeus 48 Zingales, L. 146, 168 Zucker, L. 146, 148, 167, 168 Zuscovitch, E. 170, 186