Environmental encyclopedia: N-Z

Environmental Encyclopedia Third Edition Volume 1 A-M

Marci Bortman, Peter Brimblecombe, Mary Ann Cunningham, William P. Cunningham, and William Freedman, Editors

Environmental Encyclopedia Third Edition Volume 2 N-Z Historical Chronology U.S. Environmental Legislation Organizations General Index Marci Bortman, Peter Brimblecombe, Mary Ann Cunningham, William P. Cunningham, and William Freedman, Editors

Disclaimer: Some images in the original version of this book are not available for inclusion in the eBook.

Environmental Encyclopedia 3 Marci Bortman, Peter Brimblecombe, William Freedman, Mary Ann Cunningham, William P. Cunningham

Project Coordinator Jacqueline L. Longe

Editorial Systems Support Andrea Lopeman

Editorial Deirdre S. Blanchfield, Madeline Harris, Chris Jeryan, Kate Kretschmann, Mark Springer, Ryan Thomason

Permissions Shalice Shah-Caldwell

Imaging and Multimedia Robert Duncan, Mary Grimes, Lezlie Light, Dan Newell, David Oblender, Christine O’Bryan, Kelly A. Quin

Product Design Michelle DiMercurio, Tracey Rowens, Jennifer Wahi

Manufacturing Evi Seoud, Rita Wimberley

©2003 by Gale. Gale is an imprint of the Gale Group, Inc., a division of Thomson Learning, Inc.

Gale and Design™ and Thomson Learning™ are trademarks used herein under license. For more information, contact The Gale Group, Inc., 27500 Drake Road, Farmington Hills, MI 48331-3535, or visit our Internet site at http://www.gale.com

ALL RIGHTS RESERVED. No part of this work covered by the copyright hereon may be reproduced or used in any form or by any means—graphic, electronic, or mechanical, including photocopying, recording, taping, Web distribution, or information storage retrieval systems—without the written permission of the publisher.

For permission to use material from this product, submit your request via the Web at http://www.gale-edit.com/permissions, or you may download our Permissions Request form and submit your request by fax or mail to:

Permissions Department, The Gale Group, Inc., 27500 Drake Road, Farmington Hills, MI 48331-3535. Permissions hotline: 248-699-8006 or 800-877-4253, ext. 8006. Fax: 248-699-8074 or 800-762-4058.

Since this page cannot legibly accommodate all copyright notices, the acknowledgments constitute an extension of the copyright notice.

ISBN 0-7876-5486-8 (set), ISBN 0-7876-5487-6 (Vol. 1), ISBN 0-7876-5488-4 (Vol. 2), ISSN 1072-5083

Printed in the United States of America 10 9 8 7 6 5 4 3 2 1

While every effort has been made to ensure the reliability of the information presented in this publication, The Gale Group, Inc. does not guarantee the accuracy of the data contained herein. The Gale Group, Inc. accepts no payment for listing, and inclusion in the publication of any organization, agency, institution, publication, service, or individual does not imply endorsement of the editors or publisher. Errors brought to the attention of the publisher and verified to the satisfaction of the publisher will be corrected in future editions.

CONTENTS

ADVISORY BOARD
CONTRIBUTORS
HOW TO USE THIS BOOK
INTRODUCTION

VOLUME 1 (A-M)

A Abbey, Edward Absorption Acclimation Accounting for nature Accuracy Acetone Acid and base Acid deposition Acid mine drainage Acid rain Acidification Activated sludge Acute effects Adams, Ansel Adaptation Adaptive management Adirondack Mountains Adsorption Aeration Aerobic Aerobic sludge digestion Aerobic/anaerobic systems Aerosol Aflatoxin African Wildlife Foundation Africanized bees Agency for Toxic Substances and Disease Registry Agent Orange Agglomeration Agricultural chemicals Agricultural environmental management

Agricultural pollution Agricultural Research Service Agricultural revolution Agricultural Stabilization and Conservation Service Agroecology Agroforestry AIDS Air and Waste Management Association Air pollution Air pollution control Air pollution index Air quality Air quality control region Air quality criteria Airshed Alar Alaska Highway Alaska National Interest Lands Conservation Act (1980) Albedo Algal bloom Algicide Allelopathy Allergen Alligator, American Alpha particle Alternative energy sources Aluminum Amazon basin Ambient air Amenity value American box turtle American Cetacean Society American Committee for International Conservation American Farmland Trust American Forests American Indian Environmental Office American Oceans Campaign American Wildlands Ames test Amoco Cadiz Amory, Cleveland

Anaerobic Anaerobic digestion Anemia Animal cancer tests Animal Legal Defense Fund Animal rights Animal waste Animal Welfare Institute Antarctic Treaty (1961) Antarctica Antarctica Project Anthrax Anthropogenic Antibiotic resistance Aquaculture Aquarium trade Aquatic chemistry Aquatic microbiology Aquatic toxicology Aquatic weed control Aquifer Aquifer depletion Aquifer restoration Arable land Aral Sea Arco, Idaho Arctic Council Arctic haze Arctic National Wildlife Refuge Arid Arid landscaping Army Corps of Engineers Arrhenius, Svante Arsenic Arsenic-treated lumber Artesian well Asbestos Asbestos removal Asbestosis Ashio, Japan Asian longhorn beetle Asian (Pacific) shore crab Asiatic black bear Assimilative capacity Francis of Assisi, St. Asthma Aswan High Dam Atmosphere Atmospheric (air) pollutants Atmospheric deposition Atmospheric inversion Atomic Energy Commission Atrazine Attainment area Audubon, John James

Australia Autecology Automobile Automobile emissions Autotroph Avalanche

B Bacillus thuringiensis Background radiation Bacon, Sir Francis Baghouse Balance of nature Bald eagle Barrier island Basel Convention Bass, Rick Bats Battery recycling Bay of Fundy Beach renourishment Beattie, Mollie Bellwether species Below Regulatory Concern Bennett, Hugh Hammond Benzene Benzo(a)pyrene Berry, Wendell Best available control technology Best management practices Best Practical Technology Beta particle Beyond Pesticides Bhopal, India Bikini atoll Bioaccumulation Bioaerosols Bioassay Bioassessment Biochemical oxygen demand Biodegradable Biodiversity Biofilms Biofiltration Biofouling Biogeochemistry Biogeography Biohydrometallurgy Bioindicator Biological community Biological fertility Biological methylation Biological Resources Division Bioluminescence

Biomagnification Biomass Biomass fuel Biome Biophilia Bioregional Project Bioregionalism Bioremediation Biosequence Biosphere Biosphere reserve Biotechnology Bioterrorism Biotic community Biotoxins Bioventing BirdLife International Birth defects Bison Black lung disease Black-footed ferret Blackout/brownout Blow-out Blue Angel Blue revolution (fish farming) Blue-baby syndrome Bookchin, Murray Borlaug, Norman E. Boston Harbor clean up Botanical garden Boulding, Kenneth Boundary Waters Canoe Area Brackish Bromine Bronchitis Brower, David Ross Brown, Lester R. Brown pelican Brown tree snake Browner, Carol Brownfields Brundtland, Gro Harlem Btu Budyko, Mikhail I. Buffer Bulk density Burden of proof Bureau of Land Management Bureau of Oceans and International Environmental and Scientific Affairs (OES) Bureau of Reclamation Buried soil Burroughs, John Bush meat/market Bycatch

Bycatch reduction devices

C Cadmium Cairo conference Calcareous soil Caldicott, Helen Caldwell, Lynton Keith California condor Callicott, John Baird Canadian Forest Service Canadian Parks Service Canadian Wildlife Service Cancer Captive propagation and reintroduction Carbon Carbon cycle Carbon dioxide Carbon emissions trading Carbon monoxide Carbon offsets (CO2-emission offsets) Carbon tax Carcinogen Carrying capacity Carson, Rachel Cash crop Catalytic converter Catskill watershed protection plan Center for Environmental Philosophy Center for Respect of Life and Environment Center for Rural Affairs Center for Science in the Public Interest Centers for Disease Control and Prevention Cesium 137 Chain reaction Chaparral Chelate Chelyabinsk, Russia Chemical bond Chemical oxygen demand Chemical spills Chemicals Chemosynthesis Chernobyl Nuclear Power Station Chesapeake Bay Child survival revolution Chimpanzees Chipko Andolan movement Chlordane Chlorinated hydrocarbons Chlorination Chlorine Chlorine monoxide Chlorofluorocarbons

Cholera Cholinesterase inhibitor Chromatography Chronic effects Cigarette smoke Citizen science Citizens for a Better Environment Clay minerals Clay-hard pan Clayoquot Sound Clean Air Act (1963, 1970, 1990) Clean coal technology Clean Water Act (1972, 1977, 1987) Clear-cutting Clements, Frederic E. Climate Climax (ecological) Clod Cloning Cloud chemistry Club of Rome C:N ratio Coal Coal bed methane Coal gasification Coal washing Coase theorem Coastal Society, The Coastal Zone Management Act (1972) Co-composting Coevolution Cogeneration Cold fusion Coliform bacteria Colorado River Combined Sewer Overflows Combustion Cometabolism Commensalism Commercial fishing Commission for Environmental Cooperation Commoner, Barry Communicable diseases Community ecology Compaction Comparative risk Competition Competitive exclusion Composting Comprehensive Environmental Response, Compensation, and Liability Act Computer disposal Condensation nuclei Congo River and basin Coniferous forest

Conservation Conservation biology Conservation easements Conservation International Conservation Reserve Program (CRP) Conservation tillage Consultative Group on International Agricultural Research Container deposit legislation Contaminated soil Contour plowing Convention on International Trade in Endangered Species of Wild Fauna and Flora (1975) Convention on Long-Range Transboundary Air Pollution (1979) Convention on the Conservation of Migratory Species of Wild Animals (1979) Convention on the Law of the Sea (1982) Convention on the Prevention of Marine Pollution by Dumping of Waste and Other Matter (1972) Convention on Wetlands of International Importance (1971) Conventional pollutant Copper Coprecipitation Coral bleaching Coral reef Corporate Average Fuel Economy standards Corrosion and material degradation Cost-benefit analysis Costle, Douglas M. Council on Environmental Quality Cousteau, Jacques-Yves Cousteau Society, The Coyote Criteria pollutant Critical habitat Crocodiles Cronon, William Cross-Florida Barge Canal Crutzen, Paul Cryptosporidium Cubatão, Brazil Cultural eutrophication Cuyahoga River Cyclone collector

D Dam removal Dams (environmental effects) Darling, Jay Norwood “Ding” Darwin, Charles Robert Dead zones Debt for nature swap Deciduous forest Decline spiral

Decomposers Decomposition Deep ecology Deep-well injection Defenders of Wildlife Defoliation Deforestation Delaney Clause Demographic transition Denitrification Deoxyribose nucleic acid Desalinization Desert Desert tortoise Desertification Design for disassembly Detergents Detoxification Detritivores Detritus Dew point Diazinon Dichlorodiphenyl-trichloroethane Dieback Die-off Dillard, Annie Dioxin Discharge Disposable diapers Dissolved oxygen Dissolved solids Dodo Dolphins Dominance Dose response Double-crested cormorants Douglas, Marjory Stoneman Drainage Dredging Drift nets Drinking-water supply Drip irrigation Drought Dry alkali injection Dry cask storage Dry cleaning Dry deposition Dryland farming Dubos, René Ducks Unlimited Ducktown, Tennessee Dunes and dune erosion Dust Bowl

E Earth Charter Earth Day Earth First! Earth Island Institute Earth Liberation Front Earth Pledge Foundation Earthquake Earthwatch Eastern European pollution Ebola Eco Mark Ecocide Ecofeminism Ecojustice Ecological consumers Ecological economics Ecological integrity Ecological productivity Ecological risk assessment Ecological Society of America Ecology EcoNet Economic growth and the environment Ecosophy Ecosystem Ecosystem health Ecosystem management Ecoterrorism Ecotone Ecotourism Ecotoxicology Ecotype Edaphic Edaphology Eelgrass Effluent Effluent tax EH Ehrlich, Paul El Niño Electric utilities Electromagnetic field Electron acceptor and donor Electrostatic precipitation Elemental analysis Elephants Elton, Charles Emergency Planning and Community Right-to-Know Act (1986) Emergent diseases (human) Emergent ecological diseases Emission Emission standards

Emphysema Endangered species Endangered Species Act (1973) Endemic species Endocrine disruptors Energy and the environment Energy conservation Energy efficiency Energy flow Energy path, hard vs. soft Energy policy Energy recovery Energy Reorganization Act (1973) Energy Research and Development Administration Energy taxes Enteric bacteria Environment Environment Canada Environmental accounting Environmental aesthetics Environmental auditing Environmental chemistry Environmental Defense Environmental degradation Environmental design Environmental dispute resolution Environmental economics Environmental education Environmental enforcement Environmental engineering Environmental estrogens Environmental ethics Environmental health Environmental history Environmental impact assessment Environmental Impact Statement Environmental law Environmental Law Institute Environmental liability Environmental literacy and ecocriticism Environmental monitoring Environmental Monitoring and Assessment Program Environmental policy Environmental Protection Agency Environmental racism Environmental refugees Environmental resources Environmental science Environmental stress Environmental Stress Index Environmental Working Group Environmentalism Environmentally Preferable Purchasing Environmentally responsible investing Enzyme

Ephemeral species Epidemiology Erodible Erosion Escherichia coli Essential fish habitat Estuary Ethanol Ethnobotany Eurasian milfoil European Union Eutectic Evapotranspiration Everglades Evolution Exclusive Economic Zone Exotic species Experimental Lakes Area Exponential growth Externality Extinction Exxon Valdez

F Family planning Famine Fauna Fecundity Federal Energy Regulatory Commission Federal Insecticide, Fungicide and Rodenticide Act (1972) Federal Land Policy and Management Act (1976) Federal Power Commission Feedlot runoff Feedlots Fertilizer Fibrosis Field capacity Filters Filtration Fire ants First World Fish and Wildlife Service Fish kills Fisheries and Oceans Canada Floatable debris Flooding Floodplain Flora Florida panther Flotation Flu pandemic Flue gas Flue-gas scrubbing Fluidized bed combustion

Fluoridation Fly ash Flyway Food additives Food and Drug Administration Food chain/web Food irradiation Food policy Food waste Food-borne diseases Foot and mouth disease Forbes, Stephen A. Forel, François-Alphonse Foreman, Dave Forest and Rangeland Renewable Resources Planning Act (1974) Forest decline Forest management Forest Service Fossey, Dian Fossil fuels Fossil water Four Corners Fox hunting Free riders Freon Fresh water ecology Friends of the Earth Frogs Frontier economy Frost heaving Fuel cells Fuel switching Fugitive emissions Fumigation Fund for Animals Fungi Fungicide Furans Future generations

G Gaia hypothesis Galápagos Islands Galdikas, Birute Game animal Game preserves Gamma ray Gandhi, Mohandas Karamchand Garbage Garbage Project Garbology Gasohol Gasoline

Gasoline tax Gastropods Gene bank Gene pool Genetic engineering Genetic resistance (or genetic tolerance) Genetically engineered organism Genetically modified organism Geodegradable Geographic information systems Geological Survey Georges Bank Geosphere Geothermal energy Giant panda Giardia Gibbons Gibbs, Lois Gill nets Glaciation Gleason, Henry A. Glen Canyon Dam Global Environment Monitoring System Global Forum Global ReLeaf Global Tomorrow Coalition Goiter Golf courses Good wood Goodall, Jane Gore Jr., Albert Gorillas Grand Staircase-Escalante National Monument Grasslands Grazing on public lands Great Barrier Reef Great Lakes Great Lakes Water Quality Agreement (1978) Great Smoky Mountains Green advertising and marketing Green belt/greenway Green Cross Green packaging Green plans Green politics Green products Green Seal Green taxes Greenhouse effect Greenhouse gases Greenpeace Greens Grinevald, Jacques Grizzly bear Groundwater

Groundwater monitoring Groundwater pollution Growth curve Growth limiting factors Guano Guinea worm eradication Gulf War syndrome Gullied land Gypsy moth

H Haagen-Smit, Arie Jan Habitat Habitat conservation plans Habitat fragmentation Haeckel, Ernst Half-life Halons Hanford Nuclear Reservation Hardin, Garrett Hawaiian Islands Hayes, Denis Hazard Ranking System Hazardous material Hazardous Materials Transportation Act (1975) Hazardous Substances Act (1960) Hazardous waste Hazardous waste site remediation Hazardous waste siting Haze Heat (stress) index Heavy metals and heavy metal poisoning Heavy metals precipitation Heilbroner, Robert L. Hells Canyon Henderson, Hazel Herbicide Heritage Conservation and Recreation Service Hetch Hetchy Reservoir Heterotroph High-grading (mining, forestry) High-level radioactive waste High-solids reactor Hiroshima, Japan Holistic approach Homeostasis Homestead Act (1862) Horizon Horseshoe crabs Household waste Hubbard Brook Experimental Forest Hudson River Human ecology Humane Society of the United States

Humanism Human-powered vehicles Humus Hunting and trapping Hurricane Hutchinson, George E. Hybrid vehicles Hydrocarbons Hydrochlorofluorocarbons Hydrogen Hydrogeology Hydrologic cycle Hydrology Hydroponics Hydrothermal vents

I Ice age Ice age refugia Impervious material Improvement cutting Inbreeding Incineration Indicator organism Indigenous peoples Indonesian forest fires Indoor air quality Industrial waste treatment Infiltration INFORM INFOTERRA (U.N. Environment Programme) Injection well Inoculate Integrated pest management Intergenerational justice Intergovernmental Panel on Climate Change Internalizing costs International Atomic Energy Agency International Cleaner Production Cooperative International Convention for the Regulation of Whaling (1946) International Geosphere-Biosphere Programme International Institute for Sustainable Development International Joint Commission International Primate Protection League International Register of Potentially Toxic Chemicals International Society for Environmental Ethics International trade in toxic waste International Voluntary Standards International Wildlife Coalition Intrinsic value Introduced species Iodine 131 Ion

Ion exchange Ionizing radiation Iron minerals Irrigation Island biogeography ISO 14000: International Environmental Management Standards Isotope Itai-itai disease IUCN—The World Conservation Union Ivory-billed woodpecker Izaak Walton League

J Jackson, Wes James Bay hydropower project Japanese logging

K Kapirowitz Plateau Kennedy Jr., Robert Kepone Kesterson National Wildlife Refuge Ketones Keystone species Kirtland’s warbler Krakatoa Krill Krutch, Joseph Wood Kudzu Kwashiorkor Kyoto Protocol/Treaty

L La Niña La Paz Agreement Lagoon Lake Baikal Lake Erie Lake Tahoe Lake Washington Land ethic Land Institute Land reform Land stewardship Land Stewardship Project Land trusts Land use Landfill Landscape ecology Landslide Land-use control Latency

Lawn treatment LD50 Leaching Lead Lead management Lead shot Leafy spurge League of Conservation Voters Leakey, Louis Leakey, Mary Leakey, Richard E. Leaking underground storage tank Leopold, Aldo Less developed countries Leukemia Lichens Life cycle assessment Limits to Growth (1972) and Beyond the Limits (1992) Limnology Lindeman, Raymond L. Liquid metal fast breeder reactor Liquified natural gas Lithology Littoral zone Loading Logging Logistic growth Lomborg, Bjørn Lopez, Barry Los Angeles Basin Love Canal Lovelock, Sir James Ephraim Lovins, Amory B. Lowest Achievable Emission Rate Low-head hydropower Low-level radioactive waste Lyell, Charles Lysimeter

M MacArthur, Robert Mad cow disease Madagascar Magnetic separation Malaria Male contraceptives Man and the Biosphere Program Manatees Mangrove swamp Marasmus Mariculture Marine ecology and biodiversity Marine Mammals Protection Act (1972) Marine pollution

Marine protection areas Marine Protection, Research and Sanctuaries Act (1972) Marine provinces Marsh, George Perkins Marshall, Robert Mass burn Mass extinction Mass spectrometry Mass transit Material Safety Data Sheets Materials balance approach Maximum permissible concentration McHarg, Ian McKibben, Bill Measurement and sensing Medical waste Mediterranean fruit fly Mediterranean Sea Megawatt Mendes, Chico Mercury Metabolism Metals, as contaminants Meteorology Methane Methane digester Methanol Methyl tertiary butyl ether Methylation Methylmercury seed dressings Mexico City, Mexico Microbes (microorganisms) Microbial pathogens Microclimate Migration Milankovitch weather cycles Minamata disease Mine spoil waste Mineral Leasing Act (1920) Mining, undersea Mirex Mission to Planet Earth (NASA) Mixing zones Modeling (computer applications) Molina, Mario Monarch butterfly Monkey-wrenching Mono Lake Monoculture Monsoon Montreal Protocol on Substances That Deplete the Ozone Layer (1987) More developed country Mortality Mount Pinatubo

Mount St. Helens Muir, John Mulch Multiple chemical sensitivity Multiple Use-Sustained Yield Act (1960) Multi-species management Municipal solid waste Municipal solid waste composting Mutagen Mutation Mutualism Mycorrhiza Mycotoxin

VOLUME 2 (N-Z)

N Nader, Ralph Naess, Arne Nagasaki, Japan National Academy of Sciences National Air Toxics Information Clearinghouse National Ambient Air Quality Standard National Audubon Society National Emission Standards for Hazardous Air Pollutants National Environmental Policy Act (1969) National Estuary Program National forest National Forest Management Act (1976) National Institute for the Environment National Institute for Occupational Safety and Health National Institute for Urban Wildlife National Institute of Environmental Health Sciences National lakeshore National Mining and Minerals Act (1970) National Oceanic and Atmospheric Administration (NOAA) National park National Park Service National Parks and Conservation Association National pollution discharge elimination system National Priorities List National Recycling Coalition National Research Council National seashore National Wildlife Federation National wildlife refuge Native landscaping Natural gas Natural resources Natural Resources Defense Council Nature Nature Conservancy, The Nearing, Scott Nekton Neoplasm

Neotropical migrants Neritic zone Neurotoxin Neutron Nevada Test Site New Madrid, Missouri New Source Performance Standard New York Bight Niche Nickel Nitrates and nitrites Nitrification Nitrogen Nitrogen cycle Nitrogen fixation Nitrogen oxides Nitrogen waste Nitrous oxide Noise pollution Nonattainment area Noncriteria pollutant Nondegradable pollutant Nongame wildlife Nongovernmental organization Nonpoint source Nonrenewable resources Non-timber forest products Non-Western environmental ethics No-observable-adverse-effect-level North North American Association for Environmental Education North American Free Trade Agreement North American Water And Power Alliance Northern spotted owl Not In My Backyard Nuclear fission Nuclear fusion Nuclear power Nuclear Regulatory Commission Nuclear test ban Nuclear weapons Nuclear winter Nucleic acid Nutrient

O Oak Ridge, Tennessee Occupational Safety and Health Act (1970) Occupational Safety and Health Administration Ocean Conservatory, The Ocean dumping Ocean Dumping Ban Act (1988) Ocean farming Ocean outfalls

Ocean thermal energy conversion Octane rating Odén, Svante Odor control Odum, Eugene P. Office of Civilian Radioactive Waste Management Office of Management and Budget Office of Surface Mining Off-road vehicles Ogallala Aquifer Oil drilling Oil embargo Oil Pollution Act (1990) Oil shale Oil spills Old-growth forest Oligotrophic Olmsted Sr., Frederick Law Open marsh water management Open system Opportunistic organism Orangutan Order of magnitude Oregon silverspot butterfly Organic gardening and farming Organic waste Organization of Petroleum Exporting Countries Organochloride Organophosphate Orr, David W. Osborn, Henry Fairfield Osmosis Our Common Future (Brundtland Report) Overburden Overfishing Overgrazing Overhunting Oxidation reduction reactions Oxidizing agent Ozonation Ozone Ozone layer depletion

P Paleoecology/paleolimnology Parasites Pareto optimality (Maximum social welfare) Parrots and parakeets Particulate Partnership for Pollution Prevention Parts per billion Parts per million Parts per trillion Passenger pigeon

Passive solar design Passmore, John A. Pathogen Patrick, Ruth Peat soils Peatlands Pedology Pelagic zone Pentachlorophenol People for the Ethical Treatment of Animals Peptides Percolation Peregrine falcon Perfluorooctane sulfonate Permaculture Permafrost Permanent retrievable storage Permeable Peroxyacetyl nitrate Persian Gulf War Persistent compound Persistent organic pollutants Pest Pesticide Pesticide Action Network Pesticide residue Pet trade Peterson, Roger Tory Petrochemical Petroleum Pfiesteria pH Phosphates Phosphorus removal Phosphorus Photic zone Photochemical reaction Photochemical smog Photodegradable plastic Photoperiod Photosynthesis Photovoltaic cell Phthalates Phytoplankton Phytoremediation Phytotoxicity Pinchot, Gifford Placer mining Plague Plankton Plant pathology Plasma Plastics Plate tectonics Plow pan

Plume Plutonium Poaching Point source Poisoning Pollination Pollution Pollution control costs and benefits Pollution control Pollution credits Pollution Prevention Act (1990) Polunin, Nicholas Polybrominated biphenyls Polychlorinated biphenyls Polycyclic aromatic hydrocarbons Polycyclic organic compounds Polystyrene Polyvinyl chloride Population biology Population Council Population growth Population Institute Porter, Eliot Furness Positional goods Postmodernism and environmental ethics Powell, John Wesley Power plants Prairie Prairie dog Precision Precycling Predator control Predator-prey interactions Prescribed burning President’s Council on Sustainable Development Price-Anderson Act (1957) Primary pollutant Primary productivity (Gross and net) Primary standards Prince William Sound Priority pollutant Privatization movement Probability Project Eco-School Propellants Public Health Service Public interest group Public land Public Lands Council Public trust Puget Sound/Georgia Basin International Task Force Pulp and paper mills Purple loosestrife

Q Quammen, David

R Rabbits in Australia Rachel Carson Council Radiation exposure Radiation sickness Radioactive decay Radioactive fallout Radioactive pollution Radioactive waste Radioactive waste management Radioactivity Radiocarbon dating Radioisotope Radiological emergency response team Radionuclides Radiotracer Radon Rails-to-Trails Conservancy Rain forest Rain shadow Rainforest Action Network Rangelands Raprenox (nitrogen scrubbing) Rare species Rathje, William Recharge zone Reclamation Record of decision Recreation Recyclables Recycling Red tide Redwoods Refuse-derived fuels Regan, Tom [Thomas Howard] Regulatory review Rehabilitation Reilly, William K. Relict species Religion and the environment Remediation Renew America Renewable energy Renewable Natural Resources Foundation Reserve Mining Corporation Reservoir Residence time Resilience Resistance (inertia) Resource Conservation and Recovery Act

Resource recovery Resources for the Future Respiration Respiratory diseases Restoration ecology Retention time Reuse Rhinoceroses Ribonucleic acid Richards, Ellen Henrietta Swallow Right-to-Act legislation Right-to-know Riparian land Riparian rights Risk analysis Risk assessment (public health) Risk assessors River basins River blindness River dolphins Rocky Flats nuclear plant Rocky Mountain Arsenal Rocky Mountain Institute Rodale Institute Rolston, Holmes Ronsard, Pierre Roosevelt, Theodore Roszak, Theodore Rowland, Frank Sherwood Rubber Ruckelshaus, William Runoff

S Safe Drinking Water Act (1974) Sagebrush Rebellion Sahel St. Lawrence Seaway Sale, Kirkpatrick Saline soil Salinity Salinization Salinization of soils Salmon Salt, Henry S. Salt (road) Salt water intrusion Sand dune ecology Sanitary sewer overflows Sanitation Santa Barbara oil spill Saprophyte (decomposer) Savanna Savannah River site

Save the Whales Save-the-Redwoods League Scarcity Scavenger Schistosomiasis Schumacher, Ernst Friedrich Schweitzer, Albert Scientists’ Committee on Problems of the Environment Scientists’ Institute for Public Information Scotch broom Scrubbers Sea level change Sea otter Sea Shepherd Conservation Society Sea turtles Seabed disposal Seabrook Nuclear Reactor Seals and sea lions Sears, Paul B. Seattle, Noah Secchi disk Second World Secondary recovery technique Secondary standards Sediment Sedimentation Seed bank Seepage Selection cutting Sense of place Septic tank Serengeti National Park Seveso, Italy Sewage treatment Shade-grown coffee and cacao Shadow pricing Shanty towns Sharks Shepard, Paul Shifting cultivation Shoreline armoring Sick Building Syndrome Sierra Club Silt Siltation Silver Bay Singer, Peter Sinkholes Site index Skidding Slash Slash and burn agriculture Sludge Sludge treatment and disposal Slurry

Small Quantity Generator Smart growth Smelter Smith, Robert Angus Smog Smoke Snail darter Snow leopard Snyder, Gary Social ecology Socially responsible investing Society for Conservation Biology Society of American Foresters Sociobiology Soil Soil and Water Conservation Society Soil compaction Soil conservation Soil Conservation Service Soil consistency Soil eluviation Soil illuviation Soil liner Soil loss tolerance Soil organic matter Soil profile Soil survey Soil texture Solar constant cycle Solar detoxification Solar energy Solar Energy Research, Development and Demonstration Act (1974) Solid waste Solid waste incineration Solid waste landfilling Solid waste recycling and recovery Solid waste volume reduction Solidification of hazardous materials Sonic boom Sorption Source separation South Spaceship Earth Spawning aggregations Special use permit Species Speciesism Spoil Stability Stack emissions Stakeholder analysis Statistics Steady-state economy Stegner, Wallace

Stochastic change Storage and transport of hazardous material Storm King Mountain Storm runoff Storm sewer Strategic lawsuits to intimidate public advocates Strategic minerals Stratification Stratosphere Stream channelization Stringfellow Acid Pits Strip-farming Strip mining Strontium 90 Student Environmental Action Coalition Styrene Submerged aquatic vegetation Subsidence Subsoil Succession Sudbury, Ontario Sulfate particles Sulfur cycle Sulfur dioxide Superconductivity Superfund Amendments and Reauthorization Act (1986) Surface mining Surface Mining Control and Reclamation Act (1977) Survivorship Suspended solid Sustainable agriculture Sustainable architecture Sustainable biosphere Sustainable development Sustainable forestry Swimming advisories Swordfish Symbiosis Synergism Synthetic fuels Systemic

T Taiga Tailings Tailings pond Takings Tall stacks Talloires Declaration Tansley, Arthur G. Tar sands Target species Taylor Grazing Act (1934) Tellico Dam

CONTENTS

Temperate rain forest Tennessee Valley Authority Teratogen Terracing Territorial sea Territoriality Tetrachloroethylene Tetraethyl lead The Global 2000 Report Thermal plume Thermal pollution Thermal stratification (water) Thermocline Thermodynamics, Laws of Thermoplastics Thermosetting polymers Third World Third World pollution Thomas, Lee M. Thoreau, Henry David Three Gorges Dam Three Mile Island Nuclear Reactor Threshold dose Tidal power Tigers Tilth Timberline Times Beach Tipping fee Tobacco Toilets Tolerance level Toluene Topography Topsoil Tornado and cyclone Torrey Canyon Toxaphene Toxic substance Toxic Substances Control Act (1976) Toxics Release Inventory (EPA) Toxics use reduction legislation Toxins Trace element/micronutrient Trade in pollution permits Tragedy of the commons Trail Smelter arbitration Train, Russell E. Trans-Alaska pipeline Trans-Amazonian highway Transboundary pollution Transfer station Transmission lines Transpiration Transportation


Tributyl tin Trihalomethanes Trophic level Tropical rain forest Tropopause Troposphere Tsunamis Tundra Turbidity Turnover time Turtle excluder device 2,4,5-T 2,4-D

U Ultraviolet radiation Uncertainty in science, statistics Union of Concerned Scientists United Nations Commission on Sustainable Development United Nations Conference on the Human Environment (1972) United Nations Division for Sustainable Development United Nations Earth Summit (1992) United Nations Environment Programme Upwellings Uranium Urban contamination Urban design and planning Urban ecology Urban heat island Urban runoff Urban sprawl U.S. Department of Agriculture U.S. Department of Energy U.S. Department of Health and Human Services U.S. Department of the Interior U.S. Public Interest Research Group Used Oil Recycling Utilitarianism

V Vadose zone Valdez Principles Vapor recovery system Vascular plant Vector (mosquito) control Vegan Vegetarianism Vernadsky, Vladímir Victims’ compensation Vinyl chloride Virus Visibility Vogt, William

Volatile organic compound Volcano Vollenweider, Richard

W War, environmental effects of Waste exchange Waste Isolation Pilot Plant Waste management Waste reduction Waste stream Wastewater Water allocation Water conservation Water diversion projects Water Environment Federation Water hyacinth Water pollution Water quality Water quality standards Water reclamation Water resources Water rights Water table Water table draw-down Water treatment Waterkeeper Alliance Waterlogging Watershed Watershed management Watt, James Gaius Wave power Weather modification Weathering Wells Werbach, Adam Wet scrubber Wetlands Whale strandings Whales Whaling White, Gilbert White Jr., Lynn Townsend Whooping crane Wild and Scenic Rivers Act (1968) Wild river Wilderness Wilderness Act (1964) Wilderness Society Wilderness Study Area Wildfire Wildlife Wildlife management Wildlife refuge

Wildlife rehabilitation Wilson, Edward Osborne Wind energy Windscale (Sellafield) plutonium reactor Winter range Wise use movement Wolman, Abel Wolves Woodwell, George M. World Bank World Conservation Strategy World Resources Institute World Trade Organization World Wildlife Fund Worldwatch Institute Wurster, Charles

X X ray Xenobiotic Xylene


Y Yard waste Yellowstone National Park Yokkaichi asthma Yosemite National Park Yucca Mountain

Z Zebra mussel Zebras Zero discharge Zero population growth Zero risk Zone of saturation Zoo Zooplankton

HISTORICAL CHRONOLOGY .........................1555 ENVIRONMENTAL LEGISLATION IN THE UNITED STATES ........................................1561 ORGANIZATIONS ...........................................1567 GENERAL INDEX ...........................................1591


ADVISORY BOARD

A number of recognized experts in the library and environmental communities provided invaluable assistance in the formulation of this encyclopedia. Our panel of advisors helped us shape this publication into its final form, and we would like to express our sincere appreciation to them:

Dean Abrahamson: Hubert H. Humphrey Institute of Public Affairs, University of Minnesota, Minneapolis, Minnesota Maria Jankowska: Library, University of Idaho, Moscow, Idaho Terry Link: Library, Michigan State University, East Lansing, Michigan

Holmes Rolston: Department of Philosophy, Colorado State University, Fort Collins, Colorado Frederick W. Stoss: Science and Engineering Library, State University of New York—Buffalo, Buffalo, New York Hubert J. Thompson: Conrad Sulzer Regional Library, Chicago, Illinois



CONTRIBUTORS

Margaret Alic, Ph.D.: Freelance Writer, Eastsound, Washington William G. Ambrose Jr., Ph.D.: Department of Biology, East Carolina University, Greenville, North Carolina James L. Anderson, Ph.D.: Soil Science Department, University of Minnesota, St. Paul, Minnesota Monica Anderson: Freelance Writer, Hoffman Estates, Illinois Bill Asenjo M.S., CRC: Science Writer, Iowa City, Iowa Terence Ball, Ph.D.: Department of Political Science, University of Minnesota, Minneapolis, Minnesota Brian R. Barthel, Ph.D.: Department of Health, Leisure and Sports, The University of West Florida, Pensacola, Florida Stuart Batterman, Ph.D.: School of Public Health, University of Michigan, Ann Arbor, Michigan Eugene C. Beckham, Ph.D.: Department of Mathematics and Science, Northwood Institute, Midland, Michigan Milovan S. Beljin, Ph.D.: Department of Civil Engineering, University of Cincinnati, Cincinnati, Ohio Heather Bienvenue: Freelance Writer, Fremont, California Lawrence J. Biskowski, Ph.D.: Department of Political Science, University of Georgia, Athens, Georgia E. K. Black: University of Alberta, Edmonton, Alberta, Canada Paul R. Bloom, Ph.D.: Soil Science Department, University of Minnesota, St. Paul, Minnesota Gregory D. Boardman, Ph.D.: Department of Civil Engineering, Virginia Polytechnic Institute and State University, Blacksburg, Virginia Marci L. Bortman, Ph.D.: The Nature Conservancy, Huntington, New York Pat Bounds: Freelance Writer, Peter Brimblecombe, Ph.D.: School of Environmental Sciences, University of East Anglia, Norwich, United Kingdom

Kenneth N. Brooks, Ph.D.: College of Natural Resources, University of Minnesota, St. Paul, Minnesota Peggy Browning: Freelance Writer, Marie Bundy: Freelance Writer, Port Republic, Maryland Ted T. Cable, Ph.D.: Department of Horticulture, Forestry and Recreation Resources, Kansas State University, Manhattan, Kansas John Cairns Jr., Ph.D.: University Center for Environmental and Hazardous Materials Studies, Virginia Polytechnic Institute and State University, Blacksburg, Virginia Liane Clorfene Casten: Freelance Journalist, Evanston, Illinois Ann S. Causey: Prescott College, Prescott, Arizona Ann N. Clarke: Eckenfelder Inc., Nashville, Tennessee David Clarke: Freelance Journalist, Bethesda, Maryland Sally Cole-Misch: Freelance Writer, Bloomfield Hills, Michigan Edward J. Cooney: Patterson Associates, Inc., Chicago, Illinois Terence H. Cooper, Ph.D.: Soil Science Department, University of Minnesota, St. Paul, Minnesota Gloria Cooksey, C.N.E.: Freelance Writer, Sacramento, California Mark Crawford: Freelance Writer, Toronto, Ontario, Canada Neil Cumberlidge, Ph.D.: Department of Biology, Northern Michigan University, Marquette, Michigan John Cunningham: Freelance Writer, St. Paul, Minnesota Mary Ann Cunningham, Ph.D.: Department of Geology and Geography, Vassar College, Poughkeepsie, New York William P. Cunningham, Ph.D.: Department of Genetics and Cell Biology, University of Minnesota, St. Paul, Minnesota Richard K. Dagger, Ph.D.: Department of Political Science, Arizona State University, Tempe, Arizona


Tish Davidson, A.M.: Freelance Writer, Fremont, California Stephanie Dionne: Freelance Journalist, Ann Arbor, Michigan Frank M. D’Itri, Ph.D.: Institute of Water Research, Michigan State University, East Lansing, Michigan Teresa C. Donkin: Freelance Writer, Minneapolis, Minnesota David A. Duffus, Ph.D.: Department of Geography, University of Victoria, Victoria, British Columbia, Canada Douglas Dupler, M.A.: Freelance Writer, Boulder, Colorado Cathy M. Falk: Freelance Writer, Portland, Oregon L. Fleming Fallon Jr., M.D., Dr.P.H.: Associate Professor, Public Health, Bowling Green State University, Bowling Green, Ohio George M. Fell: Freelance Writer, Inver Grove Heights, Minnesota Gordon R. Finch, Ph.D.: Department of Civil Engineering, University of Alberta, Edmonton, Alberta, Canada Paula Anne Ford-Martin, M.A.: Wordcrafts, Warwick, Rhode Island Janie Franz: Freelance Writer, Grand Forks, North Dakota Bill Freedman, Ph.D.: School for Resource and Environmental Studies, Dalhousie University, Halifax, Nova Scotia, Canada Rebecca J. Frey, Ph.D.: Writer, Editor, and Editorial Consultant, New Haven, Connecticut Cynthia Fridgen, Ph.D.: Department of Resource Development, Michigan State University, East Lansing, Michigan Andrea Gacki: Freelance Writer, Bay City, Michigan Brian Geraghty: Ford Motor Company, Dearborn, Michigan Robert B. Giorgis, Jr.: Air Resources Board, Sacramento, California Debra Glidden: Freelance American Indian Investigative Journalist, Syracuse, New York Eville Gorham, Ph.D.: Department of Ecology, Evolution and Behavior, University of Minnesota, St. Paul, Minnesota Darrin Gunkel: Freelance Writer, Seattle, Washington Malcolm T. Hepworth, Ph.D.: Department of Civil and Mineral Engineering, University of Minnesota, Minneapolis, Minnesota Katherine Hauswirth: Freelance Writer, Roanoke, Virginia Richard A. Jeryan: Ford Motor Company, Dearborn, Michigan

Barbara J. Kanninen, Ph.D.: Hubert H. Humphrey Institute of Public Affairs, University of Minnesota, Minneapolis, Minnesota Christopher McGrory Klyza, Ph.D.: Department of Political Science, Middlebury College, Middlebury, Vermont John Korstad, Ph.D.: Department of Natural Science, Oral Roberts University, Tulsa, Oklahoma Monique LaBerge, Ph.D.: Research Associate, Department of Biochemistry and Biophysics, University of Pennsylvania, Philadelphia, Pennsylvania Royce Lambert, Ph.D.: Soil Science Department, California Polytechnic State University, San Luis Obispo, California William E. Larson, Ph.D.: Soil Science Department, University of Minnesota, St. Paul, Minnesota Ellen E. Link: Freelance Writer, Laingsburg, Michigan Sarah Lloyd: Freelance Writer, Cambria, Wisconsin James P. Lodge Jr.: Consultant in Atmospheric Chemistry, Boulder, Colorado William S. Lynn, Ph.D.: Department of Geography, University of Minnesota, Minneapolis, Minnesota Alair MacLean: Environmental Editor, OMB Watch, Washington, DC Alfred A. Marcus, Ph.D.: Carlson School of Management, University of Minnesota, Minneapolis, Minnesota Gregory McCann: Freelance Writer, Freeland, Michigan Cathryn McCue: Freelance Journalist, Roanoke, Virginia Mary McNulty: Freelance Writer, Illinois Jennifer L. McGrath: Freelance Writer, South Bend, Indiana Robert G. McKinnell, Ph.D.: Department of Genetics and Cell Biology, University of Minnesota, St. Paul, Minnesota Nathan H. Meleen, Ph.D.: Engineering and Physics Department, Oral Roberts University, Tulsa, Oklahoma Liz Meszaros: Freelance Writer, Lakewood, Ohio Muthena Naseri: Moorpark College, Moorpark, California B. R. Niederlehner, Ph.D.: University Center for Environmental and Hazardous Materials Studies, Virginia Polytechnic Institute and State University, Blacksburg, Virginia David E. Newton: Instructional Horizons, Inc., San Francisco, California Robert D. Norris: Eckenfelder Inc., Nashville, Tennessee Teresa G. Norris, R.N.: Medical Writer, Ute Park, New Mexico Karen Oberhauser, Ph.D.: University of Minnesota, St. Paul, Minnesota Stephanie Ocko: Freelance Journalist, Brookline, Massachusetts

Kristin Palm: Freelance Writer, Royal Oak, Michigan James W. Patterson: Patterson Associates, Inc., Chicago, Illinois Paul Phifer, Ph.D.: Freelance Writer, Portland, Oregon Jeffrey L. Pintenich: Eckenfelder Inc., Nashville, Tennessee Douglas C. Pratt, Ph.D.: University of Minnesota: Department of Plant Biology, Scandia, Minnesota Jeremy Pratt: Institute for Human Ecology, Santa Rosa, California Klaus Puettman: University of Minnesota, St. Paul, Minnesota Stephen J. Randtke: Department of Civil Engineering, University of Kansas, Lawrence, Kansas Lewis G. Regenstein: Author and Environmental Writer, Atlanta, Georgia Linda Rehkopf: Freelance Writer, Marietta, Georgia Paul E. Renaud, Ph.D.: Department of Biology, East Carolina University, Greenville, North Carolina Marike Rijsberman: Freelance Writer, Chicago, Illinois L. Carol Ritchie: Environmental Journalist, Arlington, Virginia Linda M. Ross: Freelance Writer, Ferndale, Michigan Joan Schonbeck: Medical Writer, Nursing, Massachusetts Department of Mental Health, Marlborough, Massachusetts Mark W. Seeley: Department of Soil Science, University of Minnesota, St. Paul, Minnesota Kim Sharp, M.Ln.: Freelance Writer, Richmond, Texas James H. Shaw, Ph.D.: Department of Zoology, Oklahoma State University, Stillwater, Oklahoma Laurel Sheppard: Freelance Writer, Columbus, Ohio Judith Sims, M.S.: Utah Water Research Laboratory, Utah State University, Logan, Utah Genevieve Slomski, Ph.D.: Freelance Writer, New Britain, Connecticut Douglas Smith: Freelance Writer, Dorchester, Massachusetts


Lawrence H. Smith, Ph.D.: Department of Agronomy and Plant Genetics, University of Minnesota, St. Paul, Minnesota Jane E. Spear: Freelance Writer, Canton, Ohio Carol Steinfeld: Freelance Writer, Concord, Massachusetts Paulette L. Stenzel, Ph.D.: Eli Broad College of Business, Michigan State University, East Lansing, Michigan Les Stone: Freelance Writer, Ann Arbor, Michigan Max Strieb: Freelance Writer, Huntington, New York Amy Strumolo: Freelance Writer, Beverly Hills, Michigan Edward Sucoff, Ph.D.: Department of Forestry Resources, University of Minnesota, St. Paul, Minnesota Deborah L. Swackhammer, Ph.D.: School of Public Health, University of Minnesota, Minneapolis, Minnesota Liz Swain: Freelance Writer, San Diego, California Ronald D. Taskey, Ph.D.: Soil Science Department, California Polytechnic State University, San Luis Obispo, California Mary Jane Tenerelli, M.S.: Freelance Writer, East Northport, New York Usha Vedagiri: IT Corporation, Edison, New Jersey Donald A. Villeneuve, Ph.D.: Ventura College, Ventura, California Nikola Vrtis: Freelance Writer, Kentwood, Michigan Eugene R. Wahl: Freelance Writer, Coon Rapids, Minnesota Terry Watkins: Indianapolis, Indiana Ken R. Wells: Freelance Writer, Laguna Hills, California Roderick T. White Jr.: Freelance Writer, Atlanta, Georgia T. Anderson White, Ph.D.: University of Minnesota, St. Paul, Minnesota Kevin Wolf: Freelance Writer, Minneapolis, Minnesota Angela Woodward: Freelance Writer, Madison, Wisconsin Gerald L. Young, Ph.D.: Program in Environmental Science and Regional Planning, Washington State University, Pullman, Washington



HOW TO USE THIS BOOK

The third edition of Environmental Encyclopedia has been designed with ready reference in mind.
• Straight alphabetical arrangement of topics allows users to locate information quickly.
• Bold-faced terms within entries direct the reader to related articles.
• Contact information is given for each organization profiled in the book.
• Cross-references at the end of entries alert readers to related entries not specifically mentioned in the body of the text.
• The Resources sections direct readers to additional sources of information on a topic.
• Three appendices provide the reader with a chronology of environmental events, a summary of environmental legislation, and a succinct alphabetical list of environmental organizations.
• A comprehensive general index guides readers to all topics mentioned in the text.



INTRODUCTION

Welcome to the third edition of the Gale Environmental Encyclopedia! Those of us involved in the writing and production of this book hope you will find the material here interesting and useful. As you might imagine, choosing what to include and what to exclude from this collection has been challenging. Almost everything has some environmental significance, so our task has been to select a limited number of topics we think are of greatest importance in understanding our environment and our relation to it. Undoubtedly, we have neglected some topics that interest you and included some you may consider irrelevant, but we hope that overall you will find this new edition helpful and worthwhile.

The word environment is derived from the French environ, which means to “encircle” or “surround.” Thus, our environment can be defined as the physical, chemical, and biological world that envelops us, as well as the complex of social and cultural conditions affecting an individual or community. This broad definition includes both the natural world and the “built” or technological environment, as well as the cultural and social contexts that shape human lives. You will see that we have used this comprehensive meaning in choosing the articles and definitions contained in this volume.

Among some central concerns of environmental science are:
• how did the natural world on which we depend come to be as it is, and how does it work?
• what have we done and what are we now doing to our environment—both for good and ill?
• what can we do to ensure a sustainable future for ourselves, future generations, and the other species of organisms on which—although we may not be aware of it—our lives depend?

The articles in this volume attempt to answer those questions from a variety of different perspectives. Historically, environmentalism is rooted in natural history, a search for beauty and meaning in nature. Modern environmental science expands this concern, drawing on almost every area of human knowledge including social sciences, humanities, and the physical sciences. Its strongest roots, however, are in ecology, the study of interrelationships among and between organisms and their physical or nonliving environment. A particular strength of the ecological approach is that it studies systems holistically; that is, it looks at interconnections that make the whole greater than the mere sum of its parts. You will find many of those interconnections reflected in this book. Although the entries are presented individually so that you can find topics easily, you will notice that many refer to other topics that, in turn, can lead you on through the book if you have time to follow their trail. This series of linkages reflects the multilevel associations in environmental issues. As our world becomes increasingly interrelated economically, socially, and technologically, we find ever more evidence that our global environment is also highly interconnected.

In 2002, the world population reached about 6.2 billion people, more than triple what it had been a century earlier. Although the rate of population growth is slowing—having dropped from 2.0% per year in 1970 to 1.2% in 2002—we are still adding about 200,000 people per day, or about 75 million per year. Demographers predict that the world population will reach 8 or 9 billion before stabilizing sometime around the middle of this century. Whether natural resources can support so many humans is a question of great concern.

In preparation for the third global summit in South Africa, the United Nations released several reports in 2002 outlining the current state of our environment. Perhaps the greatest environmental concern as we move into the twenty-first century is the growing evidence that human activities are causing global climate change. Burning of fossil fuels in power plants, vehicles, factories, and homes releases carbon dioxide into the atmosphere. Burning forests and crop residues, increasing cultivation of paddy rice, raising billions of ruminant animals, and other human activities also add to the rapidly growing atmospheric concentrations of heat-trapping gases. Global temperatures have begun to rise, having increased by about 1°F (0.6°C) in the second half of the twentieth century. Meteorologists predict that over the next 50 years, the average world temperature is likely to increase somewhere between 2.7 and 11°F (1.5–6.1°C). That may not seem like a very large change, but the difference between current average temperatures and the last ice age, when glaciers covered much of North America, was only about 10°F (5°C).

Abundant evidence is already available that our climate is changing. The twentieth century was the warmest in the last 1,000 years; the 1990s were the warmest decade, and 2002 was the single warmest year of the past millennium. Glaciers are disappearing on every continent. More than half the world’s population depends on rivers fed by alpine glaciers for their drinking water. Loss of those glaciers could exacerbate water supply problems in areas where water is already scarce. The United Nations estimates that 1.1 billion people—one-sixth of the world population—now lack access to clean water. In 25 years, about two-thirds of all humans will live in water-stressed countries where supplies are inadequate to meet demand. Spring is now occurring about a week earlier and fall is coming about a week later over much of the northern hemisphere. This helps some species, but is changing migration patterns and home territories for others. In 2002, early melting of ice floes in Canada’s Gulf of St. Lawrence apparently drowned nearly all of the 200,000 to 300,000 harp seal pups normally born there. Lack of sea ice is also preventing polar bears from hunting seals. Environment Canada reports that polar bears around Hudson’s Bay are losing weight and decreasing in number because of poor hunting conditions. In 2002, a chunk of ice about the size of Rhode Island broke off the Larsen B ice shelf on the Antarctic Peninsula. As glacial ice melts, ocean levels are rising, threatening coastal ecosystems and cities around the world.
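The round figures quoted above are internally consistent, and the arithmetic is easy to check. The short sketch below is purely illustrative; every constant in it is taken from the text of this introduction, not from any external dataset:

```python
# Illustrative check of the figures quoted in this introduction.
# All constants come from the text itself.

def c_interval_to_f(delta_c: float) -> float:
    """Convert a temperature *interval* from Celsius to Fahrenheit."""
    return delta_c * 9 / 5  # intervals scale by 9/5; no +32 offset

population = 6.2e9   # world population in 2002
growth_rate = 0.012  # 1.2% per year

added_per_year = population * growth_rate
added_per_day = added_per_year / 365

print(f"{added_per_year / 1e6:.0f} million people per year")  # ~74 million, i.e. "about 75 million"
print(f"{added_per_day / 1e3:.0f} thousand people per day")   # ~204 thousand, i.e. "about 200,000"

# Warming figures: a 0.6°C rise is about 1°F; a projected 1.5–6.1°C rise is 2.7–11°F
for delta_c in (0.6, 1.5, 6.1):
    print(f"{delta_c}°C = {c_interval_to_f(delta_c):.1f}°F")
```

Note that temperature *differences* convert by the factor 9/5 alone; the familiar +32 offset applies only to absolute temperatures.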
After global climate change, perhaps the next greatest environmental concern for most biologists is the worldwide loss of biological diversity. Taxonomists warn that one-fourth of the world’s species could face extinction in the next 30 years. Habitat destruction, pollution, introduction of exotic species, and excessive harvesting of commercially important species all contribute to species losses. Millions of species—most of which have never even been named by science, let alone examined for potential usefulness in medicine, agriculture, science, or industry—may disappear in the next century as a result of our actions. We know little about the biological roles of these organisms in their ecosystems, and their loss could result in an ecological tragedy.

Ecological economists have tried to put a price on the goods and services provided by natural ecosystems. Although many ecological processes aren’t traded in the marketplace, we depend on the natural world to do many things for us like purifying water, cleansing air, and detoxifying our wastes. How much would it cost if we had to do all this ourselves? The estimated annual value of all ecological goods and services provided by nature is calculated to be worth at least $33 trillion, or about twice the annual GNPs of all national economies in the world. The most valuable ecosystems in terms of biological processes are wetlands and coastal estuaries because of their high level of biodiversity and their central role in many biogeochemical cycles.

Already there are signs that we are exhausting our supplies of fertile soil, clean water, energy, and biodiversity that are essential for life. Furthermore, pollutants released into the air and water, along with increasing amounts of toxic and hazardous wastes created by our industrial society, threaten to damage the ecological life support systems on which all organisms—including humans—depend. Even without additional population growth, we may need to drastically rethink our patterns of production and disposal of materials if we are to maintain a habitable environment for ourselves and our descendants.

An important lesson to be learned from many environmental crises is that solving one problem often creates another. Chlorofluorocarbons, for instance, were once lauded as a wonderful discovery because they replaced toxic or explosive chemicals then in use as refrigerants and solvents. No one anticipated that CFCs might damage stratospheric ozone that protects us from dangerous ultraviolet radiation. Similarly, the building of tall smokestacks on power plants and smelters lessened local air pollution, but spread acid rain over broad areas of the countryside. Because of our lack of scientific understanding of complex systems, we are continually subjected to surprises.
How to plan for “unknown unknowns” is an increasing challenge as our world becomes more tightly interconnected and our ability to adjust to mistakes decreases.

Not all is discouraging, however, in the field of environmental science. Although many problems beset us, there are also encouraging signs of progress. Some dramatic successes have occurred in wildlife restoration and habitat protection programs, for instance. The United Nations reports that protected areas have increased five-fold over the past 30 years to nearly 5 million square miles. World forest losses have slowed, especially in Asia, where deforestation rates slowed from 8% in the 1980s to less than 1% in the 1990s. Forested areas have actually increased in many developed countries, providing wildlife habitat, removal of excess carbon dioxide, and sustainable yields of forest products.

In spite of dire warnings in the 1960s that growing human populations would soon overshoot the earth’s carrying capacity and result in massive famines, food supplies have more than kept up with population growth. There is more than enough food to provide a healthy diet for everyone now living, although inequitable distribution leaves about 800 million with an inadequate diet.

Improved health care, sanitation, and nutrition have extended life expectancies around the world from 40 years, on average, a century ago, to 65 years now. Public health campaigns have eradicated smallpox and nearly eliminated polio. Other terrible diseases have emerged, however, most notably acquired immunodeficiency syndrome (AIDS), which is now the fourth most common cause of death worldwide. Forty million people are now infected with HIV—70% of them in sub-Saharan Africa—and health experts warn that unsanitary blood donation practices and spreading drug use in Asia may result in tens of millions more AIDS deaths in the next few decades.

In developed countries, air and water pollution have decreased significantly over the past 30 years. In 2002, the Environmental Protection Agency declared that Denver—which once was infamous as one of the most polluted cities in the United States—is the first major city to meet all the agency’s standards for eliminating air pollution. At about the same time, the EPA announced that 91% of all monitored river miles in the United States met the water quality goals set in the Clean Water Act. Pollution-sensitive species like mayflies have returned to the upper Mississippi River, and in Britain, salmon are being caught in the Thames River after being absent for more than two centuries.

Conditions aren’t as good, however, in many other countries. In most of Latin America, Africa, and Asia, less than 2% of municipal sewage is given even primary treatment before being dumped into rivers, lakes, or the ocean. In South Asia, a 2-mile (3-km) thick layer of smog covers the entire Indian sub-continent for much of the year.
This cloud blocks sunlight and appears to be changing the climate, bringing drought to Pakistan and Central Asia, and shifting the monsoon winds that caused disastrous floods in 2002 in Nepal, Bangladesh, and eastern India, floods that forced 25 million people from their homes and killed at least 1,000 people. Nobel laureate Paul Crutzen estimates that two million deaths each year in India alone can be attributed to air pollution effects.

After several decades of struggle, a worldwide ban on the “dirty dozen” most dangerous persistent organic pollutants (POPs) was ratified in 2000. Elimination of compounds such as DDT, aldrin, dieldrin, mirex, toxaphene, polychlorinated biphenyls, and dioxins has allowed recovery of several wildlife species including bald eagles, peregrine falcons, and brown pelicans. Still, other toxic synthetic chemicals such as polybrominated diphenyl ethers, chromated copper arsenate, perfluorooctane sulfonate, and atrazine are now being found accumulating in food chains far from anyplace where they have been used.


Solutions for many of our pollution problems can be found in improved technology, greater personal responsibility, or better environmental management. The question is often whether we have the political will to enforce pollution control programs and whether we are willing to sacrifice short-term convenience and affluence for long-term ecological stability. We in the richer countries of the world have become accustomed to a highly consumptive lifestyle. Ecologists estimate that humans use directly, destroy, co-opt, or alter almost 40% of terrestrial plant productivity, with unknown consequences for the biosphere. Whether we will be willing to leave some resources for other species and future generations is a central question of environmental policy.

One way to extend resources is to increase efficiency and recycling of the items we use. Automobiles have already been designed, for example, that get more than 100 mi/gal (42 km/l) of diesel fuel and are completely recyclable when they reach the end of their designed life. Although recycling rates in the United States have increased in recent years, we could probably double our current rate with very little sacrifice in economics or convenience.

Renewable energy sources such as solar or wind power are making encouraging progress. Wind already is cheaper than any other power source except coal in many localities. Solar energy is making it possible for many of the two billion people in the world who don’t have access to electricity to enjoy some of the benefits of modern technology. Worldwide, the amount of installed wind energy capacity more than doubled between 1998 and 2002. Germany is on course to obtain 20% of its energy from renewables by 2010. Together, wind, solar, biomass and other forms of renewable energy have the potential to provide thousands of times as much energy as all humans use now. There is no reason for us to continue to depend on fossil fuels for the majority of our energy supply.
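The fuel-economy conversion quoted above checks out with standard conversion factors (1 mile = 1.609344 km, 1 US gallon = 3.785411784 liters); a quick illustrative sketch:

```python
# Convert fuel economy from miles per US gallon to kilometers per liter,
# using the standard definitions of the mile and the US gallon.
KM_PER_MILE = 1.609344
LITERS_PER_US_GALLON = 3.785411784

def mpg_to_km_per_liter(mpg: float) -> float:
    return mpg * KM_PER_MILE / LITERS_PER_US_GALLON

# 100 mi/gal comes out to about 42.5 km/l, matching the ~42 km/l in the text.
print(f"{mpg_to_km_per_liter(100):.1f} km/l")
```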
One of the widely advocated ways to reduce poverty and make resources available to all is sustainable development. A commonly used definition of this term is given in Our Common Future, the report of the World Commission on Environment and Development (generally called the Brundtland Commission after the prime minister of Norway, who chaired it), described sustainable development as: “meeting the needs of the present without compromising the ability of future generations to meet their own needs.” This implies improving health, education, and equality of opportunity, as well as ensuring political and civil rights through jobs and programs based on sustaining the ecological base, living on renewable resources rather than nonrenewable ones, and living within the carrying capacity of supporting ecological systems. Several important ethical considerations are embedded in environmental questions. One of these is intergenerational xxxi
justice: what responsibilities do we have to leave resources and a habitable planet for future generations? Is our profligate use of fossil fuels, for example, justified by the fact that we have the technology to extract fossil fuels and enjoy their benefits? Will human lives in the future be impoverished by the fact that we have used up most of the easily available oil, gas, and coal? Author and social critic Wendell Berry suggests that our consumption of these resources constitutes a theft of the birthright and livelihood of posterity. Philosopher John Rawls advocates a “just savings principle” in which members of each generation may consume no more than their fair share of scarce resources. How many generations are we obliged to plan for, and what is our “fair share”? It is possible that our use of resources now—inefficient and wasteful as it may be—represents an investment that will benefit future generations. The first computers, for instance, were huge, clumsy instruments that filled rooms with expensive vacuum tubes and consumed inordinate amounts of electricity. Critics complained that it was a waste of time and resources to build these enormous machines to do a few simple calculations. And yet if this technology had been suppressed in its infancy, the world would be much poorer today. Now nanotechnology promises to make machines and tools in infinitesimal sizes that use minuscule amounts of materials and energy to carry out valuable functions. The question remains whether future generations will be glad that we embarked on the current scientific and technological revolution or whether they will wish that we had maintained a simple agrarian, Arcadian way of life. Another ethical consideration inherent in many environmental issues is whether we have obligations or responsibilities to other species or to Earth as a whole.
An anthropocentric (human-centered) view holds that humans have rightful dominion over the earth and that our interests and well-being take precedence over all other considerations. Many environmentalists criticize this perspective, considering it arrogant and destructive. Biocentric (life-centered) philosophies argue that all living organisms have inherent values and rights by virtue of mere existence, whether or not
they are of any use to us. In this view, we have a responsibility to leave space and resources to enable other species to survive and to live as naturally as possible. This duty extends to making reparations or special efforts to encourage the recovery of endangered species whose survival is threatened by human activities. Some environmentalists claim that we should adopt an ecocentric (ecologically centered) outlook that respects and values nonliving entities such as rocks, rivers, and mountains—even whole ecosystems—as well as other living organisms. In this view, we have no right to break up a rock, dam a free-flowing river, or reshape a landscape simply because it benefits us. More importantly, we should conserve and maintain the major ecological processes that sustain life and make our world habitable. Others argue that our existing institutions and understandings, while they may need improvement and reform, have provided us with many advantages and amenities. Our lives are considerably better in many ways than those of our ancient ancestors, whose lives were, in the words of British philosopher Thomas Hobbes, “nasty, brutish, and short.” Although science and technology have introduced many problems, they have also provided answers and possible alternatives. It may be that we are at a major turning point in human history. Current generations are in a unique position to address the environmental issues described in this encyclopedia. For the first time, we now have the resources, motivation, and knowledge to protect our environment and to build a sustainable future for ourselves and our children. Until recently, we did not have these opportunities, or there was not enough clear evidence to inspire people to change their behavior and invest in environmental protection; now the need is obvious to nearly everyone. Unfortunately, this also may be the last opportunity to act before our problems become irreversible.
We hope that an interest in preserving and protecting our common environment is one reason that you are reading this encyclopedia and that you will find information here to help you in that quest. [William P. Cunningham, Managing Editor]

A

Edward Paul Abbey (1927–1989)
American environmentalist and writer
Novelist, essayist, white-water rafter, and self-described “desert rat,” Abbey wrote of the wonders and beauty of the American West that was fast disappearing in the name of “development” and “progress.” Often angry, frequently funny, and sometimes lyrical, Abbey recreated for his readers a region that was unique in the world. The American West was perhaps the last place where solitary selves could discover and reflect on their connections with wild things and with their fellow human beings. Abbey was born in Home, Pennsylvania, in 1927. He received his B.A. from the University of New Mexico in 1951. After earning his master’s degree in 1956, he joined the National Park Service, where he served as park ranger and fire fighter. He later taught writing at the University of Arizona. Abbey’s books and essays, such as Desert Solitaire (1968) and Down the River (1982), had their angrier fictional counterparts—most notably, The Monkey Wrench Gang (1975) and Hayduke Lives! (1990)—in which he gave voice to his outrage over the destruction of deserts and rivers by dam-builders and developers of all sorts. In The Monkey Wrench Gang Abbey weaves a tale of a band of “ecoteurs” who defend the wild West by destroying the means and machines of development—dams, bulldozers, logging trucks—which would otherwise reduce forests to lumber and raging rivers to irrigation channels.
This aspect of Abbey’s work inspired some radical environmentalists, including Dave Foreman and other members of Earth First!, to practice “monkey-wrenching” or “ecotage” to slow or stop such environmentally destructive practices as strip mining, the clear-cutting of old-growth forests on public land, and the damming of wild rivers for flood control, hydroelectric power, and what Abbey termed “industrial tourism.” Although Abbey’s description and defense of such tactics have been widely condemned by many mainstream environmental groups, he remains a revered figure among many who believe that gradualist tactics have not succeeded in slowing, much less stopping, the destruction of North American wilderness. Abbey is unique among environmental writers in having an oceangoing ship named after him. One of the vessels in the fleet of the militant Sea Shepherd Conservation Society, the Edward Abbey, rams and disables whaling and drift-net fishing vessels operating illegally in international waters. Abbey would have welcomed the tribute and, as a white-water rafter and canoeist, would no doubt have enjoyed the irony. Abbey died on March 14, 1989. He is buried in a desert in the southwestern United States. [Terence Ball]

RESOURCES
BOOKS
Abbey, E. Desert Solitaire. New York: McGraw-Hill, 1968.
———. Down the River. Boston: Little, Brown, 1982.
———. Hayduke Lives! Boston: Little, Brown, 1990.
———. The Monkey Wrench Gang. Philadelphia: Lippincott, 1975.
Berry, W. “A Few Words in Favor of Edward Abbey.” In What Are People For? San Francisco: North Point Press, 1991.
Bowden, C. “Goodbye, Old Desert Rat.” In The Sonoran Desert. New York: Abrams, 1992.
Manes, C. Green Rage: Radical Environmentalism and the Unmaking of Civilization. Boston: Little, Brown, 1990.

Absorption
Absorption, or more generally “sorption,” is the process by which one material (the sorbent) takes up and retains another (the sorbate) to form a homogeneous concentration at equilibrium. Sorption is defined as the adhesion of gas molecules, dissolved substances, or liquids to the surface of solids with which they are in contact. In soils, three types of mechanisms, often working together, constitute sorption: physical sorption, chemisorption, and penetration into the solid mineral phase. Physical sorption (also known as adsorption) involves the attachment of the sorbent and sorbate through weak atomic and molecular forces. Chemisorption involves chemical bonds similar to those holding atoms in a molecule. Electrostatic forces bond minerals via ion exchange, as when sodium, magnesium, potassium, and aluminum cations (+) are held as exchangeable bases on negatively charged (-) sites in acid soils. While cation (positive ion) exchange is the dominant exchange process occurring in soils, some soils have the ability to retain anions (negative ions) such as nitrate, chloride and, to a larger extent, oxides of sulfur.

Absorption and Wastewater Treatment
In on-site wastewater treatment, the soil absorption field is the land area where the wastewater from the septic tank is spread into the soil. One of the most common types of soil absorption field has porous plastic pipe extending away from the distribution box in a series of two or more parallel trenches, usually 1.5–2 ft (46–61 cm) wide. In conventional, below-ground systems, the trenches are 1.5–2 ft deep. Some absorption fields must be placed at a shallower depth than this to compensate for some limiting soil condition, such as a hardpan or high water table. In some cases they may even be placed partially or entirely in fill material that has been brought to the lot from elsewhere. The porous pipe that carries wastewater from the distribution box into the absorption field is surrounded by gravel that fills the trench to within a foot or so of the ground surface. The gravel is covered by fabric material or building paper to prevent plugging. Another type of drainfield consists of pipes that extend away from the distribution box, not in trenches but in a single, gravel-filled bed that has several such porous pipes in it. As with trenches, the gravel in a bed is covered by fabric or other porous material.
Usually the wastewater flows gradually downward into the gravel-filled trenches or bed. In some instances, such as when the septic tank is lower than the drainfield, the wastewater must be pumped into the drainfield. Whether gravity flow or pumping is used, wastewater must be evenly distributed throughout the drainfield. It is important to ensure that the drainfield is installed with care to keep the porous pipe level, or at a very gradual downward slope away from the distribution box or pump chamber, according to specifications stipulated by public health officials. Soil beneath the gravel-filled trenches or bed must be permeable so that wastewater and air can move through it and come in contact with each other. Good aeration is necessary to ensure that the proper chemical and microbiological processes will be occurring in the soil to cleanse the percolating wastewater of contaminants. A well-aerated soil also ensures slow travel and good contact between wastewater and soil.
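The permeability described above is what drives drainfield sizing in practice: the slower the soil accepts water, the more trench bottom area is needed. The sketch below illustrates the common area-equals-flow-over-loading-rate approach; the loading rate and design flow shown are hypothetical illustrations, not regulatory values, and actual sizing is governed by local health codes.

```python
# Hypothetical drainfield sizing sketch. The numbers below are
# illustrative assumptions only; local health codes govern real designs.

def trench_length_ft(design_flow_gpd, loading_rate_gpd_per_sqft, trench_width_ft=2.0):
    """Required total trench length, from bottom area = flow / loading rate."""
    bottom_area_sqft = design_flow_gpd / loading_rate_gpd_per_sqft
    return bottom_area_sqft / trench_width_ft

# Example: a home producing 450 gal/day on a soil accepting about
# 0.6 gal per square foot per day, with 2-ft-wide trenches, needs
# 750 sq ft of trench bottom, i.e. 375 ft of trench.
length = trench_length_ft(450, 0.6)
```

Tighter soils (lower loading rates) drive the required trench length up quickly, which is why percolation testing precedes drainfield design.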

How Common Are Septic Systems with Soil Absorption Systems?
According to the 1990 U.S. Census, there are about 24.7 million households in the United States that use septic tank systems or cesspools (holes or pits for receiving sewage) for wastewater treatment. This figure represents roughly 24% of the total households included in the census. According to a review of local health department information by the National Small Flows Clearinghouse, 94% of participating health departments allow or permit the use of septic tank and soil absorption systems. Those that do not allow septic systems have sewer lines available to all residents. The total volume of waste disposed of through septic systems is more than one trillion gallons (3.8 trillion l) per year, according to a study conducted by the U.S. Environmental Protection Agency’s Office of Technology Assessment, and virtually all of that waste is discharged directly to the subsurface, which affects groundwater quality. [Carol Steinfeld]
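The figures above are mutually consistent, as a back-of-envelope check shows. The per-household flow computed below is a derived estimate from the quoted numbers, not a figure from the source.

```python
# Back-of-envelope consistency check using only the figures quoted above.
septic_households = 24.7e6     # households on septic systems (1990 census)
share = 0.24                   # stated share of all census households
total_households = septic_households / share   # implied total, ~103 million

annual_volume_gal = 1e12       # "more than one trillion gallons per year"
per_household_gpd = annual_volume_gal / septic_households / 365
# Roughly 110 gal/day per household, a plausible domestic wastewater flow.
```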

RESOURCES
BOOKS
Elliott, L. F., and F. J. Stevenson. Soils for the Management of Wastes and Waste Waters. Madison, WI: Soil Science Society of America, 1977.

OTHER
Fact Sheet SL-59, a series of the Soil and Water Science Department, Florida Cooperative Extension Service, Institute of Food and Agricultural Sciences, University of Florida. February 1993.

Acaricide see Pesticide

Acceptable risk see Risk analysis

Acclimation
Acclimation is the process by which an organism adjusts to a change in its environment. It generally refers to the ability of living things to adjust to changes in climate, and usually occurs within a short time of the change. Scientists distinguish between acclimation and acclimatization: the latter adjustment is made under natural conditions, when the organism is subject to the full range of changing environmental factors, whereas acclimation refers to a change in response to only one environmental factor under laboratory conditions.

In an acclimation experiment, adult frogs (Rana temporaria) maintained in the laboratory at a temperature of either 50°F (10°C) or 86°F (30°C) were tested in an environment of 32°F (0°C). It was found that the group maintained at the higher temperature was inactive at freezing. The group maintained at 50°F (10°C), however, was active at the lower temperature; it had acclimated to the lower temperature. Acclimation and acclimatization can have profound effects upon behavior, inducing shifts in preferences and in mode of life. The golden hamster (Mesocricetus auratus) prepares for hibernation when the environmental temperature drops below 59°F (15°C). Temperature preference tests in the laboratory show that the hamsters develop a marked preference for cold environmental temperatures during the pre-hibernation period. Following arousal from a simulated period of hibernation, the situation is reversed, and the hamsters actively prefer the warmer environments. An acclimated microorganism is any microorganism that is able to adapt to environmental changes such as a change in temperature or a change in the quantity of oxygen or other gases. Many organisms that live in environments with seasonal changes in temperature make physiological adjustments that permit them to continue to function normally, even though their environmental temperature goes through a definite annual temperature cycle. Acclimatization usually involves a number of interacting physiological processes. For example, in acclimatizing to high altitudes, the first response of human beings is to increase their breathing rate. After about 40 hours, changes have occurred in the oxygen-carrying capacity of the blood, which makes it more efficient in extracting oxygen at high altitudes. As this occurs, the breathing rate returns to normal. [Linda Rehkopf]

RESOURCES
BOOKS
Ford, M. J. The Changing Climate: Responses of the Natural Fauna and Flora. Boston: G. Allen and Unwin, 1982.
McFarland, D., ed. The Oxford Companion to Animal Behavior. Oxford, England: Oxford University Press, 1981.
Stress Responses in Plants: Adaptation and Acclimation Mechanisms. New York: Wiley-Liss, 1990.

Accounting for nature
A new approach to national income accounting in which the degradation and depletion of natural resource stocks and environmental amenities are explicitly included in the calculation of net national product (NNP). NNP is equal to gross national product (GNP) minus capital depreciation, and GNP is equal to the value of all final goods and services produced in a nation in a particular year. It is recognized that natural resources are economic assets that generate income, and that just as the depreciation of buildings and capital equipment is treated as an economic cost and subtracted from GNP to get NNP, depreciation of natural capital should also be subtracted when calculating NNP. In addition, expenditures on environmental protection, which at present are included in GNP and NNP, are considered defensive expenditures in accounting for nature and should not be included in either GNP or NNP.

Accuracy
Accuracy is the closeness of an experimental measurement to the “true value” (i.e., actual or specified) of a measured quantity. A “true value” can be determined by an experienced analytical scientist who performs repeated analyses of a sample of known purity and/or concentration using reliable, well-tested methods. Measurement is inexact, and the magnitude of that inexactness is referred to as the error. Error is inherent in measurement and is a result of such factors as the precision of the measuring tools, their proper adjustment, the method used, and the competency of the analytical scientist. Statistical methods are used to evaluate accuracy by predicting the likelihood that a result varies from the “true value.” The analysis of probable error is also used to examine the suitability of methods or equipment used to obtain, portray, and utilize an acceptable result. Highly accurate data can be difficult to obtain and costly to produce. However, different applications can require lower levels of accuracy that are adequate for a particular study. [Judith L. Sims]

RESOURCES
BOOKS
Jaisingh, Lloyd R. Statistics for the Utterly Confused. New York, NY: McGraw-Hill Professional, 2000.
Salkind, Neil J. Statistics for People Who (Think They) Hate Statistics. Thousand Oaks, CA: Sage Publications, Inc., 2000.

Acetone
Acetone (C3H6O) is a colorless liquid that is used as a solvent in products such as nail polish and paint, and in the manufacture of other chemicals such as plastics and fibers. It is a naturally occurring compound that is found in plants and is released during the metabolism of fat in the body. It is also found in volcanic gases, and is manufactured by the chemical industry. Acetone is also found in the atmosphere as an oxidation product of both natural and anthropogenic volatile organic compounds (VOCs). It has a strong smell and taste, and is soluble in water. The boiling point of acetone is quite low compared to water, and the chemical is highly flammable. Because it is so volatile, the acetone manufacturing process results in a large percentage of the compound entering the atmosphere. Ingesting acetone can cause damage to the tissues in the mouth and can lead to unconsciousness. Breathing acetone can cause irritation of the eyes, nose, and throat; headaches; dizziness; nausea; unconsciousness; and possibly coma and death. Women may experience menstrual irregularity. There has been concern about the carcinogenic nature of acetone, but laboratory studies, and studies of humans who have been exposed to acetone in the course of their occupational activities, show no evidence that acetone causes cancer. [Marie H. Bundy]

[Figure: The basic mechanisms of acid deposition. (Illustration by Wadsworth Inc. Reproduced by permission.)]

Acid and base
According to the definition used by environmental chemists, an acid is a substance that increases the hydrogen ion (H+) concentration in a solution and a base is a substance that removes hydrogen ions (H+) from a solution. In water, removal of hydrogen ions results in an increase in the hydroxide ion (OH-) concentration. Water with a pH of 7.0 is neutral, while lower pH values are acidic and higher pH values are basic.
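The pH scale underlying these definitions is logarithmic, which a short sketch makes concrete (standard chemistry, not specific to this entry):

```python
import math

def ph(h_ion_molar):
    """pH from hydrogen-ion concentration [H+] in mol/L: pH = -log10[H+]."""
    return -math.log10(h_ion_molar)

# Pure water at 25 C: [H+] = 1e-7 mol/L, so pH = 7.0 (neutral).
# Each one-unit drop in pH is a tenfold increase in [H+]:
acid_ratio = (10 ** -4.0) / (10 ** -6.0)   # pH 4 water has 100x the H+ of pH 6 water
```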

Acid deposition
Acid precipitation from the atmosphere, whether in the form of dry fall (finely divided acidic salts), rain, or snow. Naturally occurring carbonic acid normally makes rain and snow mildly acidic (approximately pH 5.6). Human activities often introduce much stronger and more damaging acids. Sulfuric acid, formed from sulfur oxides released in coal or oil combustion or the smelting of sulfide ores, predominates as the major atmospheric acid in industrialized areas. Nitric acid, created from nitrogen oxides formed by oxidizing atmospheric nitrogen when any fuel is burned in an oxygen-rich environment, constitutes the major source of acid precipitation in cities such as Los Angeles, with little industry but large numbers of trucks and automobiles. The damage caused to building materials, human health, crops, and natural ecosystems by atmospheric acids amounts to billions of dollars per year in the United States.
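In overall terms, the conversions described above follow the conventional net reactions (intermediate photochemical steps are omitted, so these summarize stoichiometry rather than mechanism):

    2 SO2 + O2 -> 2 SO3;  SO3 + H2O -> H2SO4  (sulfuric acid)
    2 NO + O2 -> 2 NO2;  4 NO2 + O2 + 2 H2O -> 4 HNO3  (nitric acid)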

Acid mine drainage
The process of mining the earth for coal and metal ores has a long history of rich economic rewards—and a high level of environmental impact on the surrounding aquatic and terrestrial ecosystems. Acid mine drainage is the highly acidic, sediment-laden discharge from exposed mines that is released into the ambient aquatic environment. In large areas of Pennsylvania, West Virginia, and Kentucky, the bright orange seeps of acid mine drainage have almost completely eliminated aquatic life in streams and ponds that receive the discharge. In the Appalachian coal mining region, almost 7,500 mi (12,000 km) of streams and almost 30,000 acres (12,000 ha) of land are estimated to be seriously affected by the discharge of uncontrolled acid mine drainage.

In the United States, coal-bearing geological strata occur near the surface in large portions of the Appalachian mountain region. The relative ease with which coal could be extracted from these strata led to a type of mining known as strip mining that was practiced heavily in the nineteenth and early twentieth centuries. In this process, large amounts of earth, called the overburden, were physically removed from the surface to expose the coal-bearing layer beneath. The coal was then extracted from the rock as quickly and cheaply as possible. Once the bulk of the coal had been mined, and no more could be extracted without a huge additional cost, the sites were usually abandoned. The remnants of the exhausted coal-bearing rock and soil are called the mine spoil waste. Acid mine drainage is not generated by strip mining itself but by the nature of the rock where it takes place. Three conditions are necessary to form acid mine drainage: pyrite-bearing rock, oxygen, and iron-oxidizing bacteria. In the Appalachians, the coal-bearing rocks usually contain significant quantities of pyrite (iron disulfide, FeS2).
This compound is normally not exposed to the atmosphere because it is buried underground within the rock; it is also insoluble in water. The iron and the sulfide are said to be in a reduced state, i.e., the iron atom has not released all the electrons that it is capable of releasing. When the rock is mined, the pyrite is exposed to air. It then reacts with oxygen to form ferrous iron and sulfate ions, both of which are highly soluble in water. This leads to the formation of sulfuric acid and is responsible for the acidic nature of the drainage. But the oxidation can only occur if the bacteria Thiobacillus ferrooxidans are present. These activate the iron-and-sulfur oxidizing
reactions, and use the energy released during the reactions for their own growth. They must have oxygen to carry these reactions through. Once the maximum oxidation is reached, these bacteria can derive no more energy from the compounds and all reactions stop. The acidified water may be formed in several ways. It may be generated by rain falling on exposed mine spoil wastes or when rain and surface water (carrying dissolved oxygen) flow down and seep into rock fractures and mine shafts, coming into contact with pyrite-bearing rock. Once the acidified water has been formed, it leaves the mine area as seeps or small streams. Characteristically bright orange to rusty red in color due to the iron, the liquid may be at a pH of 2–4. These are extremely low pH values and signify a very high degree of acidity. Vinegar, for example, has a pH of about 2.5, and the pH associated with acid rain is in the range of 4–6. Thus, acid mine drainage with a pH of 2 is more acidic than almost any other naturally occurring liquid released into the environment (with the exception of some volcanic lakes that are extremely acidic). Usually, the drainage is also very high in dissolved iron, manganese, aluminum, and suspended solids. The acidic drainage released from the mine spoil wastes usually follows the natural topography of its area and flows into the nearest streams or wetlands, where its effect on the water quality and biotic community is unmistakable. The iron coats the stream bed and its vegetation as a thick orange coating that prevents sunlight from penetrating leaves and plant surfaces. Photosynthesis stops and the vegetation (both vascular plants and algae) dies. The acid drainage eventually also makes the receiving water acid. As the pH drops, the fish, the invertebrates, and the algae die when their metabolism can no longer adapt. Eventually, there is no life left in the stream, with the possible exception of some bacteria that may be able to tolerate these conditions.
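The chemistry summarized above corresponds to the commonly cited overall reactions (net stoichiometry; the bacteria catalyze the iron oxidation step):

    2 FeS2 + 7 O2 + 2 H2O -> 2 Fe(2+) + 4 SO4(2-) + 4 H+   (pyrite oxidation; sulfuric acid forms)
    4 Fe(2+) + O2 + 4 H+ -> 4 Fe(3+) + 2 H2O                (bacterially catalyzed iron oxidation)
    Fe(3+) + 3 H2O -> Fe(OH)3 + 3 H+                        (hydrolysis; the orange precipitate)

The net result is both free acidity and the characteristic orange iron hydroxide coating.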
Depending on the number and volume of seeps entering a stream and the volume of the stream itself, the area of impact may be limited, and improved conditions may exist downstream as the acid drainage is diluted. Abandoned mine spoil areas also tend to remain barren, even after decades. The colonization of the acidic mineral soil by plant species is a slow and difficult process, with a few lichens and aspens being the hardiest species to establish. While many methods have been tried to control or mitigate the effects of acid mine drainage, very few have been successful. Federal mining regulations (Surface Mining Control and Reclamation Act of 1977) now require that when mining activity ceases, the mine spoil waste be buried and covered with the overburden and vegetated topsoil. The intent is to restore the area to premining condition and to prevent the generation of acid mine drainage by
limiting the exposure of pyrite to oxygen and water. Although some minor seeps may still occur, this is the single most effective way to minimize the potential scale of the problem. Mining companies are also required to monitor the effectiveness of their restoration programs and must post bonds to guarantee the execution of abatement efforts, should any become necessary in the future. There are, however, numerous abandoned sites exposing pyrite-bearing spoils. Cleanup efforts for these sites have focused on controlling one or more of the three conditions necessary for the creation of the acidity: pyrite, bacteria, and oxygen. Attempts to remove bulk quantities of the pyrite-bearing mineral and store it somewhere else are extremely expensive and difficult to execute. Inhibiting the bacteria by using detergents, solvents, and other bactericidal agents is temporarily effective but usually requires repeated application. Attempts to seal out air or water are difficult to implement on a large scale or in a comprehensive manner. Since it is difficult to reduce the formation of acid mine drainage at abandoned sites, one of the most promising new methods of mitigation treats the acid mine drainage after it exits the mine spoil wastes. The technique channels the acid seeps through artificially created wetlands, planted with cattails or other wetland plants in a bed of gravel, limestone, or compost. The limestone neutralizes the acid and raises the pH of the drainage, while the mixture of oxygen-rich and oxygen-poor areas within the wetland promotes the removal of iron and other metals from the drainage. Currently, many agencies, universities, and private firms are working to improve the design and performance of these artificial wetlands. A number of additional treatment techniques may be strung together in an interconnected system of anoxic limestone trenches, settling ponds, and planted wetlands.
This provides a variety of physical and chemical microenvironments so that each undesirable characteristic of the acid drainage can be individually addressed and treated, e.g., acidity is neutralized in the trenches, suspended solids are settled in the ponds, and metals are precipitated in the wetlands. In the United States, the research and treatment of acid mine drainage continues to be an active field of study in the Appalachians and in the metal-mining areas of the Rocky Mountains. [Usha Vedagiri]

RESOURCES
PERIODICALS
Clay, S. “A Solution to Mine Drainage?” American Forests 98 (July–August 1992): 42–43.
Hammer, D. A. Constructed Wetlands for Wastewater Treatment: Municipal, Industrial, Agricultural. Chelsea, MI: Lewis, 1990.
Schwartz, S. E. “Acid Deposition: Unraveling a Regional Phenomenon.” Science 243 (February 1989): 753–763.
Welter, T. R. “An ‘All Natural’ Treatment: Companies Construct Wetlands to Reduce Metals in Acid Mine Drainage.” Industry Week 240 (August 5, 1991): 42–43.

Acid rain
Acid rain is the term used in the popular press that is equivalent to acidic deposition as used in the scientific literature. Acid deposition results from the deposition of airborne acidic pollutants on land and in bodies of water. These pollutants can cause damage to forests as well as to lakes and streams. The major pollutants that cause acidic deposition are sulfur dioxide (SO2) and nitrogen oxides (NOx) produced during the combustion of fossil fuels. In the atmosphere these gases oxidize to sulfuric acid (H2SO4) and nitric acid (HNO3), which can be transported long distances before being returned to the earth dissolved in rain drops (wet deposition), deposited on the surfaces of plants as cloud droplets, or deposited directly on plant surfaces (dry deposition). Electrical utilities contribute 70% of the 20 million tons (18 million metric tons) of SO2 that are added to the atmosphere annually. Most of this is from the combustion of coal. Electric utilities also contribute 30% of the 19 million tons of NOx added to the atmosphere, and internal combustion engines used in automobiles, trucks, and buses contribute more than 40%. Natural sources such as forest fires, swamp gases, and volcanoes contribute only 1–5% of atmospheric SO2. Forest fires, lightning, and microbial processes in soils contribute about 11% of atmospheric NOx. In response to air quality regulations, electrical utilities have switched to coal with lower sulfur content and installed scrubbing systems to remove SO2. This has resulted in a steady decrease in SO2 emissions in the United States since 1970, with an 18–20% decrease between 1975 and 1988. Emissions of NOx have also decreased from their peak in 1975, with a 9–15% decrease from 1975 to 1988. A commonly used indicator of the intensity of acid rain is the pH of the rainfall. The pH of non-polluted rainfall in forested regions is in the range 5.0–5.6.
The upper limit is 5.6, not neutral (7.0), because of carbonic acid that results from the dissolution of atmospheric carbon dioxide. The contribution of naturally occurring nitric and sulfuric acid, as well as organic acids, reduces the pH somewhat below 5.6. In arid and semi-arid regions, rainfall pH values can be greater than 5.6 due to the effect of alkaline soil dust in the air. Nitric and sulfuric acids in acidic rainfall (wet deposition) can result in pH values for individual rainfall events of less than 4.0. In North America, the most acidic rainfall is in the northeastern United States and southeastern Canada. The lowest mean pH in this region is 4.15. Even lower pH values are observed in central and northern Europe. Generally, the greater the population density and the density of industrialization, the lower the rainfall pH. Long-distance transport, however, can result in low pH rainfall even in areas with low population and a low density of industry, as in parts of New England, eastern Canada, and Scandinavia.

A very significant portion of acid deposition occurs in the dry form. In the United States, it is estimated that 30–60% of acidic deposition occurs as dry fall. This material is deposited as sulfur dioxide gas and very finely divided particles (aerosols) directly on the surfaces of plants (needles and leaves). The rate of deposition depends not only on the concentration of acid materials suspended in the air, but also on the nature and density of plant surfaces exposed to the atmosphere and the atmospheric conditions (e.g., wind speed and humidity). Direct deposition of acid cloud droplets can be very important, especially in some high-altitude forests. Acid cloud droplets can have acid concentrations of 5–20 times those in wet deposition. In some high-elevation sites that are frequently shrouded in clouds, direct droplet deposition is three times that of wet deposition from rainfall. Acid deposition has the potential to adversely affect sensitive forests as well as lakes and streams. Agriculture is generally not included in the assessment of the effects of acidic deposition because experimental evidence indicates that even the most severe episodes of acid deposition do not adversely affect the growth of agricultural crops, and any long-term soil acidification can readily be managed by the addition of agricultural lime. In fact, the acidifying potential of the fertilizers normally added to cropland is much greater than that of acidic deposition.
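The 5.6 upper limit for unpolluted rain mentioned above can be reproduced from the carbonic acid equilibrium. The sketch below uses approximate 25°C constants and an assumed late-20th-century CO2 partial pressure, so the result is illustrative rather than exact:

```python
import math

# Sketch: pH of rainwater in equilibrium with atmospheric CO2 alone.
K_H = 3.4e-2    # Henry's law constant for CO2, mol/(L*atm), ~25 C
Ka1 = 4.45e-7   # first dissociation constant of carbonic acid
pCO2 = 3.6e-4   # assumed partial pressure of CO2, atm

co2_aq = K_H * pCO2              # dissolved CO2, mol/L
h_ion = math.sqrt(Ka1 * co2_aq)  # [H+] from H2CO3 <-> H+ + HCO3-
rain_ph = -math.log10(h_ion)     # comes out near the 5.6 cited above
```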
In forests, however, long-term acidic deposition on sensitive soils can result in the depletion of important nutrient elements (e.g., calcium, magnesium, and potassium) and in soil acidification. Also, acidic pollutants can interact with other pollutants (e.g., ozone) to cause more immediate problems for tree growth. Acid deposition can also result in the acidification of sensitive lakes and the loss of biological productivity. Long-term exposure of acid-sensitive materials used in building construction and in monuments (e.g., zinc, marble, limestone, and some sandstone) can result in surface corrosion and deterioration. Monuments tend to be the most vulnerable because they are usually not as protected from rainfall as most building materials. Good data on the impact of acidic deposition on monuments and building materials are lacking.

Nutrient depletion due to acid deposition on sensitive soils is a long-term (decades to centuries) consequence of acidic deposition. Acidic deposition greatly accelerates the very slow depletion of soil nutrients caused by natural weathering processes. Soils that contain less plant-available calcium, magnesium, and potassium are less buffered against degradation due to acidic deposition. The most sensitive soils are shallow sandy soils over hard bedrock. The least vulnerable soils are deep clay soils that are highly buffered against changes due to acidic deposition.

The more immediate possible threat to forests is the forest decline phenomenon that has been observed in forests in northern Europe and North America. Acidic deposition in combination with other stress factors such as ozone, disease, and adverse weather conditions can lead to decline in forest productivity and, in certain cases, to dieback. Acid deposition alone cannot account for the observed forest decline, and acid deposition probably plays a minor role in the areas where forest decline has occurred. Ozone is a much more serious threat to forests, and it is a key factor in the decline of forests in the Sierra Nevada and San Bernardino mountains in California.

The greatest concern for adverse effects of acidic deposition is the decline in biological productivity in lakes. When a lake has a pH less than 6.0, several species of minnows, as well as other species that are part of the food chain for many fish, cannot survive. At pH values less than about 5.3, lake trout, walleye, and smallmouth bass cannot survive. At pH less than about 4.5, most fish cannot survive (largemouth bass are an exception). Many small lakes are naturally acidic due to organic acids produced in acid soils and acid bogs. These lakes have chemistries dominated by organic acids, and many have brown-colored waters due to their organic acid content. Such lakes can be distinguished from lakes acidified by acidic deposition, because lakes strongly affected by acidic deposition are dominated by sulfate. Lakes that are adversely affected by acidic deposition tend to be in steep terrain with thin soils. In these settings the path of rainwater movement into a lake is not influenced greatly by soil materials.
This contrasts with most lakes, where much of the water that collects in a lake first enters the groundwater before reaching the lake via subsurface flow. Due to the contact with soil materials, acidity is neutralized and the capacity to neutralize acidity is added to the water in the form of bicarbonate ions (bicarbonate alkalinity). If more than 5% of the water that reaches a lake arrives as groundwater, the lake is not sensitive to acid deposition. An estimated 24% of the lakes in the Adirondack region of New York are devoid of fish; in one-third to one-half of these lakes this is due to acidic deposition. Approximately 16% of the lakes in this region may have lost one or more species of fish due to acidification. In Ontario, Canada, 115 lakes are estimated to have lost populations of lake trout. Acidification of lakes by acidic deposition extends as far west as Upper Michigan and northeastern Wisconsin, where many sensitive lakes occur and there is some evidence for



Acidity see pH

Acoustics see Noise pollution

Acquired immune deficiency syndrome see AIDS

Activated sludge

Acid rain in Chicago, Illinois, erodes the structures of historical buildings. (Photograph by Richard P. Jacobs. JLM Visuals. Reproduced by permission.)

acidification. However, the extent of acidification is quite limited. [Paul R. Bloom]
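The fish-survival pH thresholds cited in this article (6.0 for several minnow species, about 5.3 for lake trout, walleye, and smallmouth bass, about 4.5 for most remaining fish) can be collected into a small sketch. The function and its name are purely illustrative, not part of the encyclopedia text.

```python
def fish_at_risk(lake_ph):
    """Return the species groups, as cited in the article, that cannot
    survive at the given lake pH."""
    lost = []
    if lake_ph < 6.0:
        lost.append("several minnow species")
    if lake_ph < 5.3:
        lost.append("lake trout, walleye, smallmouth bass")
    if lake_ph < 4.5:
        lost.append("most remaining fish species")
    return lost

# At pH 5.0, minnows and the lake trout/walleye/smallmouth group are lost:
print(fish_at_risk(5.0))
```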

RESOURCES

BOOKS
Bresser, A. H., ed. Acid Precipitation. New York: Springer-Verlag, 1990.
Mellanby, K., ed. Air Pollution, Acid Rain and the Environment. New York: Elsevier, 1989.
Turck, M. Acid Rain. New York: Macmillan, 1990.
Wellburn, A. Air Pollution and Acid Rain: The Biological Impact. New York: Wiley, 1988.
Young, P. M. Acidic Deposition: State of Science and Technology. Summary Report of the U.S. National Acid Precipitation Program. Washington, DC: U.S. Government Printing Office, 1991.

Acidification

The process of becoming more acidic due to inputs of an acidic substance. The common measure of acidification is a decrease in pH. Acidification of soils and natural waters by acid rain or acid wastes can result in reduced biological productivity if the pH is sufficiently reduced.
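Because pH is the negative base-10 logarithm of the hydrogen-ion concentration, a one-unit drop in pH corresponds to a tenfold increase in acidity. A quick sketch of the definition:

```python
import math

def ph(h_plus_molar):
    # pH is defined as -log10 of the hydrogen-ion concentration (mol/L)
    return -math.log10(h_plus_molar)

# A tenfold increase in [H+] lowers pH by exactly one unit:
print(ph(1e-6))  # 6.0
print(ph(1e-5))  # 5.0
```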

The activated sludge process is an aerobic (oxygen-rich), continuous-flow biological method for the treatment of domestic and biodegradable industrial wastewater, in which organic matter is utilized by microorganisms for life-sustaining processes (that is, for energy for reproduction, digestion, and movement) and as a food source to produce cell growth and more microorganisms. During this utilization and degradation of organic materials, carbon dioxide and water are formed as degradation products. The activated sludge process is characterized by the suspension of microorganisms in the wastewater, a mixture referred to as the mixed liquor. Activated sludge is used as part of an overall treatment system that includes primary treatment of the wastewater for the removal of particulate solids, followed by activated sludge as a secondary treatment process to remove suspended and dissolved organic solids.

The conventional activated sludge process consists of an aeration basin, with air as the oxygen source, where treatment is accomplished. Soluble (dissolved) organic materials are absorbed through the cell walls of the microorganisms and into the cells, where they are broken down and converted to more microorganisms, carbon dioxide, water, and energy. Insoluble (solid) particles are adsorbed on the cell walls, transformed to a soluble form by enzymes (biological catalysts) secreted by the microorganisms, and then absorbed through the cell wall, where they too are digested and used by the microorganisms in their life-sustaining processes.

The microorganisms that are responsible for the degradation of the organic materials are maintained in suspension by mixing induced by the aeration system. As the microorganisms are mixed, they collide with other microorganisms and stick together to form larger particles called floc. The large flocs that are formed settle more readily than individual cells.
These flocs also collide with suspended and colloidal materials (insoluble organic materials), which stick to the flocs and cause the flocs to grow even larger. The microorganisms digest these adsorbed materials, thereby re-opening sites for more materials to stick.

The aeration basin is followed by a secondary clarifier (settling tank), where the flocs of microorganisms with their adsorbed organic materials settle out. A portion of the settled microorganisms, referred to as sludge, is recycled to the aeration basin to maintain an active population of microorganisms and an adequate supply of biological solids for the adsorption of organic materials. Excess sludge is wasted by being piped to separate sludge-handling processes. The liquids from the clarifier are transported to facilities for disinfection and final discharge to receiving waters, or to tertiary treatment units for further treatment.

Activated sludge processes are designed based on the mixed liquor suspended solids (MLSS) and the organic loading of the wastewater, as represented by the biochemical oxygen demand (BOD) or chemical oxygen demand (COD). The MLSS represents the quantity of microorganisms involved in the treatment of the organic materials in the aeration basin, while the organic loading determines the requirements for the design of the aeration system.

Modifications to the conventional activated sludge process include:

Extended aeration. The mixed liquor is retained in the aeration basin until the production rate of new cells is the same as the decay rate of existing cells, with no excess sludge production. In practice, excess sludge is produced, but the quantity is less than that of other activated sludge processes. This process is often used for the treatment of industrial wastewater that contains complex organic materials requiring long detention times for degradation.

Contact stabilization. A process based on the premise that as wastewater enters the aeration basin (referred to as the contact basin), colloidal and insoluble organic biodegradable materials are removed rapidly by biological sorption, synthesis, and flocculation during a relatively short contact time. This method uses a reaeration (stabilization) basin before the settled sludge from the clarifier is returned to the contact basin. The concentrated flocculated and adsorbed organic materials are oxidized in the reaeration basin, which does not receive any addition of raw wastewater.

Plug flow. Wastewater is routed through a series of channels constructed in the aeration basin; wastewater flows through and is treated as a plug as it winds its way through the basin. As the “plug” passes through the tank, the concentrations of organic materials are gradually reduced, with a corresponding decrease in oxygen requirements and microorganism numbers.

Step aeration. Influent wastewater enters the aeration basin along the length of the basin, while the return sludge enters at the head of the basin. This process results in a more
uniform oxygen demand in the basin and a more stable environment for the microorganisms; it also results in a lower solids loading on the clarifier for a given mass of microorganisms.

Oxidation ditch. A circular (racetrack-shaped) aeration basin is used, with rotary brush aerators that extend across the width of the ditch. Brush aerators aerate the wastewater, keep the microorganisms in suspension, and drive the wastewater around the circular channel. [Judith Sims]
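The design relationship described above, between the organic (BOD) loading and the mass of microorganisms held in the basin (MLSS), is commonly expressed as a food-to-microorganism (F/M) ratio. The sketch below is a rough illustration; the function name and the numerical values are assumptions for the example, not figures from this article.

```python
def aeration_basin_volume(flow_m3_per_day, bod_mg_per_l, mlss_mg_per_l, fm_ratio):
    """Basin volume (m3) from the ratio F/M = (Q * BOD) / (V * MLSS).

    F/M is kg of BOD applied per kg of MLSS per day; the mg/L units
    cancel between numerator and denominator.
    """
    return (flow_m3_per_day * bod_mg_per_l) / (fm_ratio * mlss_mg_per_l)

# Illustrative values: Q = 4000 m3/d, BOD = 200 mg/L,
# MLSS = 2500 mg/L, target F/M = 0.3 per day.
volume = aeration_basin_volume(4000, 200, 2500, 0.3)
print(round(volume))  # about 1067 m3
```

Raising the MLSS held in the basin (more recycled sludge) lowers the required volume for the same loading, which is why the sludge-return step described above is central to the process.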

RESOURCES

BOOKS
Corbitt, R. A. “Wastewater Disposal.” In Standard Handbook of Environmental Engineering, edited by R. A. Corbitt. New York: McGraw-Hill, 1990.
Junkins, R., K. Deeny, and T. Eckhoff. The Activated Sludge Process: Fundamentals of Operation. Boston: Butterworth Publishers, 1983.

Acute effects

Effects that persist in a biological system for only a short time, generally less than a week. The effects might range from behavioral or color changes to death. Tests for acute effects are performed with humans, animals, plants, insects, and microorganisms. Intoxication and a hangover resulting from the consumption of too much alcohol, the common cold, and parathion poisoning are examples of acute effects. Generally, little tissue damage occurs as a result of acute effects. The term acute effects should not be confused with acute toxicity studies or acute dosages, which refer, respectively, to short-term studies (generally less than a week) and short-term dosages (often a single dose). Both chronic and acute exposures can initiate acute effects.

Ansel Easton Adams

(1902–1984)

American photographer and conservationist Ansel Adams is best known for his stark black-and-white photographs of nature and the American landscape. He was born and raised in San Francisco. Schooled at home by his parents, he received little formal training except as a pianist. A trip to Yosemite Valley as a teenager had a profound influence on him, and Yosemite National Park and the Sierra “range of light” attracted him back many times and inspired two great careers: photographer and conservationist. As he observed, “Everybody needs something to believe in [and] my point of focus is conservation.” He used his photographs to make that point more vivid and turned it into an enduring legacy.


Adaptation

Adams was a painstaking artist, and some critics have chided him for an overemphasis on technique and for creating in his work “a mood that is relentlessly optimistic.” Adams was a careful technician, making all of his own prints (reportedly hand-producing over 13,000 in his lifetime), sometimes spending a whole day on one print. He explained: “I have made thousands of photographs of the natural scene, but only those images that were most intensely felt at the moment of exposure have survived the inevitable winnowing of time.” He did winnow, ruthlessly, and the result was a collection of work that introduced millions of people to the majesty and diversity of the American landscape. Not all of Adams’s pictures were “uplifting” or “optimistic” images of scenic wonders; he also documented scenes of overgrazing in the arid Southwest and of incarcerated Japanese-Americans in the Manzanar internment camp. From the beginning, Adams used his photographs in the cause of conservation. His pictures played a major role in the late 1930s in establishing Kings Canyon National Park. Throughout his life, he remained an active, involved conservationist; for many years he was on the Board of the Sierra Club and strongly influenced the Club’s activities and philosophy. Ansel Adams’s greatest bequest to the world will remain his photographs and advocacy of wilderness and the national park ideals. Through his work he not only generated interest in environmental conservation, he also captured the beauty and majesty of nature for all generations to enjoy. [Gerald L. Young]

RESOURCES

BOOKS
Adams, Ansel. Ansel Adams: An Autobiography. New York: New York Graphic Society, 1984.

PERIODICALS
Cahn, R. “Ansel Adams, Environmentalist.” Sierra 64 (May–June 1979): 31–49.
Grundberg, A. “Ansel Adams: The Politics of Natural Space.” The New Criterion 3 (1984): 48–52.

Adaptation All members of a population share many characteristics in common. For example, all finches in a particular forest are alike in many ways. But if many hard-to-shell seeds are found in the forest, those finches with stronger, more conical bills will have better rates of reproduction and survival than finches with thin bills. Therefore, a conical, stout bill can be considered an adaptation to that forest environment. Any specialized characteristic that permits an individual to 10

survive and reproduce is called an adaptation. Adaptations may result either from an individual’s genetic heritage or from its ability to learn. Since successful genetic adaptations are more likely to be passed from generation to generation through the survival of better adapted organisms, adaptation can be viewed as the force that drives biological evolution.

Adaptive management

Adaptive management is taking an idea, implementing it, and then documenting and learning from any mistakes or benefits of the experiment. The basic idea behind adaptive management is that natural systems are too complex, too non-linear, and too multi-scale to be predictable. Management policies and procedures must therefore become more adaptive and capable of change to cope with unpredictable systems. Advocates suggest treating management policies as experiments, which are then designed to maximize learning rather than focusing on immediate resource yields. If the environmental and resource systems on which human beings depend are constantly changing, then societies cannot rely on those systems to sustain continued use without such learning. Adaptive management mandates a continual experimental process, an ongoing reevaluation and reassessment of planning methods and human actions, and constant long-term monitoring of environmental impacts and change. This keeps pace with the constant change in the environmental systems to which the policies or ideas are applied.

The Grand Canyon Protection Act of 1992 is one example of adaptive management at work. It entails the study and monitoring of the Glen Canyon Dam and its operational effects on the surrounding environment, both ecological and biological.
Haney and Power suggest that “uncertainty and complexity frustrate both science and management, and it is only by combining the best of both that we use all available tools to manage ecosystems sustainably.” However, Fikret Berkes and colleagues claim that adaptive management can be attained by approaching it as a rediscovery of traditional ecological knowledge among indigenous peoples: “These traditional systems had certain similarities to adaptive management with its emphasis on feedback learning, and its treatment of uncertainty and unpredictability intrinsic to all ecosystems.” An editorial in the journal Environment offered the rather inane statement that adaptive management “has not realized its promise.” The promise is in the idea, but implementation begins with people. Adaptive management, like Smart Growth and other seemingly innovative approaches

Environmental Encyclopedia 3

Adirondack Mountains

to land use and environmental management, is plagued by the problem of how to get people to actually put into practice what is proposed. Even for practical ideas the problem remains the same: not science, not technology, but human willfulness and human behavior. For policies or plans to be truly adaptive, the people themselves must be willing to adapt. Haney and Power provide the conclusion: “When properly integrated, the [adaptive management] process is continuous and cyclic; components of the adaptive management model evolve as information is gained and social and ecological systems change. Unless management is flexible and innovative, outcomes become less sustainable and less accepted by stakeholders. Management will be successful in the face of complexity and uncertainty only with holistic approaches, good science, and critical evaluation of each step. Adaptive management is where it all comes together.” [Gerald L. Young]

RESOURCES

BOOKS
Holling, C. S., ed. Adaptive Environmental Assessment and Management. New York: John Wiley & Sons, 1978.

PERIODICALS
Haney, Alan, and Rebecca L. Power. “Adaptive Management for Sound Ecosystem Management.” Environmental Management 20, no. 6 (November/December 1996): 879–886.
McLain, Rebecca J., and Robert G. Lee. “Adaptive Management: Promises and Pitfalls.” Environmental Management 20, no. 4 (July/August 1996): 437–448.
Shindler, Bruce, Brent Steel, and Peter List. “Public Judgments of Adaptive Management: A Response from Forest Communities.” Journal of Forestry 96, no. 6 (June 1996): 4–12.
Walters, Carl. Adaptive Management of Renewable Resources. New York: Macmillan, 1986.
Walters, Carl J. “Ecological Optimization and Adaptive Management.” Annual Review of Ecology and Systematics 9 (1978): 157–188.

Adirondack Mountains A range of mountains in northeastern New York, containing Mt. Marcy (5,344 ft; 1,644 m), the state’s highest point. Bounded by the Mohawk Valley on the south, the St. Lawrence Valley on the northeast, and by the Hudson River and Lake Champlain on the east, the Adirondack Mountains form the core of Adirondack Park. This park is one of the earliest and most comprehensive examples of regional planning in the United States. The regional plan attempts to balance conflicting interests of many users at the same time as it controls environmentally destructive development. Although the plan remains controversial, it has succeeded

in largely preserving one of the last and greatest wilderness areas in the East. The Adirondacks serve a number of important purposes for surrounding populations. Vacationers, hikers, canoeists, and anglers use the area’s 2,300 wilderness lakes and extensive river systems. The state’s greatest remaining forests stand in the Adirondacks, providing animal habitat and serving recreational visitors. Timber and mining companies, employing much of the area’s resident population, also rely on the forests, some of which contain the East’s most ancient old-growth groves. Containing the headwaters of numerous rivers, including the Hudson, Adirondack Park is an essential source of clean water for farms and cities at lower elevations. Adirondack Park was established by the New York State Constitution of 1892, which mandates that the region shall remain “forever wild.” Encompassing six million acres (2.4 million ha), this park is the largest wilderness area in the eastern United States—nearly three times the size of Yellowstone National Park. Only a third of the land within park boundaries, however, is owned by the state of New York. Private mining and timber concerns, public agencies, several towns, thousands of private cabins, and 107 units of local government occupy the remaining property. Because the development interests of various user groups and visitors conflict with the state constitution, a comprehensive regional land use plan was developed in 1972–1973. The novelty of the plan lay in the large area it covered and in its jurisdiction over land uses on private land as well as public land. According to the regional plan, all major development within park boundaries must meet an extensive set of environmental safeguards drawn up by the state’s Adirondack Park Agency. Stringent rules and extensive regulation frustrate local residents and commercial interests, who complain about the plan’s complexity and resent “outsiders” ruling on what Adirondackers are allowed to do. 
Nevertheless, this plan has been a milestone for other regions trying to balance the interests of multiple users. By controlling extensive development, the park agency has preserved a wilderness resource that has become extremely rare in the eastern United States. The survival of this century-old park, surrounded by extensive development, demonstrates the value of preserving wilderness in spite of ongoing controversy.

In recent years forestry and recreation interests in the Adirondacks have encountered a new environmental problem in acid precipitation. Evidence of deleterious effects of acid rain and snow on aquatic and terrestrial vegetation began to accumulate in the early 1970s. Studies revealed that about half of the Adirondack lakes situated above 3,300 ft (1,000 m) have pH levels so low that all fish have disappeared. Prevailing winds put these mountains directly downwind of urban and industrial regions of western New York



and southern Ontario. Because they form an elevated obstacle to weather patterns, these mountains capture a great deal of precipitation carrying acidic sulfur and nitrogen oxides from upwind industrial cities. [Mary Ann Cunningham]

RESOURCES

BOOKS
Ciroff, R. A., and G. Davis. Protecting Open Space: Land Use Control in the Adirondack Park. Cambridge, MA: Ballinger, 1981.
Davis, G., and T. Duffus. Developing a Land Conservation Strategy. Elizabethtown, NY: Adirondack Land Trust, 1987.
Graham, F. J. The Adirondack Park: A Political History. New York: Knopf, 1978.
Popper, F. J. The Politics of Land Use Reform. Madison, WI: University of Wisconsin Press, 1981.

Aerobic

Refers to either an environment that contains molecular oxygen gas (O2); an organism or tissue that requires oxygen for its metabolism; or a chemical or biological process that requires oxygen. Aerobic organisms use molecular oxygen in respiration, releasing carbon dioxide (CO2) in return. These organisms include mammals, fish, birds, and green plants, as well as many of the lower life forms such as fungi, algae, and sundry bacteria and actinomycetes. Many, but not all, organic decomposition processes are aerobic; a lack of oxygen greatly slows these processes.

Aerobic/anaerobic systems

Most living organisms require oxygen to function normally, but a few forms of life exist exclusively in the absence of oxygen, and some can function both in the presence of oxygen (aerobically) and in its absence (anaerobically). Examples of anaerobic organisms are found in bacteria of the genus Clostridium, in parasitic protozoans from the gastrointestinal tract of humans and other vertebrates, and in ciliates associated with sulfide-containing sediments. Organisms capable of switching between aerobic and anaerobic existence are found in forms of fungi known as yeasts. The ability of an organism to function both aerobically and anaerobically increases the variety of sites in which it is able to exist and conveys some advantages over organisms with less adaptive potential.

Microbial decay activity in nature can occur either aerobically or anaerobically. Aerobic decomposers of compost and other organic substrates are generally preferable because they act more quickly and release fewer noxious odors. Large sewage treatment plants use a two-stage digestion system in which the first stage is anaerobic digestion of sludge, which produces flammable methane gas that may be used as fuel to help operate the plant. Sludge digestion continues in the aerobic second stage, a process that is easier to control but more costly because of the power needed to provide aeration.

Although most fungi are generally aerobic organisms, yeasts used in bread making and in the production of fermented beverages such as wine and beer can metabolize anaerobically. In the process, they release ethyl alcohol and the carbon dioxide that causes bread to rise.

Tissues of higher organisms may have limited capability for anaerobic metabolism, but they need elaborate compensating mechanisms to survive even brief periods without oxygen. For example, human muscle tissue is able to metabolize anaerobically when blood cannot supply the large amounts of oxygen needed for vigorous activity.
Muscle contraction requires an energy-rich compound called adenosine triphosphate (ATP). Muscle tissue normally contains enough ATP for 20–30 seconds of intense activity. ATP must then be metabolically regenerated from glycogen, the muscle’s primary energy source. Muscle tissue has both aerobic and anaerobic metabolic systems for regenerating ATP from glycogen. Although the aerobic system is much more efficient, the anaerobic system is the major energy source for the first minute or two of exercise. The carbon dioxide released in this process causes the heart rate to increase. As the heart beats faster and more oxygen is delivered to the muscle tissue, the more efficient aerobic system for generating ATP takes over. A person’s physical condition is important in determining how well the aerobic system is able to meet the needs of continued activity. In fit individuals who exercise regularly, heart function is optimized, and the heart is able to pump blood rapidly enough to maintain aerobic metabolism. If the oxygen level in muscle tissue drops, anaerobic metabolism will resume. Toxic products of anaerobic metabolism, including lactic acid, accumulate in the tissue, and muscle fatigue results.

Other interesting examples of limited anaerobic capability are found in the animal kingdom. Some diving ducks have an adaptation that allows them to draw oxygen from stored oxyhemoglobin and oxymyoglobin in blood and muscles. This adaptation permits them to remain submerged in water for extended periods. To prevent desiccation, mussels and clams close their shells when out of the water at low tide, and their metabolism shifts from aerobic to anaerobic. When once again in the water, the animals rapidly return to aerobic metabolism and purge themselves of the acid products of anaerobiosis accumulated while they were dry. [Douglas C. Pratt]

RESOURCES

BOOKS
Lea, A. G. H., and J. R. Piggott. Fermented Beverage Production. New York: Blackie, 1995.
McArdle, W. D. Exercise Physiology: Energy, Nutrition, and Human Performance. 4th ed. Baltimore: Williams & Wilkins, 1996.
Stanbury, P. F., A. Whitaker, and S. J. Hall. Principles of Fermentation Technology. 2nd ed. Tarrytown, NY: Pergamon, 1995.

PERIODICALS
Klass, D. L. “Methane from Anaerobic Fermentation.” Science 223 (1984): 1021.

Adsorption

The removal of ions or molecules from solutions by binding to solid surfaces. Phosphorus is removed from water flowing through soils by adsorption on soil particles. Some pesticides adsorb strongly on soil particles. Adsorption by suspended solids is also an important process in natural waters.

AEC see Atomic Energy Commission

AEM see Agricultural Environmental Management

Aeration

In discussions of plant growth, aeration refers to an exchange that takes place in soil or another medium, allowing oxygen to enter and carbon dioxide to escape into the atmosphere. Crop growth is often reduced when aeration is poor. In geology, particularly with reference to groundwater, the zone of aeration is the portion of the earth’s crust where the pores are only partially filled with water. In relation to water treatment, aeration is the process of exposing water to air in order to remove undesirable substances, such as iron and manganese, from drinking water.

Aerobic sludge digestion

Wastewater treatment plants produce organic sludge as wastewater is treated; this sludge must be further treated before ultimate disposal. Sludges are generated from primary settling tanks, which are used to remove settleable particulate solids, and from secondary clarifiers (settling basins), which are used to remove the excess biomass generated in secondary biological treatment units.

Disposal of sludges from wastewater treatment processes is a costly and difficult problem. The processes used in sludge disposal include: (1) reduction in sludge volume, primarily by removal of water, which constitutes 97–98% of the sludge; (2) reduction of the volatile (organic) content of the sludge, which eliminates nuisance conditions by reducing putrescibility and reduces threats to human health by reducing levels of microorganisms; and (3) ultimate disposal of the residues.

Aerobic sludge digestion is one process that may be used to reduce both the organic content and the volume of the sludge. Under aerobic conditions, a large portion of the organic matter in sludge may be oxidized biologically by microorganisms to carbon dioxide and water. The process results in approximately 50% reduction in solids content.

Aerobic sludge digestion facilities may be designed for batch or continuous-flow operations. In batch operations, sludge is added to a reaction tank while the contents are continuously aerated. Once the tank is filled, the sludges are aerated for two to three weeks, depending on the types of sludge. After aeration is discontinued, the solids and liquids are separated. Solids at concentrations of 2–4% are removed, and the clarified liquid supernatant is decanted and recycled to the wastewater treatment plant. In a continuous-flow system, an aeration tank is utilized, followed by a settling tank.

Aerobic sludge digestion is usually used only for biological sludges from secondary treatment units, in the absence of sludges from primary treatment units. The most common application is the treatment of sludges wasted from extended aeration systems (a modification of the activated sludge system).
Since there is no addition of an external food source, the microorganisms must utilize their own cell contents for metabolic purposes in a process called endogenous respiration. The remaining sludge is a mineralized sludge, with the remaining organic materials composed of cell walls and other cell fragments that are not readily biodegradable.

The advantages of aerobic digestion, compared with anaerobic digestion, include: (1) simplicity of operation and maintenance; (2) lower capital costs; (3) lower levels of biochemical oxygen demand (BOD) and phosphorus in the supernatant; (4) fewer effects from upsets such as the presence of toxic interferences or changes in loading and pH; (5) less odor; (6) nonexplosive operation; (7) greater reduction in grease and hexane solubles; (8) greater sludge fertilizer value; (9) shorter retention periods; and (10) an effective alternative for small wastewater treatment plants.

Environmental Encyclopedia 3


Disadvantages include: (1) higher operating costs, especially energy costs; (2) high sensitivity to ambient temperature (operation at temperatures below 59°F [15°C] may require excessive retention times to achieve stabilization; if heating is required, aerobic digestion may not be cost-effective); (3) no useful byproduct such as the methane gas produced in anaerobic digestion; (4) variability in the ability to dewater to reduce sludge volume; (5) less reduction in volatile solids; and (6) unfavorable economics for larger wastewater treatment plants. [Judith Sims]
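Because water constitutes 97–98% of the sludge, the payoff from the dewatering named in step (1) above can be shown with a simple mass balance. The sketch below is illustrative and not from the entry; it assumes the solids mass is fixed and the sludge density is close to that of water, so volume varies inversely with the solids fraction.

```python
# Illustrative mass balance: with solids mass fixed, sludge volume is
# inversely proportional to its solids fraction (density assumed ~ water).
def thickened_volume(initial_volume, solids_pct_before, solids_pct_after):
    """Volume after thickening sludge from one solids fraction to another."""
    return initial_volume * solids_pct_before / solids_pct_after

# Raising solids from 2% (98% water) to 4% (96% water) halves the volume:
print(thickened_volume(100.0, 2.0, 4.0))  # -> 50.0
```

This is why even modest water removal dominates sludge-handling economics: going from 98% to 96% water cuts the volume to be digested, hauled, or disposed of in half.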

RESOURCES BOOKS Corbitt, R. A. “Wastewater Disposal.” In Standard Handbook of Environmental Engineering, edited by R. A. Corbitt. New York: McGraw-Hill, 1990. Gaudy Jr., A. F., and E. T. Gaudy. Microbiology for Environmental Scientists and Engineers. New York: McGraw-Hill, 1980. Peavy, H. S., D. R. Rowe, and G. Tchobanoglous. Environmental Engineering. New York: McGraw-Hill, 1985.

Aerosol

A suspension of particles, liquid or solid, in a gas. The term implies a degree of permanence in the suspension, which puts a rough upper limit on particle size at a few tens of micrometers at most (1 micrometer = 0.00004 in). Thus in proper use the term connotes the ensemble of the particles and the suspending gas.

The atmospheric aerosol has two major components, generally referred to as coarse and fine particles, with different sources and different composition. Coarse particles result from mechanical processes, such as grinding. The more finely particles are ground, the more surface they have per unit of mass. Creating new surface requires energy, so the smallest average size that can be created by such processes is limited by the available energy. It is rare for such mechanically generated particles to be less than 1 µm (0.00004 in) in diameter.

Fine particles, on the other hand, are formed by condensation from the vapor phase. For most substances, condensation is difficult from a uniform gaseous state; it requires the presence of pre-existing particles on which the vapors can deposit. Alternatively, very high concentrations of the vapor are required, compared with the concentration in equilibrium with the condensed material. Hence, fine particles form readily in combustion processes, when substances are vaporized and the gas is then quickly cooled. These can then serve as nuclei for the formation of larger particles, still in the fine particle size range, in the presence of condensable vapors. However, in the atmosphere such particles become rapidly more scarce with increasing size, and are relatively rare in sizes much larger than a few micrometers. At about 2 µm (0.00008 in), coarse and fine particles are about equally abundant. Using the term strictly, one rarely samples the atmospheric aerosol, but rather the particles out of the aerosol.

The presence of aerosols is generally detected by their effect on light. Aerosols of a uniform particle size in the vicinity of the wavelengths of visible light can produce rather spectacular optical effects. In the laboratory, such aerosols can be produced by condensation of the heated vapors of certain oils on nuclei made by evaporating salts from heated filaments. If the suspending gas is cooled quickly, particle size is governed by the supply of vapor compared with the supply of nuclei, and the time available for condensation to occur. Since these can all be made nearly constant throughout the gas, the resulting particles are quite uniform. It is also possible to produce uniform particles by spraying a dilute solution of a soluble material, then evaporating the solvent. If the spray head is vibrated in an appropriate frequency range, the drops will be uniform in size, with the size controlled by the frequency of vibration and the rate of flow of the spray. Obviously, the final particle size is also a function of the concentration of the sprayed solution. [James P. Lodge Jr.]
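The vibrating-spray relationship described above can be made quantitative with the standard one-drop-per-cycle argument: each vibration cycle pinches off one drop, so the drop volume equals the liquid flow rate divided by the vibration frequency, and the diameter is that of a sphere of equal volume. This sketch is a general illustration of that reasoning; the flow and frequency values are assumed, not taken from the entry.

```python
import math

# One drop per vibration cycle: drop volume = flow rate / frequency.
# Diameter is that of a sphere with this volume.
def droplet_diameter_um(flow_ul_per_s, frequency_hz):
    volume_ul = flow_ul_per_s / frequency_hz      # volume of one drop, µL
    volume_um3 = volume_ul * 1e9                  # 1 µL = 1e9 µm³
    return (6 * volume_um3 / math.pi) ** (1 / 3)  # sphere diameter, µm

# Example (assumed values): 0.1 µL/s sprayed at 60 kHz
print(round(droplet_diameter_um(0.1, 60000), 1))  # -> 14.7 (µm)
```

Raising the frequency at fixed flow yields smaller drops, still uniform in size, and if the drops carry a dilute solute, the residue particle left after the solvent evaporates is smaller still, in proportion to the cube root of the solution concentration.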

RESOURCES BOOKS Jennings, S. G., ed. Aerosol Effects on Climate. Tucson, AZ: University of Arizona Press, 1993. Reist, P. Aerosol Science and Technology. New York: McGraw-Hill, 1992.

PERIODICALS Monastersky, R. “Aerosols: Critical Questions for Climate.” Science News 138 (25 August 1990): 118. Sun, M. “Acid Aerosols Called Health Hazard.” Science 240 (24 June 1988): 1727.

Aflatoxin

Toxic compounds produced by some fungi and among the most potent naturally occurring carcinogens for humans and animals. Aflatoxin intake is positively related to high incidence of liver cancer in humans in many developing countries. In many farm animals aflatoxin can cause acute or chronic diseases. Aflatoxin is a metabolic by-product of the fungus Aspergillus flavus and the closely related species Aspergillus parasiticus growing on grains and decaying organic compounds. There are four naturally occurring aflatoxins: B1, B2, G1, and G2. All of these compounds fluoresce under a UV (black) light at around 425–450 nm, providing a qualitative test for the presence of aflatoxins. In general, starch grains, such as corn, are infected in storage when the moisture content of the grain reaches 17–18% and the temperature is 79–99°F (26–37°C). However, the fungus may also infect grain in the field under hot, dry conditions.

African Wildlife Foundation

The African Wildlife Foundation (AWF), headquartered in Washington, DC, was established in 1961 to promote the protection of the animals native to Africa. The group maintains offices in both Washington, DC, and Nairobi, Kenya. The African headquarters promotes the idea that Africans themselves are best able to protect the wildlife of their continent. AWF also established two colleges of wildlife management in Africa (Tanzania and Cameroon), so that rangers and park and reserve wardens can be professionally trained. Conservation education, especially as it relates to African wildlife, has always been a major AWF goal—in fact, it has been the association’s primary focus since its inception.

AWF carries out its mandate to protect Africa’s wildlife through a wide range of projects and activities. Since 1961, AWF has provided a radio communication network in Africa, as well as several airplanes and jeeps for antipoaching patrols. These were instrumental in facilitating the work of Dr. Richard Leakey in Tsavo National Park, Kenya. In 1999, the African Heartlands project was set up to try to connect large areas of wild land that are home to wild animals. The project also attempts to involve people who live adjacent to protected wildlife areas by asking them to take joint responsibility for natural resources. The program demonstrates that land conservation and the needs of neighboring people and their livestock can be balanced, and the benefits shared. Currently there are four heartland areas: Maasai Steppe, Kilimanjaro, Virunga, and Samburu.

Another highly successful AWF program is the Elephant Awareness Campaign. Its slogan, “Only Elephants Should Wear Ivory,” has become extremely popular, both in Africa and in the United States, and is largely responsible for bringing the plight of the African elephant (Loxodonta africana) to public awareness.
Although AWF is concerned with all the wildlife of Africa, in recent years the group has focused on saving African elephants, black rhinoceroses (Diceros bicornis), and mountain gorillas (Gorilla gorilla beringei). These species are seriously endangered, and are benefiting from AWF’s Critical Habitats and Species Program, which works to aid these and other animals in critical danger. From its inception, AWF has supported education centers, wildlife clubs, national parks, and reserves. There


is even a course at the College of African Wildlife Management in Tanzania that allows students to learn community conservation activities and helps park officials learn to work with residents living adjacent to protected areas. AWF also involves teachers in its endeavors with a series of publications, Let’s Conserve Our Wildlife. Written in Swahili, the series includes teacher’s guides and has been used in both elementary schools and adult literacy classes in African villages. AWF also publishes the quarterly magazine Wildlife News. [Cathy M. Falk]

RESOURCES ORGANIZATIONS African Wildlife Foundation, 1400 16th Street, NW, Washington, DC, USA 20036, (202) 939-3333, Fax: (202) 939-3332, Email: [email protected]

Africanized bees

The Africanized bee (Apis mellifera scutellata), or “killer bee,” is an extremely aggressive honeybee. This bee developed when African honeybees were brought to Brazil to mate with other bees to increase honey production. The imported bees were accidentally released and have since spread northward, traveling at a rate of 300 mi (483 km) per year. The bees first appeared in the United States at the Texas-Mexico border in late 1990. The bees get their “killer” title because of their vigorous defense of colonies or hives when disturbed. Aside from temperament, they are much like their counterparts now in the United States, which are European in lineage. Africanized bees are slightly smaller than their more passive cousins.

Honeybees are social insects and live and work together in colonies. When bees fly from plant to plant, they help pollinate flowers and crops. Africanized bees, however, seem to be more interested in reproducing than in honey production or pollination. For this reason they are constantly swarming and moving around, while domestic bees tend to stay in local, managed colonies. Because Africanized bees are also much more aggressive than domestic honey bees when their colonies are disturbed, they can be harmful to people who are allergic to bee stings.

More problematic than the threat to humans, however, is the impact the bees will have on fruit and vegetable industries in the southern parts of the United States. Many fruit and vegetable growers depend on honey bees for pollination, and in places where the Africanized bees have appeared, honey production has fallen by as much as 80%. Beekeepers in this country are experimenting with “re-queening” their



colonies regularly to ensure that the colonies reproduce gentle offspring. Another danger is the propensity of the Africanized bee to mate with honey bees of European lineage, a kind of “infiltration” of the gene pool of more domestic bees. Researchers from the U.S. Department of Agriculture (USDA) are watching for the results of this interbreeding, particularly for those bees that display European-style physiques and African behaviors, or vice versa.

When Africanized bees first appeared in southern Texas, researchers from the USDA’s Honeybee Research Laboratory in Weslaco, Texas, destroyed the colony, estimated at 5,000 bees. Some of the members of the 3-lb (1.4 kg) colony were preserved in alcohol and others in freezers for future analysis. Researchers are also developing management techniques, including the annual introduction of young mated European queens into domestic hives, in an attempt to maintain gentle production stock and ensure honey production and pollination.

As of 2002, there were 140 counties in Texas, nine in New Mexico, nine in California, three in Nevada, and all 15 counties in Arizona in which Africanized bee colonies had been located. There have also been reported colonies in Puerto Rico and the Virgin Islands. Southern Nevada bees were almost 90% Africanized in June of 2001. Most of Texas has been labeled as a quarantine zone, and beekeepers are not able to move hives out of these boundaries. The largest colony found to date was in southern Phoenix, Arizona. The hive was almost 6 ft (1.8 m) long and held about 50,000 Africanized bees. [Linda Rehkopf]

An Africanized bee collecting grass pollen in Brazil. (Photograph by Scott Camazine. Photo Researchers Inc. Reproduced by permission.)

RESOURCES PERIODICALS “African Bees Make U.S. Debut.” Science News 138 (October 27, 1990): 261. Barinaga, M. “How African Are ‘Killer’ Bees?” Science 250 (November 2, 1990): 628–629. Hubbell, S. “Maybe the ‘Killer’ Bee Should Be Called the ‘Bravo’ Instead.” Smithsonian 22 (September 1991): 116–124. White, W. “The Bees From Rio Claro.” The New Yorker 67 (September 16, 1991): 36–53. Winston, M. Killer Bees: The Africanized Honey Bee in the Americas. Cambridge: Harvard University Press, 1992.

OTHER “Africanized Bees in the Americas.” Sting Shield.com Page. April 25, 2002 [cited May 2002]. .

Agency for Toxic Substances and Disease Registry The Agency for Toxic Substances and Disease Registry (ATSDR) studies the health effects of hazardous substances in general and at specific locations. As indicated by its title, the Agency maintains a registry of people exposed to toxic chemicals. Along with the Environmental Protection Agency (EPA), ATSDR prepares and updates profiles of toxic substances. In addition, ATSDR assesses the potential dangers posed to human health by exposure to hazardous substances at Superfund sites. The Agency will also perform health assessments when petitioned by a community. Though ATSDR’s early health assessments have been criticized, the Agency’s later assessments and other products are considered more useful. ATSDR was created in 1980 by the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), also known as the Superfund, as part of the U.S. Department of Health and Human Services. As

originally conceived, ATSDR’s role was limited to performing health studies and examining the relationship between toxic substances and disease. The Superfund Amendments and Reauthorization Act (SARA) of 1986 codified ATSDR’s responsibility for assessing health threats at Superfund sites. ATSDR, along with the national Centers for Disease Control and state health departments, conducts health surveys in communities near locations that have been


placed on the Superfund’s National Priorities List for cleanup. ATSDR performed 951 health assessments in the two years after the law was passed. Approximately one quarter of these assessments were memos or reports that had been completed prior to 1986 and were simply re-labeled as health assessments. These first assessments have been harshly criticized. The General Accounting Office (GAO), a congressional agency that reviews the actions of the federal administration, charged that most of these assessments were inadequate. Some argued that the agency was underfunded and poorly organized. Recently, ATSDR received less than 5% of the $1.6 billion appropriated for the Superfund project.

Subsequent health assessments, more than 200 of them, have generally been more complete, but they still may not be adequate in informing the community and the EPA of the dangers at specific sites. In general, ATSDR identifies a local agency to help prepare the health surveys. Unlike many of the first assessments, more recent surveys now include site visits and face-to-face interviews. However, other data on environmental effects are limited. ATSDR only considers environmental information provided by the companies that created the hazard or data collected by the EPA. In addition, ATSDR only assesses health risks from illegal emissions, not from “permitted” emissions. Some scientists contend that not enough is known about the health effects of exposure to hazardous substances to make conclusive health assessments.

Reaction to the performance of ATSDR’s other functions has been generally more positive. As mandated by SARA, ATSDR and the EPA have prepared hundreds of toxicological profiles of hazardous substances. These profiles have been judged generally helpful, and the GAO praised ATSDR’s registry of people who have been exposed to toxic substances. [Alair MacLean]

RESOURCES BOOKS Environmental Epidemiology: Public Health and Hazardous Wastes. National Research Council. Committee on Environmental Epidemiology. Washington, DC: National Academy Press, 1991. Lewis, S., B. Keating, and D. Russell. Inconclusive by Design: Waste, Fraud and Abuse in Federal Environmental Health Research. Boston: National Toxics Campaign Fund; and Harvey, LA: Environmental Health Network, 1992.

OTHER Superfund: Public Health Assessments Incomplete and of Questionable Value. Washington, DC: General Accounting Office, 1991.

ORGANIZATIONS The ATSDR Information Center, , (404) 498-0110, Fax: (404) 498-0057, Toll Free: (888) 422-8737, Email: [email protected],

Agent Orange

Agent Orange is a herbicide recognized for its use during the Vietnam War. It is composed of equal parts of two chemicals: 2,4-D and 2,4,5-T. A less potent form of the herbicide has also been used for clearing heavy growth on a commercial basis for a number of years. However, it does not contain 2,4-D.

On a commercial level, the herbicide was used in forestry control as early as the 1930s. In the 1950s through the 1960s, Agent Orange was also exported. For example, New Brunswick, Canada, was the scene of major Agent Orange spraying to control forests for industrial development. In Malaysia in the 1950s, the British used compounds with the chemical mixture 2,4,5-T to clear communication routes.

In the United States, herbicides were considered for military use towards the end of World War II, during the action in the Pacific. However, the first American military field tests were actually conducted in Puerto Rico, Texas, and Fort Drum, New York, in 1959. That same year—1959—the Crops Division at Fort Detrick, Maryland, initiated the first large-scale military defoliation effort. The project involved the aerial application of Agent Orange to about 4 mi² (10.4 km²) of vegetation. The experiment proved highly successful; the military had found an effective tool.

By 1960, the South Vietnamese government, aware of these early experiments, had requested that the United States conduct trials of these herbicides for use against guerrilla forces. Spraying of Agent Orange in Southeast Asia began in 1961. South Vietnam President Diem stated that he wanted this “powder” in order to destroy the rice and the food crops that would be used by the Viet Cong. Thus began the use of herbicides as a weapon of war. The United States military became involved, recognizing the limitations of fighting in foreign territory with troops that were not accustomed to jungle conditions.
The military wanted to clear communication lines and open up areas of visibility in order to enhance their opportunities for success. Eventually, the United States military took complete control of the spray missions. Initially, there were to be restrictions: the spraying was to be limited to clearing power lines and roadsides, railroads and other lines of communications and areas adjacent to depots. Eventually, the spraying was used to defoliate the thick jungle brush, thereby obliterating enemy hiding places. Once under the authority of the military, and with no checks or restraints, the spraying continued to increase in


Deforestation of the Viet Cong jungle in South Vietnam. (AP/Wide World Photos. Reproduced by permission.)

intensity and abandon, escalating in scope because of military pressure. It was eventually used to destroy crops, mainly rice, in an effort to deprive the enemy of food. Unfortunately, the civilian population—Vietnamese men, women, and children—was also affected. The United States military sprayed 3.6 million acres (1.5 million ha) with 19 million gal (72 million l) of Agent Orange over nine years.

The spraying also became useful in clearing military base perimeters, cache sites, and waterways. Base perimeters were often sprayed more than once. In the case of dense jungle growth, one application of spray was made for the upper and another for the lower layers of vegetation. Inland forests, mangrove forests, and cultivated lands were all targets. Through Project Ranch Hand—the Air Force team assigned to the spray missions—Agent Orange became the most widely produced and dispensed defoliant in Vietnam. Military requirements for herbicide use were developed by the Army’s Chemical Operations Division, J-3, Military Assistance Command, Vietnam (MACV). With Project Ranch Hand underway, the spray missions increased monthly after 1962. This increase was made possible by the continued military promises to stay away from the civilians

or to re-settle those civilians and re-supply the food in any areas where herbicides destroyed the food of the innocent. These promises were never kept. The use of herbicides for crop destruction peaked in 1965, when 45% of the total spraying was designed to destroy crops. Initially, the aerial spraying took place near Saigon. Eventually the geographical base was widened. During the 1967 expansion period of herbicide procurement, when requirements had become greater than the industries’ ability to produce, the Air Force and Joint Chiefs of Staff became actively involved in the herbicide program. All production for commercial use was diverted to the military, and the Department of Defense (DOD) was appointed to deal with problems of procurement and production. Commercial producers were encouraged to expand their facilities and build new plants, and the DOD made attractive offers to companies that might be induced to manufacture herbicides. A number of companies were awarded contracts. Working closely with the military, certain chemical companies sent technical advisors to Vietnam to instruct personnel on the methods and techniques necessary for effective use of the herbicides.

During the peak of the spraying, approximately 129 sorties were flown per aircraft. Twenty-four UC-123B aircraft were used, averaging 39 sorties per day. In addition, there were trucks and helicopters that went on spraying missions, backed up by such countries as Australia. C-123 cargo planes and helicopters were also used. Helicopters flew without cargo doors so that frequent ground fire could be returned. But the rotary blades would kick up gusts of spray, thereby delivering a powerful dose onto the faces and bodies of the men aboard. The dense Vietnamese jungle growth required two applications to defoliate both upper and lower layers of vegetation.

On the ground, both enemy troops and Vietnamese civilians came in contact with the defoliant. American troops were also exposed. They could inhale the fine misty spray or be splashed in the sudden and unexpected deluge of an emergency dumping. Readily absorbing the chemicals through their skin and lungs, hundreds of thousands of United States military troops were exposed as they lived on the sprayed bases, slept near empty drums, and drank and washed in water in areas where defoliation had occurred. They ate food that had been brushed with spray. Empty herbicide drums were indiscriminately used and improperly stored. Volatile fumes from these drums caused damage to shade trees and to anyone near the fumes. Those handling the herbicides in support of a particular project goal had the unfortunate opportunity of becoming directly exposed on a consistent basis.

Nearly three million veterans served in Southeast Asia. There is growing speculation that nearly everyone who was in Vietnam was eventually exposed to some degree—far less a possibility for those stationed in urban centers or on the waters. According to official sources, in addition to the Ranch Hand group at least three groups were exposed:
• A group considered secondary support personnel. This included Army pilots who may have been involved in helicopter spraying, along with the Navy and Marine pilots.
• Those who transported the herbicide to Saigon, and from there to Bien Hoa and Da Nang. Such personnel transported the herbicide in the omnipresent 55-gallon (208 l) containers.
• Specialized mechanics, electricians, and technical personnel assigned to work on various aircraft. Many of this group were not specifically assigned to Ranch Hand but had to work in aircraft that were repeatedly contaminated.

Agent Orange was used in Vietnam in undiluted form at the rate of 3–4 gal (11.4–15.2 l) per acre. 13.8 lb (6.27 kg) of the chemical 2,4,5-T were added to 12 lb (5.5 kg) of 2,4-D per acre, a nearly 50-50 ratio. This intensity is 13.3 lb (6.06 kg) per acre more than was recommended by the military’s own manual. Computer tapes (HERBS TAPES) now available show that some areas were sprayed


as much as 25 times in just a few short months, thereby dramatically increasing the exposure of anyone within those sprayed areas. Between 1962 and 1971 an estimated 11.2 million gal (42.4 million l) of Agent Orange were dumped over South Vietnam. Evaluations show that the chemical had killed and defoliated 90–95% of the treated vegetation. Thirty-six percent of all mangrove forest areas in South Vietnam were destroyed, and Viet Cong tunnel openings, caves, and above-ground shelters were revealed to the aircraft.

The herbicides were shipped in drums identified by an orange stripe and a contract identification number that enabled the government to identify the specific manufacturer. The drums were sent to a number of central transportation points for shipment to Vietnam. Agent Orange is contaminated by the chemical dioxin, specifically TCDD. In Vietnam, the dioxin concentration in Agent Orange varied from parts per billion (ppb) to parts per million (ppm), depending on each manufacturer’s production methods. The highest reported concentration in Agent Orange was 45 ppm. The Environmental Protection Agency (EPA) evacuated Times Beach, Missouri, when tests revealed soil samples there with two parts per billion of dioxin. The EPA has stated that one ppb is dangerous to humans.

Ten years after the spraying ended, the agricultural areas remained barren. Damaging amounts of dioxin stayed in the soil, thus infecting the food chain and exposing the Vietnamese people. As a result there is some concern that the high levels of TCDD are responsible for infant mortality, birth defects, and spontaneous abortions that occur in higher numbers in the once-sprayed areas of Vietnam. Another report indicates that thirty years after Agent Orange contaminated the area, there is 100 times as much dioxin in the bloodstreams of people living in the area as in those living in non-contaminated areas of Vietnam.
This is a result of the dioxin found in the soil of the once heavily sprayed land. The chemical is then passed on to humans through the food they eat. Consequently, dioxin is also spread to infants through the mother’s breast milk, which will undoubtedly affect the child’s development.

In 1991 Congress passed the Agent Orange Act (Public Law 102-4), which funded the extensive scientific study of the long-term health effects of Agent Orange and other herbicides used in Vietnam. As of early 2002, Agent Orange had been linked to the development of peripheral neuropathy, type II diabetes, prostate cancer, multiple myeloma, lymphomas, soft tissue sarcomas, and respiratory cancers. Researchers have also found a possible correlation between dioxin and the development of spina bifida, a birth defect, and childhood leukemia in offspring of exposed vets. It is important to acknowledge that the statistics do not


necessarily show a strong link between exposure to Agent Orange or TCDD and some of the conditions listed above. However, Vietnam veterans who were honorably discharged and have any of these “presumptive” conditions (i.e., conditions presumed caused by wartime exposure) are entitled to Veterans Administration (VA) health care benefits and disability compensation under federal law. Unfortunately, many Vietnamese civilians will not receive any benefits despite the evidence that they continue to suffer from the effects of Agent Orange. [Liane Clorfene Casten and Paula Anne Ford-Martin]

RESOURCES BOOKS Committee to Review the Health Effects in Vietnam Veterans of Exposure to Herbicides, Division of Health Promotion and Disease Prevention, Institute of Medicine. Veterans and Agent Orange: Update 2000.Washington, DC: National Academy Press, 2001.

PERIODICALS “Agent Orange Exposure Linked to Type 2 Diabetes.” Nation’s Health 30, no. 11 (December 2000/January 2001): 11. “Agent Orange Victims.” Earth Island Journal 17, no. 1 (Spring 2002): 15. Dreyfus, Robert. “Apocalypse Still.” Mother Jones (January/February 2000). Korn, Peter. “The Persisting Poison: Agent Orange in Vietnam.” The Nation 252, no. 13 (April 8, 1991): 440. Young, Emma. “Foul Fare.” New Scientist 170, no. 2292 (May 26, 2001): 13.

OTHER

U.S. Veterans Affairs (VA). Agent Orange [June 2002]. .

Agglomeration Any process by which a group of individual particles is clumped together into a single mass. The term has a number of specialized uses. Some types of rocks are formed by the agglomeration of particles of sand, clay, or some other material. In geology, an agglomerate is a rock composed of volcanic fragments. One technique for dealing with air pollution is ultrasonic agglomeration. A source of very high frequency sound is attached to a smokestack, and the ultrasound produced by this source causes tiny particulate matter in waste gases to agglomerate into particles large enough to be collected.

Agricultural chemicals

The term agricultural chemical refers to any substance involved in the growth or utilization of any plant or animal of economic importance to humans. An agricultural chemical may be a natural product, such as urea, or a synthetic chemical, such as DDT. The agricultural chemicals now in use

include fertilizers, pesticides, growth regulators, animal feed supplements, and raw materials for use in chemical processes.

In the broadest sense, agricultural chemicals can be divided into two large categories, those that promote the growth of a plant or animal and those that protect plants or animals. To the first group belong plant fertilizers and animal food supplements, and to the latter group belong pesticides, herbicides, animal vaccines, and antibiotics.

In order to stay healthy and grow normally, crops require a number of nutrients, some in relatively large quantities, called macronutrients, and others in relatively small quantities, called micronutrients. Nitrogen, phosphorus, and potassium are considered macronutrients; boron, calcium, chlorine, copper, iron, magnesium, and manganese, among others, are micronutrients. Farmers have long understood the importance of replenishing the soil, and they have traditionally done so by natural means, using such materials as manure, dead fish, or compost. Synthetic fertilizers were first available in the early twentieth century, but they became widely used only after World War II. By 1990 farmers in the United States were using about 20 million tons (20.4 million metric tons) of these fertilizers a year.

Synthetic fertilizers are designed to provide either a single nutrient or some combination of nutrients. Examples of single-component or “straight” fertilizers are urea (NH2CONH2), which supplies nitrogen, or potassium chloride (KCl), which supplies potassium. The composition of “mixed” fertilizers, those containing more than one nutrient, is indicated by the analysis printed on their container. An 8-10-12 fertilizer, for example, contains 8% nitrogen by weight, 10% phosphorus, and 12% potassium. Synthetic fertilizers can be designed to release nutrients almost immediately (“quick-acting”) or over longer periods of time (“time-release”).
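The grade arithmetic can be checked directly. The sketch below reads the numbers the way the entry describes them (percent of each nutrient by weight); the 50-pound bag size is a hypothetical example, not a figure from the entry.

```python
# Nutrient weights implied by a fertilizer grade, read as the entry
# describes it: an "8-10-12" fertilizer is 8% nitrogen, 10% phosphorus,
# and 12% potassium by weight. Bag size here is an assumed example.
def nutrient_weights(grade, bag_weight):
    n_pct, p_pct, k_pct = grade
    return {
        "nitrogen": bag_weight * n_pct / 100,
        "phosphorus": bag_weight * p_pct / 100,
        "potassium": bag_weight * k_pct / 100,
    }

print(nutrient_weights((8, 10, 12), 50))
# -> {'nitrogen': 4.0, 'phosphorus': 5.0, 'potassium': 6.0} (lb per 50 lb bag)
```

So a 50-pound bag of 8-10-12 supplies 4 pounds of nitrogen, 5 of phosphorus, and 6 of potassium; the remaining 35 pounds is carrier material.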
They may also contain specific amounts of one or more trace nutrients needed for particular types of crops or soil. Controlling micronutrients is one of the most important problems in fertilizer compounding and use; the presence of low concentrations of some elements can be critical to a plant’s health, while higher levels can be toxic to the same plants or to animals that ingest the micronutrient. Plant growth patterns can also be influenced by direct application of certain chemicals. For example, the gibberellins are a class of compounds that can dramatically affect the rate at which plants grow and fruits and vegetables ripen. They have been used for a variety of purposes ranging from the hastening of root development to the delay of fruit ripening. Delaying ripening is most important for marketing agricultural products because it extends the time a crop can be transported and stored on grocery shelves. Other kinds of chemicals used in the processing, transporting, and storage

of fruits and vegetables include those that slow down or speed up ripening (maleic hydrazide, ethylene oxide, potassium permanganate, ethylene, and acetylene are examples), that reduce weight loss (chlorophenoxyacetic acid, for example), that retain green color (cycloheximide), and that control firmness (ethylene oxide). The term agricultural chemical is most likely to bring to mind the range of chemicals used to protect plants against competing organisms: pesticides and herbicides. These chemicals disable or kill bacteria, fungi, rodents, worms, snails and slugs, insects, mites, algae, termites, or any other species of plant or animal that feeds upon, competes with, or otherwise interferes with the growth of crops. Such chemicals are named according to the organism against which they are designed to act. Some examples are fungicides (designed to kill fungi), insecticides (used against insects), nematicides (to kill roundworms), avicides (to control birds), and herbicides (to combat plants). In 1990, 393 million tons of herbicides, 64 million tons of insecticides, and 8 million tons of other pesticides were used on American farmlands. The introduction of synthetic pesticides in the years following World War II produced spectacular benefits for farmers. More than 50 major new products appeared between 1947 and 1967, resulting in yield increases in the United States ranging from 400% for corn to 150% for sorghum and 100% for wheat and soybeans. Similar increases in less developed countries, resulting from the use of both synthetic fertilizers and pesticides, eventually became known as the Green Revolution. By the 1970s, however, the environmental consequences of using synthetic pesticides had become obvious. Chemicals were becoming less effective as pests developed resistance to them, and their toxic effects on other organisms had grown more apparent.
Farmers were also discovering drawbacks to chemical fertilizers as they found that they had to use larger and larger quantities each year in order to maintain crop yields. One solution to the environmental hazards posed by synthetic pesticides is the use of natural chemicals such as juvenile hormones, sex attractants, and anti-feedant compounds. The development of such natural pest-control materials has, however, been relatively modest; the vast majority of agricultural companies and individual farmers continue to use synthetic chemicals that have served them so well for over a half century. Chemicals are also used to maintain and protect livestock. At one time, farm animals were fed almost exclusively on readily available natural foods. They grazed on rangelands or were fed hay or other grasses. Today, carefully blended chemical supplements are commonly added to the diet of most farm animals. These supplements have been determined on the basis of extensive studies of the nutrients that contribute to the growth or milk production of cows,


sheep, goats, and other types of livestock. A typical animal supplement diet consists of various vitamins, minerals, amino acids, and nonprotein (simple) nitrogen compounds. The precise formulation depends primarily on the species; a vitamin supplement for cattle, for example, tends to include vitamins A, D, and E, while swine and poultry diets would also contain vitamin K, riboflavin, niacin, pantothenic acid, and choline. A number of chemicals added to animal feed serve no nutritional purpose but provide other benefits. For example, the addition of certain hormones to the feed of dairy cows can significantly increase their output of milk. Genetic engineering is also becoming increasingly important in the modification of crops and livestock. Cows injected with a genetically engineered hormone, bovine somatotropin, produce a significantly larger quantity of milk. It is estimated that infectious diseases cause the death of 15–20% of all farm animals each year. Just as plants are protected from pests by pesticides, so livestock are protected from disease organisms by immunization, antibiotics, and other techniques. Animals are vaccinated against species-specific diseases, and farmers administer antibiotics, sulfonamides, nitrofurans, arsenicals, and other chemicals that protect against disease-causing organisms. The use of chemicals with livestock can have deleterious effects, just as crop chemicals can. In the 1960s, for example, the hormone diethylstilbestrol (DES) was widely used to stimulate the growth of cattle, but scientists found that detectable residues of the hormone remained in meat sold from the slaughtered animals. DES is now considered a carcinogen, and the U.S. Food and Drug Administration banned its use in cattle feed in 1979. [David E. Newton]

RESOURCES BOOKS Benning, L. E. Beneath the Bottom Line: Agricultural Approaches to Reduce Agrichemical Contamination of Groundwater. Washington, DC: Office of Technology Assessment, 1990. ———, and J. H. Montgomery. Agrochemicals Desk Reference: Environmental Data. Boca Raton, FL: Lewis, 1993. ———, and T. E. Waddell. Managing Agricultural Chemicals in the Environment: The Case for a Multimedia Approach. Washington, DC: Conservation Foundation, 1988. Chemistry and the Food System, A Study by the Committee on Chemistry and Public Affairs of the American Chemical Society. Washington, DC: American Chemical Society, 1980.

Agricultural environmental management

The complex interaction of agriculture and the environment has been an issue since humans first began farming. Humans grow


food to eat and also hunt animals, both of which depend on healthy natural habitats. Therefore, the world's human population must balance farming activities with maintaining natural resources. The term agriculture originally meant the act of cultivating fields or growing crops, but it has expanded to include raising livestock as well. When early settlers began farming and ranching in the United States, they faced pristine wilderness and open prairies. There was little cause for concern about protecting the environment, and for two centuries the country's land and water were aggressively used to supply Americans with ample food. In fact, many American families settled in rural areas and made a living as farmers and ranchers, passing the family business down through generations. By the 1930s, the federal government began requiring farmers to idle certain acres of land to prevent oversupply of food and to protect exhausted soil. Since that time, agriculture has become a complex science, as farmers must carefully manage soil and water to lessen the risk of degrading the soil and its surrounding environment or of depleting the water tables beneath the land's surface. Farming and ranching thus present several environmental challenges that require careful management by farmers and by the local and federal regulatory agencies that guide their activities. The science of applying principles of ecology to agriculture is called agroecology. Those involved in agroecology develop farming methods that use fewer synthetic (man-made) pesticides and fertilizers and encourage organic farming. They also work to conserve energy and water. Soil erosion, conversion of land to agricultural use, introduction of fertilizers and pesticides, animal wastes, and irrigation are all parts of farming that can lead to changes in the quality or availability of water. An expanding human population has led to increased farming and accelerated soil erosion.
When soil has a low capacity to retain water, farmers must pump groundwater up and spray it over crops. After years of doing so, the local water table will eventually fall, which can affect native vegetation in the area. The industry calls this balance between environmental protection and agricultural production sustainability or sustainable development. In some parts of the world, such as the High Plains of the United States or parts of Saudi Arabia, populations and agriculture are depleting water aquifers faster than the natural environment can replenish them. Sustainable development involves dedicated, scientifically based plans to ensure that agricultural activity is managed in such a way that aquifers are not prematurely depleted. Agroforestry is a method of cultivating both crops and trees on the same land. Between rows of trees, farmers plant agricultural crops that generate income during the time it takes the trees to grow mature enough to produce earnings from nuts or lumber.

Increased modernization of agriculture also affects the environment. Traditional farming practice, which continues in underdeveloped countries today, consists of subsistence agriculture. In subsistence farming, just enough crops and livestock are raised to meet the needs of a particular family. Today, however, large farms produce food for huge populations. More than half of the world's working population is employed by some agricultural or agriculturally associated industry. Almost 40% of the world's land area is devoted to agriculture (including permanent pasture). The growing use of machines, pesticides, and man-made fertilizers has seriously affected the environment. For example, the use of pesticides such as DDT in the 1960s was identified as a cause of death among certain species of birds. Most western countries banned use of these pesticides, and the bird populations soon recovered. Today, use of pesticides is strictly regulated in the United States. Farming has many more subtle effects on the environment as well. When grasslands, wetlands, or forests are converted to crops, and when crops are not rotated, the land eventually changes to the point that entire species of plants and animals can become threatened. Urbanization also encroaches on farmland and cuts the amount of land available for farming. Throughout the world, countries and organizations develop strategies to protect the environment, natural habitats, and resources while still supplying the food their populations require. In 1992, the United Nations Conference on Environment and Development in Rio de Janeiro focused on how to sustain the world's natural resources while balancing sound policies on environment and community vitality. In the United States, the Department of Agriculture has published its own policy on sustainable development, which works toward balancing economic, environmental, and social needs concerning agriculture.
In 1993, an Executive Order formed the President's Council on Sustainable Development (PCSD) to develop new approaches to achieving economic and environmental goals for public policy in agriculture. Its guiding principles include sections on agriculture, forestry, and rural community development. According to the United States Environmental Protection Agency (EPA), Agricultural Environmental Management (AEM) is one of the most innovative programs in New York State. The program began in June 2000, when Governor George Pataki introduced legislation to the state's Senate and Assembly proposing a partnership to promote farming's good stewardship of land and to provide funding and support for farmers' efforts. The bill was passed and signed into law by the governor on August 24, 2000. The purpose of the law is to help farmers develop agricultural environmental management plans that control agricultural pollution and comply with federal, state, and local regulations on use of land, water quality, and other environmental concerns. New York's AEM program brings together agencies from state, local, and federal governments, conservation representatives, businesses from the private sector, and farmers. The program is voluntary and offers education, technical assistance, and financial incentives to farmers who participate. An example of a successful AEM project occurred at a dairy farm in central New York. The farm composted its animals' solid wastes, which reduced the amount of waste spread on the fields and, in turn, reduced pollution in the local watershed. The New York State Department of Agriculture and Markets oversees the program. It begins when a farmer expresses interest in AEM. The farmer then completes a series of five tiers of the program. In Tier I, the farmer completes a short questionnaire that surveys current farming activities and future plans to identify potential environmental concerns. Tier II involves worksheets that document current activities that promote stewardship of the environment and help prioritize any environmental concerns. In Tier III, a conservation plan is developed that is tailored specifically to the individual farm; the farmer works together with an AEM coordinator and several members of the cooperating agency staff. Under Tier IV of the AEM program, agricultural agencies and consultants provide the farmer with educational, technical, and financial assistance to implement best management practices for preventing pollution of water bodies in the farm's area. The plans use Natural Resources Conservation Service standards and guidance from cooperating professional engineers. Finally, farmers in the AEM program receive ongoing evaluations to ensure that the plan they have devised helps protect the environment and also ensures the viability of the farm business.
Funding for the AEM program comes from a variety of sources, including New York’s Clean Water/Clean Air Bond Act and the State Environmental Protection Fund. Local Soil and Water Conservation Districts (SWCDs) also partner in the effort, and farmers can access funds through these districts. The EPA says involvement of the SWCDs has likely been a positive factor in farmers’ acceptance of the program. Though New York is perceived as mostly urban, agriculture is a huge business in the state. The AEM program serves important environmental functions and helps keep New York State’s farms economically viable. More than 7,000 farms participate in the program. [Teresa G. Norris]

RESOURCES BOOKS Calow, Peter. The Encyclopedia of Ecology and Environmental Management. Malden, MA: Blackwell Science, Inc., 1998.

PERIODICALS Ervin, D. E., et al. "Agriculture and Environment: A New Strategic Vision." Environment 40, no. 6 (July-August 1998): 8.

ORGANIZATIONS New York State Department of Agriculture and Markets, 1 Winners Circle, Albany, NY USA 12235 (518) 457-3738, Fax: (518) 457-3412, Email: [email protected], http://www.agmkt.state.ny.us Sustainable Development, USA, United States Department of Agriculture, 14th and Independence SW, Washington, DC USA 20250 (202) 720-5447, Email: [email protected], http://www.usda.gov

Agricultural pollution

The development of modern agricultural practices is one of the great success stories of applied science. Improved plowing techniques, new pesticides and fertilizers, and better strains of crops are among the factors that have resulted in significant increases in agricultural productivity. Yet these improvements have not come without cost to the environment, and sometimes to human health. Modern agricultural practices have contributed to the pollution of air, water, and land. Air pollution may be the most memorable, if not the most significant, of these consequences. During the 1920s and 1930s, huge amounts of fertile topsoil were blown away across vast stretches of the Great Plains, an area that eventually became known as the Dust Bowl. The problem occurred because farmers either did not know about or chose not to use techniques for protecting and conserving their soil. The soil then blew away during droughts, resulting not only in the loss of valuable farmland but also in the pollution of the surrounding atmosphere. Soil conservation techniques developed rapidly in the 1930s, including contour plowing, strip cropping, crop rotation, windbreaks, and minimum- or no-tillage farming, and these greatly reduced the possibility of erosion on such a scale. Such events have continued to occur, however, though less dramatically, and in recent decades they have presented new problems. When topsoil is blown away by winds today, it can carry with it the pesticides, herbicides, and other crop chemicals now so widely used. In the worst cases, these chemicals have contributed to the collection of air pollutants that endanger the health of plants and animals, including humans. Ammonia, released from the decay of fertilizers, is one example of a compound that may cause minor irritation to the human respiratory system and more serious damage to the health of other animals and plants.
A more serious type of agricultural pollution is the solid waste problem resulting from farming and livestock practices. Authorities estimate that slightly over half of all the solid wastes produced in the United States each year, a total of about 2 billion tons (1.8 billion metric tons), come



from a variety of agricultural activities. Some of these wastes pose little or no threat to the environment. Crop residue left on cultivated fields and animal manure produced on rangelands, for example, eventually decay, returning valuable nutrients to the soil. Some modern methods of livestock management, however, tend to increase the risks posed by animal wastes. Farmers are raising a larger variety of animals, as well as larger numbers of them, in smaller and smaller areas such as feedlots or huge barns. In such cases, large volumes of wastes are generated in these areas. Many livestock managers attempt to sell these waste products or dispose of them in a way that poses no threat to the environment. Yet in many cases the wastes are allowed to accumulate in massive dumps where soluble materials are leached out by rain. Some of these materials then find their way into groundwater or surface water, such as lakes and rivers. Some are harmless to the health of animals, though they may contribute to the eutrophication of lakes and ponds. Other materials, however, may have toxic, carcinogenic, or genetic effects on humans and other animals. The leaching of hazardous materials from animal waste dumps contributes to perhaps the most serious form of agricultural pollution: the contamination of water supplies. Many of the chemicals used in agriculture today can be harmful to plants and animals. Pesticides and herbicides are the most obvious of these; used by farmers to disable or kill plant and animal pests, they may also cause problems for beneficial plants and animals as well as humans. Runoff from agricultural land is another serious environmental problem posed by modern agricultural practices. Runoff constitutes a nonpoint source of pollution. Rainfall leaches out and washes away pesticides, fertilizers, and other agricultural chemicals from a widespread area, not a single source such as a sewer pipe. 
Maintaining control over nonpoint sources of pollution is an especially difficult challenge. In addition, agricultural land is more easily leached than non-agricultural land. When lands are plowed, the earth is broken up into smaller pieces, and the finer the soil particles, the more easily they are carried away by rain. Studies have shown that the nitrogen and phosphorus in chemical fertilizers are leached out of croplands at a rate about five times higher than from forest woodlands or idle lands. The accumulation of nitrogen and phosphorus in waterways from chemical fertilizers has contributed to the acceleration of eutrophication of lakes and ponds. Scientists believe that the addition of human-made chemicals such as those in chemical fertilizers can increase the rate of eutrophication by a factor of at least 10. A more deadly effect is the poisoning of plants and animals by toxic chemicals leached off farmlands. The biological effects of such chemicals are commonly magnified many times as they move up a food

chain/web. The best known example of this phenomenon involved a host of biological problems—from reduced rates of reproduction to malformed animals to increased rates of death—attributed to the use of DDT in the 1950s and 1960s. Sedimentation also results from the high rate of erosion on cultivated land, and increased sedimentation of waterways poses its own set of environmental problems. Some of these are little more than cosmetic annoyances. For example, lakes and rivers may become murky and less attractive, losing potential as recreation sites. However, sedimentation can block navigation channels, and other problems may have fatal results for organisms. Aquatic plants may become covered with sediments and die; marine animals may take in sediments and be killed; and cloudiness from sediments may reduce the amount of sunlight received by aquatic plants so extensively that they can no longer survive. Environmental scientists are especially concerned about the effects of agricultural pollution on groundwater. Groundwater is polluted by much the same mechanisms as is surface water, and evidence for that pollution has accumulated rapidly in the past decade. Groundwater pollution tends to persist for long periods of time. Water flows through an aquifer much more slowly than it does through a river, and agricultural chemicals are not flushed out quickly. Many solutions are available for the problems posed by agricultural pollution, but many of them are not easily implemented. Chemicals that are found to have serious toxic effects on plants and animals can be banned from use, such as DDT in the 1970s, but this kind of decision is seldom easy. Regulators must always assess the relative benefit of using a chemical, such as increased crop yields, against its environmental risks. 
Such a risk-benefit analysis means that some chemicals known to have certain deleterious environmental effects remain in use because of the harm that would be done to agriculture if they were banned. Another way of reducing agricultural pollution is to implement better farming techniques. In the practice of minimum- or no-tillage farming, for example, plowing is reduced or eliminated entirely. The ground is left essentially intact, reducing the rate at which soil and the chemicals it contains are eroded away.
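The "magnified many times" effect of chemicals moving up a food chain, mentioned above, can be sketched numerically. The per-level magnification factor, the starting concentration, and the function name below are hypothetical illustrations, not data from the encyclopedia:

```python
# Biomagnification sketch: a persistent chemical's concentration is
# multiplied at each trophic transfer, because each predator
# concentrates the residues accumulated by its prey.

def concentration_at_level(base_ppm, factor_per_level, levels):
    """Concentration after `levels` trophic transfers, each of which
    multiplies the previous level's concentration by `factor_per_level`."""
    return base_ppm * factor_per_level ** levels

# Hypothetical example: water at 0.001 ppm, a tenfold concentration at
# each of four levels (plankton -> small fish -> large fish -> bird):
print(concentration_at_level(0.001, 10, 4))  # about 10 ppm
```

Even a modest per-level factor compounds geometrically, which is why top predators such as fish-eating birds were the organisms most visibly harmed by DDT.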

[David E. Newton]

RESOURCES BOOKS Benning, L. E. Agriculture and Water Quality: International Perspectives. Boulder, CO: L. Rienner, 1990. ———, and L. W. Canter. Environmental Impacts of Agricultural Production Activities. Chelsea, MI: Lewis, 1986.

———, and M. W. Fox. Agricide: The Hidden Crisis That Affects Us All. New York: Schocken Books, 1986. Crosson, P. R. Implementation Policies and Strategies for Agricultural NonPoint Pollution. Washington, DC: Resources for the Future, 1985.

Agricultural Research Service

A branch of the U.S. Department of Agriculture charged with responsibility for agricultural research on a regional or national basis. The mission of the Agricultural Research Service (ARS) is to develop the new knowledge and technology needed to solve agricultural problems of broad scope and high national priority in order to ensure an adequate production of high-quality food and agricultural products for the United States. The national research center of the ARS is located at Beltsville, Maryland, and consists of laboratories, land, and other facilities. In addition, there are many other research centers located throughout the United States, such as the U.S. Dairy/Forage Research Center at Madison, Wisconsin. Scientists of the ARS are also located at land-grant universities throughout the country, where they conduct cooperative research with state scientists. RESOURCES ORGANIZATIONS Beltsville Agricultural Research Center, Rm. 223, Bldg. 003, BARC-West, 10300 Baltimore Avenue, Beltsville, MD USA 20705

Agricultural revolution

The development of agriculture has been a fundamental part of the march of civilization. It is an ongoing challenge, for as long as population growth continues, mankind will need to improve agricultural production. The agricultural revolution is actually a series of four major advances, closely linked with other key historical periods. The first, the Neolithic or New Stone Age, marks the beginning of sedentary (settled) farming. Much of this history is lost in antiquity, dating back perhaps 10,000 years or more. Still, humans owe an enormous debt to those early pioneers who so painstakingly nurtured the best of each year's crop. Archaeologists have found corn cobs a mere 2 in (5.1 cm) long, so different from today's giant ears. The second major advance came as a result of Christopher Columbus' voyages to the New World. Isolation had fostered the development of two completely independent agricultural systems in the New and Old Worlds. A short list of interchanged crops and animals clearly illustrates the global magnitude of this event; furthermore, the current population explosion began its upswing during this period. From the New World came maize, beans, the "Irish" potato,


squash, peanuts, tomatoes, and tobacco. From the Old World came wheat, rice, coffee, cattle, horses, sheep, and goats. Maize is now a staple food in Africa. Several Indian tribes in America adopted new lifestyles, notably the Navajo as sheepherders and the Cheyenne as nomads using the horse to hunt buffalo. The Industrial Revolution both contributed to and was nourished by agriculture. The greatest agricultural advances came in transportation, where first canals, then railroads and steamships made possible the shipment of food from areas of surplus. This in turn allowed more specialization and productivity, but most importantly, it reduced the threat of starvation. The steamship ultimately brought refrigerated meat to Europe from distant Argentina and Australia. Without these massive increases in food shipments, the exploding populations and greatly increased demand for labor by newly emerging industries could not have been sustained. In turn, the Industrial Revolution introduced major advances in farm technology, such as the cotton gin, the mechanical reaper, improved plows, and, in the twentieth century, tractors and trucks. These advances enabled fewer and fewer farmers to feed larger and larger populations, freeing workers to fill demands for factory labor and the growing service industries. Finally, agriculture has fully participated in the scientific advances of the twentieth century. Key developments include hybrid corn, the high-yielding varieties bred for tropical lands that drove the "Green Revolution," and current genetic research. Agriculture has benefited enormously from scientific advances in biology, and the future is bright for applied research, especially involving genetics. Great potential exists for the development of crop strains with greatly improved dietary characteristics, such as higher protein or reduced fat. Growing populations, made possible by these food surpluses, have forced agricultural expansion onto less and less desirable lands.
Because agriculture radically simplifies ecosystems and greatly amplifies soil erosion, many areas such as the Mediterranean Basin and tropical forest lands have suffered severe degradation. Major developments in civilization are directly linked to the agricultural revolution. A sedentary lifestyle, essential to technological development, was both mandated and made possible by farming. Urbanization flourished, which encouraged specialization and division of labor. Large populations provided the energy for massive projects, such as the Egyptian pyramids and the colossal engineering efforts of the Romans. The plow represented the first lever, both lifting and overturning the soil. The draft animal provided the first in a long line of nonhuman energy sources. The selective breeding of plants and animals was likely the first application of science and technology toward specific goals. A number of important crops



bear little resemblance to the ancestors from which they were derived. Animals such as the fat-tailed sheep represent thoughtful cultural control of their lineage. Climate dominates agriculture, second only to irrigation. Farmers are especially vulnerable to variations, such as late or early frosts, heavy rains, or drought. Rice, wheat, and maize have become the dominant crops globally because of their high caloric yield, versatility within their climate range, and their cultural status as the “staff of life.” Many would not consider a meal complete without rice, bread, or tortillas. This cultural influence is so strong that even starving peoples have rejected unfamiliar food. China provides a good example of such cultural differences, with a rice culture in the south and a wheat culture (noodles) in the north. These crops all need a wet season for germination and growth, followed by a dry season to allow spoilage-free storage. Rice was domesticated in the monsoonal lands of Southeast Asia, while wheat originated in the Fertile Crescent of the Middle East. Historically, wheat was planted in the fall, and harvested in late spring, coinciding with the cycle of wet and dry seasons in the Mediterranean region. Maize needs the heavy summer rains provided by the Mexican highland climate. Other crops predominate in areas with less suitable climates. These include barley in semiarid lands; oats and potatoes in cool, moist lands; rye in colder climates with short growing seasons; and dry rice on hillsides and drier lands where paddy rice is impractical. Although food production is the main emphasis in agriculture, more and more industrial applications have evolved. Cloth fibers have been a mainstay, but paper products and many chemicals now come from cultivated plants. The agricultural revolution is also associated with some of mankind’s darker moments. In the tropical and subtropical climates of the New World, slave labor was extensive. 
Close, unsanitary living conditions have fostered plagues of biblical proportions. And the desperate dependence on agriculture is all too vividly evident in the records of historic and contemporary famine. Humanity is never more than one harvest away from global starvation, a fact amplified by the growing understanding of cosmic catastrophes. Some argue that the agricultural revolution masks the growing hazards of an overpopulated, increasingly contaminated earth. Because the agricultural revolution has been so productive, it has more than compensated for the population explosion of the last two centuries. Some, appropriately labeled "cornucopians," believe there is yet much potential for increased food production, especially through scientific agriculture and genetic engineering. There is much room for optimism, and also for a sobering assessment of the

environmental costs of agricultural progress. We must continually strive for answers to the challenges associated with the agricultural revolution. [Nathan H. Meleen]

RESOURCES BOOKS Anderson, E. “Man as a Maker of New Plants and New Plant Communities.” In Man’s Role in Changing the Face of the Earth, edited by W. L. Thomas Jr. Chicago: The University of Chicago Press, 1956. Doyle, J. Altered Harvest: Agriculture, Genetics, and the Fate of the World’s Food Supply. New York: Penguin, 1985. Gliessman, S. R., ed. Agroecology: Researching the Ecological Basis for Sustainable Agriculture. New York: Springer-Verlag, 1990. Jackson, R. H., and L. E. Hudman. Cultural Geography: People, Places, and Environment. St. Paul, MN: West, 1990. Narr, K. J. “Early Food-Producing Populations.” In Man’s Role in Changing the Face of the Earth, edited by W. L. Thomas, Jr. Chicago: The University of Chicago Press, 1956. Simpson, L. B. “The Tyrant: Maize.” In The Cultural Landscape, edited by C. Salter. Belmont, CA: Wadsworth, 1971.

PERIODICALS Crosson, P. R., and N. J. Rosenberg. “Strategies for Agriculture.” Scientific American 261 (September 1989): 128–32+.

Agricultural Stabilization and Conservation Service

For the past half century, agriculture in the United States has faced the somewhat unusual and enviable problem of overproduction. Farmers have produced more food than United States citizens can consume, and, as a result, per capita farm income has decreased as the volume of crops has increased. To help solve this problem, the Secretary of Agriculture established the Agricultural Stabilization and Conservation Service on June 5, 1961. The purpose of the service is to administer commodity and land-use programs designed to control production and to stabilize market prices and farm income. The service operates through state committees of three to five members each and committees consisting of three farmers in approximately 3,080 agricultural counties in the nation.

RESOURCES
ORGANIZATIONS
Agricultural Stabilization and Conservation Service, 10500 Buena Vista Court, Urbandale, IA USA 50322-3782. (515) 254-1540, Fax: (515) 254-1573.

Agriculture and energy conservation see Environmental engineering

Agriculture, drainage see Runoff

Agriculture, sustainable see Sustainable agriculture

Agroecology

Agroecology is an interdisciplinary field of study that applies ecological principles to the design and management of agricultural systems. Agroecology concentrates on the relationship of agriculture to the biological, economic, political, and social systems of the world. The combination of agriculture with ecological principles such as biogeochemical cycles, energy conservation, and biodiversity has led to practical applications that benefit the whole ecosystem rather than just an individual crop. For instance, research into integrated pest management has developed ways to reduce reliance on pesticides. Such methods include biological or biotechnological controls such as genetic engineering, cultural controls such as changes in planting patterns, physical controls such as quarantines to prevent entry of new pests, and mechanical controls such as physically removing weeds or pests.

Sustainable agriculture is another goal of agroecological research. Sustainable agriculture views farming as a total system and stresses the long-term conservation of resources. It balances the human need for food with concerns for the environment and maintains that agriculture can be carried on without reliance on pesticides and fertilizers. Agroecology advocates the use of biological controls rather than pesticides to minimize agricultural damage from insects and weeds. Biological controls use natural enemies to control weeds and pests, such as ladybugs that kill aphids. Biological controls include the disruption of the reproductive cycles of pests and the introduction of more biologically diverse organisms to inhibit overpopulation of different agricultural pests. Agroecological principles shift the focus of agriculture from food production alone to wider concerns, such as environmental quality, food safety, the quality of rural life, humane treatment of livestock, and conservation of air, soil, and water.

Agroecology also studies how agricultural processes and technologies will be affected by wider environmental problems such as global warming, desertification, or salinization. The entire world population depends on agriculture, and as the number of people continues to grow, agroecology is becoming more important, particularly in developing countries. Agriculture is the largest economic activity in the world, and in areas such as sub-Saharan Africa about 75% of the population is involved in some form of it. As population pressures on the world food supply increase, the application of agroecological principles is expected to stem the ecological consequences of traditional agricultural practices such as pesticide poisoning and erosion. [Linda Rehkopf]

RESOURCES
BOOKS
Altieri, M. A. Agroecology: The Scientific Basis of Alternative Agriculture. Boulder, CO: Westview Press, 1987.
Carroll, D. R. Agroecology. New York: McGraw-Hill, 1990.
Gliessman, S. R., ed. Agroecology. New York: Springer-Verlag, 1991.

PERIODICALS
Norse, D. "A New Strategy for Feeding a Crowded Planet." Environment 34 (June 1992): 6–19.

Agroforestry

Agroforestry is a land use system in which woody perennials (trees, shrubs, vines, palms, bamboo, etc.) are intentionally combined on the same land management unit with crops and sometimes animals, either in a spatial arrangement or a temporal sequence. It is based on the premise that woody perennials in the landscape can enhance the productivity and sustainability of agricultural practice. The approach is especially pertinent in tropical and subtropical areas where improper land management and intensive, continuous cropping of land have led to widespread devastation. Agroforestry recognizes the need for an alternative agricultural system that will preserve and sustain productivity. The need for both food and forest products has led to an interest in techniques that combine production of both in a manner that can halt and may even reverse the ruin caused by existing practices.

Although the term agroforestry has come into widespread use only in the last 20–25 years, environmentally sound farming methods similar to those now proposed have been known and practiced in some tropical and subtropical areas for many years. As an example, one type of intercropping found on small rubber plantations (less than 25 acres/10 ha) in Malaysia, Thailand, Nigeria, India, and Sri Lanka involves rubber plants intermixed with fruit trees, pepper, coconuts, and arable crops such as soybeans, corn, banana, and groundnut. Poultry may also be included. Unfortunately, in other areas the pressures caused by expanding human and animal populations have led to increased use of destructive farming practices. In the process, inhabitants have further reduced their ability to provide basic food, fiber, fuel, and timber needs and contributed to even more environmental degradation and loss of soil fertility.

The successful introduction of agroforestry practices in problem areas requires the cooperative efforts of experts from a variety of disciplines. Along with specialists in forestry, agriculture, meteorology, ecology, and related fields, it is often necessary to enlist the help of those familiar with local culture and heritage to explain new methods and their advantages. Usually, techniques must be adapted to local circumstances, and research and testing are required to develop viable systems for a particular setting. Intercropping combinations that work well in one location may not be appropriate for sites only a short distance away because of important meteorological or ecological differences.

Despite apparent difficulties, agroforestry has great appeal as a means of arresting problems with deforestation and declining agricultural yields in warmer climates. The practice is expected to grow significantly in the next several decades. Some areas of special interest include intercropping with coconuts as the woody component, and mixing tree legumes with annual crops. Agroforestry does not lend itself to mechanization as easily as the large-scale grain, soybean, and vegetable cropping systems used in industrialized nations, because practices for each site are individualized and usually labor-intensive. For these reasons, it has had less appeal in areas like the United States and Europe. Nevertheless, temperate-zone applications have been developed or are under development. Examples include small-scale organic gardening and farming, mining wasteland reclamation, and biomass energy crop production on marginal land. [Douglas C. Pratt]

RESOURCES
BOOKS
Huxley, P. A., ed. Plant Research and Agroforestry. Edinburgh, Scotland: Pillans & Wilson, 1983.
Reifsnyder, W. S., and T. O. Darnhofer, eds. Meteorology and Agroforestry. Nairobi, Kenya: International Council for Research in Agroforestry, 1989.
Zulberti, E., ed. Professional Education in Agroforestry. Nairobi, Kenya: International Council for Research in Agroforestry, 1987.

AIDS

AIDS (acquired immune deficiency syndrome) is an infectious and fatal disease of apparently recent origin. AIDS is pandemic, which means that it is worldwide in distribution. A sufficient understanding of AIDS can be gained only by examining its causation (etiology), symptoms, treatments, and the risk factors for transmitting and contracting the disease.

AIDS occurs as a result of infection with HIV (human immunodeficiency virus). HIV is a ribonucleic acid (RNA) virus that targets and kills special blood cells, known as helper T-lymphocytes, which are important in immune protection. Depletion of helper T-lymphocytes leaves the AIDS victim with a disabled immune system and at risk for infection by organisms that ordinarily pose no special hazard to the individual. Infection by these organisms is thus opportunistic and is frequently fatal.

The initial infection with HIV may entail no symptoms at all or relatively benign symptoms of short duration that may mimic infectious mononucleosis. This initial period is followed by a longer period (from a few to as many as 10 years) when the infected person is in apparent good health. The HIV-infected person, despite the outward image of good health, is in fact contagious, and appropriate care must be exercised to prevent spread of the virus at this time. Eventually the effects of the depletion of helper T cells become manifest. Symptoms include weight loss, persistent cough, persistent colds, diarrhea, periodic fever, weakness, fatigue, enlarged lymph nodes, and malaise. Following this, the AIDS patient becomes vulnerable to chronic infections by opportunistic pathogens. These include, but are not limited to, oral yeast infection (thrush), pneumonia caused by the fungus Pneumocystis carinii, and infection by several kinds of herpes viruses. The AIDS patient is also vulnerable to Kaposi's sarcoma, a cancer seldom seen except in individuals with depressed immune systems. Death of the AIDS patient may be accompanied by confusion, dementia, and coma.

There is no cure for AIDS. Opportunistic infections are treated with antibiotics, and drugs such as AZT (azidothymidine), which slow the progress of the HIV infection, are available. But viral diseases in general, including AIDS, do not respond well to antibiotics.
Vaccines, however, can provide protection against viral diseases. Research to find a vaccine for AIDS has not yet yielded satisfactory results, but scientists have been encouraged by the development of a vaccine for feline leukemia, a viral disease that has similarities to AIDS. Unfortunately, this does not provide hope of a cure for those already infected with HIV.

Prevention is crucial for a lethal disease with no cure. Thus, modes of transmission must be identified and avoided. Everyone is at risk: males constitute 52% and females 48% of the infected population. As of 2002, about 40 million people were infected with HIV or AIDS, and this number is expected to grow to 62 million by 2005. In the United States alone, 40,000 new cases are diagnosed each year. AIDS cases in heterosexual males and women are on the increase, and no sexually active person can be considered "safe" from AIDS any longer. Therefore, everyone who is sexually active should be aware of the principal modes of transmission of HIV—infected blood, semen from the male and genital tract secretions of the female—and use appropriate means to prevent exposure. While the virus has been identified in tears, saliva, and breast milk, transmission by exposure to these substances appears to be significantly less likely. [Robert G. McKinnell]

RESOURCES
BOOKS
Alcamo, I. E. AIDS, the Biological Basis. Dubuque, IA: William C. Brown, 1993.
Fan, H., R. F. Connor, and L. P. Villarreal. The Biology of AIDS. 2nd ed. Boston: Jones and Bartlett, 1991.
Stine, Gerald J. AIDS Update 2002. Prentice Hall, 2001.

Ailuropoda melanoleuca see Giant panda

Air and Waste Management Association

Founded in 1907 as the International Association for the Prevention of Smoke, this group changed its name several times as the interests of its members changed, becoming the Air and Waste Management Association (A&WMA) in the late 1980s. Although an international organization for environment professionals in more than 50 countries, the association is most active in North America and most concerned with North American environmental issues. Among its main concerns are air pollution control, environmental management, and waste processing and control.

A nonprofit organization that promotes the basic need for a clean environment, A&WMA seeks to educate the public and private sectors of the world by conducting seminars, holding workshops and conferences, and offering continuing education programs for environmental professionals in the areas of pollution control and waste management. One of its main goals is to provide "a neutral forum where all viewpoints of an environmental management issue (technical, scientific, economic, social, political and public health) receive equal consideration." Approximately 10–12 specialty conferences are held annually, as well as five or six workshops. The topics continuously revolve and change as new issues arise.

Education is so important to A&WMA that it funds a scholarship for graduate students pursuing careers in fields related to waste management and pollution control. Although A&WMA members are all professionals, they seek to educate even the very young by sponsoring essay contests, science fairs, and community activities, and by volunteering to speak to elementary, middle school, and high school audiences on environmental management topics. The association's 12,000 members, all of whom are volunteers, are involved in virtually every aspect of every A&WMA project. There are 21 association sections across the United States, facilitating meetings at regional and even local levels to discuss important issues. Training seminars are an important part of A&WMA membership, and members are taught the skills necessary to run public outreach programs designed for students of all ages and the general public.

A&WMA's publications deal primarily with air pollution and waste management, and include the Journal of the Air & Waste Management Association, a scientific monthly; a bimonthly newsletter; a wide variety of technical books; and numerous training manuals and educational videotapes. [Cathy M. Falk]

RESOURCES
ORGANIZATIONS
Air & Waste Management Association, 420 Fort Duquesne Blvd, One Gateway Center, Pittsburgh, PA USA 15222. (412) 232-3444, Fax: (412) 232-3450, Email: [email protected]

Air pollution

Air pollution is a general term that covers a broad range of contaminants in the atmosphere. Pollution can occur from natural causes or from human activities. Discussions about the effects of air pollution have focused mainly on human health, but attention is being directed to environmental quality and amenity as well. Air pollutants are found as gases or particles, and on a restricted scale they can be trapped inside buildings as indoor air pollutants. Urban air pollution has long been an important concern for civic administrators, but increasingly, air pollution has become an international problem.

The most characteristic sources of air pollution have always been combustion processes. Here the most obvious pollutant is smoke. However, the widespread use of fossil fuels has made sulfur and nitrogen oxides pollutants of great concern. With increasing use of petroleum-based fuels, a range of organic compounds have become widespread in the atmosphere.

In urban areas, air pollution has been a matter of concern since historical times. Indeed, there were complaints about smoke in ancient Rome. The use of coal throughout the centuries has caused cities to be very smoky places. Along with smoke, large concentrations of sulfur dioxide were produced. It was this mixture of smoke and sulfur dioxide that typified the foggy streets of Victorian London, paced by such figures as Sherlock Holmes and Jack the Ripper, whose images remain linked with smoke and fog. Such situations are far less common in the cities of North America and Europe today. However, until recently, they have been evident in other cities, such as Ankara, Turkey, and Shanghai, China, that rely heavily on coal.

Coal is still burnt in large quantities to produce electricity or to refine metals, but these processes are frequently undertaken outside cities. Within urban areas, fuel use has shifted towards liquid and gaseous hydrocarbons (petrol and natural gas). These fuels typically have a lower concentration of sulfur, so the presence of sulfur dioxide has declined in many urban areas. However, the widespread use of liquid fuels in automobiles has meant increased production of carbon monoxide, nitrogen oxides, and volatile organic compounds (VOCs).

Primary pollutants such as sulfur dioxide or smoke are the direct emission products of the combustion process. Today, many of the key pollutants in urban atmospheres are secondary pollutants, produced by processes initiated through photochemical reactions. The Los Angeles, California, type of photochemical smog is now characteristic of urban atmospheres dominated by secondary pollutants.

Although the automobile is the main source of air pollution in contemporary cities, there are other equally significant sources. Stationary sources are still important, and the oil-burning furnaces that have replaced the older coal-burning ones are still responsible for a range of gaseous emissions and fly ash. Incineration is also an important source of complex combustion products, especially where this incineration burns a wide range of refuse. These emissions can include chlorinated hydrocarbons such as dioxin. When plastics, which often contain chlorine, are incinerated, hydrochloric acid results in the waste gas stream. Metals, especially where they are volatile at high temperatures, can migrate to smaller, respirable particles.
The accumulation of toxic metals, such as cadmium, on fly ash gives rise to concern over harmful effects from incinerator emissions. In specialized incinerators designed to destroy toxic compounds such as PCBs, many questions have been raised about the completeness of this destruction process. Even under optimum conditions, where the furnace operation has been properly maintained, great care needs to be taken to control leaks and losses during transfer operations (fugitive emissions).

The enormous range of compounds used in modern manufacturing processes has also meant an ever-widening range of emissions, both from the industrial processes themselves and from the combustion of their wastes. Although the amounts of these exotic compounds are often rather small, they add to the complex range of compounds found in the urban atmosphere. Again, it is not only the deliberate loss of effluents through discharge from pipes and chimneys that needs attention. Fugitive emissions of volatile substances that leak from valves and seals often warrant careful control.

Air pollution control procedures are increasingly an important part of civic administration, although their goals are far from easy to achieve. It is also noticeable that although many urban concentrations of primary pollutants, for example smoke and sulfur dioxide, are on the decline in developed countries, this is not always true in the developing countries. Here the desire for rapid industrial growth has often lowered urban air quality. Secondary air pollutants are generally proving a more difficult problem to eliminate than primary pollutants like smoke.

Urban air pollutants have a wide range of effects, with health problems being the most enduring concern. In the classical polluted atmospheres filled with smoke and sulfur dioxide, a range of bronchial diseases were enhanced. While respiratory diseases are still the principal problem, the issues are somewhat more subtle in atmospheres where the air pollutants are not so obvious. In photochemical smog, eye irritation from the secondary pollutant peroxyacetyl nitrate (PAN) is one of the most characteristic direct effects of the smog. High concentrations of carbon monoxide in cities where automobiles operate at high density mean that the human heart has to work harder to make up for the oxygen displaced from the blood's hemoglobin by carbon monoxide. This extra stress appears to reveal itself in an increased incidence of complaints among people with heart problems. There is a widespread belief that contemporary air pollutants are involved in the increases in asthma, but the links between asthma and air pollution are probably rather complex and related to a whole range of factors. Lead, from automotive exhausts, is thought by many to be a factor in lowering the IQs of urban children.

Air pollution also affects materials in the urban environment. Soiling has long been regarded as a problem, originally the result of the smoke from wood or coal fires, but now increasingly the result of fine black soot from diesel exhausts. The acid gases, particularly sulfur dioxide, increase the rate of destruction of building materials. This is most noticeable with calcareous stones, which are the predominant building material of many important historic structures. Metals also suffer from atmospheric acidity. In the modern photochemical smog, natural rubbers crack and deteriorate rapidly.

Health problems relating to indoor air pollution are extremely ancient. Anthracosis, or black lung disease, has been found in mummified lung tissue. Recent decades have witnessed a shift from the predominance of concern about outdoor air pollution toward a widening interest in indoor air quality.


The production of energy from combustion and the release of solvents are so large in the contemporary world that they cause air pollution problems of a regional and global nature. Acid rain is now widely observed throughout the world. The sheer quantity of carbon dioxide emitted in combustion processes is increasing the concentration of carbon dioxide in the atmosphere and enhancing the greenhouse effect. Solvents such as carbon tetrachloride and aerosol propellants (such as chlorofluorocarbons) are now detectable all over the globe and are responsible for such problems as ozone layer depletion.

At the other end of the scale, it needs to be remembered that gases leak indoors from the polluted outdoor environment, but more often the serious pollutants arise from processes that take place indoors. Here there has been particular concern with indoor air quality with regard to the generation of nitrogen oxides by sources such as gas stoves. Similarly, formaldehyde from insulating foams causes illnesses and adds to concerns about our exposure to a substance that may induce cancer in the long run. In the last decade it has become clear that radon leaking from the ground can expose some members of the public to high levels of this radioactive gas within their own homes. Cancers may also result from the emanation of solvents from consumer products, glues, paints, and mineral fibers (asbestos). More generally, these compounds and a range of biological materials (animal hair, skin and pollen spores, and dusts) can cause allergic reactions in some people. At one end of the spectrum these simply cause annoyance, but in extreme cases, such as those involving the bacterium Legionella, a large number of deaths can occur.

There are also important issues surrounding the effects of indoor air pollutants on materials. Many industries, especially the electronics industry, must take great care over the purity of indoor air, where a speck of dust can destroy a microchip or low concentrations of air pollutants can change the composition of surface films in component design. Museums must care for objects over long periods of time, so precautions must be taken to protect delicate dyes from the effects of photochemical smog, paper and books from sulfur dioxide, and metals from sulfide gases. [Peter Brimblecombe]

RESOURCES
BOOKS
Bridgman, H. Global Air Pollution: Problems for the 1990s. New York: Columbia University Press, 1991.
Elsom, D. M. Atmospheric Pollution. Oxford: Blackwell, 1992.
Kennedy, D., and R. R. Bates, eds. Air Pollution, the Automobile, and Public Health. Washington, DC: National Academy Press, 1988.
MacKenzie, J. J. Breathing Easier: Taking Action on Climate Change, Air Pollution, and Energy Efficiency. Washington, DC: World Resources Institute, 1989.
Smith, W. H. Air Pollution and Forests. 2nd ed. New York: Springer-Verlag, 1989.

Air pollution control

The need to control air pollution was recognized in the earliest cities. In the Mediterranean at the time of Christ, laws were developed to place objectionable sources of odor and smoke downwind or outside city walls. The adoption of fossil fuels in thirteenth-century England focused particular concern on the effect of coal smoke on health, with a number of attempts at regulation with regard to fuel type, chimney heights, and time of use. Given the complexity of the air pollution problem, it is not surprising that these early attempts at control met with only limited success.

The nineteenth century was typified by a growing interest in urban public health. This developed against a background of continuing industrialization, which saw smoke abatement clauses incorporated into the growing body of sanitary legislation in both Europe and North America. However, a lack of both technology and political will doomed these early efforts to failure, except in the most blatantly destructive situations (for example, industrial settings such as those around the Alkali Works in England).

The rise of environmental awareness has reminded people that air pollution ought not to be seen as a necessary product of industrialization. This has redirected responsibility for air pollution towards those who create it. The notion of "making the polluter pay" is seen as a central feature of air pollution control. History has also seen the development of a range of broad air pollution control strategies, among them:

(1) Air quality management strategies that set ambient air quality standards so that emissions from various sources can be monitored and controlled;

(2) Emission standards strategies that set limits for the amount of pollutant that can be emitted from a given source. These may be set to meet air quality standards, but the strategy is optimally seen as one of adopting best available techniques not entailing excessive costs (BATNEEC);

(3) Economic strategies that involve charging the party responsible for the pollution. If the level of charge is set correctly, some polluters will find it more economical to install air pollution control equipment than to continue to pollute. Other methods utilize a system of tradable pollution rights;

(4) Cost-benefit analysis, which attempts to balance economic benefits with environmental costs. This is an appealing strategy but difficult to implement because of its controversial and imprecise nature.

An industrial complex releases smoke from multiple chimneys. (Photograph by Josef Polleross. The Stock Market. Reproduced by permission.)

In general, air pollution strategies have been either air-quality or emission-based. In the United Kingdom, emission strategy is frequently used; for example, the Alkali and Works Act of 1863 specifies permissible emissions of hydrochloric acid. By contrast, the United States has aimed to achieve air quality standards, as evidenced by the Clean Air Act. One criticism of the air quality strategy has been that while it improves air in poor areas, it leads to degradation in areas with high air quality. Although the emission standards approach is relatively simple, it is criticized for failing to make explicit judgments about air quality, and it assumes that good practice will lead to an acceptable atmosphere.

Until the mid-twentieth century, legislation was primarily directed towards industrial sources, but the passage of the United Kingdom Clean Air Act (1956), which followed the disastrous smog of December 1952, directed attention towards domestic sources of smoke. While this particular act may have reinforced improvements already under way, rather than initiating them, it has served as a catalyst for much subsequent legislative thinking. Its mode of operation was to initiate a change in fuel, perhaps one of the oldest methods of control. The other well-tried


aspects were the creation of smokeless zones and an emphasis on tall chimneys to disperse the pollutants. As simplistic as such passive control measures seem, they remain at the heart of much contemporary thinking. Changes from coal and oil to the less polluting gas or electricity have contributed to the reduction in smoke and sulfur dioxide concentrations in cities all around the world. Industrial zoning has often kept power and large manufacturing plants away from centers of human population, and "superstacks," chimneys of enormous height, are now quite common. Successive changes in automotive fuels—lead-free gasoline, low-volatility gas, methanol, or even the interest in the electric automobile—are further indications of continued use of these methods of control.

There are more active forms of air pollution control that seek to clean up the exhaust gases. The earliest of these were the smoke and grit arrestors that came into increasing use in large electrical stations during the twentieth century. Notable here were the cyclone collectors that removed large particles by driving the exhaust through a tight spiral that threw the grit outward, where it could be collected. Finer particles could be removed by electrostatic precipitation. These methods were an important part of the development of the modern pulverized-fuel power station. However, they failed to address the problem of gaseous emissions. Here it has been necessary to look at burning fuel in ways that reduce the production of nitrogen oxides. Control of sulfur dioxide emissions from large industrial plants can be achieved by desulfurization of the flue gases. This can be quite successful, either by passing the gas through towers of solid absorbers or by spraying solutions through the exhaust gas stream. However, these are not necessarily cheap options. Catalytic converters are also an important element of active attempts to control air pollutants. Although these can considerably reduce emissions, the gains have to be offset against the increasing use of the automobile. There is much talk of the development of zero-pollution vehicles that do not emit any pollutants.

Legislation and control methods are often associated with monitoring networks that assess the effectiveness of the strategies and inform the general public about air quality where they live. A balanced approach to the control of air pollution in the future may have to look far more broadly than simply at technological controls. It will become necessary to examine the way people structure their lives in order to find more effective solutions to air pollution. [Peter Brimblecombe]

Air Pollution Stages

Index Value    Interpretation
0              No concentration
100            National Ambient Air Quality Standard
200            Alert
300            Warning
400            Emergency
500            Significant harm

The subindex of each pollutant or pollutant product is derived from a PSI nomogram, which matches concentrations with subindex values. The highest subindex value becomes the PSI. The PSI has five health-related categories:

PSI Range      Category
0 to 50        Good
50 to 100      Moderate
100 to 200     Unhealthful
200 to 300     Very unhealthful
300 to 500     Hazardous

RESOURCES BOOKS Elsom, D. M. Atmospheric Pollution. Oxford: Blackwell, 1992. Luoma, J. R. The Air Around Us: An Air Pollution Primer. Raleigh, NC: The Acid Rain Foundation, 1989. Wark, K., and C. F. Warner. Air Pollution: Its Origin and Control. 3rd ed. New York: Harper & Row, 1986.

Air pollution index

The air pollution index is a value derived from an air quality scale which uses the measured or predicted concentrations of several criteria pollutants and other air quality indicators, such as coefficient of haze (COH) or visibility. The best-known index of air pollution is the Pollutant Standards Index (PSI), which spans a scale from 0 to 500. The index represents the highest value of several subindices; there is a subindex for each pollutant or, in some cases, for a product of pollutant concentrations or a product of pollutant concentrations and COH. If a pollutant is not monitored, its subindex is not used in deriving the PSI. In general, the subindex for each pollutant can be interpreted according to the Air Pollution Stages table above.
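The derivation described in this entry (each monitored pollutant receives a subindex, the highest subindex becomes the PSI, and the PSI value falls into one of five health-related categories) can be sketched in a few lines of Python. This is an illustration only; the pollutant names and subindex values below are hypothetical, and a real implementation would first derive each subindex from the PSI nomogram.

```python
def psi(subindices):
    """Return the PSI: the highest of the available pollutant subindices.

    Pollutants that are not monitored are simply absent from the dict,
    so they cannot contribute to the index.
    """
    return max(subindices.values())

def category(value):
    """Map a PSI value to its health-related category."""
    bands = [(50, "Good"), (100, "Moderate"), (200, "Unhealthful"),
             (300, "Very unhealthful"), (500, "Hazardous")]
    for upper, name in bands:
        if value <= upper:
            return name
    return "Hazardous"  # values above 500 are off the scale

# Hypothetical subindices for one monitoring site:
readings = {"ozone": 72, "carbon monoxide": 35, "sulfur dioxide": 110}
print(psi(readings), category(psi(readings)))  # prints: 110 Unhealthful
```

Note that because the PSI is a maximum rather than an average, a single pollutant well above its standard drives the whole index, which matches the interpretation given above.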

Air quality

Air quality is determined with respect to the total air pollution in a given area as it interacts with meteorological conditions such as humidity, temperature, and wind to produce an overall atmospheric condition. Poor air quality can manifest itself aesthetically (as a displeasing odor, for example) and can also result in harm to plants, animals, and people, and even damage to objects. As early as 1881, cities such as Chicago, Illinois, and Cincinnati, Ohio, had passed laws to control some types of pollution, but it wasn’t until several air pollution catastrophes occurred in the twentieth century that governments began to give more attention to air quality problems. For instance, in 1930, smog trapped in the Meuse River Valley in Belgium caused 60 deaths. Similarly, in 1948, smog was blamed for 20 deaths in Donora, Pennsylvania. Most dramatically, in 1952 a sulfur-laden fog enshrouded London for five days and caused as many as 4,000 deaths over two weeks.


Disasters such as these prompted governments in a number of industrial countries to initiate programs to protect air quality. In 1955, three years after the London tragedy, the United States passed the Air Pollution Control Act, granting funds to assist the states in controlling airborne pollutants. In 1963, the Clean Air Act, which began to place authority for air quality into the hands of the federal government, was enacted. Today the Clean Air Act, with its 1970 and 1990 amendments, remains the principal air quality law in the United States. The Act established National Ambient Air Quality Standards, under which federal, state, and local monitoring stations at thousands of locations, together with temporary stations set up by the Environmental Protection Agency (EPA) and other federal agencies, directly measure pollutant concentrations in the air and compare those concentrations with national standards for six major pollutants: ozone, carbon monoxide, nitrogen oxides, lead, particulates, and sulfur dioxide. When the air we breathe contains amounts of these pollutants in excess of EPA standards, it is deemed unhealthy, and regulatory action is taken to reduce the pollution levels. In addition, urban and industrial areas maintain an air pollution index. This scale, a composite of several pollutant levels recorded from a particular monitoring site or sites, yields an overall air quality value. If the index exceeds certain values, public warnings are given; in severe instances residents might be asked to stay indoors and factories might even be closed down. While such air quality emergencies have become increasingly rare in the United States, developing countries, as well as Eastern European nations, continue to suffer poor air quality, especially in urban areas such as Bangkok, Thailand, and Mexico City, Mexico. In Mexico City, for example, seven out of 10 newborns have higher lead levels in their blood than the World Health Organization considers acceptable.
At present, many Third World countries place national economic development ahead of pollution control, and in many countries with rapid industrialization, high population growth, or increasing per capita income, the best efforts of governments to maintain air quality are outstripped by the rapid proliferation of automobiles, escalating factory emissions, and runaway urbanization. For all the progress the United States has made in reducing ambient air pollution, indoor air pollution may pose even greater risks than all of the pollutants we breathe outdoors. The Radon Gas and Indoor Air Quality Act of 1986 directed the EPA to research and implement a public information and technical assistance program on indoor air quality. From this program has come monitoring equipment to measure an individual’s “total exposure” to pollutants in both indoor and outdoor air. Studies done using this equipment

have shown that indoor exposures to toxic air pollutants far exceed outdoor exposures, for the simple reason that most people spend 90% of their time in office buildings, homes, and other enclosed spaces. Moreover, nationwide energy conservation efforts following the oil crisis of the 1970s led to building designs that trap pollutants indoors, thereby exacerbating the problem. [David Clarke and Jeffrey Muhr]

RESOURCES BOOKS Brown, Lester, ed. The World Watch Reader On Global Environmental Issues. Washington, DC: Worldwatch Institute, 1991. Council on Environmental Quality. Environmental Trends. Washington, DC: U. S. Government Printing Office, 1989. Environmental Progress and Challenges: EPA’s Update. Washington, DC: U. S. Environmental Protection Agency, 1988.

Air quality control region

The Clean Air Act defines an air quality control region (AQCR) as a contiguous area where air quality, and thus air pollution, is relatively uniform. In those cases where topography is a factor in air movement, AQCRs often correspond with airsheds. AQCRs may consist of two or more cities, counties, or other governmental entities, and each region is required to adopt consistent pollution control measures across the political jurisdictions involved. AQCRs may even cross state lines and, in these instances, the states must cooperate in developing pollution control strategies. Each AQCR is treated as a unit for the purposes of pollution reduction and achieving National Ambient Air Quality Standards. As of 1993, most AQCRs had achieved national air quality standards; however, the remaining AQCRs where standards had not been achieved were a significant group, home to a large percentage of the United States population. AQCRs containing major metropolitan areas such as Los Angeles, New York, Houston, Denver, and Philadelphia were not achieving air quality standards because of smog, motor vehicle emissions, and other pollutants.

Air quality criteria The relationship between the level of exposure to air pollutant concentrations and the adverse effects on health or public welfare associated with such exposure. Air quality criteria are critical in the development of ambient air quality standards which define levels of acceptably safe exposure to an air pollutant.


Air-pollutant transport

Air-pollutant transport is the advection or horizontal convection of air pollutants from an area where emission occurs to a downwind receptor area by local or regional winds. It is sometimes referred to as atmospheric transport of air pollutants. This movement of air pollution is often simulated with computer models for point sources as well as for large diffuse sources such as urban regions. In some cases, strong regional winds or low-level nocturnal jets can carry pollutants hundreds of miles from source areas of high emissions. The possibility of transport over such distances can be increased through topographic channeling of winds through valleys. Air-pollutant transport over such distances is often referred to as long-range transport. Air-pollutant transport is an important consideration in air quality planning. Where such impact occurs, the success of an air quality program may depend on the ability of air pollution control agencies to control upwind sources.

Airshed

A geographical region, usually a topographical basin, that tends to have uniform air quality. The air quality within an airshed is influenced predominantly by emission activities native to that airshed, since the elevated topography around the basin constrains horizontal air movement. Pollutants move from one part of an airshed to other parts fairly quickly, but are not readily transferred to adjacent airsheds. An airshed tends to have a relatively uniform climate and relatively uniform meteorological features at any given point in time.

Alar

Alar is the trade name for the chemical compound daminozide, manufactured by the Uniroyal Chemical Company. The compound has been used since 1968 to keep apples from falling off trees before they are ripe and to keep them red and firm during storage. As late as the early 1980s, up to 40% of all red apples produced in the United States were treated with Alar.

In 1985, the Environmental Protection Agency (EPA) found that UDMH (N,N-dimethylhydrazine), a compound produced during the breakdown of daminozide, was a carcinogen. UDMH was routinely produced during the processing of apples, as in the production of apple juice and apple sauce, and the EPA suggested a ban on the use of Alar by apple growers. An outside review of the EPA studies, however, suggested that they were flawed, and the ban was not instituted. Instead, the agency recommended that Uniroyal conduct further studies on possible health risks from daminozide and UDMH.

Even without a ban, Uniroyal felt the impact of the EPA’s research well before its own studies were concluded. Apple growers, fruit processors, legislators, and the general public were all frightened by the possibility that such a widely used chemical might be carcinogenic. Many growers, processors, and store owners pledged not to use the compound nor to buy or sell apples on which it had been used. By 1987, sales of Alar had dropped by 75%.

In 1989, two new studies again brought the subject of Alar to the public’s attention. The consumer research organization Consumers’ Union found that, using a very sensitive test for the chemical, 11 of 20 red apples they tested contained Alar. In addition, 23 of 44 samples of apple juice tested contained detectable amounts of the compound. The Natural Resources Defense Council (NRDC) announced their findings on the compound at about the same time. The NRDC concluded that Alar and certain other agricultural chemicals pose a threat to children about 240 times higher than the one-in-a-million risk traditionally used by the EPA to determine the acceptability of a product used in human foods.

The studies by the NRDC and the Consumers’ Union created a panic among consumers, apple growers, and apple processors. Many stores removed all apple products from their shelves, and some growers destroyed their whole crop of apples. The industry suffered millions of dollars in damage. Representatives of the apple industry continued to question how much of a threat Alar truly posed to consumers, claiming that the carcinogenic risks identified by the EPA, NRDC, and Consumers’ Union were greatly exaggerated. But in May of that same year, the EPA announced interim data from its most recent study, which showed that UDMH caused blood-vessel tumors in mice. The agency once more declared its intention to ban Alar, and within a month, Uniroyal announced it would end sales of the compound in the United States.

[David E. Newton]

RESOURCES
PERIODICALS
“Alar: Not Gone, Not Forgotten.” Consumer Reports 52 (May 1989): 288–292.
Roberts, L. “Alar: The Numbers Game.” Science 243 (17 March 1989): 1430.
———. “Pesticides and Kids.” Science 243 (10 March 1989): 1280–1281.
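The point-source computer models mentioned in the Air-pollutant transport entry are classically built around the Gaussian plume equation, which spreads an emission downwind with normal distributions in the crosswind and vertical directions. The sketch below is a generic illustration rather than a model endorsed by this encyclopedia; the emission rate, wind speed, and dispersion parameters are assumed values, and real models derive sigma_y and sigma_z from atmospheric stability and downwind distance.

```python
import math

def plume_concentration(Q, u, sigma_y, sigma_z, y, z, H):
    """Gaussian plume concentration at a receptor, in source units per m^3.

    Q: emission rate (e.g., g/s); u: mean wind speed (m/s);
    sigma_y, sigma_z: crosswind and vertical dispersion lengths (m);
    y: crosswind offset (m); z: receptor height (m); H: effective
    stack height (m).
    """
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    # The second exponential is the "image source" term, which reflects
    # the plume off the ground so no material is lost through the surface.
    vertical = (math.exp(-(z - H)**2 / (2.0 * sigma_z**2)) +
                math.exp(-(z + H)**2 / (2.0 * sigma_z**2)))
    return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical stack: a 100 g/s release into a 5 m/s wind, with the
# receptor at ground level on the plume centerline.
c = plume_concentration(Q=100.0, u=5.0, sigma_y=60.0, sigma_z=30.0,
                        y=0.0, z=0.0, H=50.0)
```

Concentration falls off as the receptor moves off the centerline or as the wind strengthens, which is how such models reproduce the downwind “footprint” of a point source that air quality planners use to weigh upwind controls.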

Alaska Highway

The Alaska Highway, sometimes referred to as the Alcan (Alaska-Canada) Highway, is the final link of a binational


transportation corridor that provides an overland route

between the lower United States and Alaska. The first all-weather, 1,522-mi (2,451-km) Alcan Military Highway was hurriedly constructed during 1942–1943 to provide land access between Dawson Creek, a Canadian village in northeastern British Columbia, and Fairbanks, a town on the Chena River in central Alaska. Construction of the road was motivated by the perception of a strategic, but ultimately unrealized, Japanese threat to maritime supply routes to Alaska during World War II. The route of the Alaska Highway extended through what was then a wilderness. An aggressive technical vision was supplied by the United States Army Corps of Engineers and the civilian U.S. Public Roads Administration; labor was supplied by approximately 11,000 American soldiers and 16,000 American and Canadian civilians. In spite of the extraordinary difficulties of working in unfamiliar and inhospitable terrain, the route was opened for military passage in less than two years. Among the formidable challenges faced by the workers were the need to construct 133 bridges and thousands of smaller culverts across energetic watercourses, the infilling of alignments through boggy muskeg capable of literally swallowing bulldozers, and winter temperatures so cold that vehicles were not turned off for fear they would not restart (steel dozer-blades became so brittle that they cracked upon impact with rock or frozen ground). In hindsight, the planning and construction of the Alaska Highway could be considered an unmitigated environmental debacle. The enthusiastic engineers were almost totally inexperienced in the specialized techniques of arctic construction, especially methods for dealing with permafrost, or permanently frozen ground. If the integrity of permafrost is not maintained during construction, this underground, ice-rich matrix will thaw and become unstable, and its water content will run off.
The resulting erosion, mudflow, slumping, and thermokarst collapse of the land into subsurface voids left by the loss of water could produce an unstable morass. Repairs were very difficult, and reconstruction was often unsuccessful, requiring abandonment of some original alignments. Physical and biological disturbances caused terrestrial landscape scars that persist to this day and will continue to be visible (especially from the air) for centuries. Extensive reaches of aquatic habitat were secondarily degraded by erosion and/or sedimentation. The much more careful, intensively scrutinized, and ecologically sensitive approaches used in the Arctic today, for example during the planning and construction of the trans-Alaska pipeline, are in marked contrast with the unfettered and free-wheeling engineering associated with the initial construction of the Alaska Highway.

Map of the Alaska (Alcan) Highway. (Line drawing by Laura Gritt Lawson. Reproduced by permission.)

The Alaska Highway has been more or less continuously upgraded since its initial completion and was opened to unrestricted traffic in 1947. Non-military benefits of the Alaska Highway include the provision of access to a great region of the interior of northwestern North America. This access fostered economic development through mining, forestry, trucking, and tourism, as well as helping to diminish the perception of isolation felt by many northern residents living along the route. Compared with the real dangers of vehicular passage along the Alaska Highway during its earlier years, today the route safely provides one of North America’s most spectacular ecotourism opportunities. Landscapes range from alpine tundra to expansive boreal forest, replete with abundantly cold and vigorous streams and rivers. There are abundant opportunities to view large mammals such as moose (Alces alces), caribou (Rangifer tarandus), and bighorn sheep (Ovis canadensis), as well as charismatic smaller mammals and birds and a wealth of interesting arctic, boreal, and alpine species of plants. [Bill Freedman Ph.D.]

RESOURCES BOOKS Christy, J. Rough Road to the North. Markham, ON: Paperjacks, 1981.

PERIODICALS
Alexandra, V., and K. Van Cleve. “The Alaska Pipeline: A Success Story.” Annual Review of Ecology and Systematics 14 (1983): 443–63.


Alaska National Interest Lands Conservation Act (1980)

Commonly known as the Alaska Lands Act, the Alaska National Interest Lands Conservation Act (ANILCA) protected 104 million acres (42 million ha), or 28%, of the state’s 375 million acres (152 million ha) of land. The law added 44 million acres (18 million ha) to the national park system, 55 million acres (22.3 million ha) to the fish and wildlife refuge system, and 3 million acres (1.2 million ha) to the national forest system, and made 26 additions to the national wild and scenic rivers system. The law also designated 56.7 million acres (23 million ha) of land as wilderness, with the stipulation that 70 million acres (28.4 million ha) of additional land be reviewed for possible wilderness designation. The genesis of this act can be traced to 1959, when Alaska became the forty-ninth state. As part of the statehood act, Alaska could choose 104 million acres (42.1 million ha) of federal land to be transferred to the state. This selection process was halted in 1966 to clarify land claims made by Alaskan indigenous peoples. In 1971, the Alaska Native Claims Settlement Act (ANCSA) was passed to satisfy the native land claims and allow the state selection process to continue. This act stipulated that the Secretary of the Interior could withdraw 80 million acres (32.4 million ha) of land for protection as national parks and monuments, fish and wildlife refuges, and national forests, and that these lands would not be available for state or native selection. Congress would have to approve these designations by 1978. If Congress failed to act, the state and the natives could select any lands not already protected. These lands were referred to as national interest, or d-2, lands. Secretary of the Interior Rogers Morton recommended 83 million acres (33.6 million ha) for protection in 1973, but this did not satisfy environmentalists.
The ensuing conflict over how much and which lands should be protected, and how these lands should be protected, was intense. The environmental community formed the Alaska Coalition, which by 1980 included over 1,500 national, regional, and local organizations with a total membership of 10 million people. Meanwhile, the state of Alaska and development-oriented interests launched a fierce and well-financed campaign to reduce the area of protected land. In 1978, the House passed a bill protecting 124 million acres (50.2 million ha). The Senate passed a bill protecting far less land, and House-Senate negotiations over a compromise broke down in October. Thus, Congress would not act before the December 1978 deadline. In response, the executive branch acted. Secretary of the Interior Cecil Andrus withdrew 110 million acres (44.6 million ha) from state selection and mineral entry. President Jimmy

Carter then designated 56 million acres (22.7 million ha) of these lands as national monuments under the authority of the Antiquities Act. Forty million additional acres (16.2 million ha) were withdrawn as fish and wildlife refuges, and 11 million acres (4.5 million ha) of existing national forests were withdrawn from state selection and mineral entry. Carter indicated that he would rescind these actions once Congress had acted. In 1979, the House passed a bill protecting 127 million acres (51.4 million ha). The Senate passed a bill designating 104 million acres (42.1 million ha) as national interest lands in 1980. Environmentalists and the House were unwilling to reduce the amount of land to be protected. In November, however, Ronald Reagan was elected President, and the environmentalists and the House decided to accept the Senate bill rather than face the potential for much less land under a President who would side with development interests. President Carter signed ANILCA into law on December 2, 1980. ANILCA also mandated that the U.S. Geological Survey (USGS) conduct biological and petroleum assessments of the coastal plain section of the 19.8-million-acre (8 million ha) Arctic National Wildlife Refuge, known as area 1002. While the USGS did determine that the area holds significant oil reserves, it also reported that petroleum development would adversely impact many native species, including caribou (Rangifer tarandus), snow geese (Chen caerulescens), and muskoxen (Ovibos moschatus). In 2001, the Bush administration unveiled a new energy policy that would open up this area to oil and natural gas exploration. In June 2002, a House version of the energy bill (H.R. 4) that favors opening ANWR to drilling and a Senate version (S. 517) that does not were headed into conference to reconcile the differences between the two bills. [Christopher McGrory Klyza and Paula Anne Ford-Martin]

RESOURCES BOOKS Lentfer, Hank and C. Servid, eds. Arctic Refuge: A Circle of Testimony. Minneapolis, MN: Milkweed Editions, 2001.

OTHER
Alaska National Interest Lands Conservation Act. 16 USC 3101-3223; Public Law 96-487. [cited June 2002].
Douglas, D. C., et al., eds. Arctic Refuge Coastal Plain Terrestrial Wildlife Research Summaries. Biological Science Report USGS/BRD/BSR-20020001. [cited June 2002].

ORGANIZATIONS
The Alaska Coalition, 419 6th St, #328, Juneau, AK USA 99801, (907) 586-6667, Fax: (907) 463-3312, Email: [email protected]



Alaska National Wildlife Refuge see Arctic National Wildlife Refuge

Alaska pipeline see Trans-Alaska pipeline

Albedo

The reflecting power of a surface, expressed as the ratio of reflected radiation to incident (incoming) radiation. Albedo is also called the “reflection coefficient” and derives from the Latin root word albus, meaning whiteness. Sometimes expressed as a percentage, albedo is more commonly measured as a fraction on a scale from zero to one, with a value of one denoting a completely reflective, white surface and a value of zero describing an absolutely black surface that reflects no light rays. Albedo varies with surface characteristics such as color and composition, as well as with the angle of the sun. The albedo of natural earth surface features such as oceans, forests, deserts, and crop canopies varies widely. Some measured values of albedo for various surfaces are shown below:

Type of Surface               Albedo
Fresh, dry snow cover         0.80–0.95
Aged or decaying snow cover   0.40–0.70
Oceans                        0.07–0.23
Dense clouds                  0.70–0.80
Thin clouds                   0.25–0.50
Tundra                        0.15–0.20
Desert                        0.25–0.29
Coniferous forest             0.10–0.15
Deciduous forest              0.15–0.20
Field crops                   0.20–0.30
Bare dark soils               0.05–0.15

The albedo of clouds in the atmosphere is important to life on Earth because extreme levels of radiation absorbed by the earth would make the planet uninhabitable; at any moment in time about 50% of the planet’s surface is covered by clouds. The mean albedo for the earth, called the planetary albedo, is about 30–35%.

[Mark W. Seeley]

Algal bloom

Algae are simple aquatic plants; they may be single-celled, filamentous, or colonial, and they are commonly found floating in ponds, lakes, and oceans. Populations of algae fluctuate with the availability of nutrients, and a sudden increase in nutrients often results in a profusion of algae known as an algal bloom. The growth of a particular algal species can be both sudden and massive. Algal cells can increase to very high densities in the water, often thousands of cells per milliliter, and the water itself can be colored brown, red, or green. Algal blooms occur in freshwater systems and in marine environments, and they usually disappear in a few days to a few weeks. These blooms consume oxygen, increase turbidity, and clog lakes and streams. Some algal species release water-soluble compounds that may be toxic to fish and shellfish, resulting in fish kills and poisoning episodes. Algal groups are generally classified on the basis of the pigments that color their cells. The most common algal groups are the blue-green algae, green algae, red algae, and brown algae. Algal blooms in freshwater lakes and ponds tend to be caused by blue-green and green algae. The excessive amounts of nutrients that cause these blooms are often the result of human activities. For example, nitrates and phosphates introduced into a lake from fertilizer runoff during a storm can cause rapid algal growth. Some common blue-green algae known to cause blooms as well as release nerve toxins are Microcystis, Nostoc, and Anabaena. Red tides in coastal areas are a type of algal bloom. They are common in many parts of the world, including the New York Bight, the Gulf of California, and the Red Sea. The causes of algal blooms are not as well understood in marine environments as they are in freshwater systems.
Although human activities may well have an effect on these events, weather conditions probably play a more important role: turbulent storms that follow long, hot, dry spells have often been associated with algal blooms at sea. Toxic red tides most often consist of genera from the dinoflagellate algal group, such as Gonyaulax and Gymnodinium. The potency of the toxins has been estimated to be 10 to 50 times higher than that of cyanide or curare, and people who eat exposed shellfish may suffer from paralytic shellfish poisoning within 30 minutes of consumption. A kill of 500 million fish was attributed to a red tide in Florida in 1947. A number of blue-green algal genera such as Oscillatoria and Trichodesmium have also been associated with red blooms, but they

are not necessarily toxic in their effects. Some believe that the blooms caused by these genera gave the Red Sea its name. The economic and health consequences of algal blooms can be sudden and severe, but the effects are generally not long lasting. There is little evidence that algal blooms have long-term effects on water quality or ecosystem structure. [Usha Vedagiri and Douglas Smith]

RESOURCES BOOKS Lerman, M. Marine Biology: Environment, Diversity and Ecology. Menlo Park, CA: Benjamin/Cummings, 1986.

PERIODICALS Culotta, E. “Red Menace in the World’s Oceans.” Science 257 (11 September 1992): 1476–77. Mlot, C. “White Water Bounty: Enormous Ocean Blooms of WhitePlated Phytoplankton Are Attracting the Interest of Scientists.” Bioscience 39 (April 1989): 222–24.

Algicide

The presence of nuisance algae can cause unsightly appearance, odors, slime, and coating problems in aquatic media. Algicides are chemical agents used to control or eradicate the growth of algae in aquatic media such as industrial tanks, swimming pools, and lakes. These agents vary from simple inorganic compounds, such as copper sulfate, which are broad-spectrum in effect and control a variety of algal groups, to complex organic compounds that are designed to be species-specific in their effects. Algicides usually require repeated application or continuous application at low doses in order to maintain effective control.

Alpine tundra see Tundra

Allelopathy

Derived from the Greek words allelo (other) and pathy (causing injury to), allelopathy is a form of competition among plants. One plant produces and releases a chemical into the surrounding soil that inhibits the germination or growth of other species in the immediate area. These chemical substances, which may be acids or bases, are called secondary compounds. For example, black walnut (Juglans nigra) trees release a chemical called juglone that prevents other plants, such as tomatoes, from growing in the immediate area around each tree. In this way, plants such as black walnut reduce competition for space, nutrients, water, and sunlight.

Allergen

Any substance that can bring about an allergic response in an organism. Hay fever and asthma are two common allergic responses. The allergens that evoke these responses include pollen, fungi, and dust. Allergens can be described as host-specific agents in that a particular allergen may affect some individuals but not others. A number of air pollutants are known to be allergens; formaldehyde, thiocyanates, and epoxy resins are examples. People who are allergic to natural allergens, such as pollen, are more inclined to be sensitive also to synthetic allergens, such as formaldehyde.

Alligator, American The American alligator (Alligator mississippiensis) is a member of the reptilian family Crocodylidae, which consists of 21 species found in tropical and subtropical regions throughout the world. It is a species that has been reclaimed from the brink of extinction. Historically, the American alligator ranged in the Gulf and Atlantic coast states from Texas to the Carolinas, with rather large populations concentrated in the swamps and river bottomlands of Florida and Louisiana. From the late nineteenth century into the middle of the twentieth century, the population of this species decreased dramatically. With no restrictions on their activities, hunters killed alligators as pests or to harvest their skin, which was highly valued in the leather trade. The American alligator was killed in such great numbers that biologists predicted its probable extinction. It has been estimated that about 3.5 million of these reptiles were slaughtered in Louisiana between 1880 and 1930. The population was also impacted by the fad of selling young alligators as pets, principally in the 1950s. States began to take action in the early 1960s to save the alligator from extinction. In 1963 Louisiana banned all legalized trapping, closed the alligator hunting season, and stepped up enforcement of game laws against poachers. By the time the Endangered Species Act was passed in 1973, the species was already experiencing a rapid recovery. Because of the successful re-establishment of alligator populations, its endangered classification was downgraded in several southeastern states, and there are now strictly regulated seasons that allow alligator trapping. Due to the persistent demand for its hide for leather goods and an increasing market for the reptile’s meat, alligator farms are now both legal and profitable. 39


An American alligator (Alligator mississippiensis). (Photograph by B. Arroyo. U. S. Fish & Wildlife Service. Reproduced by permission.)

Human fascination with large, dangerous animals, along with the American alligator’s near extinction, has made it one of North America’s best-studied reptile species. Population pressures, primarily resulting from being hunted so ruthlessly for decades, have resulted in a decrease in the maximum size attained by this species. The growth of a reptile is indeterminate: reptiles continue to grow as long as they are alive, but old adults from a century ago attained larger sizes than their counterparts do today. The largest recorded American alligator was an old male killed in January 1890, in Vermilion Parish, Louisiana, which measured 19.2 ft (6 m) long. The largest female ever taken was only about half that size. Alligators do not reach sexual maturity until they are about 6 ft (1.8 m) long and nearly 10 years old. Females construct a nest mound in which they lay about 35–50 eggs. The nest is usually 5–7 ft (1.5–2.1 m) in diameter and 2–3 ft (0.6–0.9 m) high, and decaying vegetation produces heat that keeps the eggs at a fairly constant temperature during incubation. The young stay with their mother through their

first winter, striking out on their own when they are about 1.5 ft (0.5 m) in length. [Eugene C. Beckham]

RESOURCES

BOOKS
Crocodiles. Proceedings of the 9th Working Meeting of the IUCN/SSC Crocodile Specialist Group, Lae, Papua New Guinea. Vol. 2. Gland, Switzerland: IUCN-The World Conservation Union, 1990.
Dundee, H. A., and D. A. Rossman. The Amphibians and Reptiles of Louisiana. Baton Rouge: LSU Press, 1989.
Webb, G. J. W., S. C. Manolis, and P. J. Whitehead, eds. Wildlife Management: Crocodiles and Alligators. Chipping Norton, Australia: Surrey Beatty and Sons, 1987.

OTHER
"Alligator mississippiensis in the Crocodilians, Natural History and Conservation." Florida Museum of Natural History. [cited May 2002].
"The American Alligator." University of Florida, Gainesville. [cited May 2002].

Alligator mississippiensis see Alligator, American

All-terrain vehicle see Off-road vehicles

Alpha particle

A particle emitted by certain kinds of radioactive materials. An alpha particle is identical to the nucleus of a helium atom, consisting of two protons and two neutrons. Some common alpha-particle emitters are uranium-235, uranium-238, radium-226, and radon-222. Alpha particles have relatively low penetrating power; they can be stopped by a thin sheet of paper or by human skin. They therefore constitute a health problem only when they are taken into the body. The inhalation of alpha-emitting radon gas escaping from bedrock into houses in some areas is thought to constitute a health hazard.

Alternative energy sources

Coal, oil, and natural gas provide over 85% of the total primary energy used around the world. Although figures differ in various countries, nuclear reactors and hydroelectric power together produce less than 10% of the total world energy. Wind power, active and passive solar systems, and geothermal energy are examples of alternative energy sources. Collectively, these make up the final small fraction of total energy production.

The exact contribution alternative energy sources make to the total primary energy used around the world is not known. Conservative estimates place their share at 3–4%, but some energy experts dispute these figures. Amory Lovins has argued that the statistics collected are based primarily on large electric utilities and the regions they serve. They fail to account for areas remote from major power grids, which are more likely to use solar energy, wind energy, or other sources. When these areas are taken into consideration, Lovins claims, alternative energy sources contribute as much as 11% of the total primary energy used in the United States. Animal manure, furthermore, is widely used as an energy source in India, parts of China, and many African nations, and when this is taken into account the worldwide contribution of alternative sources to energy production could rise as high as 10–15%.

Though now classed as an alternative energy source, wind power is one of the earliest forms of energy used by humankind. Wind is caused by the uneven heating of the earth's surface, and its energy is equal to about 2% of the solar energy that reaches the earth. In quantitative terms, the amount of kinetic energy within the earth's atmosphere is equal to about 10,000 trillion kilowatt hours. The kinetic energy of wind is proportional to the square of the wind velocity (and the power it can deliver to the cube), so the ideal location for a windmill generator is an area with constant and relatively fast winds and no obstacles such as buildings or trees. An efficient windmill can produce 175 watts per square meter of propeller blade area at a height of 75 ft (25 m). The estimated cost of generating one kilowatt hour by wind power is about eight cents, compared to five cents for hydropower and 15 cents for nuclear power. The two largest utilities in California purchase wind-generated electricity, and though that state leads the country in the utilization of wind power, Denmark leads the world. The Scandinavian nation has refused to use nuclear power, and it expects to obtain 10% of its energy needs from windmills.

Solar energy can be utilized either directly as heat or indirectly by converting it to electrical power using photovoltaic cells. Greenhouses and solariums are the most common examples of the direct use of solar energy, with glass windows transmitting the visible light from the sun while restricting the escape of heat. Flat-plate collectors are another direct method; mounted on rooftops, they can provide one-third of the energy required for space heating. Windows and collectors alone are considered passive systems; an active solar system uses a fan, pump, or other machinery to transport the heat generated from the sun.

Photovoltaic cells are made of semiconductor materials such as silicon. These cells absorb part of the solar flux to produce a direct electric current with about 14% efficiency. The current cost of producing photovoltaic power is about four dollars a watt. However, thin-film technology is being perfected for the production of these cells, and the cost per watt should eventually fall because fewer materials will be required. Photovoltaics are now being used economically in lighthouses, boats, rural villages, and other remote areas. Large solar systems have been most effective using trackers that follow the sun or mirror reflectors that concentrate its rays.

Geothermal energy is the natural heat generated in the interior of the earth, and like solar energy it can be used directly as heat or indirectly to generate electricity. Steam is classified as either dry (no water droplets) or wet (mixed with water). When it is generated in areas containing corrosive sulfur compounds, it is known as "sour steam"; when generated in areas free of sulfur, it is known as "sweet steam." Geothermal energy can be used to generate electricity by the flashed-steam method, in which high-temperature geothermal brine is used as a heat exchanger to convert injected water into steam. The steam produced is used to turn a turbine. When geothermal
wells are not hot enough to create steam, a fluid that evaporates at a much lower temperature than water, such as isobutane or ammonia, can be placed in a closed system in which the geothermal heat provides the energy to evaporate the fluid and run the turbine. Twenty countries worldwide utilize this energy source, including the United States, Mexico, Italy, Iceland, Japan, and the former Soviet Union. Unlike solar energy and wind power, geothermal energy is not free of environmental impact: it can contribute to air pollution and can emit dissolved salts and, in some cases, toxic heavy metals such as mercury and arsenic.

Though there are several ways of utilizing energy from the ocean, the most promising are the harnessing of tidal power and ocean thermal energy conversion. The power of ocean tides is based on the difference between high and low water. For tidal power to be effective, the difference in height needs to be very great, more than 15 ft (4.6 m), and there are only a few places in the world where such differences exist. These include the Bay of Fundy and a few sites in China. Ocean thermal energy conversion utilizes temperature differences rather than tides. Ocean temperature is stratified, especially near the tropics, and the process takes advantage of this fact by using a fluid with a low boiling point, such as ammonia. The vapor from the fluid drives a turbine, and cold water pumped up from lower depths condenses the vapor back into liquid. The electrical power generated by this method can be shipped to shore or used to operate a floating plant such as a cannery.

Other sources of alternative energy are currently being explored, some of which are still experimental. These include harnessing the energy in biomass through the production of wood from trees or the production of ethanol from crops such as sugar cane or corn. Methane gas can be generated from the anaerobic breakdown of organic waste in sanitary landfills and at wastewater treatment plants. With the cost of garbage disposal rapidly increasing, the burning of garbage is becoming a viable option as an energy source. Adequate air pollution controls are necessary, but trash can be burned to heat buildings, and municipal garbage is currently being used to generate electricity in Hamburg, Germany. In an experimental method known as magnetohydrodynamics, hot gas, seeded with potassium and sulfur compounds to promote ionization, is passed through a strong magnetic field, where it produces an electrical current. This process has no moving parts and an efficiency of 20–30%. Ethanol and methanol can be produced from biomass and used in transportation; in fact, methanol currently powers Indianapolis race cars. Hydrogen could be valuable if problems of supply and storage can be solved. It burns very cleanly, forming water, and may be combined with oxygen in fuel cells to generate electricity. It is also not nearly as explosive as gasoline.

Of all the alternative sources, energy conservation is perhaps the most important, and improving energy efficiency is the best way to meet energy demands without adding to air and water pollution. One reason the United States survived the energy crises of the 1970s was that it was able to curtail some of its immense waste. Relatively easy lifestyle alterations, vehicle improvements, building insulation, and more efficient machinery and appliances have significantly reduced its potential energy demand. Experts have estimated that it is possible to double the efficiency of electric motors, triple the efficiency of light bulbs, quadruple the efficiency of refrigerators and air conditioners, and quintuple the gasoline mileage of automobiles. Several automobile manufacturers in Europe and Japan have already produced prototype vehicles with very high gasoline mileage. Volvo has developed the LCP 2000, a passenger sedan that holds four to five people, meets all United States safety standards, accelerates from 0–50 mph (0–80.5 km/h) in 11 seconds, and has a high fuel-efficiency rating.

Alternative fuels will be required to meet future energy needs. Enormous investments in new technology and equipment will be needed, and potential supplies are uncertain, but there is clearly hope for an energy-abundant future.

[Muthena Naseri and Douglas Smith]
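The wind figures quoted above can be cross-checked with the standard formula for the power carried by moving air, P = ½ρAv³. The sketch below is illustrative only: the air density, capture efficiency, and wind speeds are assumed values, not figures from this entry.

```python
# Cross-check of the wind-power figures quoted above, using the
# standard kinetic-power relation P/A = 0.5 * rho * v**3 (watts per
# square meter of swept rotor area). Density and efficiency here are
# assumed illustrative values.

RHO_AIR = 1.2  # kg/m^3, approximate air density near sea level

def wind_power_density(v_mps, efficiency=0.35):
    """Extractable power per square meter of rotor area (W/m^2)
    for wind speed v_mps (m/s) and an assumed capture efficiency."""
    return 0.5 * RHO_AIR * v_mps ** 3 * efficiency

# Power grows with the cube of wind speed: doubling the wind speed
# yields eight times the power.
low, high = wind_power_density(6.0), wind_power_density(12.0)
print(round(low, 1), round(high, 1), round(high / low, 1))  # prints: 45.4 362.9 8.0
```

With these assumed values, a steady wind of roughly 9–10 m/s gives on the order of the 175 watts per square meter quoted above, and the cube law is why siting windmills where winds are consistently fast matters so much.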

RESOURCES

BOOKS
Alternative Energy Handbook. Englewood Cliffs, NJ: Prentice Hall, 1993.
Brower, M. Cool Energy: Renewable Solutions to Environmental Problems. Cambridge: MIT Press, 1992.
Brown, Lester R., ed. The World Watch Reader on Global Environmental Issues. New York: W. W. Norton, 1991.
Goldemberg, J. Energy for a Sustainable World. New York: Wiley, 1988.
Schaeffer, J. Alternative Energy Sourcebook: A Comprehensive Guide to Energy Sensible Technologies. Ukiah, CA: Real Goods Trading Corp., 1992.
Shea, C. P. Renewable Energy: Today's Contribution, Tomorrow's Promise. Washington, DC: Worldwatch Institute, 1988.

PERIODICALS
Stein, J. "Hydrogen: Clean, Safe, and Inexhaustible." Amicus Journal 12 (Spring 1990): 33–36.

Alternative fuels see Renewable energy

Aluminum

Aluminum, a light metal, comprises about 8% of the earth's crust, ranking as the third-most abundant element after oxygen (47%) and silicon (28%). Virtually all environmental aluminum is present in mineral forms that are almost insoluble in water and therefore not available for uptake by organisms. Most common among these forms of aluminum are various aluminosilicate minerals, aluminum clays and sesquioxides, and aluminum phosphates. However, aluminum can also occur as chemical species that are available for biological uptake, sometimes causing toxicity. In general, bio-available aluminum is present in various water-soluble, ionic or organically complexed chemical species. Water-soluble concentrations of aluminum are largest in acidic environments, where toxicity to nonadapted plants and animals can be caused by exposure to Al3+ and Al(OH)2+ ions, and in alkaline environments, where Al(OH)4- is most prominent. Organically bound, water-soluble forms of aluminum, such as complexes with fulvic or humic acids, are much less toxic than ionic species. Aluminum is often considered the most toxic chemical factor in acidic soils and aquatic habitats.

Amazon basin

The Amazon basin, the region of South America drained by the Amazon River, represents the largest area of tropical rain forest in the world. Extending across nine different countries and covering an area of 2.3 million square mi (6 million sq. km), the Amazon basin contains the greatest abundance and diversity of life anywhere on the earth. Tremendous numbers of the plant and animal species that occur there have yet to be discovered or properly named by scientists, as the area has only begun to be explored by researchers.

It is estimated that the Amazon basin contains over 20% of all higher plant species on Earth, as well as about 20% of all birdlife and 10% of all mammals. More than 2,000 known species of freshwater fishes live in the Amazon River and represent about 8% of all fishes on the planet, both freshwater and marine. This number of species is about three times the entire ichthyofauna of North America and almost ten times that of Europe. The most astonishing numbers, however, come from the river basin's insects. Every expedition to the Amazon basin yields countless new species of insects, with some individual trees in the tropical forest providing scientists with hundreds of undescribed forms. Insects represent about three-fourths of all animal life on Earth, yet biologists believe the 750,000 species that have already been scientifically named account for less than 10% of all insect life that exists.

However incredible these examples of biodiversity are, they may soon be destroyed as the rampant deforestation in the Amazon basin continues. Much of this destruction is directly attributable to human population growth.
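The shares quoted above can be inverted to see the rough global totals they imply. This is only a back-of-the-envelope sketch; the input counts are the entry's own figures:

```python
# Back-of-the-envelope check of the biodiversity shares quoted above:
# inverting a stated percentage gives the implied global total.

def implied_global_total(known_count, share):
    """Global total implied if known_count represents `share`
    (a fraction) of all species in the group."""
    return known_count / share

# ~2,000 Amazonian freshwater fishes are said to be ~8% of all fishes,
# implying about 25,000 fish species worldwide:
total_fishes = implied_global_total(2_000, 0.08)

# 750,000 named insects are said to be less than 10% of all insects,
# implying a global total of *at least* 7.5 million species:
min_total_insects = implied_global_total(750_000, 0.10)

print(round(total_fishes), round(min_total_insects))  # prints: 25000 7500000
```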

The number of people who have settled in the Amazonian uplands of Colombia and Ecuador has increased by 600% over the past 40 years, and this has led to the clearing of over 65% of the region's forests for agriculture. In Brazil, up to 70% of the deforestation is tied to cattle ranching. In the past, large governmental subsidies and tax incentives encouraged this practice, which had little or no financial success and caused widespread environmental damage. Tropical soils rapidly lose their fertility, which allows only limited annual meat production: often only 300 lb (136 kg) per acre, compared to over 3,000 lb (1,360 kg) per acre in North America.

Further damage to the tropical forests of the Amazon basin is linked to commercial logging. Although only five of the approximately 1,500 tree species of the region are extensively logged, tremendous damage is done to the surrounding forest as these are selectively removed. When loggers build roads and move in heavy equipment, they may damage or destroy half of the trees in a given area.

The deforestation taking place in the Amazon basin has a wide range of environmental effects. The clearing and burning of vegetation produces smoke and air pollution, which at times has been so abundant that it is clearly visible from space. Clearing also leads to increased soil erosion after heavy rains and can result in water pollution through siltation, as well as increased water temperatures from greater exposure. Yet the most alarming, and definitely the most irreversible, environmental problem facing the Amazon basin is the loss of biodiversity. Through the irrevocable process of extinction, this may cost humanity more than the loss of species; it may cost us the potential discovery of medicines and other beneficial products derived from those species.

[Eugene C. Beckham]

RESOURCES

BOOKS
Caufield, C. In the Rainforest: Report From a Strange, Beautiful, Imperiled World. Chicago: University of Chicago Press, 1986.
Cockburn, A., and S. Hecht. The Fate of the Forest: Developers, Destroyers, and Defenders of the Amazon. New York: Harper/Perennial, 1990.
Collins, M. The Last Rain Forests: A World Conservation Atlas. London: Oxford University Press, 1990.
Cowell, A. Decade of Destruction: The Crusade to Save the Amazon Rain Forest. New York: Doubleday, 1991.
Margolis, M. The Last New World: The Conquest of the Amazon Frontier. New York: Norton, 1992.
Wilson, E. O. The Diversity of Life. Cambridge, MA: Belknap Press, 1992.

PERIODICALS
Holloway, M. "Sustaining the Amazon." Scientific American 269 (July 1993): 90–96+.

Ambient air

The air, external to buildings and other enclosures, found in the lower atmosphere over a given area, usually near the surface. Air pollution standards normally refer to ambient air.

Amenity value

The idea that something has worth because of the pleasant feelings it generates in those who use or view it. This value is often used in cost-benefit analysis, particularly in shadow pricing, to determine the worth of natural resources that will not be harvested for economic gain. A virgin forest has amenity value, but that value decreases if the forest is harvested; thus the amenity value of the standing forest is weighed against the value of the harvested timber.
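The comparison in the last sentence can be sketched as a toy present-value calculation of the kind used in cost-benefit analysis. Every number here (annual amenity benefit, discount rate, timber revenue) is an invented illustration, not data from this entry:

```python
# Toy cost-benefit sketch of the comparison described above: the
# discounted stream of annual amenity benefits from a standing forest
# versus the one-time revenue from harvesting its timber.
# All figures below are invented for illustration.

def present_value(annual_benefit, discount_rate, years):
    """Present value of a constant annual benefit over `years` years."""
    return sum(annual_benefit / (1 + discount_rate) ** t
               for t in range(1, years + 1))

amenity_pv = present_value(annual_benefit=50_000,  # assumed $/year
                           discount_rate=0.05,
                           years=50)
timber_revenue = 600_000  # assumed one-time harvest value, $

# Shadow pricing would favor leaving the forest standing whenever the
# discounted amenity value exceeds the timber revenue.
print(round(amenity_pv), amenity_pv > timber_revenue)
```

With these assumed numbers the amenity stream outweighs the one-time harvest, but a higher discount rate or shorter horizon can flip the result, which is why the choice of discount rate is contentious in environmental cost-benefit analysis.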

American alligator see Alligator, American

American Box Turtle

Box turtles are in the Order Chelonia, Family Emydidae, and genus Terrapene. There are two major species in the United States: Terrapene carolina (Eastern box turtle) and T. ornata (Western or ornate box turtle). Box turtles are easily recognized by their dome-shaped upper shell (carapace) and by their lower shell (plastron), which is hinged near the front. This hinging allows them to close up tightly into the "box" when in danger (hence their name). Box turtles are fairly small, having an adult maximum length of 4–7 in (10–18 cm).

Their range is restricted to North America, with the Eastern species located over most of the eastern United States and the Western species located in the central and southwestern United States and into Mexico, but not as far west as California. Both species are highly variable in coloration and pattern, ranging from a uniform tan to dark brown or black, with yellow spots or streaks. They prefer a dry habitat such as woodlands, open brush lands, or prairie. They typically inhabit sandy soil, but are sometimes found in springs or ponds during hot weather. During the winter, they hibernate in the soil below the frost line, often as deep as 2 ft (60 cm). Their home range is usually fairly small, and they often live within areas of less than 300 yd2 (250 m2).

Eastern box turtle. (Photograph by Robert Huffman. Fieldmark Publications. Reproduced by permission.)

Box turtles are omnivorous, feeding on living and dead insects, earthworms, slugs, fruits, berries (particularly blackberries and strawberries), leaves, and mushrooms. They have been known to ingest some mushrooms that are poisonous to humans, and there have been reports of people eating box turtles and getting sick. Other than this, box turtles are harmless to humans and are commonly collected and sold as pets (although this should be discouraged because they are now a threatened species). They can be fed raw hamburger, canned pet food, or leafy vegetables.

Box turtles normally live as long as 30–40 years. Some have been reported with a longevity of more than one hundred years, which makes them the longest-lived land turtle. They are active from March until November and are diurnal, usually being more active in the early morning. During the afternoons they typically seek shaded areas. They breed during the spring and autumn, and the females build nests from May until July, typically in sandy soil, where they dig a hole with their hind feet. The females can store sperm for several years. They typically lay three to eight eggs that are elliptical and about 1.5 in (4 cm) in diameter.

Male box turtles have a slight concavity in their plastron that aids in mounting females during copulation. All four toes on the male's hind feet are curved, which aids in holding down the posterior portion of the female's plastron during copulation. Females have flat plastrons, shorter tails, and yellow or brown eyes. Most males have bright red or pink eyes. The upper jaw of both sexes ends in a down-turned beak. Predators of box turtles include skunks, raccoons, foxes, snakes, and other animals. Native Americans once ate box turtles and incorporated their shells into their ceremonies as rattles.

[John Korstad]

RESOURCES

BOOKS
Conant, R. A Field Guide to Reptiles and Amphibians of Eastern and Central North America. Boston: Houghton Mifflin, 1998.
Tyning, T. F. A Guide to Amphibians and Reptiles. Boston: Little, Brown and Co., 1990.

OTHER
"Conservation and Preservation of American Box Turtles in the Wild." The American Box Turtle Page. Fall 2000 [cited May 2002].

American Cetacean Society

The American Cetacean Society (ACS), located in San Pedro, California, is dedicated to the protection of whales and other cetaceans, including dolphins and porpoises. Principally an organization of scientists and teachers (though its membership does include students and laypeople), the ACS was founded in 1967 and claims to be the oldest whale conservation group in the world. The ACS believes the best protection for whales, dolphins, and porpoises is better public awareness about "these remarkable animals and the problems they face in their increasingly threatened habitat." The organization is committed to political action through education, and much of its work has been in improving communication between marine scientists and the general public.

The ACS has developed several educational resource materials on cetaceans, making such products as the "Gray Whale Teaching Kit," "Whale Fact Pack," and "Dolphin Fact Pack," which are widely available for use in classrooms. There is a cetacean research library at the national headquarters in San Pedro, California, and the organization responds to thousands of inquiries every year. The ACS supports marine mammal research and sponsors a biennial conference on whales. It also assists in conducting whale-watching tours.

The organization also engages in more traditional and direct forms of political action. A representative in Washington, DC, monitors legislation that might affect cetaceans, attends hearings at government agencies, and participates as a member of the International Whaling Commission. The ACS also networks with other conservation groups. In addition, the ACS directs letter-writing campaigns, sending out "Action Alerts" to citizens and politicians. The organization is currently emphasizing the threats to marine life posed by oil spills, toxic wastes from industry and agriculture, and particular fishing practices (including commercial whaling).

The ACS publishes a quarterly newsletter on whale research, conservation, and education, called WhaleNews, and a quarterly journal of scientific articles on the same subjects, called Whalewatcher.

[Douglas Smith]

RESOURCES

ORGANIZATIONS
American Cetacean Society, P.O. Box 1391, San Pedro, CA USA 90733-1391 (310) 548-6279, Fax: (310) 548-6950, Email: [email protected],

American Committee for International Conservation

The American Committee for International Conservation (ACIC), located in Washington, DC, is an association of nongovernmental organizations (NGOs) concerned about international conservation issues. The ACIC, founded in 1930, includes 21 member organizations and represents conservation groups and individuals in 40 countries. While the ACIC does not fund conservation research, it does promote national and international conservation research activities. Specifically, the ACIC promotes conservation and preservation of wildlife and other natural resources, and encourages international research on the ecology of endangered species. Formerly called the American Committee for International Wildlife Protection, the ACIC assists IUCN-The World Conservation Union, an independent organization of nations, states, and NGOs, in promoting natural resource conservation. The ACIC also coordinates its members' overseas research activities.

Member organizations of the ACIC include the African Wildlife Leadership Foundation, National Wildlife Federation, World Wildlife Fund (US)/RARE, Caribbean Conservation Corporation, National Audubon Society, Natural Resources Defense Council, Nature Conservancy, International Association of Fish and Wildlife Agencies, and National Parks and Conservation Association. Members also include The Conservation Foundation; International Institute for Environment and Development; Massachusetts Audubon Society; Chicago Zoological Society; Wildlife Preservation Trust; Wildfowl Trust; School of Natural Resources, University of Michigan; World Resources Institute; Global Tomorrow Coalition; and The Wildlife Society, Inc.

The ACIC holds no formal meetings or conventions, nor does it publish magazines, books, or newsletters. Contact: American Committee for International Conservation, c/o Center for Marine Conservation, 1725 DeSales Street, NW, Suite 500, Washington, DC 20036.

[Linda Rehkopf]

American Farmland Trust

Headquartered in Washington, DC, the American Farmland Trust (AFT) is an advocacy group for farmers and farmland. It was founded in 1980 to help reverse, or at least slow, the rapid decline in the number of productive acres nationwide, and it is particularly concerned with protecting land held by private farmers. The principles that motivate the AFT are perhaps best summarized in a line from William Jennings Bryan that the organization has often quoted: "Destroy our farms, and the grass will grow in the streets of every city in the country."

Over one million acres (404,700 ha) of farmland in the United States are lost each year to development, according to the AFT, and in Illinois one and a half bushels of topsoil are lost for every bushel of corn produced. The AFT argues that such a decline poses a serious threat to the future of the American economy. As farmers are forced to cultivate increasingly marginal land, food will become more expensive, and the United States could become a net importer of agricultural products, damaging its international economic position. The organization believes that a declining farm industry would also affect American culture, depriving the country of traditional products such as cherries, cranberries, and oranges and imperiling a sense of national identity that is still in many ways agricultural.

The AFT works closely with farmers, business people, legislators, and environmentalists "to encourage sound farming practices and wise use of land." The group directs lobbying efforts in Washington, working with legislators and policymakers and frequently testifying at congressional and public hearings on issues related to farming. In addition to mediating between farmers and state and federal government, the trust is also involved in political organizing at the grassroots level, conducting public opinion polls, contesting proposals for incinerators and toxic waste sites, and drafting model conservation easements.
They conduct workshops and seminars across the country to discuss farming methods and soil conservation programs, and they worked with the State of Illinois to establish the Illinois Sustainable Agriculture Society. The group is currently developing kits for distribution to schoolchildren in both rural and urban areas, called "Seed for the Future," which teach the benefits of agriculture and help each child grow a plant.

The AFT has a reputation for innovative and determined efforts to realize its goals, and former Secretary of Agriculture John R. Block has said that "this organization has probably done more than any other to preserve the American farm." Since its founding the trust has been instrumental in protecting nearly 30,000 acres (12,140 ha) of farmland in 19 states. In 1989, the group protected a 507-acre (205-ha) cherry farm known as the Murray Farm in Michigan, and it has helped preserve 300 acres (121 ha) of farm and wetlands in Virginia's Tidewater region. The AFT continues to battle urban sprawl in areas such as California's Central Valley and Berks County, Pennsylvania, as well as working to support farms in states such as Vermont, which are threatened not so much by development as by a poor agricultural economy. The AFT promotes a wetland policy that is fair to farmers while meeting environmental standards, and it recently won a national award from the Soil and Water Conservation Society for its publication Does Farmland Protection Pay?

The AFT has 20,000 members and an annual budget of $3,850,000. The trust publishes a quarterly magazine called American Farmland, a newsletter called Farmland Update, and a variety of brochures and pamphlets offering practical information on soil erosion, the cost of community services, and estate planning. They also distribute videos, including The Future of America's Farmland, which explains the sale and purchase of development rights.

[Douglas Smith]

RESOURCES ORGANIZATIONS The American Farmland Trust (AFT), 1200 18th Street, NW, Suite 800, Washington, D.C. USA 20036 (202) 331-7300, Fax: (202) 659-8339, Email: [email protected],

American Forests

Located in Washington, DC, American Forests was founded in 1875, during the early days of the American conservation movement, to encourage forest management. Originally called the American Forestry Association, the organization was renamed in the later part of the twentieth century. The group is dedicated to promoting the wise and careful use of all natural resources, including soil, water, and wildlife, and it emphasizes the social and cultural importance of these resources as well as their economic value. Although benefiting from increasing national and international concern about the environment, American Forests takes a balanced view on preservation, and it has worked to set a standard for the responsible harvesting and marketing of forest products. American Forests sponsors the Trees for People program, which is designed to help meet the national demand for wood and paper products by increasing the productivity of private woodlands. It provides educational
and technical information to individual forest owners, as well as making recommendations to legislators and policymakers in Washington.

To draw attention to the greenhouse effect, American Forests inaugurated its Global ReLeaf program in October 1988. Global ReLeaf is what American Forests calls "a tree-planting crusade." The message is, "Plant a tree, cool the globe," and Global ReLeaf has organized a national campaign challenging Americans to plant millions of trees. American Forests has gained the support of government agencies and local conservation groups for this program, as well as many businesses, including such Fortune 500 companies as Texaco, McDonald's, and Ralston-Purina. The goal of the project is to plant 20 million trees by 2002; by August 2001, 19 million trees had been planted. Global ReLeaf also launched a cooperative effort with the American Farmland Trust called Farm ReLeaf, and it has participated in the campaign to preserve Walden Woods in Massachusetts. In 1991 American Forests brought Global ReLeaf to Eastern Europe, running a workshop in Budapest, Hungary, for environmental activists from many former communist countries.

American Forests has been extensively involved in the controversy over the preservation of old-growth forests in the American Northwest. The group has been working with environmentalists and representatives of the timber industry, and consistent with the history of the organization, it is committed to a compromise that both sides can accept: "If we have to choose between preservation and destruction of old-growth forests as our only options, neither choice will work." American Forests supports an approach to forestry known as New Forestry, in which the priority is no longer the quantity of wood or the number of board feet that can be removed from a site, but the vitality of the ecosystem the timber industry leaves behind.
The organization advocates the establishment of an Old Growth Reserve in the Pacific Northwest, which would be managed by the principles of New Forestry under the supervision of a Scientific Advisory Committee. American Forests publishes the National Registry of Big Trees, which celebrated its sixtieth anniversary in 2000. The registry is designed to encourage the appreciation of trees, and it includes such trees as the recently fallen Dyerville Giant, a redwood tree in California; the General Sherman, a giant sequoia in California; and the Wye Oak in Maryland. The group also publishes American Forests, a bimonthly magazine, and Resource Hotline, a biweekly newsletter, as well as Urban Forests: The Magazine of Community Trees. It presents the Annual Distinguished Service Award, the John Aston Warder Medal, and the William B. Greeley Award, among others. American Forests has over 35,000 members, a staff of 21, and a budget of $2,725,000. [Douglas Smith]

RESOURCES ORGANIZATIONS American Forests, P.O. Box 2000, Washington, DC USA 20013 (202) 955-4500, Fax: (202) 955-4588, Email: [email protected],

American Indian Environmental Office

The American Indian Environmental Office (AIEO) was created to increase the quality of public health and environmental protection on Native American land and to expand tribal involvement in running environmental programs. Native Americans are the second-largest landholders in the United States after the federal government. Their land is often threatened by environmental degradation such as strip mining, clearcutting, and toxic waste storage. The AIEO, with the help of the President’s Federal Indian Policy (January 24, 1983), works closely with the U.S. Environmental Protection Agency (EPA) to prevent further degradation of the land. The AIEO has received grants from the EPA for environmental cleanup and obtained a written policy that requires the EPA to continue with the trust responsibility, a clause expressed in certain treaties that requires the EPA to notify a tribe when performing any activities that may affect reservation lands or resources. This involves consulting with tribal governments, providing technical support, and negotiating EPA regulations to ensure that tribal facilities eventually comply. The pollution of Diné (Navajo) Reservation land is an example of an environmental injustice that the AIEO wants to prevent in the future. The reservation has over 1,000 abandoned uranium mines that leak radioactive contaminants and is also home to the largest coal strip mine in the world. The cancer rate for the Diné people is 17 times the national average. To help tribes with pollution problems similar to those of the Diné, several offices now exist that handle specific environmental projects. They include the Office of Water, Air, Environmental Justice, Pesticides and Toxic Substances; Performance Partnership Grants; Solid Waste and Emergency Response; and the Tribal Watershed Project. Each of these offices reports to the National Indian Headquarters in Washington, DC.
At the Rio Earth Summit in 1992, the Biodiversity Convention was drawn up to protect the diversity of life on the planet. Many Native American groups believe that the convention also covered the protection of indigenous communities, including Native American land. In addition, the groups demand that prospecting by large companies for rare forms of life and materials on their land must stop.

Tribal Environmental Concerns

Tribal governments face both economic and social problems dealing with the demand for jobs, education, health care, and housing for tribal members. Often the reservations'
largest employer is the government, which owns the stores, gaming operations, timber mills, and manufacturing facilities. Therefore, the government must balance the conflicting interests of economic development and environmental protection. Many tribes are becoming self-governing and manage their own natural resources, while also claiming the reserved right to use natural resources on portions of public land that border their reservations. Under these reserved treaty rights, Native Americans can use water and can fish and hunt at any time on nearby federal land. Robert Belcourt, Chippewa-Cree tribal member and director of the Natural Resources Department in Montana, stated: “We have to protect nature for our future generations. More of our Indian people need to get involved in natural resource management on each of our reservations. In the long run, natural resources will be our bread and butter by our developing them through tourism and recreation and just by the opportunity they provide for us to enjoy the outdoor world.” Belcourt has fought to dispel the negative stereotypes of conservation organizations that exist among Native Americans who believe, for example, that conservationists are extreme tree-huggers and insensitive to Native American culture. These stereotypes are a result of cultural differences in philosophy, perspective, and communication. To work together effectively, tribes and conservation groups need to learn about one another’s cultures, and this means they must listen both at meetings and in one-on-one exchanges. The AIEO also addresses the organizational differences that exist between tribal governments and conservation organizations. They differ greatly in terms of style, motivation, and the pressures they face. Pressures on the Wilderness Society, for example, include fending off attempts in Washington, D.C., to weaken key environmental laws and securing members and raising funds.
Pressures on tribal governments more often are economic and social in nature and have to do with the need to provide jobs, health care, education, and housing for tribal members. Because tribal governments are often the reservations’ largest employers and may own businesses like gaming operations, timber mills, manufacturing facilities, and stores, they function as both governors and leaders in economic development. Native Americans currently occupy and control over 52 million acres (21.3 million ha) in the continental United States and 45 million more acres (18.5 million ha) in Alaska, yet this is only a small fraction of their original territories. In the nineteenth century, many tribes were confined to reservations that were perceived to have little economic value, although valuable natural resources have subsequently been found on some of these lands. Pointing to their treaties and other agreements with the federal government, many
tribes assert that they have reserved rights to use natural resources on portions of public land. In previous decades these natural resources on tribal lands were managed by the Bureau of Indian Affairs (BIA). Now many tribes are becoming self-governing and are taking control of management responsibilities within their own reservation boundaries. In addition, some tribes are pushing to take back management over some federally managed lands that were part of their original territories. For example, the Confederated Salish and Kootenai tribes of the Flathead Reservation are taking steps to assume management of the National Bison Range, which lies within the reservation’s boundaries and is currently managed by the U.S. Fish and Wildlife Service. Another issue concerns Native American rights to water. There are legal precedents that support the practice of reserved rights to water that is within or bordering a reservation. In areas where tribes fish for food, mining pollution has been a continuing threat to maintaining clean water. Mining pollution is monitored, but the amount of fish that Native Americans consume is higher than the government acknowledges when setting health guidelines for their consumption. This is why the AIEO is asking that stricter regulations be imposed on mining companies. As tribes increasingly exercise their rights to use and consume water and fish, their roles in natural resource debates will increase. Many tribes are establishing their own natural resource management and environmental quality protection programs with the help of the AIEO. Tribes have established fisheries, wildlife, forestry, water quality, waste management, and planning departments. Some tribes have prepared comprehensive resource management plans for their reservations while others have become active in the protection of particular species. The AIEO is uniting tribes and raising their level of involvement in improving environmental protection on Native American land.
[Nicole Beatty]

RESOURCES ORGANIZATIONS American Indian Environmental Office, 1200 Pennsylvania Avenue, NW, Washington, D.C. USA 20460 (202) 564-0303, Fax: (202) 564-0298,

American Oceans Campaign

Located in Los Angeles, California, the American Oceans Campaign (AOC) was founded in 1987 as a political interest group dedicated primarily to the restoration, protection, and preservation of the health and vitality of coastal waters, estuaries, bays, wetlands, and oceans. More national and conservationist (rather than international and preservationist) in its focus than other groups with similar concerns, the AOC tends to view the oceans as a valuable resource whose use should be managed carefully. As current president Ted Danson puts it, the oceans must be regarded as far more than a natural preserve by environmentalists; rather, healthy oceans “sustain biological diversity, provide us with leisure and recreation, and contribute significantly to our nation’s GNP.” The AOC’s main political efforts reflect this focus. Central to the AOC’s lobbying strategy is a desire to build cooperative relations and consensus among the general public, public interest groups, private sector corporations and trade groups, and public/governmental authorities around responsible management of ocean resources. The AOC is also active in grassroots public awareness campaigns through mass media and community outreach programs. This high-profile media campaign has included the production of a series of informational bulletins (Public Service Announcements) for use by local groups, as well as active involvement in the production of several documentary television series that have been broadcast on both network and public television. The AOC also has developed extensive connections with both the news and entertainment industries, frequently scheduling appearances by various celebrity supporters such as Jamie Lee Curtis, Whoopi Goldberg, Leonard Nimoy, Patrick Swayze, and Beau Bridges. As a lobbying organization, the AOC has developed contacts with government leaders at all levels from local to national, attempting to shape and promote a variety of legislation related to clean water and oceans. It has been particularly active in lobbying for strengthening various aspects of the Clean Water Act, the Safe Drinking Water Act, the Oil Pollution Act, and the Ocean Dumping Ban Act.
The AOC regularly provides consultation services, assistance in drafting legislation, and occasional expert testimony on matters concerning ocean ecology. Recently this has included AOC Political Director Barbara Polo’s testimony before the U.S. House of Representatives Subcommittee on Fisheries, Conservation, Wildlife, and Oceans on the substance and effect of legislation concerning the protection of coral reef ecosystems. Also very active at the grassroots level, AOC has organized numerous cleanup operations which both draw attention to the problems caused by ocean dumping and make a practical contribution to reversing the situation. Concentrating its efforts in California and the Pacific Northwest, the AOC launched its “Dive for Trash” program in 1991. As many as 1,000 divers may team up at AOC-sponsored events to recover garbage from the coastal waters. In cooperation with the U.S. Department of Commerce’s National
Marine Sanctuary Program, the AOC is planning to add a marine environmental assessment component to this diving program, and to expand the program into Gulf and Atlantic coastal waters. Organizationally, the AOC divides its political and lobbying activity into three separate substantive policy areas: “Critical Oceans and Coastal Habitats,” which includes issues concerning estuaries, watersheds, and wetlands; “Coastal Water Pollution,” which focuses on beach water quality and the effects of storm water runoff, among other issues; and “Living Resources of the Sea,” which includes coral reefs, fisheries, and marine mammals (especially dolphins). Activities in all these areas have run the gamut from public and legislative information campaigns to litigation. The AOC has been particularly active along the California coastline and has played a central role in various programs aimed at protecting coastal wetland ecosystems from development and pollution. It has also been active in the Santa Monica Bay Restoration Project, which seeks to restore environmental balance to Santa Monica Bay. Typical of the AOC’s multi-level approach, this project combines a program of public education and citizen (and celebrity) involvement with the monitoring and reduction of private-sector pollution and with the conducting of scientific studies on the impact of various activities in the surrounding area. These activities are also combined with an attempt to raise alternative revenues to replace funds recently lost due to the reduction of both federal (National Estuary Program) and state government support for the conservation of coastal and marine ecosystems. In addition, the AOC has been involved in litigation against the County of Los Angeles over a plan to build flood control barriers along a section of the Los Angeles River. AOC’s major concern is that these barriers will increase the amount of polluted storm water runoff being channeled into coastal waters.
The AOC contends that, with prudent management, this storm water could instead recharge Southern California’s scarce water resources by being stored or redirected into underground aquifers before it becomes polluted. In February 2002, AOC teamed up with a new nonprofit ocean advocacy organization called Oceana. The focus of this partnership is the Oceans at Risk program, which concentrates on the impact that wasteful fisheries have on the marine environment. [Lawrence J. Biskowski]

RESOURCES ORGANIZATIONS American Oceans Campaign, 6030 Wilshire Blvd Suite 400, Los Angeles, CA USA 90036 (323) 936-8242, Fax: (323) 936-2320, Email: [email protected],

American Wildlands

American Wildlands (AWL) is a nonprofit wildland resource conservation and education organization founded in 1977. AWL is dedicated to protecting and promoting proper management of America’s publicly owned wild areas and to securing wilderness designation for public land areas. The organization has played a key role in gaining legal protection for many wilderness and river areas in the U.S. interior west and in Alaska. Founded as the American Wilderness Alliance, AWL is involved in a wide range of wilderness resource issues and programs including timber management policy reform, habitat corridors, rangeland management policy reform, riparian and wetlands restoration, and public land management policy reform. AWL promotes ecologically sustainable uses of public wildlands resources including forests, wilderness, wildlife, fisheries, and rivers. It pursues this mission through grassroots activism, technical support, public education, litigation, and political advocacy. AWL maintains three offices: the central Rockies office in Lakewood, Colorado; the northern Rockies office in Bozeman, Montana; and the Sierra-Nevada office in Reno, Nevada. The organization’s annual budget of $350,000 has been stable for many years, but with programs that are now being considered for addition to its agenda, that figure is expected to increase over the next few years. The central Rockies office considers timber management reform its main concern. It has launched the Timber Management Reform Policy Program, which monitors the U.S. Forest Service and works toward better management of public forests. Since its initiation in 1986, the program has included resource specialists (a wildlife biologist, a forester, a water specialist, and an aquatic biologist), all of whom report to an advisory council. A major victory of this program was stopping the sale of 4.2 million board feet (about 9,900 m³) of timber near the Electric Peak Wilderness Area.
Other programs coordinated by the central Rockies office include:
1) The Corridors of Life Program, which identifies and maps wildlife corridors, land areas essential to the genetic interchange of wildlife that connect roadless lands or other wildlife habitat areas. Areas targeted are in the interior West, including Montana, North and South Dakota, Wyoming, and Idaho.
2) The Rangeland Management Policy Reform Program, which monitors grazing allotments and files appeals as warranted. An education component teaches citizens to monitor grazing allotments and to use the appeals process within the U.S. Forest Service and Bureau of Land Management.
3) The Recreation-Conservation Connection, which, through newsletters and travel-adventure programs, teaches the public how to enjoy the outdoors without destroying nature. Six hundred travelers have participated in ecotourism trips through AWL.

AWL is also active internationally. The AWL/Leakey Fund has aided Dr. Richard Leakey’s wildlife habitat conservation and elephant poaching elimination efforts in Kenya. A partnership with the Island Foundation has helped fund wildlands and river protection efforts in Patagonia, Argentina. AWL also is an active member of Canada’s Tatshenshini International Coalition to protect that river and its 2.3 million acres (930,780 ha) of wilderness. [Linda Rehkopf]

RESOURCES ORGANIZATIONS American Wildlands, 40 East Main #2, Bozeman, MT USA 59715 (406) 586-8175, Fax: (406) 586-8242, Email: [email protected],

Ames test

A laboratory test developed by biochemist Bruce N. Ames to determine the possible carcinogenic nature of a substance. The Ames test uses a particular strain of the bacterium Salmonella typhimurium that lacks the ability to synthesize histidine and is therefore very sensitive to mutation. The bacteria are inoculated into a medium deficient in histidine but containing the test compound. If the compound causes DNA damage with subsequent mutations, some of the bacteria will regain the ability to synthesize histidine and will proliferate to form colonies. The culture is evaluated on the basis of the number of mutated bacterial colonies it produces; a substance that yields many mutant colonies is classified as mutagenic. The Ames test is a test for mutagenicity, not carcinogenicity. However, approximately nine out of 10 mutagens are indeed carcinogenic. Therefore, a substance shown to be mutagenic by the Ames test can be reliably classified as a suspected carcinogen and recommended for further study. [Brian R. Barthel]

RESOURCES BOOKS Taber, C. W. Taber’s Cyclopedic Medical Dictionary. Philadelphia: F. A. Davis, 1990. Turk J., and A. Turk. Environmental Science. Philadelphia: W. B. Saunders, 1988.


Amoco Cadiz

This shipwreck in March 1978 off the Brittany coast was the first major supertanker accident since the Torrey Canyon 11 years earlier. Ironically, this spill, more than twice the size of the Torrey Canyon, blackened some of the same shores and was one of four substantial oil spills there since 1967. It received great scientific attention because it occurred near several renowned marine laboratories. The cause of the wreck was a steering failure as the ship entered the English Channel off the northwest Brittany coast, and failure to act swiftly enough to correct it. During the next 12 hours, the Amoco Cadiz could not be extricated from the site. In fact, three separate lines from a powerful tug broke during attempts to pull the tanker free before it drifted onto rocky shoals. Eight days later the Amoco Cadiz split in two. Seabirds seemed to suffer the most from the spill, although the oil devastated invertebrates within the extensive, 20–30 ft (6–9 m) high intertidal zone. Thousands of birds died in a bird hospital described by one oil spill expert as a bird morgue. Thirty percent of France’s seafood production was threatened, as well as an extensive kelp crop, harvested for fertilizer, mulch, and livestock feed. However, except on oyster farms located in inlets, most of the impact was restricted to the few months following the spill. In an extensive journal article, Erich Gundlach and others reported studies on where the oil went and summarized the findings of biologists. Of the 223,000 metric tons released, 13.5% was incorporated within the water column, 8% went into subtidal sediments, 28% washed into the intertidal zone, 20–40% evaporated, and 4% was altered while at sea. Much research was done on chemical changes in the hydrocarbon fractions over time, including those taken up within organisms. Researchers found that during early phases, biodegradation was occurring as rapidly as evaporation.
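The mass balance reported above can be translated into approximate tonnages. The short Python sketch below does the arithmetic; taking the midpoint of the 20–40% evaporation range is an assumption made here for illustration only.

```python
# Approximate fate of the 223,000 metric tons of oil spilled by the
# Amoco Cadiz, using the percentages reported in the text. The 20-40%
# evaporation figure is taken at its 30% midpoint (an assumption).
TOTAL_TONS = 223_000

fates = {
    "water column": 13.5,
    "subtidal sediments": 8.0,
    "intertidal zone": 28.0,
    "evaporated (midpoint of 20-40%)": 30.0,
    "altered at sea": 4.0,
}

for fate, percent in fates.items():
    tons = TOTAL_TONS * percent / 100
    print(f"{fate}: {tons:,.0f} metric tons")

# The fractions do not sum to 100%; the remainder is unaccounted for.
accounted = sum(fates.values())
print(f"accounted for: {accounted:.1f}% of the spill")
```

Even with the midpoint assumption, the reported fractions leave roughly a sixth of the oil unaccounted for, which is typical of spill mass balances of that era.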
The cleanup efforts of thousands of workers were helped by storm and wave action that removed much of the stranded oil. High-energy waves maintained an adequate supply of nutrients and oxygenated water, which provided optimal conditions for biodegradation. This is important because most of the biodegradation was done by aerobic organisms. Except for protected inlets, much of the impact was gone three years later, but some effects were expected to last a decade. [Nathan H. Meleen]

RESOURCES PERIODICALS Grove, N. “Black Day for Brittany: Amoco Cadiz Wreck.” National Geographic 154 (July 1978): 124–135.

The Amoco Cadiz oil spill in the midst of being contained. (Photograph by Leonard Freed. Magnum Photos, Inc. Reproduced by permission.)

Gundlach, E. R., et al. “The Fate of Amoco Cadiz Oil.” Science 221 (8 July 1983): 122–129. Schneider, E. D. “Aftermath of the Amoco Cadiz: Shoreline Impact of the Oil Spill.” Oceans 11 (July 1978): 56–9. Spooner, M. F., ed. Amoco Cadiz Oil Spill. New York: Pergamon, 1979. (Reprint of Marine Pollution Bulletin, v. 9, no. 11, 1978)

Cleveland Amory (1917 – 1998)

American activist and writer. Amory is known both for his series of classic social history books and his work with the Fund for Animals. Born in Nahant, Massachusetts, to an old Boston family, Amory attended Harvard University, where he became editor of The Harvard Crimson. This prompted his well-known remark, “If you have been editor of The Harvard Crimson in your senior year at Harvard, there is very little, in after life, for you.” Amory was hired by The Saturday Evening Post after graduation, becoming the youngest editor ever to join that publication. He worked as an intelligence officer in the United States Army during World War II, and in the years after the war, wrote a trilogy of social commentary books, now considered to be classics. The Proper Bostonians was
published to critical acclaim in 1947, followed by The Last Resorts (1948), and Who Killed Society? (1960), all of which became best sellers. Beginning in 1952, Amory served for 11 years as social commentator on NBC’s “The Today Show.” The network fired him after he spoke out against cruelty to animals used in biomedical research. From 1963 to 1976, Amory served as a senior editor and columnist for Saturday Review magazine, while doing a daily radio commentary, entitled “Curmudgeon-at-Large.” He was also chief television critic for TV Guide, where his biting attacks on sport hunting angered hunters and generated bitter but unsuccessful campaigns to have him fired. In 1967, Amory founded The Fund for Animals “to speak for those who can’t,” and served as its unpaid president. Animal protection became his passion and his life’s work, and he was considered one of the most outspoken and provocative advocates of animal welfare. Under his leadership, the Fund became a highly activist and controversial group, engaging in such activities as confronting hunters of whales and seals, and rescuing wild horses, burros, and goats. The Fund, and Amory in particular, are well known for their campaigns against sport hunting and trapping, the fur industry, abusive research on animals, and other activities and industries that engage in or encourage what they consider cruel treatment of animals. In 1975, Amory published ManKind? Our Incredible War on Wildlife, using humor, sarcasm, and graphic rhetoric to attack hunters, trappers, and other exploiters of wild animals. The book was praised by The New York Times in a rare editorial. His next book, AniMail, (1976) discussed animal issues in a question-and-answer format. In 1987, he wrote The Cat Who Came for Christmas, a book about a stray cat he rescued from the streets of New York, which became a national best seller. This was followed in 1990 by its sequel, also a best seller, The Cat and the Curmudgeon. 
Amory had been a senior contributing editor of Parade magazine since 1980, where he often profiled famous personalities. Amory died of an aneurysm at the age of 81 on October 14, 1998. He remained active right up until the end, spending the day in his office at the Fund for Animals and then passing away in his sleep later that evening. Staffers at the Fund for Animals have vowed that Amory’s work will continue, “just the way Cleveland would have wanted it.” [Lewis G. Regenstein]

RESOURCES BOOKS Amory, C. The Cat and the Curmudgeon. New York: G. K. Hall, 1991. ———. The Cat Who Came for Christmas. New York: Little Brown, 1987.


Cleveland Amory. (The Fund for Animals. Reproduced by permission.)

PERIODICALS Pantridge, M. “The Improper Bostonian.” Boston Magazine 83 (June 1991): 68–72.

Anaerobic

This term refers to an environment lacking in molecular oxygen (O2), or to an organism, tissue, chemical reaction, or biological process that does not require oxygen. Anaerobic organisms can use a molecule other than O2 as the terminal electron acceptor in respiration. These organisms can be either obligate, meaning that they cannot use O2, or facultative, meaning that they do not require oxygen but can use it if it is available. Organic matter decomposition in poorly aerated environments, including water-logged soils, septic tanks, and anaerobically operated waste treatment facilities, produces large amounts of methane gas. The methane can become an atmospheric pollutant, or it may be captured and used for fuel, as in “biogas”-powered electrical generators. Anaerobic decomposition produces the notorious “swamp gases” that have been reported as unidentified flying objects (UFOs).

Anaerobic digestion

Refers to the biological degradation of either sludges or solid waste under anaerobic conditions, meaning that no oxygen is present. In the digestive process, solids are converted to noncellular end products. In the anaerobic digestion of sludges, the goals are to reduce sludge volume, ensure the remaining solids are chemically stable, reduce disease-causing pathogens, and enhance the effectiveness of subsequent dewatering methods, sometimes recovering methane as a source of energy. Anaerobic digestion is commonly used to treat sludges that contain primary sludges, such as that from the first settling basins in a wastewater treatment plant, because the process is capable of stabilizing the sludge with little biomass production, a significant benefit over aerobic sludge digestion, which would yield more biomass in digesting the relatively large amount of biodegradable matter in primary sludge. The microorganisms responsible for digesting the sludges anaerobically are often classified in two groups, the acid formers and the methane formers. The acid formers are microbes that create, among others, acetic and propionic acids from the sludge. These chemicals generally make up about a third of the by-products initially formed based on a chemical oxygen demand (COD) mass balance, and some of the propionic and other acids are converted to acetic acid. The methane formers convert the acids and by-products resulting from prior metabolic steps (e.g., alcohols, hydrogen, carbon dioxide) to methane. Often, approximately 70% of the methane formed is derived from acetic acid and about 10–15% from propionic acid. Anaerobic digesters are designed as either standard- or high-rate units. The standard-rate digester has a solids retention time of 30–90 days, as opposed to 10–20 days for the high-rate systems. The volatile solids loadings of the standard- and high-rate systems are in the area of 0.5–1.6 and 1.6–6.4 kg/m³/d, respectively.
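These volatile-solids loading rates imply the tank volume needed for a given sludge stream (volume = daily volatile-solids load divided by the volumetric loading). The Python sketch below illustrates the arithmetic; the 500 kg/day load and the midrange loadings chosen are hypothetical examples, not values from the text.

```python
def digester_volume_m3(vs_load_kg_per_day, loading_kg_per_m3_day):
    """Digester volume implied by a volatile-solids loading rate.

    vs_load_kg_per_day    -- volatile solids fed to the digester, kg/day
    loading_kg_per_m3_day -- design loading, kg/m^3/day
                             (text: 0.5-1.6 standard-rate, 1.6-6.4 high-rate)
    """
    return vs_load_kg_per_day / loading_kg_per_m3_day

# Hypothetical 500 kg VS/day stream at roughly midrange loadings:
standard = digester_volume_m3(500, 1.0)  # standard-rate
high = digester_volume_m3(500, 4.0)      # high-rate
print(f"standard-rate: {standard:.0f} m^3, high-rate: {high:.0f} m^3")
```

The same load requires a tank several times smaller at high-rate loadings, which is why high-rate units are mixed and heated: the extra energy input buys a large reduction in reactor volume.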
The amount of sludge introduced into a standard-rate unit is therefore generally much less than into a high-rate system. Standard-rate digestion is accomplished in single-stage units, meaning that sludge is fed into a single tank and allowed to digest and settle. High-rate units are often designed as two-stage systems in which sludge enters a completely mixed first stage that is stirred and heated to approximately 95°F (35°C) to speed digestion. The second-stage digester, which separates digested sludge from the overlying liquid and scum, is not heated or mixed. With the anaerobic digestion of solid waste, the primary goal is generally to produce methane, a valuable source
of fuel that can be burned to provide heat or used to power motors. There are basically three steps in the process. The first involves preparing the waste for digestion by sorting the waste and reducing its size. The second consists of constantly mixing the sludge, adding moisture, nutrients, and pH neutralizers while heating it to about 140°F (60°C) and digesting the waste for a week or longer. In the third step, the generated gas is collected and sometimes purified, and digested solids are disposed of. For each pound of undigested solid, about 8–12 ft³ of gas is formed, of which about 60% is methane. [Gregory D. Boardman]
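The rule-of-thumb figures above (8–12 ft³ of gas per pound of solids digested, roughly 60% methane) combine into a quick estimate of recoverable methane. The Python sketch below is illustrative only; the 1,000-lb input is a made-up example, not a value from the text.

```python
def methane_yield_ft3(solids_lb, gas_per_lb=10.0, methane_fraction=0.60):
    """Estimate methane produced by anaerobic digestion of solid waste.

    solids_lb        -- pounds of solids digested
    gas_per_lb       -- total gas yield, ft^3 per lb (text gives 8-12)
    methane_fraction -- methane share of the gas (text gives about 60%)
    """
    total_gas = solids_lb * gas_per_lb   # ft^3 of raw digester gas
    return total_gas * methane_fraction  # ft^3 of methane

# Hypothetical 1,000 lb of solids at the low and high gas yields:
low = methane_yield_ft3(1000, gas_per_lb=8.0)
high = methane_yield_ft3(1000, gas_per_lb=12.0)
print(f"methane: {low:,.0f}-{high:,.0f} ft^3")
```

A ton of digested solids thus yields on the order of ten thousand cubic feet of methane, which is the basis for the entry's description of digester gas as a worthwhile fuel.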

RESOURCES BOOKS Corbitt, R. A. Standard Handbook of Environmental Engineering. New York: McGraw-Hill, 1990. Davis, M. L., and D. A. Cornwell. Introduction to Environmental Engineering. New York: McGraw-Hill, 1991. Viessman, W., Jr., and M. J. Hammer. Water Supply and Pollution Control. 5th ed. New York: Harper Collins, 1993.

Anemia

Anemia is a medical condition in which the red cells of the blood are reduced in number or volume or are deficient in hemoglobin, their oxygen-carrying pigment. Almost 100 different varieties of anemia are known. Iron deficiency is the most common cause of anemia worldwide. Other causes of anemia include ionizing radiation, lead poisoning, vitamin B12 deficiency, folic acid deficiency, certain infections, and pesticide exposure. Some 350 million people worldwide—mostly women of child-bearing age—suffer from anemia. The most noticeable symptom is pallor of the skin, mucous membranes, and nail beds. Symptoms of tissue oxygen deficiency include pulsating noises in the ear, dizziness, fainting, and shortness of breath. The treatment varies greatly depending on the cause and diagnosis, but may include supplying missing nutrients, removing toxic factors from the environment, treating the underlying disorder, or restoring blood volume with transfusion. Aplastic anemia is a disease in which the bone marrow fails to produce an adequate number of blood cells. It is usually acquired through exposure to certain drugs, to toxins such as benzene, or to ionizing radiation. Aplastic anemia from radiation exposure is well-documented from the Chernobyl experience. Bone marrow changes typical of aplastic anemia can occur several years after the exposure to the offending agent has ceased.

Environmental Encyclopedia 3


Aplastic anemia can manifest itself abruptly and progress rapidly; more commonly it is insidious and chronic for several years. Symptoms include weakness and fatigue in the early stages, followed by headaches, shortness of breath, fever, and a pounding heart. Usually a waxy pallor and hemorrhages occur in the mucous membranes and skin. Resistance to infection is lowered and becomes the major cause of death. While spontaneous recovery occurs occasionally, the treatment of choice for severe cases is bone marrow transplantation. Marie Curie, who discovered the element radium and did early research into radioactivity, died in 1934 of aplastic anemia, most likely caused by her exposure to ionizing radiation.

While lead poisoning, which leads to anemia, is usually associated with occupational exposure, toxic amounts of lead can also leach from imported ceramic dishes. Other environmental sources of lead exposure include old paint or paint dust, and drinking water pumped through lead pipes or lead-soldered pipes.

Cigarette smoke is known to cause an increase in the level of hemoglobin in smokers, which leads to an underestimation of anemia in smokers. Studies suggest that carbon monoxide (a by-product of smoking) chemically binds to hemoglobin, causing a significant elevation of hemoglobin values. Hemoglobin values adjusted to compensate for this effect now make it possible to detect anemia in smokers that would otherwise be missed.

[Linda Rehkopf]

RESOURCES
BOOKS
Harte, J., et al. Toxics A to Z. Berkeley: University of California Press, 1991.
Stuart-Macadam, P., ed. Diet, Demography and Disease: Changing Perspectives on Anemia. Hawthorne: Aldine de Gruyter, 1992.
PERIODICALS
Nordenberg, D., et al. "The Effect of Cigarette Smoking on Hemoglobin Levels and Anemia Screening." Journal of the American Medical Association (26 September 1990): 1556.

Animal cancer tests
Cancer causes more loss of life-years than any other disease in the United States. At first reading, this statement seems to be in error. Does not cardiovascular disease cause more deaths? The answer to that rhetorical question is "yes." However, many deaths from heart attack and stroke occur in the elderly. The loss of life-years of an 85-year-old person (whose life expectancy at the time of his/her birth was between 55 and 60) is, of course, zero. However, the loss of life-years of a child of 10 who dies of a pediatric leukemia is between 65 and 70 years. This comparison of youth with the elderly is not meant in any way to demean the value that reasonable people place on the lives of the elderly. Rather, the comparison is made to emphasize the great loss of life due to malignant tumors.

The chemical causation of cancer is not a simple process. Many, perhaps most, chemical carcinogens do not in their usual condition have the potency to cause cancer. The non-cancer-causing form of the chemical is called a "procarcinogen." Procarcinogens are frequently complex organic compounds that the human body attempts to dispose of when ingested. Hepatic enzymes chemically change the procarcinogen in several steps to yield a chemical that is more easily excreted. These chemical changes convert the procarcinogen (with no cancer-forming ability) into the ultimate carcinogen (with cancer-causing competence). Ultimate carcinogens have been shown to have a great affinity for DNA, RNA, and cellular proteins, and it is the interaction of the ultimate carcinogen with these cell macromolecules that causes cancer. It is unfortunate indeed that one cannot look at the chemical structure of a potential carcinogen and predict whether or not it will cause cancer. There is no computer program that will predict what hepatic enzymes will do to procarcinogens and how the metabolized end product(s) will interact with cells.

Great strides have been made in the development of chemotherapeutic agents designed to cure cancer. The drugs have significant efficacy with certain cancers (these include but are not limited to pediatric acute lymphocytic leukemia, choriocarcinoma, Hodgkin's disease, and testicular cancer), and some treated patients attain a normal life span. While this development is heartening, the cancers listed are, for the most part, relatively infrequent. More common cancers such as colorectal carcinoma, lung cancer, breast cancer, and ovarian cancer remain intractable with regard to treatment. For these reasons, animal testing is used in cancer research.
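The loss-of-life-years comparison above can be made concrete with a small sketch (hypothetical code, not from the encyclopedia; the function name and the life-expectancy figure of 77 are illustrative assumptions consistent with the entry's 65–70-year example):

```python
def life_years_lost(age_at_death, life_expectancy):
    """Years of life lost: how far death precedes life expectancy,
    floored at zero (a person who outlives their expectancy loses
    no life-years by this measure)."""
    return max(0, life_expectancy - age_at_death)

# The entry's two cases:
print(life_years_lost(85, 60))  # 0  (85-year-old past a 55-60-year expectancy)
print(life_years_lost(10, 77))  # 67 (child of 10; expectancy of 77 is assumed)
```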
The majority of Americans support the effort of the biomedical community to use animals to identify potential carcinogens, with the hope that such knowledge will lead to a reduction of cancer prevalence. Similarly, they support efforts to develop more effective chemotherapy. Animals are used under the terms of the Animal Welfare Act of 1966 and its several amendments. The act designates the U.S. Department of Agriculture as responsible for the humane care and handling of warm-blooded and other animals used for biomedical research. The act also calls for inspection of research facilities to ensure that adequate food, housing, and care are provided. It is the belief of many that the constraints of the current law have enhanced the quality of biomedical research: poorly maintained animals do not yield quality research results. The law also has enhanced the care of animals used in cancer research.

[Robert G. McKinnell]

Environmental Encyclopedia 3

Animal rights

RESOURCES
PERIODICALS
Abelson, P. H. "Testing for Carcinogens With Rodents." Science 249 (21 September 1990): 1357.
Donnelly, S., and K. Nolan. "Animals, Science, and Ethics." Hastings Center Report 20 (May-June 1990): suppl. 1–32.
Marx, J. "Animal Carcinogen Testing Challenged: Bruce Ames Has Stirred Up the Cancer Research Community." Science 250 (9 November 1990): 743–5.
ORGANIZATIONS
Animal Legal Defense Fund, 127 Fourth Street, Petaluma, CA USA 94952, Fax: (707) 769-7771, Toll Free: (707) 769-0785, Email: [email protected]

Animal Legal Defense Fund
Originally established in 1979 as Attorneys for Animal Rights, this organization changed its name to Animal Legal Defense Fund (ALDF) in 1984, and is known as "the law firm of the animal rights movement." Its motto is "we may be the only lawyers on earth whose clients are all innocent." ALDF contends that animals have a fundamental right to legal protection against abuse and exploitation. Over 350 attorneys work for ALDF, and the organization has more than 50,000 supporting members who help the cause of animal rights by writing letters and signing petitions for legislative action. The members are also strongly encouraged to work for animal rights at the local level.

ALDF's work is carried out in many places, including research laboratories, large cities, small towns, and the wild. ALDF attorneys try to stop the use of animals in research experiments, and continue to fight for expanded enforcement of the Animal Welfare Act. ALDF also offers legal assistance to humane societies and city prosecutors to help in the enforcement of anti-cruelty laws and the exposure of veterinary malpractice. The organization attempts to protect wild animals from exploitation by working to place controls on trappers and sport hunters. In California, ALDF successfully stopped the hunting of mountain lions and black bears. ALDF is also active internationally, bringing legal action against elephant poachers as well as against animal dealers who traffic in endangered species.

ALDF's clear goals and swift action have resulted in many court victories. In 1992 alone, the organization won cases involving cruelty to dolphins, dogs, horses, birds, and cats. It has also blocked the importation of over 70,000 monkeys from Bangladesh for research purposes, and has filed suit against the National Marine Fisheries Service to stop the illegal gray market in dolphins and other marine mammals. ALDF also publishes a quarterly magazine, The Animals' Advocate.

[Cathy M. Falk]

Animal rights
Recent concern about the way humans treat animals has spawned a powerful social and political movement driven by the conviction that humans and certain animals are similar in morally significant ways, and that these similarities oblige humans to extend to those animals serious moral consideration, including rights. Though animal welfare movements, concerned primarily with humane treatment of pets, date back to the 1800s, modern animal rights activism has developed primarily out of concern about the use and treatment of domesticated animals in agriculture and in medical, scientific, and industrial research. The rapid growth in membership of animal rights organizations testifies to the increasing momentum of this movement. The leading animal rights group today, People for the Ethical Treatment of Animals (PETA), was founded in 1980 with 100 individuals; today, it has over 300,000 members. The animal rights activist movement has closely followed and used the work of modern philosophers who seek to establish a firm logical foundation for the extension of moral considerability beyond the human community into the animal community.

The nature of animals and appropriate relations between humans and animals have occupied Western thinkers for millennia. Traditional Western views, both religious and philosophical, have tended to deny that humans have any moral obligations to nonhumans. The rise of Christianity and its doctrine of personal immortality, which implies a qualitative gulf between humans and animals, contributed significantly to the dominant Western paradigm. When seventeenth-century philosopher René Descartes declared animals mere biological machines, the perceived gap between humans and nonhuman animals reached its widest point. Jeremy Bentham, the father of ethical utilitarianism, challenged this view; his thinking fostered a widespread anticruelty movement and exerted a powerful force in shaping our legal and moral codes.
Its modern legacy, the animal welfare movement, is reformist in that it continues to accept the legitimacy of sacrificing animal interests for human benefit, provided animals are spared any suffering which can conveniently and economically be avoided. In contrast to the conservatively reformist platform of animal welfare crusaders, a new radical movement began in the late 1970s. This movement, variously referred to as animal liberation or animal rights, seeks to put an end to the routine sacrifice of animal interests for human benefit.

Animal rights activists dressed as monkeys in prison suits block the entrance to the Department of Health and Human Services in Washington, DC, in protest of the use of animals in laboratory research. (Corbis-Bettmann. Reproduced by permission.)

In seeking to redefine the issue as one of rights, some animal protectionists organized around the well-articulated and widely disseminated utilitarian perspective of Australian philosopher Peter Singer. In his 1975 classic Animal Liberation, Singer argued that because some animals can experience pleasure and pain, they deserve our moral consideration. While not actually a rights position, Singer's work nevertheless uses the language of rights and was among the first to abandon welfarism and to propose a new ethic of moral considerability for all sentient creatures. To assume that humans are inevitably superior to other species simply by virtue of their species membership is an injustice which Singer terms speciesism, an injustice parallel to racism and sexism. Singer does not claim all animal lives to be of equal worth, nor that all sentient beings should be treated identically. In some cases, human interests may outweigh those of nonhumans, and Singer's utilitarian calculus would allow us to engage in practices which require the use of animals
in spite of their pain, where those practices can be shown to produce an overall balance of pleasure over suffering. Some animal advocates thus reject utilitarianism on the grounds that it allows the continuation of morally abhorrent practices.

Lawyer Christopher Stone and philosophers Joel Feinberg and Tom Regan have focused on developing cogent arguments in support of rights for certain animals. Regan's 1983 book The Case For Animal Rights developed an absolutist position which criticized and broke from utilitarianism. It is Regan's arguments, not reformism or the pragmatic principle of utility, which have come to dominate the rhetoric of the animal rights crusade. The question then arises of which animals possess rights. Regan asserts it is those who, like us, are subjects experiencing their own lives. By "experiencing" Regan means conscious creatures aware of their environment and with goals, desires, emotions, and a sense of their own identity. These characteristics give an individual inherent value, and this value entitles the bearer to certain inalienable rights, especially the right to be treated as an end in itself, and never merely as a means to human ends.

The environmental community has not embraced animal rights; in fact, the two groups have often been at odds. A rights approach focused exclusively on animals does not cover all the entities, such as ecosystems, that many environmentalists feel ought to be considered morally. Yet a rights approach that would satisfy environmentalists by encompassing both living and nonliving entities may render the concept of rights philosophically and practically meaningless. Regan accuses environmentalists of environmental fascism, insofar as they advocate the protection of species and ecosystems at the expense of individual animals. Most animal rightists advocate the protection of ecosystems only as necessary to protect individual animals, and assign no more value to the individual members of a highly endangered species than to those of a common or domesticated species. Thus, because of its focus on the individual, animal rights can offer no realistic plan for managing natural systems or for protecting ecosystem health, and may at times hinder the efforts of resource managers to effectively address these issues.

For most animal activists, the practical implications of the rights view are clear and uncompromising. The rights view holds that all animal research, factory farming, and commercial or sport hunting and trapping should be abolished. This change of moral status necessitates a fundamental change in contemporary Western moral attitudes towards animals, for it requires humans to treat animals as inherently valuable beings with lives and interests independent of human needs and wants.
While this change is not likely to occur in the near future, the efforts of animal rights advocates may ensure that the wholesale slaughter of these creatures for unnecessary reasons is no longer routine, and that when such sacrifice is found to be necessary, it is accompanied by moral deliberation.

[Ann S. Causey]

RESOURCES
BOOKS
Hargrove, E. C. The Animal Rights/Environmental Ethics Debate. New York: SUNY Press, 1992.
Regan, T. The Case For Animal Rights. Los Angeles: University of California Press, 1983.
———, and P. Singer. Animal Rights and Human Obligations. 2nd ed. Englewood Cliffs, NJ: Prentice-Hall, 1989.
Singer, P. Animal Liberation. New York: Avon Books, 1975.
Zimmerman, M. E., et al., eds. Environmental Philosophy: From Animal Rights To Radical Ecology. Englewood Cliffs, NJ: Prentice-Hall, 1993.

Animal waste
Animal wastes are commonly considered the excreted materials from live animals. However, under certain production conditions, the waste may also include straw, hay, wood shavings, or other sources of organic debris. It has been estimated that as much as 2 billion tons of animal wastes may be produced in the United States annually. Application of excreta to soil brings benefits such as improved soil tilth, increased water-holding capacity, and some plant nutrients. Concentrated forms of excreta, or high application rates to soils without proper management, may lead to high salt concentrations in the soil and cause serious on-site or off-site pollution.

Animal Welfare Institute
Founded in 1951, the Animal Welfare Institute (AWI) is a non-profit organization that works to educate the public and to secure needed action to protect animals. AWI is a highly respected, influential, and effective group that works with Congress, the public, the news media, government officials, and the conservation community on animal protection programs and projects. Its major goals include improving the treatment of laboratory animals and reducing their use; eliminating cruel methods of trapping wildlife; saving species from extinction; preventing painful experiments on animals in schools and encouraging humane science teaching; improving shipping conditions for animals in transit; banning the importation of parrots and other exotic wild birds for the pet industry; and improving the conditions under which farm animals are kept, confined, transported, and slaughtered. In 1971 AWI launched the Save the Whales Campaign to help protect whales.

The organization provides speakers and experts for conferences and meetings around the world, including Congressional hearings and international treaty and commission meetings. Each year, the institute awards its prestigious Albert Schweitzer Medal to an individual for outstanding achievement in the advancement of animal welfare. Its publications include The AWI Quarterly; books such as Animals and Their Legal Rights, Facts about Furs, and The Endangered Species Handbook; and booklets, brochures, and other educational materials, which are distributed to schools, teachers, scientists, government officials, humane societies, libraries, and veterinarians.

AWI works closely with its associate organization, the Society for Animal Protective Legislation (SAPL), a lobbying group based in Washington, D.C. Founded in 1955, SAPL devotes its efforts to supporting legislation to protect animals, often mobilizing its 14,000 "correspondents" in letter-writing campaigns to members of Congress.


SAPL has been responsible for the passage of more animal protection laws than any other organization in the country, and perhaps the world, and it has been instrumental in securing the enactment of 14 federal laws. Major federal legislation which SAPL has promoted includes the first federal Humane Slaughter Act in 1958 and its strengthening in 1978; the 1959 Wild Horse Act; the 1966 Laboratory Animal Welfare Act and its strengthening in 1970, 1976, 1985, and 1990; the 1969 Endangered Species Act and its strengthening in 1973; a 1970 measure banning the crippling or "soring" of Tennessee Walking Horses; measures passed in 1971 prohibiting hunting from aircraft, protecting wild horses, and resolutions calling for a moratorium on commercial whaling; the 1972 Marine Mammal Protection Act; negotiation of the 1973 Convention on International Trade in Endangered Species of Fauna and Flora (CITES); the 1979 Packwood-Magnuson Amendment protecting whales and other ocean creatures; the 1981 strengthening of the Lacey Act to restrict the importation of illegal wildlife; the 1990 Pet Theft Act; and, in 1992, the Wild Bird Conservation Act, protecting parrots and other exotic wild birds; the International Dolphin Conservation Act, restricting the killing of dolphins by tuna fishermen; and the Driftnet Fishery Conservation Act, protecting whales, sea birds, and other ocean life from being caught and killed in huge, 30-mi-long (48-km-long) nets.

Major goals of SAPL include enacting legislation to end the use of cruel steel-jaw leg-hold traps and to secure proper enforcement, funding, administration, and reauthorization of existing animal protection laws. Both AWI and SAPL have long been headed by their chief volunteer, Christine Stevens, a prominent Washington, D.C., humanitarian and community leader.

[Lewis G. Regenstein]

RESOURCES
ORGANIZATIONS
Animal Welfare Institute, P.O. Box 3650, Washington, D.C. USA 20007, (202) 337-2332, Email: [email protected]
Society for Animal Protective Legislation, P.O. Box 3719, Washington, D.C. USA 20007, (202) 337-2334, Fax: (202) 338-9478, Email: [email protected]

Anion see Ion

Antarctic Treaty (1961)
The Antarctic Treaty, signed in 1961, established an international administrative system for the continent. The impetus for the treaty was the International Geophysical Year, 1957–1958, which had brought scientists from many nations together to study Antarctica. The political situation in Antarctica was complex at the time, with seven nations having made sometimes overlapping territorial claims to the continent: Argentina, Australia, Chile, France, New Zealand, Norway, and the United Kingdom. Several other nations, most notably the former USSR and the United States, had been active in Antarctic exploration and research and were concerned with how the continent would be administered.

Negotiations on the treaty began in June 1958 with Belgium, Japan, and South Africa joining the original nine countries. The treaty was signed in December 1959 and took effect in June 1961. It begins by "recognizing that it is in the interest of all mankind that Antarctica shall continue forever to be used exclusively for peaceful purposes." The key to the treaty was the nations' agreement to disagree on territorial claims. Signatories of the treaty are not required to renounce existing claims, nations without claims have an equal voice with those that have claims, and no new claims or claim enlargements can take place while the treaty is in force. This agreement defused the most controversial and complex issue regarding Antarctica, and in an unorthodox way.

Among the other major provisions of the treaty are: the continent will be demilitarized; nuclear explosions and the storage of nuclear wastes are prohibited; the right of unilateral inspection of all facilities on the continent, to ensure that the provisions of the treaty are being honored, is guaranteed; and scientific research can continue throughout the continent. The treaty runs indefinitely and can be amended, but only by the unanimous consent of the signatory nations. Provisions were also included for other nations to become parties to the treaty.
These additional nations can either be "acceding parties," which do not conduct significant research activities but agree to abide by the terms of the treaty, or "consultative parties," which have acceded to the treaty and undertake substantial scientific research on the continent. Twelve nations have joined the original 12 in becoming consultative parties: Brazil, China, Finland, Germany, India, Italy, Peru, Poland, South Korea, Spain, Sweden, and Uruguay.

Under the auspices of the treaty, the Convention on the Conservation of Antarctic Marine Living Resources was adopted in 1982. This regulatory regime is an effort to protect the Antarctic marine ecosystem from severe damage due to overfishing. Following this convention, negotiations began on an agreement for the management of Antarctic mineral resources. The Convention on the Regulation of Antarctic Mineral Resource Activities was concluded in June 1988, but in 1989 Australia and France rejected the convention, urging that Antarctica be declared an international

wilderness closed to mineral development. In 1991 the Protocol on Environmental Protection, which included a 50-year ban on mining, was drafted. At first the United States refused to endorse this protocol, but it eventually joined the other treaty parties in signing the new convention in October 1991.

[Christopher McGrory Klyza]

RESOURCES
BOOKS
Shapley, D. The Seventh Continent: Antarctica in a Resource Age. Baltimore: Johns Hopkins University Press for Resources for the Future, 1985.

Antarctica
The earth's fifth largest continent, centered asymmetrically around the South Pole. Ninety-eight percent of this land mass, which covers approximately 5.4 million mi2 (13.8 million km2), is covered by snow and ice sheets to an average depth of 1.25 mi (2 km). The continent receives very little precipitation, less than 5 in (12 cm) annually, and the world's coldest temperature was recorded here, at -128°F (-89°C). Exposed shorelines and inland mountain tops support life only in the form of lichens, two species of flowering plants, and several insect species.

In sharp contrast, the ocean surrounding the Antarctic continent is one of the world's richest marine habitats. Cold water rich in oxygen and nutrients supports teeming populations of phytoplankton and shrimp-like Antarctic krill, the food source for the region's legendary numbers of whales, seals, penguins, and fish. During the nineteenth and early twentieth century, whalers and sealers severely depleted Antarctica's marine mammal populations. In recent decades the whale and seal populations have begun to recover, but interest has grown in new resources, especially oil, minerals, fish, and tourism.

The Antarctic's functional limit is a band of turbulent ocean currents and high winds that circle the continent at about 60 degrees south latitude. This ring is known as the Antarctic convergence zone. Ocean turbulence in this zone creates a barrier marked by sharp differences in salinity and water temperature. Antarctic marine habitats, including the limit of krill populations, are bounded by the convergence.

Since 1961 the Antarctic Treaty has formed a framework for international cooperation and compromise in the use of Antarctica and its resources. The treaty reserves the Antarctic continent for peaceful scientific research and bans all military activities. Nuclear explosions and radioactive waste are also banned, and the treaty neither recognizes nor establishes territorial claims in Antarctica.
However, neither does the treaty deny pre-1961 claims, of which seven exist. Furthermore, some signatories to the treaty, including the United States, reserve the right to make claims at a later date. At present the United States has no territorial claims, but it does have several permanent stations, including one at the South Pole. Questions of territorial control could become significant if oil and mineral resources were to become economically recoverable. The primary resources currently exploited are fin fish and krill fisheries. Interest in oil and mineral resources has risen in recent decades, most notably during the 1973 "oil crisis." The expense and difficulty of extraction and transportation has so far made exploitation uneconomical, however.

Human activity has brought an array of environmental dangers to Antarctica. Oil and mineral extraction could seriously threaten marine habitat and onshore penguin and seal breeding grounds. A growing and largely uncontrolled fishing industry may be depleting both fish and krill populations in Antarctic waters. The parable of the Tragedy of the Commons seems ominously appropriate to Antarctic fisheries, which have already nearly eliminated many whale, seal, and penguin species. Solid waste and oil spills associated with research stations and with tourism pose an additional threat. Although Antarctica remains free of "permanent settlement," 40 year-round scientific research stations are maintained on the continent. The population of these bases numbers nearly 4,000.

In 1989 the Antarctic had its first oil spill when an Argentine supply ship, carrying 81 tourists and 170,000 gal (643,500 l) of diesel fuel, ran aground. Spilled fuel destroyed a nearby breeding colony of Adélie penguins (Pygoscelis adeliae). With more than 3,000 cruise-ship tourists visiting annually, more spills seem inevitable. Tourists themselves present a further threat to penguins and seals. Visitors have been accused of disturbing breeding colonies, thus endangering the survival of young penguins and seals.

[Mary Ann Cunningham]

RESOURCES
BOOKS
Child, J. Antarctica and South American Geopolitics. New York: Praeger, 1988.
Parsons, A. Antarctica: The Next Decade. Cambridge: Cambridge University Press, 1987.
Shapley, D. The Seventh Continent: Antarctica in a Resource Age. Baltimore: Johns Hopkins University Press for Resources for the Future, 1985.
Suter, K. D. World Law and the Last Wilderness. Sydney: Friends of the Earth, 1980.

Antarctica Project
The Antarctica Project, founded in 1982, is an organization designed to protect Antarctica and educate the public, government, and international groups about its current and
future status. The group monitors activities that affect the Antarctic region, conducts policy research and analysis in both national and international arenas, and maintains an impressive library of books, articles, and documents about Antarctica. It is also a member of the Antarctic and Southern Ocean Coalition (ASOC), which has 230 member organizations in 49 countries. In 1988, ASOC received a limited observer status to the Convention on the Conservation of Antarctic Marine Living Resources (CCAMLR). So far, the observer status continues to be renewed, providing ASOC with a way to monitor CCAMLR and to present proposals.

In 1989, the Antarctica Project served as an expert adviser to the U.S. Office of Technology Assessment on its study and report of the Minerals Convention. The group prepared a study paper outlining the need for a comprehensive environmental protection convention. Later, a conservation strategy on Antarctica was developed with IUCN—The World Conservation Union.

Besides continuing the work it has already begun, the Antarctica Project has several goals for the future. One calls for the designation of Antarctica as a world park. Another focuses on developing a bilateral plan to pump out the oil and salvage the Bahia Paraiso, a ship which sank in early 1989 near the U.S. Palmer Station. Early estimated salvage costs ran at $50 million. One of the more recent projects is the Southern Ocean Fisheries Campaign. This campaign targets the illegal fishing taking place in the Southern Ocean, which is depleting the Chilean sea bass population. The catch phrase of this movement is "Take a Pass on Chilean Sea Bass."

Three to four times a year, the Antarctica Project publishes ECO, an international publication which covers current political topics concerning the Antarctic Treaty System (provided free to members). Other publications include briefing materials, critiques, books, slide shows, videos, and posters for educational and advocacy purposes.

[Cathy M. Falk]

RESOURCES
ORGANIZATIONS
The Antarctica Project, 1630 Connecticut Ave., NW, 3rd Floor, Washington, D.C. USA 20009, (202) 234-2480, Email: [email protected]

Anthracite coal see Coal

Anthrax
Anthrax is a bacterial infection caused by Bacillus anthracis. It usually affects cloven-hoofed animals, such as cattle, sheep, and goats, but it can occasionally spread to humans. Anthrax is almost always fatal in animals, but it can be successfully treated in humans if antibiotics are given soon after exposure. In humans, anthrax is usually contracted when spores are inhaled or come in contact with the skin. It is also possible for people to become infected by eating the meat of contaminated animals. Anthrax, a deadly disease in nature, gained worldwide attention in 2001 after it was used as a bioterrorism agent in the United States. Until the 2001 attack, only 18 cases of anthrax had been reported in the United States in the previous 100 years.

Anthrax occurs naturally. The first reports of the disease date from around 1500 B.C., when it is believed to have been the cause of the fifth Egyptian plague described in the Bible. Robert Koch first identified the anthrax bacterium in 1876, and Louis Pasteur developed an anthrax vaccine for sheep and cattle in 1881. Anthrax bacteria are found in nature in South and Central America, southern and eastern Europe, Asia, Africa, the Caribbean, and the Middle East. Anthrax cases in the United States are rare, probably due to widespread vaccination of animals and the standard procedure of disinfecting animal products such as cowhide and wool. Reported cases occur most often in Texas, Louisiana, Mississippi, Oklahoma, and South Dakota.

Anthrax spores can remain dormant (inactive) for years in soil and on animal hides, wool, hair, and bones. There are three forms of the disease, each named for its means of transmission: cutaneous (through the skin), inhalation (through the lungs), and intestinal (caused by eating anthrax-contaminated meat). Symptoms appear within several weeks of exposure and vary depending on how the disease was contracted. Cutaneous anthrax is the mildest form of the disease.
Initial symptoms include itchy bumps, similar to insect bites. Within two days, the bumps become inflamed and a blister forms. The centers of the blisters are black due to dying tissue. Other symptoms include shaking, fever, and chills. In most cases, cutaneous anthrax can be treated with antibiotics such as penicillin.
Intestinal anthrax symptoms include stomach and intestinal inflammation and pain, nausea, vomiting, loss of appetite, and fever, all becoming progressively more severe. Once the symptoms worsen, antibiotics are less effective, and the disease is usually fatal.
Inhalation anthrax is the form of the disease that occurred during the bioterrorism attacks of October and November 2001 in the eastern United States. Five people died after being exposed to anthrax through contaminated mail. At least 17 other people contracted the disease but survived.

Environmental Encyclopedia 3


One or more terrorists sent media organizations in Florida and New York envelopes containing anthrax. Anthrax-contaminated letters also were sent to the Washington, D.C., offices of two senators. Federal agents were still investigating the incidents as of May 2002 but admitted they had no leads in the case.
Initial symptoms of inhalation anthrax are flulike, but breathing becomes progressively more difficult. Inhalation anthrax can be treated successfully if antibiotics are given before symptoms develop. Once symptoms develop, the disease is usually fatal.
The only natural outbreak of anthrax among people in the United States occurred in Manchester, New Hampshire, in 1957. Nine workers in a textile mill that processed wool and goat hair contracted the disease, five with inhalation anthrax and four with cutaneous anthrax. Four of the five people with inhalation anthrax died. By coincidence, workers at the mill were participating in a study of an experimental anthrax vaccine. No workers who had been vaccinated contracted the disease. Following this outbreak, the study was stopped, all workers at the mill were vaccinated, and vaccination became a condition of employment. After that, no mill workers contracted anthrax. The mill closed in 1968. However, in 1966 a man who worked across the street from the mill died from inhalation anthrax. He is believed to have contracted it from anthrax spores carried from the mill by the wind.
The United States Food and Drug Administration approved the anthrax vaccine in 1970. It is used primarily for military personnel and some health care workers. During the 2001 outbreak, thousands of postal workers were offered the vaccine after anthrax spores from contaminated letters were found at several post office buildings.
The largest outbreak worldwide of anthrax in humans occurred in the former Soviet Union in 1979, when anthrax spores released from a military laboratory infected 77 people, 69 of whom died.
Anthrax is an attractive weapon to bioterrorists. It is easy to transport and is highly lethal. The World Health Organization (WHO) estimates that 110 lb (50 kg) of anthrax spores released upwind of a large city would kill tens of thousands of people, with thousands of others ill and requiring medical treatment. The 1925 Geneva Protocol, which established a code of conduct for war, outlawed the use of anthrax as a weapon. However, Japan developed anthrax weapons in the 1930s and used them against civilian populations during World War II. During the 1980s, Iraq mass-produced anthrax as a weapon.

[Ken R. Wells]

Anthrax lesion on the shoulder of a patient. (NMSB/Custom Medical Stock Photo. Reproduced by permission.)

RESOURCES BOOKS The Parents’ Committee for Public Awareness. Anthrax: A Practical Guide for Citizens. Cambridge, MA: Harvard Perspectives Press, 2001.

PERIODICALS Consumers’ Research Staff. “What You Need to Know About Anthrax.” Consumers’ Research Magazine (Nov. 2001): 10–14. Belluck, Pam. “Anthrax Outbreak of ’57 Felled a Mill but Yielded Answers.” The New York Times (Oct. 27, 2001). Bia, Frank, et al. “Anthrax: What You—And Your Patients—Need To Know Now.” Consultant (Dec. 2001): 1797–1804. Masibay, Kim Y. “Anthrax: Facts, Not Fear.” Science World (Nov. 26, 2001): 4–6. Spencer, Debbi Ann, et al. “Inhalation Anthrax.” MedSurg Nursing (Dec. 2001): 308–313.

ORGANIZATIONS Centers for Disease Control and Prevention, 1600 Clifton Road, Atlanta, GA USA 30333 (404) 639-3534, Toll Free: (888) 246-2675, Email: [email protected], http://www.cdc.gov

ORGANIZATIONS United States Golf Association, P.O. Box 708, Far Hills, NJ USA 07931-0708, Fax: 908-781-1735, http://www.usga.org/

Good wood
Good wood, or smart wood, is a term certifying that the wood is harvested from a forest operating under environmentally sound and sustainable practices. A “certified wood” label indicates to consumers that the wood they purchase comes from a forest operating within specific guidelines designed to ensure future use of the forest. A well-managed forestry operation takes into account the overall health of the forest and its ecosystems, the use of the forest by indigenous people and cultures, and the economic influences the forest has on local communities. Certification of wood allows the wood to be traced from harvest through processing to the final product (i.e., raw wood or an item made from wood) in an attempt to reduce uncontrollable deforestation, while meeting the demand for wood and wood products by consumers around the world.
Public concern regarding the disappearance of tropical forests initially spurred efforts to reduce the destruction of vast acres of rainforests by identifying environmentally responsible forestry operations and encouraging such practices by paying foresters higher prices. Certification, however, is not limited to tropical forests. All forest types—tropical, temperate, and boreal (those located in northern climes)—from all countries may apply for certification. Plantations (stands of timber that have been planted for the purpose of logging or that have been altered so that they no longer

support the ecosystems of a natural forest) may also apply for certification.
Certification of forests and forest owners and managers is not required. Rather, the process is entirely voluntary. Several organizations currently assess forests and forest management operations to determine whether they meet the established guidelines of a well-managed, sustainable forest. The Forest Stewardship Council (FSC), founded in 1993, is an organization of international members with environmental, forestry, and socioeconomic backgrounds that monitors these organizations and verifies that the certification they issue is legitimate.
A set of 10 guiding principles known as Principles and Criteria (P&C) were established by the FSC for certifying organizations to utilize when evaluating forest management operations. The P&C address a wide range of issues, including compliance with local, national, and international laws and treaties; review of the forest operation’s management plans; the religious or cultural significance of the forest to the indigenous inhabitants; maintenance of the rights of the indigenous people to use the land; provision of jobs for nearby communities; the presence of threatened or endangered species; control of excessive erosion when building roads into the forest; reduction of the potential for lost soil fertility as a result of harvesting; protection against the invasion of non-native species; pest management that limits the use of certain chemical types and of genetically altered organisms; and protection of forests when deemed necessary (for example, a forest that protects a watershed or that contains threatened and/or endangered species).
Guarding against illegal harvesting is a major hurdle for those forest managers working to operate within the established regulations for certification.
Forest devastation occurs not only from harvesting timber for wood sales but also when forests are clear-cut to make way for cattle grazing or farming, or to provide a fuel source for local inhabitants. Illegal harvesting often occurs in developing countries where enforcement against such activities is limited (for example, the majority of the trees harvested in Indonesia are logged illegally). Critics argue against the worthiness of managing forests, suggesting that the logging of select trees from a forest should be allowed and that, once completed, the remaining forest should be placed off limits to future logging.
Nevertheless, certified wood products are in the marketplace; large wood and wood product suppliers are offering certified wood and wood products to their consumers. In 2001 the Forest Leadership Forum (a group of environmentalists, forest industry representatives, and retailers) met to identify how wood retailers can promote sustainable forests. It is hoped that consumer demand for good wood will drive up the number of forests participating in the certification program,



thereby reducing the rate of irresponsible deforestation of the world’s forests. [Monica Anderson]

RESOURCES BOOKS Bass, Stephen, et al. Certification’s Impact on Forests, Stakeholders and Supply Chains. London: IIED, 2001.

ORGANIZATIONS Forest Stewardship Council United States, 1155 30th Street, NW, Suite 300, Washington, DC USA 20007 (202) 342 0413, Fax: (202) 342 6589, Email: [email protected]

5 mg/l; barium 100.0 mg/l; cadmium >1 mg/l; chromium >5 mg/l; lead >5 mg/l). Additional codes under toxicity include an “acute hazardous waste” with code “H”: a substance which has been found to be fatal to humans in low doses or has been found to be fatal in corresponding human concentrations in laboratory animals. Toxic waste (hazard code “T”) designates wastes which have been found through laboratory studies to be a carcinogen, mutagen, or teratogen for humans or other life forms. Certain wastes are specifically excluded from classification as hazardous wastes under RCRA, including domestic sewage, irrigation return flows, household waste, and nuclear waste. The latter is controlled via other legislation.
The impetus for this effort at legislation and classification comes from several notable cases, such as Love Canal, New York; Bhopal, India; the Stringfellow Acid Pits (Glen Avon, California); and Seveso, Italy, which have brought media and public attention to the need for identification and classification of dangerous substances, their effects on health and the environment, and the importance of having knowledge about the potential risk associated with various wastes.
A notable feature of the legislation is its attempt at defining terms so that professionals in the field and government officials will share the same vocabulary. For example, the difference between “toxic” and “hazardous” has been established; the former denotes the capacity of a substance to produce injury, and the latter denotes the probability that injury will result from the use of (or contact with) a substance.
The RCRA legislation on hazardous waste is targeted toward larger generators of hazardous waste rather than small operations.
The small generator is one who generates less than 2,205 lb (1,000 kg) per month; accumulates less than 2,205 lb (1,000 kg); produces wastes which contain no more than 2.2 lb (1 kg) of acutely hazardous waste; has containers no larger than 5.3 gal (20 l) or liners containing less than 22 lb (10 kg) of acutely hazardous waste; and has no greater than 220 lb (100 kg) of residue or soil contaminated from a spill. The purpose of this exclusion is to enable the system of regulations to concentrate on the most egregious and sizeable of the entities that contribute to hazardous waste and thus provide the public with the maximum protection within the resources of the regulatory and legal systems.

[Malcolm T. Hepworth]
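The numeric tests described in this entry can be illustrated in code. The sketch below is ours, not RCRA's: the function and variable names are hypothetical, the small-generator thresholds are the figures quoted in this entry, and the leachate limits shown are the TCLP regulatory levels of 40 CFR 261.24 for a few common metals.

```python
# TCLP regulatory levels (mg/L) for a few metals, per 40 CFR 261.24.
TCLP_LIMITS_MG_L = {
    "arsenic": 5.0,
    "barium": 100.0,
    "cadmium": 1.0,
    "chromium": 5.0,
    "lead": 5.0,
}

def toxicity_characteristic(leachate_mg_l):
    """Return the metals whose leachate concentration meets or exceeds
    its regulatory level; a waste with any such exceedance exhibits
    the toxicity characteristic."""
    return sorted(metal for metal, conc in leachate_mg_l.items()
                  if metal in TCLP_LIMITS_MG_L
                  and conc >= TCLP_LIMITS_MG_L[metal])

def is_small_generator(monthly_kg, accumulated_kg, acute_kg):
    """Rough small-generator exclusion test using the thresholds quoted
    in this entry: under 1,000 kg generated per month, under 1,000 kg
    accumulated, and no more than 1 kg of acutely hazardous waste."""
    return monthly_kg < 1000 and accumulated_kg < 1000 and acute_kg <= 1

print(toxicity_characteristic({"lead": 7.5, "barium": 20.0}))  # ['lead']
print(is_small_generator(800, 500, 0.2))  # True
```

A real determination would, of course, involve the full list of regulated constituents and the other characteristic tests (ignitability, corrosivity, reactivity) rather than this simplified check.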

RESOURCES BOOKS Dawson, G. W., and B. W. Mercer. Hazardous Waste Management. New York: Wiley, 1986. Dominguez, G. S., and K. G. Bartlett. Hazardous Waste Management. Vol. 1, The Law of Toxics and Toxic Substances. Boca Raton: CRC Press, 1986. U.S. Environmental Protection Agency. Hazardous Waste Management: A Guide to the Regulations. Washington, DC: U.S. Government Printing Office, 1980. Wentz, C. A. Hazardous Waste Management. New York: McGraw-Hill, 1989.

Hazardous waste site remediation
The overall objective in remediating hazardous waste sites is the protection of human health and the environment by reducing risk. There are three primary approaches which can be used in site remediation to achieve acceptable levels of risk:
• the hazardous waste at a site can be contained to preclude additional migration and exposure;
• the hazardous constituents can be removed from the site to make them more amenable to subsequent ex situ treatment, whether in the form of detoxification or destruction;
• the hazardous waste can be treated in situ (in place) to destroy or otherwise detoxify the hazardous constituents.
Each of these approaches has positive and negative ramifications. Combinations of the three principal approaches may be used to address the various problems at a site. There is a growing menu of technologies available to implement each of these remedial approaches. Given the complexity of many of the sites, it is not uncommon to have treatment trains with a sequential implementation of various in situ and/or ex situ technologies to remediate a site.
Hazardous waste site remediation usually addresses soils and groundwater. However, it can also include wastes, surface water, sediment, sludges, bedrock, buildings, and other man-made items. The hazardous constituents may be organic, inorganic and, occasionally, radioactive. They may

be elemental, ionic, dissolved, sorbed, liquid, gaseous, vaporous, solid, or any combination of these.
Hazardous waste sites may be identified, evaluated, and, if necessary, remediated by their owners on a voluntary basis to reduce environmental and health effects or to limit prospective liability. However, in the United States, there are two far-reaching federal laws which may mandate entry into the remediation process: the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA, also called the Superfund law), and the Resource Conservation and Recovery Act (RCRA). In addition, many of the states have their own programs concerning abandoned and uncontrolled sites, and there are other laws that involve hazardous site remediation, such as the cleanup of polychlorinated biphenyls (PCBs) under the auspices of the federal Toxic Substances Control Act (TSCA).
Potential sites may be identified by their owners, by regulatory agencies, or by the public in some cases. Site evaluation is usually a complicated and lengthy process. In the federal Superfund program, sites at which there has been a release of one or more hazardous substances that might result in a present or potential future threat to human health and/or the environment are first evaluated by a Preliminary Assessment/Site Inspection (PA/SI). The data collected at this stage is evaluated, and a recommendation for further action may be formulated. The Hazard Ranking System (HRS) of the U.S. Environmental Protection Agency (EPA) may be employed to score the site with respect to the potential hazards it may pose and to see if it is worthy of inclusion on the National Priorities List (NPL) of sites most deserving of remedial attention and resources. Regardless of the HRS score or NPL status, the EPA may require the parties responsible for the release (including the present property owner) to conduct a further assessment, in the form of the two-phase Remedial Investigation/Feasibility Study (RI/FS).
The objective of the RI is to determine the nature and extent of contamination at and near the site. The RI data is next considered in a baseline risk assessment. The risk assessment evaluates the potential threats to human health and the environment in the absence of any remedial action, considering both present and future conditions. Both exposure and toxicity are considered at this stage. The baseline risk assessment may support a decision of no action at a site.
If remedial actions are warranted, the second phase of the RI/FS, an engineering Feasibility Study, is performed to allow for educated selection of an appropriate remedy. The final alternatives are evaluated on the basis of nine criteria in the federal Superfund program. The EPA selects the remedial action it deems to be most appropriate and describes it and the process which led to its selection in the Record of Decision (ROD). Public comments are solicited on the proposed clean-up plan before the ROD is issued. There are also other public comment opportunities during the RI/FS process. Once the ROD is issued, the project moves to the Remedial Design/Remedial Action (RD/RA) phase unless there is a decision of no action. Upon design approval, construction commences. Then, after construction is complete, long-term operation, maintenance, and monitoring activities begin.
For Superfund sites, the EPA may allow one or more of the responsible parties to conduct the RI/FS and RD/RA under its oversight. If potentially responsible parties are not willing to participate or are unable to be involved for technical, legal, or financial reasons, the EPA may choose to conduct the project with government funding and then later seek to recover costs in lawsuits against the parties. Other types of site remediation programs often replicate or approximate the approaches described above. Some states, such as Massachusetts, have very definite programs, while others are less structured.
Containment is one of the available treatment options. There are several reasons for using containment techniques. A primary reason is difficulty in excavating the waste or treating the hazardous constituents in place. This may be caused by construction and other man-made objects located over and in the site. Excavation could also result in uncontrollable releases at concentrations potentially detrimental to the surrounding area. At many sites, the low levels of risk posed, in conjunction with the relative costs of treatment technologies, may result in the selection of a containment remedy. One means of site containment is the use of an impermeable cap to reduce rainfall infiltration and to prevent exposure of the waste through erosion. Another means of containment is the use of cut-off walls to restrict or direct the movement of groundwater. In situ solidification can also be used to limit the mobility of contaminants.
Selection among alternatives is very site specific and reflects such things as the site hydrogeology, the chemical and physical nature of the contamination, proposed land use, and so on. Of course, the resultant risk must be acceptable. As with any in situ approach, there is less control and knowledge of the performance and behavior of the technology than is possible with off-site treatment. Since the use of containment techniques leaves the waste in place, it usually results in long-term monitoring programs to determine if the remediation remains effective. If a containment remedy were to fail, the site could require implementation of another type of technology.
The ex situ treatment of hazardous waste provides the most control over the process and permits the most detailed assessments of its efficacy. Ex situ treatment technologies offer the biggest selection of options, but include an additional risk factor during transport. Examples of treatment options include incineration; innovative thermal destruction, such as infrared incineration; bioremediation; stabilization/solidification; soil washing; chemical extraction; chemical destruction; and thermal desorption.
Another approach to categorizing the technologies available for hazardous waste site remediation is based upon their respective levels of demonstration. There are existing technologies, which are fully demonstrated and in routine commercial use; performance and cost information is available. Examples of existing technologies include slurry walls, caps, incineration, and conventional solidification/stabilization. The next level of technology, innovative, has grown rapidly as the number of sites requiring remediation grew. Innovative technologies are characterized by limited availability of cost and performance data. More site-specific testing is required before an innovative technology can be considered ready for use at a site. Examples of innovative technologies are vacuum extraction, bioremediation, soil washing/flushing, chemical extraction, chemical destruction, and thermal desorption. Vapor extraction and in situ bioremediation are expected to be the next innovative technologies to reach “existing” status as a result of the growing base of cost and performance information generated by their use at many hazardous waste sites. The last category is that of emerging technologies. These technologies are at a very early stage of development and therefore require additional laboratory and pilot-scale testing to demonstrate their technical viability. No cost or performance information is available. An example of an emerging technology is electrokinetic treatment of soils for metals removal.
Groundwater contaminated by hazardous materials is a widespread concern. Most hazardous waste site remediations use a pump-and-treat approach as a first step. Once the groundwater has been brought to the surface, various treatment alternatives exist, depending upon the constituents present.
In situ air sparging of the groundwater using pipes, wells, or curtains is also being developed for removal of volatile constituents. The vapor is either treated above ground with technologies for off-gas emissions, or biologically in the unsaturated or vadose zone above the aquifer. While this approach eliminates the costs and difficulties of treating the relatively large volumes of water (with relatively low contaminant concentrations) generated during pump-and-treat, it does not necessarily speed up remediation.
Contaminated bedrock frequently serves as a source of groundwater or soil recontamination. Constituents with densities greater than water enter the bedrock at fractures, joints, or bedding planes. From these locations, the contamination tends to diffuse in all directions. After many years of accumulation, bedrock contamination may account for the majority of the contamination at a site. Currently, little can be done to remediate contaminated bedrock. Specially



designed vapor stripping applications have been proposed when the constituents of concern are volatile. Efforts are ongoing to develop means to enhance the fracturing of the bedrock and thereby promote removal. In all cases, the ultimate remediation will be driven by the diffusion of contaminants back out of the rock, a very slow process.
The remediation of buildings contaminated with hazardous waste offers several alternatives. Given the cost of disposal of hazardous wastes, the limited disposal space available, and the volume of demolition debris, it is beneficial to determine the extent of contamination of construction materials. This contamination can then be removed through traditional engineering approaches, such as scraping or sand blasting. It is then only this reduced volume of material that requires treatment or disposal as hazardous waste. The remaining building can be reoccupied or disposed of as nonhazardous waste. See also Hazardous material; Hazardous waste siting; Solidification of hazardous materials; Vapor recovery system

[Ann N. Clarke and Jeffrey L. Pintenich]

RESOURCES BOOKS U.S. Environmental Protection Agency. Office of Emergency and Remedial Response. Guidance for Conducting Remedial Investigations and Feasibility Studies Under CERCLA. Washington, DC: U.S. Government Printing Office, 1988. U.S. Environmental Protection Agency. Office of Emergency and Remedial Response. Handbook: Remedial Action at Waste Disposal Sites. Washington, DC: U.S. Government Printing Office, 1985. U.S. Environmental Protection Agency. Office of Emergency and Remedial Response. Guidance on Remedial Actions for Contaminated Groundwater at Superfund Sites. Washington, DC: U.S. Government Printing Office, 1988. U.S. Environmental Protection Agency. Office of Environmental Engineering. Guide for Treatment Technologies for Hazardous Waste at Superfund Sites. Washington, DC: U.S. Government Printing Office, 1989. U.S. Environmental Protection Agency. Office of Solid Waste and Emergency Response. Innovative Treatment Technologies. Washington, DC: U.S. Government Printing Office, 1991. U.S. Environmental Protection Agency. Risk Reduction Engineering Laboratory. Handbook on In Situ Treatment of Hazardous Waste: Contaminated Soils. Washington, DC: U.S. Government Printing Office, 1990. U.S. Environmental Protection Agency. Technology Screening Guide for Treatment of CERCLA Soils and Sludges. Washington, DC: U.S. Government Printing Office, 1988.

Hazardous waste siting
Regardless of the specific technologies to be employed, there are many technical and nontechnical considerations to be addressed before hazardous waste can be treated or disposed of at a given location. The specific nature and relative importance of these considerations to successful siting reflect the chemodynamic behavior (i.e., transport and fate of the waste and/or treated residuals in the environment

after emission) as well as the specifics of the location and associated, proximate areas. Examples of these considerations are: the nature of the soil and hydrogeological features, such as depth to and quality of groundwater; quality, use, and proximity of surface waters; ambient air quality and meteorological conditions; and nearby critical environmental areas (wetlands, preserves, etc.), if any. Other considerations include surrounding land use; proximity of residences and other potentially sensitive receptors such as schools, hospitals, parks, etc.; availability of utilities; and the capacity and quality of the roadway system.
It is also critical to develop and obtain the timely approval of all appropriate local, state, and federal permits. Associated with these permits is the required documentation of financial viability as established by escrowed closure funds, site insurance, etc. Site-specific standard operating procedures as well as contingency plans for use in emergencies are also required. Additionally, there need to be baseline and ongoing monitoring plans developed and implemented to determine if there are any releases to or general degradation of the environment. One should also anticipate public hearings before permits are granted. Several states in the United States have specific regulations which restrict the siting of hazardous waste management facilities.

Haze
An aerosol in the atmosphere of sufficient concentration and extent to decrease visibility significantly when the relative humidity is below saturation is known as haze. Haze may contain dry particles or droplets or a mixture of both, depending on the precise value of the humidity. In the use of the word, there is a connotation of some degree of permanence. For example, a dust storm is not a haze, but the coarse particles may settle rapidly and leave a haze behind once the velocity drops.
Human activity is responsible for many hazes. Enhanced emission of sulfur dioxide results in the formation of aerosols of sulfuric acid. In the presence of ammonia, which is excreted by most higher animals including humans, such emissions result in aerosols of ammonium sulfate and bisulfate. Organic hazes are part of photochemical smog, such as the smog often associated with Los Angeles, and they consist primarily of polyfunctional, highly oxygenated compounds with at least five carbon atoms. Such hazes can also form if air with an enhanced nitrogen oxide content meets air containing the natural terpenes emitted by vegetation.
Not all hazes, however, are products of human activity. Natural hazes can result from forest fires, dust storms, and the natural processes that convert gaseous contaminants into particles for subsequent removal by precipitation or deposition to the surface or to vegetation. Still other hazes are of mixed origin, as noted above, and an event such as a dust storm can be enhanced by human-caused devegetation of soil.
Though it may contain particles injurious to health, haze is not of itself a health hazard. It can have a significant economic impact, however, when tourists cannot see scenic views, or if it becomes sufficiently dense to inhibit aircraft operations. See also Air pollution; Air quality; Air quality criteria; Los Angeles Basin; Mexico City, Mexico

[James P. Lodge Jr.]

RESOURCES BOOKS Husar, R. B. Trends in Seasonal Haziness and Sulfur Emissions Over the Eastern United States. Research Triangle Park, NC: U. S. Environmental Protection Agency, 1989.

PERIODICALS Husar, R. B., and W. E. Wilson. “Haze and Sulfur Emission Trends in the Eastern United States.” Environmental Science and Technology 27 (January 1993): 12–16. Malm, W. C. “Characteristics and Origins of Haze in the Continental United States.” Earth-Science Reviews 33 (August 1992): 1–36. Raloff, J. “Haze May Confound Effects of Ozone Loss.” Science News 141 (4 January 1992): 5.

Heat (stress) index
The heat index (HI) or heat stress index—sometimes called the apparent temperature or comfort index—is a temperature measure that takes into account the relative humidity. Based on human physiology and on clothing science, it measures how a given air temperature feels to the average person at a given relative humidity. The HI temperature is measured in the shade and assumes a wind speed of 5.6 mph (9 kph) and normal barometric pressure.
At low relative humidity, the HI is less than or equal to the air temperature. At higher relative humidity, the HI exceeds the air temperature. For example, according to the National Weather Service’s (NWS) HI chart, if the air temperature is 70°F (21°C), the HI is 64°F (18°C) at 0% relative humidity and 72°F (22°C) at 100% relative humidity. At 95°F (35°C) and 55% relative humidity, the HI is 110°F (43°C). In very hot weather, humidity can raise the HI to extreme levels: at 115°F (46°C) and 40% relative humidity, the HI is 151°F (66°C). This is because humidity affects the body’s ability to regulate internal heat through perspiration. The body feels warmer when it is humid because perspiration evaporates more slowly; thus the HI is higher.
The HI is used to predict the risk of physiological heat stress for an average individual. Caution is advised

at an HI of 80–90°F (27–32°C): fatigue may result with prolonged exposure and physical activity. An HI of 90–105°F (32–41°C) calls for extreme caution, since sunstroke, muscle cramps, and heat exhaustion are possible. Danger warnings are issued at HIs of 105–130°F (41–54°C), when sunstroke and heat exhaustion are likely and there is a potential for heat stroke. Category IV, extreme danger, occurs at HIs above 130°F (54°C), when heatstroke and sunstroke are imminent.
Individual physiology influences how people are affected by high HIs. Children and older people are more vulnerable. Acclimatization (being used to the climate) can alleviate some of the danger. However, sunburn can increase the effective HI by slowing the skin’s ability to shed excess heat from blood vessels and through perspiration. Exposure to full sunlight can increase HI values by as much as 15°F (8°C). Winds, especially hot dry winds, also can increase the HI. In general, the NWS issues excessive heat alerts when the daytime HI reaches 105°F (41°C) and the nighttime HI stays above 80°F (27°C) for two consecutive days; however, these values depend somewhat on the region or metropolitan area.
In cities, high HIs often mean increased air pollution. The concentration of ozone, the major component of smog, tends to rise at ground level as the HI increases, causing respiratory problems for many people. The National Center for Health Statistics estimates that heat exposure results in an average of 371 deaths annually in the United States. About 1,700 Americans died in the heat waves of 1980. In Chicago in 1995, more than 700 people died during a five-day heat wave when the nighttime HI stayed above 89°F (32°C).
At higher temperatures, the air can hold more water vapor; thus humidity and HI values increase as the atmosphere warms. Since the late nineteenth century, the mean annual surface temperature of the earth has risen between 0.5 and 1.0°F (0.3 and 0.6°C).
According to the National Aeronautics and Space Administration, the five-year-mean temperature increased about 0.9°F (0.5°C) between 1975 and 1999, the fastest rate of recorded increase. In 1998 global surface temperatures were the warmest since the advent of reliable measurements and the 1990s accounted for seven of the 10 warmest years on record. Nighttime temperatures have been increasing twice as fast as daytime temperatures. Greenhouse gases, including carbon dioxide, methane, nitrous oxide, and chlorofluorocarbons, increase the heat-trapping capabilities of the atmosphere. Evaporation from the ocean surfaces increased during the twentieth century, resulting in higher humidity that enhanced the greenhouse effect. It is projected that during the twenty-first century greenhouse gas concentrations will double or even quadruple from pre-industrial levels. Increased urbanization also contributes to global warming, as


buildings and roads hold in the heat. Climate simulations predict an average surface air temperature increase of 4.5–7°F (2.5–4°C) by 2100. This will increase the number of extremely hot days and, in temperate climates, double the number of very hot days, for an average increase in summer temperatures of 4–5°F (2–3°C). More heat-related illnesses and deaths will result. The National Oceanic and Atmospheric Administration projects that the HI could rise substantially in humid regions of the tropics and sub-tropics. Warm, humid regions of the southeastern United States are expected to experience substantial increases in the summer HI due to increased humidity, even though temperature increases may be smaller than in the continental interior. Predictions for the increase in the summer HI for the Southeast United States over the next century range from 8–20°F (4–11°C). [Margaret Alic Ph.D.]

RESOURCES BOOKS Wigley, T. M. L. The Science of Climate Change: Global and U.S. Perspectives. Arlington, VA: Pew Center on Global Climate Change, 1999.

PERIODICALS Delworth, T. L., J. D. Mahlman, and T. R. Knutson. “Changes in Heat Index Associated with CO2-Induced Global Warming.” Climatic Change 43 (1999): 369–86.

OTHER Darling, Allan. “Heat Wave.” Internet Weather Source. June 27, 2000 [cited May 2002]. . Davies, Kert. “Heat Waves and Hot Nights.” Ozone Action and Physicians for Social Responsibility. July 26, 2000 [cited May 2002]. . National Assessment Synthesis Team. Climate Change Impacts on the United States: The Potential Consequences of Climate Variability and Change. Overview: Southeast. U. S. Global Change Research Program. March 5, 2002 [cited May 2002]. . Union of Concerned Scientists. Early Warning Signs of Global Warming: Heat Waves and Periods of Unusually Warm Weather. 2000 [cited May 2002]. . Trenberth, Kevin E. “The IPCC Assessment of Global Warming 2001.” FailSafe. Spring 2001 [cited May 2002]. .

ORGANIZATIONS National Weather Service, National Oceanic and Atmospheric Administration, U. S. Department of Commerce, 1325 East West Highway, Silver Spring, MD USA 20910; Physicians for Social Responsibility, 1875 Connecticut Avenue, NW, Suite 1012, Washington, DC USA 20009, (202) 667-4260, Fax: (202) 667-4201, Email: [email protected]; Union of Concerned Scientists, 2 Brattle Square, Cambridge, MA USA 02238, (617) 547-5552, Email: [email protected]


Environmental Encyclopedia 3

Heavy metals and heavy metal poisoning Heavy metals are generally defined as environmentally stable elements of high specific gravity and atomic weight. They have such characteristics as luster, ductility, malleability, and high electric and thermal conductivity. Whether based on their physical or chemical properties, the distinction between heavy metals and non-metals is not sharp. For example, arsenic, germanium, selenium, tellurium, and antimony possess chemical properties of both metals and non-metals. Defined as metalloids, they are often loosely classified as heavy metals. The category “heavy metal” is, therefore, somewhat arbitrary and highly non-specific because it can refer to approximately 80 of the 103 elements in the periodic table. The term “trace element” is commonly used to describe substances which cannot be precisely defined but most frequently occur in the environment in concentrations of a few parts per million (ppm) or less. Only a relatively small number of heavy metals such as cadmium, copper, iron, cobalt, zinc, mercury, vanadium, lead, nickel, chromium, manganese, molybdenum, silver, and tin as well as the metalloids arsenic and selenium are associated with environmental, plant, animal, or human health problems. While the chemical forms of heavy metals can be changed, they are not subject to chemical/biological destruction. Therefore, after release into the environment they are persistent contaminants. Natural processes such as bedrock and soil weathering, wind and water erosion, volcanic activity, sea salt spray, and forest fires release heavy metals into the environment. While the origins of anthropogenic releases of heavy metals are lost in antiquity, they probably began as our prehistoric ancestors learned to recover metals such as gold, silver, copper, and tin from their ores and to produce bronze. The modern age of heavy metal pollution has its beginning with the Industrial Revolution. 
The rapid development of industry, intensive agriculture, transportation, and urbanization over the past 150 years, however, has been the precursor of today’s environmental contamination problems. Anthropogenic utilization has also increased heavy metal distribution by removing the substances from localized ore deposits and transporting them to other parts of the environment. Heavy metal by-products result from many activities including: ore extraction and smelting, fossil fuel combustion, dumping and landfilling of industrial wastes, exhausts from leaded gasolines, steel, iron, cement and fertilizer production, refuse and wood combustion. Heavy metal cycling has also increased through activities such as farming, deforestation, construction, dredging of harbors, and the disposal of municipal sludges and industrial wastes on land. Thus, anthropogenic processes, especially combustion, have substantially supplemented the natural atmospheric

emissions of selected heavy metals/metalloids such as selenium, mercury, arsenic, and antimony. They can be transported as gases or adsorbed on particles. Other metals such as cadmium, lead, and zinc are transported atmospherically only as particles. In either state heavy metals may travel long distances before being deposited on land or water. The heavy metal contamination of soils is a far more serious problem than either air or water pollution because heavy metals are usually tightly bound by the organic components in the surface layers of the soil and may, depending on conditions, persist for centuries or millennia. Consequently, the soil is an important geochemical sink which accumulates heavy metals rapidly and usually depletes them very slowly by leaching into groundwater aquifers or bioaccumulating into plants. However, heavy metals can also be very rapidly translocated through the environment by erosion of the soil particles to which they are adsorbed or bound and redeposited elsewhere on the land or washed into rivers, lakes or oceans to the sediment. The cycling, bioavailability, toxicity, transport, and fate of heavy metals are markedly influenced by their physico-chemical forms in water, sediments, and soils. Whenever a heavy metal containing ion or compound is introduced into an aquatic environment, it is subjected to a wide variety of physical, chemical, and biological processes. These include: hydrolysis, chelation, complexation, redox, biomethylation, precipitation and adsorption reactions. Often heavy metals experience a change in the chemical form or speciation as a result of these processes and so their distribution, bioavailability, and other interactions in the environment are also affected. The interactions of heavy metals in aquatic systems are complicated because of the possible changes due to many dissolved and particulate components and non-equilibrium conditions.
For example, the speciation of heavy metals is controlled not only by their chemical properties but also by environmental variables such as: 1) pH; 2) redox potential; 3) dissolved oxygen; 4) ionic strength; 5) temperature; 6) salinity; 7) alkalinity; 8) hardness; 9) concentration and nature of inorganic ligands such as carbonate, bicarbonate, sulfate, sulfides, chlorides; 10) concentration and nature of dissolved organic chelating agents such as organic acids, humic materials, peptides, and polyamino-carboxylates; 11) the concentration and nature of particulate matter with surface sites available for heavy metal binding; and 12) biological activity. In addition, various species of bacteria can oxidize arsenite to arsenate or reduce arsenate to arsenite, or oxidize ferrous iron to ferric iron, or convert mercuric ion to elemental mercury or the reverse. Various enzyme systems in living organisms can biomethylate a number of heavy metals. While it had been known for at least 60 years that arsenic


and selenium could be biomethylated, microorganisms capable of converting inorganic mercury into monomethyl and dimethylmercury in lake sediments were not discovered until 1967. Since then, numerous heavy metals such as lead, tin, cobalt, antimony, platinum, gold, tellurium, thallium, and palladium have been shown to be biomethylated by bacteria and fungi in the environment. As environmental factors change the chemical reactivities and speciation of heavy metals, they influence not only the mobilization, transport, and bioavailability, but also the toxicity of heavy metal ions toward biota in both freshwater and marine ecosystems. The factors affecting the toxicity and bioaccumulation of heavy metals by aquatic organisms include: 1) the chemical characteristics of the ion; 2) solution conditions which affect the chemical form (speciation) of the ion; 3) the nature of the response such as acute toxicity, bioaccumulation, various types of chronic effects, etc.; 4) the nature and condition of the aquatic animal such as age or life stage, species, or trophic level in the food chain. The extent to which most of the methylated metals are bioaccumulated and/or biomagnified is limited by the chemical and biological conditions and how readily the methylated metal is metabolized by an organism. At present, only methylmercury seems to be sufficiently stable to bioaccumulate to levels that can cause adverse effects in aquatic organisms. All other methylated metal ions are produced in very small concentrations and are degraded naturally faster than they are bioaccumulated. Therefore, they do not biomagnify in the food chain. The largest proportion of heavy metals in water is associated with suspended particles, which are ultimately deposited in the bottom sediments where concentrations are orders of magnitude higher than those in the overlying or interstitial waters. 
The heavy metals associated with suspended particulates or bottom sediments are complex mixtures of: 1) weathering and erosion residues such as iron and aluminum oxyhydroxides, clays and other aluminosilicates; 2) methylated and non-methylated forms in organic matter such as living organisms, bacteria and algae, detritus and humus; 3) inorganic hydrous oxides and hydroxides, phosphates and silicates; and 4) diagenetically produced iron and manganese oxyhydroxides in the upper layer of sediments and sulfides in the deeper, anoxic layers. In anoxic waters the precipitation of sulfides may control the heavy metal concentrations in sediments while in oxic waters adsorption, absorption, surface precipitation and coprecipitation are usually the mechanisms by which heavy metals are removed from the water column. Moreover, physical, chemical and microbiological processes in the sediments often increase the concentrations of heavy metals in the pore waters which are released to overlying waters by diffusion or as the result of consolidation and bioturbation.



Transport by living organisms does not represent a significant mechanism for local movement of heavy metals. However, accumulation by aquatic plants and animals can lead to important biological responses. Even low environmental levels of some heavy metals may produce subtle and chronic effects in animal populations. Despite these adverse effects, at very low levels, some metals have essential physiological roles as micronutrients. Heavy metals such as chromium, manganese, iron, cobalt, molybdenum, nickel, vanadium, copper, and selenium are required in small amounts to perform important biochemical functions in plant and animal systems. In higher concentrations they can be toxic, but usually some biological regulatory mechanism is available by means of which animals can speed up their excretion or retard their uptake of excessive quantities. In contrast, non-essential heavy metals are primarily of concern in terrestrial and aquatic systems because they are toxic and persist in living systems. Metal ions commonly bond with sulfhydryl and carboxylic acid groups in amino acids, which are components of proteins (enzymes) or polypeptides. This increases their bioaccumulation and inhibits excretion. For example, heavy metals such as lead, cadmium, and mercury bind strongly with -SH and -SCH3 groups in cysteine and methionine and so inhibit the metabolism of the bound enzymes. In addition, other heavy metals may replace an essential element, decreasing its availability and causing symptoms of deficiency. Uptake, translocation, and accumulation of potentially toxic heavy metals in plants differ widely depending on soil type, pH, redox potential, moisture, and organic content. Public health officials closely regulate the quantities and effects of heavy metals that move through the agricultural food chain to be consumed by human beings. 
While heavy metals such as zinc, copper, nickel, lead, arsenic, and cadmium are translocated from the soil to plants and then into the animal food chain, the concentrations in plants are usually very low and generally not considered to be an environmental problem. However, plants grown on soils either naturally enriched or highly contaminated with some heavy metals can bioaccumulate levels high enough to cause toxic effects in the animals or human beings that consume them. Contamination of soils due to land disposal of sewage and industrial effluents and sludges may pose the most significant long term problem. While cadmium and lead are the greatest hazard, other elements such as copper, molybdenum, nickel, and zinc can also accumulate in plants grown on sludge-treated land. High concentrations can, under certain conditions, cause adverse effects in animals and human beings that consume the plants. For example, when soil contains high concentrations of molybdenum and selenium,

they can be translocated into edible plant tissue in sufficient quantities to produce toxic effects in ruminant animals. Consequently, the U. S. Environmental Protection Agency has issued regulations which prohibit and/or tightly regulate the disposal of contaminated municipal and industrial sludges on land to prevent heavy metals, especially cadmium, from entering the food supply in toxic amounts. However, presently, the most serious known human toxicity is not through bioaccumulation from crops but from mercury in fish, lead in gasoline, paints and water pipes, and other metals derived from occupational or accidental exposure. See also Aquatic chemistry; Ashio, Japan; Atmospheric pollutants; Biomagnification; Biological methylation; Contaminated soil; Ducktown, Tennessee; Hazardous material; Heavy metals precipitation; Itai-Itai disease; Methylmercury seed dressings; Minamata disease; Smelters; Sudbury, Ontario; Xenobiotic [Frank M. D’Itri]

RESOURCES BOOKS Craig, P. J. “Metal Cycles and Biological Methylation.” The Handbook of Environmental Chemistry. Vol. 1, Part A, edited by O. H. Hutzinger. Berlin: Springer Verlag, 1980. Förstner, U., and G. T. W. Wittmann. Metal Pollution in the Aquatic Environment. 2nd ed. Berlin: Springer Verlag, 1981. Kramer, J. R., and H. E. Allen, eds. Metal Speciation: Theory, Analysis and Application. Chelsea, MI: Lewis, 1988.

Heavy metals precipitation The principal technology to remove metal pollutants from wastewater is chemical precipitation. Chemical precipitation includes two secondary removal mechanisms, coprecipitation and adsorption. Precipitation processes are characterized by the solubility of the metal to be removed. They are generally designed to precipitate trace metals to their solubility limits and obtain additional removal by coprecipitation and adsorption during the precipitation reaction. There are many different treatment variables that affect these processes. They include the optimum pH, the type of chemical treatments used, and the number of treatment stages, as well as the temperature and volume of wastewater, and the chemical speciation of the pollutants to be removed. Each of these variables directly influences treatment objectives and costs. Treatability studies must be performed to optimize the relevant variables, so that goals are met and costs minimized. In theory, the precipitation process has two steps, nucleation followed by particle growth. Nucleation is represented by the appearance of very small particle seeds which are generally composed of 10–100 molecules. Particle growth



involves the addition of more atoms or molecules into this particle structure. The rate and extent of this process is dependent upon the temperature and chemical characteristics of the wastewater, such as the concentration of metal initially present and other ionic species present, which can compete with or form soluble complexes with the target metal species. Heavy metals are present in many industrial wastewaters. Examples of such metals are cadmium, copper, lead, mercury, nickel, and zinc. In general, these metals can be complexed to insoluble species by adding sulfide, hydroxide, and carbonate ions to a solution. For example, the precipitation of copper (Cu) hydroxide is accomplished by adjusting the pH of the water to above 8, using precipitant chemicals such as lime (Ca(OH)2) or sodium hydroxide (NaOH). Precipitation of metallic carbonate and sulfide species can be accomplished by the addition of calcium carbonate or sodium sulfide. The removal of coprecipitive metals during precipitation of the soluble metals is aided by the presence of solid ferric oxide, which acts as an adsorbent during the precipitation reaction. For example, hydroxide precipitation of ferric chloride can be used as the source of ferric oxide for coprecipitation and adsorption reactions. Precipitation, coprecipitation, and adsorption reactions generate suspended solids which must be separated from the wastewater. Flocculation and clarification are again employed to assist in solids separation. The treatment pH is an important variable which must be optimized to effect the maximum metal removal possible. Determining the optimal pH range to facilitate the maximum precipitation of metal is a difficult task. It is typically accomplished by laboratory studies, such as jar tests, rather than by theoretical calculations. Often the actual wastestream behaves differently, and the actual metal solubilities and corresponding optimal pH ranges can vary considerably from theoretical values.
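The role of pH in hydroxide precipitation can be illustrated with a simple solubility-product calculation. This sketch is not from the article: it assumes an approximate solubility product for Cu(OH)2 of 2.2 × 10⁻²⁰ and ignores complexation and ionic-strength effects, which is exactly why real wastestreams deviate from such ideal values:

```python
def dissolved_cu_molar(ph, ksp=2.2e-20):
    """Equilibrium Cu2+ concentration (mol/L) in contact with solid
    Cu(OH)2, from Ksp = [Cu2+][OH-]^2 with [OH-] fixed by the pH."""
    oh = 10.0 ** (ph - 14)  # [OH-] in mol/L, since pOH = 14 - pH
    return ksp / oh ** 2

# Each unit increase in pH lowers the equilibrium Cu2+ a hundredfold,
# which is the rationale for operating at pH above 8:
for ph in (7, 8, 9):
    print(ph, dissolved_cu_molar(ph))
```

As the article notes, the working optimum is found by jar testing rather than by calculations of this kind.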
See also Heavy metals and heavy metal poisoning; Industrial waste treatment; Itai-Itai disease; Minamata disease; Sludge; Waste management [James W. Patterson]

RESOURCES BOOKS Nemerow, N. L., and A. Dasgupta. Industrial and Hazardous Waste Treatment. New York: Van Nostrand Reinhold, 1991.

Robert Louis Heilbroner (1919 – ) American economist and author An economist by profession, Robert Heilbroner is the author of a number of books and articles that put economic theories

and developments into historical perspective and relate them to contemporary social and political problems. He is especially noteworthy for his gloomy speculations on the future of a world confronted by the environmental limits to economic growth. Born in New York City in 1919, Heilbroner received a bachelor’s degree from Harvard University in 1940 and a Bronze Star for his service in World War II. In 1963 he earned a Ph.D. in economics from the New School for Social Research in New York, and in 1972, became the Norman Thomas Professor of Economics there. His books include The Worldly Philosophers (1955), The Making of Economic Society (1962), Marxism: For and Against (1980), and The Nature and Logic of Capitalism (1985). He has also served on the editorial board of the socialist journal Dissent. In 1974, Heilbroner published An Inquiry into the Human Prospect, in which he argues that three “external challenges” confront humanity: the population explosion, the threat of war, and “the danger...of encroaching on the environment beyond its ability to support the demands made on it.” Each of these problems, he maintains, arises from the development of scientific technology, which has increased human life span, multiplied weapons of destruction, and encouraged industrial production that consumes natural resources and pollutes the environment. Heilbroner believes that these challenges confront all economies, and that meeting them will require more than adjustments in economic systems. Societies will have to muster the will to make sacrifices. Heilbroner goes on to argue that persuading people to make these sacrifices may not be possible. Those living in one part of the world are not likely to give up what they have for the sake of those in another part, and people living now are not likely to make sacrifices for future generations. His reluctant conclusion is that coercion is likely to take the place of persuasion. 
Authoritarian governments may well supplant democracies because “the passage through the gantlet ahead may be possible only under governments capable of rallying obedience far more effectively than would be possible in a democratic setting. If the issue for mankind is survival, such governments may be unavoidable, even necessary.” Heilbroner wrote An Inquiry into the Human Prospect in 1972 and 1973, but his position had not changed by the end of the decade. In a revised edition written in 1979, he continued to insist upon the environmental limits to economic growth: “the industrialized capitalist and socialist worlds can probably continue along their present growth paths” for about 25 years, at which point “we must expect...a general recognition that the possibilities for expansion are limited, and that social and economic life must be maintained within fixed...material boundaries.” Heilbroner has published a number of books, including 21st Century Capitalism



Robert L. Heilbroner. (Photograph by Jose Pelaez. W. W. Norton. Reproduced by permission.)

and Visions of the Future. He also received the New York Council for the Humanities Scholar of the Year award in 1994. Heilbroner currently holds the position of Norman Thomas Professor of Economics, Emeritus, at the New School for Social Research, in New York City. [Richard K. Dagger]

RESOURCES BOOKS Heilbroner, Robert L. An Inquiry into the Human Prospect. Rev. ed. New York: Norton, 1980. ———. The Making of Economic Society. 6th ed. Englewood Cliffs, NJ: Prentice-Hall, 1980. ———. The Nature and Logic of Capitalism. New York: Norton, 1985. ———. Twenty-First Century Capitalism. Don Mills, Ont.: Anansi, 1992. ———. The Worldly Philosophers: The Lives, Times and Ideas of the Great Economic Thinkers. 6th ed. New York: Simon & Schuster, 1986. Straub, D., ed. Contemporary Authors: New Revision Series. Vol. 21. Detroit, MI: Gale Research, 1987.

Hells Canyon Hells Canyon is a stretch of canyon on the Snake River between Idaho and Oregon. This canyon, deeper than the Grand Canyon and formed in ancient basalt flows, contains some of

the United States’ wildest rapids and has provided extensive recreational and scenic boating since the 1920s. The narrow canyon has also provided outstanding dam sites. Hells Canyon became the subject of nationwide controversy between 1967 and 1975, when environmentalists challenged hydroelectric developers over the last stretch of free-flowing water in the Snake River from the border of Wyoming to the Pacific. Historically Hells Canyon, over 100 mi (161 km) long, filled with rapids, and averaging 6,500 ft (1,983 m) deep, presented a major obstacle to travelers and explorers crossing the mountains and deserts of southern Idaho and eastern Oregon. Nez Percé, Paiute, Cayuse, and other Native American groups of the region had long used the area as a mild wintering ground with good grazing land for their horses. European settlers came for the modest timber and with cattle and sheep to graze. As early as the 1920s travelers were arriving in this scenic area for recreational purposes, with the first river runners navigating the canyon’s rapids in 1928. By the end of the Depression the Federal Power Commission was urging regional utility companies to tap the river’s hydroelectric potential, and in 1958 the first dam was built in the canyon. Falling from the mountains in southern Yellowstone National Park through Idaho, and into the Columbia River, the Snake River drops over 7,000 vertical ft (2,135 m) in 1,000 mi (1,609 km) of river. This drop and the narrow gorges the river has carved presented excellent dam opportunities, and by the end of the 1960s there were 18 major dams along the river’s course. By that time the river was also attracting great numbers of whitewater rafters and kayakers, as well as hikers and campers in the adjacent national forests. When a proposal was developed to dam the last free-running section of the canyon, protesters brought a suit to the United States Supreme Court. In 1967, Justice William O.
Douglas led the majority in a decision directing the utilities to consider alternatives to the proposed dam. Hells Canyon became a national environmental issue. Several members of Congress flew to Oregon to raft the river. The Sierra Club and other groups lobbied vigorously. Finally, in 1975 President Gerald Ford signed a bill declaring the remaining stretch of the canyon a National Scenic Waterway, creating a 650,000-acre (260,000-ha) Hells Canyon National Recreation Area, and adding 193,000 acres (77,200 ha) of the area to the National Wilderness Preservation System. See also Wild and Scenic Rivers Act; Wild river [Mary Ann Cunningham Ph.D.]

RESOURCES BOOKS Collins, R. O., and R. Nash. The Big Drops. San Francisco: Sierra Club Books, 1978.

OTHER Hells Canyon Recreation Area. “Hells Canyon.” Washington, DC: U.S. Government Printing Office, 1988.

Hazel Henderson (1933 – ) English/American environmental activist and writer Hazel Henderson is an environmental activist and futurist who has called for an end to current “unsustainable industrial modes” and urges redress for the “unequal access to resources which is now so dangerous, both ecologically and socially.” Born in Clevedon, England, Henderson immigrated to the United States after finishing high school; she became a naturalized citizen in 1962. After working for several years as a free-lance journalist, in 1957 she married Carter F. Henderson, former London bureau chief of the Wall Street Journal. Her activism began when she became concerned about air quality in New York City, where she was living. To raise public awareness, she convinced the FCC and television networks to broadcast the air pollution index with the weather report. She persuaded an advertising agency to donate their services to her cause and teamed up with a New York City councilman to co-found Citizens for Clean Air. Her endeavors were rewarded in 1967, when she was commended as Citizen of the Year by the New York Medical Society. Henderson’s career as an advocate for social and environmental reform took flight from there. She argued passionately against the spread of industrialism, which she called “pathological” and decried the use of an economic yardstick to measure quality of life. Indeed, she termed economics “merely politics in disguise” and even “a form of brain damage.” Henderson believed that society should be measured by less tangible means, such as political participation, literacy, education, and health.
“Per-capita income,” she felt, is “a very weak indicator of human well-being.” She became convinced that traditional industrial development wrought little but “ecological devastation, social unrest, and downright hunger...I think of development, instead,...as investing in ecosystems, their restoration and management.” Even the fundamental idea of labor should, Henderson argued, “be replaced by the concept of ’Good Work’—which challenges individuals to grow and develop their faculties; to overcome their ego-centeredness by joining with others in common tasks; to bring forth those goods and services needed for a becoming existence; and to do all this with an ethical concern for the interdependence of all life forms...” To advance her theories, Henderson has published several books, Creative Alternative Futures: The End of Economics (1978), The Politics of the Solar Age: Alternatives to


Economics (1981), Building a Win-Win World (1996), Toward Sustainable Communities: Resources for Citizens and Their Governments (1998), and Beyond Globalization: Shaping a Sustainable Global Economy (1999). She has also contributed to several periodicals, and lectured at colleges and universities. In 1972 she co-founded the Princeton Center for Alternative Futures, of which she is still a director. She is a member of the board of directors for Worldwatch Institute and the Council for Economic Priorities, among other organizations. In 1982 she was appointed a Horace Albright Professor at the University of California at Berkeley. In 1996, Henderson was awarded the Global Citizen Award.

RESOURCES BOOKS Henderson, H. Beyond Globalization: Shaping a Sustainable Global Economy. 1999. ———. Building a Win-Win World. 1996. ———. Creative Alternative Futures: The End of Economics. 1978. ———. The Politics of the Solar Age: Alternatives to Economics. 1981. ———. Toward Sustainable Communities: Resources for Citizens and Their Governments. 1998. Telephone Interview with Hazel Henderson. Whole Earth Review (Winter 1988): 58–59.

PERIODICALS Henderson, H. “The Legacy of E. F. Schumacher.” Environment 20 (May 1978): 30–36. Holden, C. “Hazel Henderson: Nudging Society Off Its Macho Trip.” Science 190 (November 28, 1975): 863–64.

Herbicide Herbicides are chemical pesticides that are used to manage vegetation. Usually, herbicides are used to reduce the abundance of weedy plants, so as to release desired crop plants from competition. This is the context of most herbicide use in agriculture, forestry, and for lawn management. Sometimes herbicides are not used to protect crops, but to reduce the quantity or height of vegetation, for example along highways and transmission corridors. The reliance on herbicides to achieve these ends has increased greatly in recent decades, and the practice of chemical weed control appears to have become an entrenched component of the modern technological culture of humans, especially in agroecosystems. The total use of pesticides in the United States in the mid-1980s was 957 million lb per year (434 million kg/year), used over 592,000 mi2 (1.5 million km2). Herbicides were most widely used, accounting for 68% of the total quantity (646 million lb per year [293 million kg/year]), and applied to 82% of the treated land (484,000 square miles

Herbicide

per year (121 million hectares/year)]. Note that especially in agriculture, the same land area can be treated numerous times each year with various pesticides. A wide range of chemicals is used as herbicides, including: chlorophenoxy acids, especially 2,4-D and 2,4,5-T, which have an auxin-like growth-regulating property and are selective against broadleaved angiosperm plants; Otriazines such as atrazine, simazine, and hexazinone; Ochloroaliphatics such as dalapon and trichloroacetate; Othe phosphonoalkyl chemical, glyphosate, and Oinorganics such as various arsenicals, cyanates, and chlorates. A “weed” is usually considered to be any plant that interferes with the productivity of a desired crop plant or some other human purpose, even though in other contexts weed species may have positive ecological and economic values. Weeds exert this effect by competing with the crop for light, water, and nutrients. Studies in Illinois demonstrated an average reduction of yield of corn or maize (Zea mays) of 81% in unweeded plots, while a 51% reduction was reported in Minnesota. Weeds also reduce the yield of small grains, such as wheat (Triticum aestivum) and barley (Hordeum vulgare), by 25–50%. Because there are several herbicides that are toxic to dicotyledonous weeds but not grasses, herbicides are used most intensively used in grain crops of the Gramineae. For example, in North America almost all of the area of maize cultivation is treated with herbicides. In part this is due to the widespread use of no-tillage cultivation, a system that reduces erosion and saves fuel. Since an important purpose of plowing is to reduce the abundance of weeds, the notillage system would be impracticable if not accompanied by herbicide use. The most important herbicides used in maize cultivation are atrazine, propachlor, alachlor, 2,4-D, and butylate. 
Most of the area planted to other agricultural grasses such as wheat, rice (Oryza sativa), and barley is also treated with herbicide, mostly with the phenoxy herbicides 2,4-D or MCPA.

The intended ecological effect of any pesticide application is to control a pest species, usually by reducing its abundance to below some economically acceptable threshold. In a few situations, this objective can be attained without important nontarget damage. For example, a judicious spot-application of a herbicide can allow a selective kill of large lawn weeds in a way that minimizes exposure of nontarget plants and animals. Of course, most situations where herbicides are used are more complex and less well-controlled than this. Whenever a herbicide is broadcast-sprayed over a field or forest, a wide variety of on-site, nontarget organisms is affected, and sprayed herbicide also drifts from the target area. These exposures cause ecotoxicological effects directly, through toxicity to nontarget organisms and ecosystems, and indirectly, by changing habitat or the abundance of the food species of wildlife. These effects can be illustrated by the use of herbicides in forestry, with glyphosate used as an example.

The most frequent use of herbicides in forestry is for the release of small coniferous plants from the effects of competition with economically undesirable weeds. Usually the silvicultural use of herbicides occurs within the context of an intensive harvesting-and-management system, which may include clear-cutting, scarification, planting seedlings of a single desired species, spacing, and other practices. Glyphosate is a commonly used herbicide in forestry and agriculture. The typical spray rate in silviculture is about 2.2–4.9 lb (1–2.2 kg) active ingredient/ha, and the typical projection is for one to two treatments per forest rotation of 40–100 years. Immediately after an aerial application in forestry, glyphosate residues on foliage are about six times higher than those in litter on the forest floor, which is physically shielded from spray by overtopping foliage. The persistence of glyphosate residues is relatively short, with typical half-lives of two to four weeks in foliage and the forest floor, and up to eight weeks in soil. The disappearance of residues from foliage is mostly due to translocation and wash-off, but in the forest floor and soil glyphosate is immobile (and unavailable for root uptake or leaching) because of binding to organic matter and clay, and residue disappearance there is due to microbial oxidation. Residues in oversprayed waterbodies tend to be small and short-lived. For example, two hours after a deliberate overspray on Vancouver Island, Canada, residues of glyphosate in stream water rose to high levels; they then rapidly dissipated through flushing, falling to only trace amounts 94 hours later.
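The half-lives quoted above describe first-order (exponential) decay, which can be sketched as follows. The 21-day figure is simply a representative value from the two-to-four-week range given in the text, and the model is an illustration, not field data:

```python
def residue_remaining(initial: float, half_life_days: float, days: float) -> float:
    """First-order decay: residue remaining after `days`, given a half-life."""
    return initial * 0.5 ** (days / half_life_days)

# Foliage residue with an assumed 21-day half-life, starting at 100 units:
print(residue_remaining(100.0, 21.0, 21.0))  # one half-life  -> 50.0
print(residue_remaining(100.0, 21.0, 63.0))  # three half-lives -> 12.5
```

After roughly three half-lives (about nine weeks here), only an eighth of the initial residue remains, which is why glyphosate is described as relatively non-persistent.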
Because glyphosate is soluble in water, it has no propensity to bioaccumulate in organisms in preference to the inorganic environment, or to occur in larger concentrations at higher levels of the food chain/web. This is in marked contrast to some other pesticides such as DDT, which is soluble in organic solvents but not in water, and so has a strong tendency to bioaccumulate in the fatty tissues of organisms.

As a plant poison, glyphosate acts by inhibiting the pathway by which certain essential amino acids are synthesized. Only plants and some microorganisms have this metabolic pathway; animals obtain these amino acids from food. Consequently, glyphosate has a relatively small acute toxicity to animals, and there are large margins of toxicological safety in comparison with the environmental exposures that are realistically expected during operational silvicultural sprays. Acute toxicity of chemicals to mammals is often indexed by the oral dose required to kill 50% of a test population, usually of rats (i.e., the rat LD50). The LD50 value for pure glyphosate is 5,600 mg/kg, and its silvicultural formulation has a value of 5,400 mg/kg. Compare these to LD50s for some chemicals which many humans ingest voluntarily: nicotine 50 mg/kg, caffeine 366 mg/kg, acetylsalicylic acid (ASA) 1,700 mg/kg, sodium chloride 3,750 mg/kg, and ethanol 13,000 mg/kg. The documented risks of longer-term, chronic exposures of mammals to glyphosate are also small, especially considering the doses that might be received during an operational treatment in forestry.

Considering the relatively small acute and chronic toxicities of glyphosate to animals, it is unlikely that wildlife inhabiting sprayed clearcuts would be directly affected by a silvicultural application. However, glyphosate causes large habitat changes through species-specific effects on plant productivity, and by changing habitat structure. Therefore, wildlife such as birds and mammals could be secondarily affected through changes in vegetation and in the abundance of their arthropod foods. These indirect effects of herbicide spraying also fall within the scope of ecotoxicology. Indirect effects can alter the abundance and reproductive success of terrestrial and aquatic wildlife on a sprayed site, irrespective of a lack of direct, toxic effects. Studies of the effects of habitat changes caused by glyphosate spraying have found relatively small effects on the abundance and species composition of wildlife. Much larger effects on wildlife are associated with other forestry practices, such as clear-cutting and the broadcast spraying of insecticides. For example, in a study of clearcuts sprayed with glyphosate in Nova Scotia, Canada, only small changes in avian abundance and species composition could be attributed to the herbicide treatment. However, such studies of bird abundance are conducted by enumerating territories, and the results cannot be interpreted in terms of reproductive success.
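Purely to convey the scale of the LD50 figures quoted above, the doses can be ranked and expressed as total amounts for a nominal 70-kg body mass. The body mass is an illustrative assumption, and rat LD50 values are only a rough index that does not translate directly to humans:

```python
BODY_MASS_KG = 70  # nominal adult mass, assumed for illustration only

# Rat oral LD50 values (mg/kg) as quoted in the text
ld50_mg_per_kg = {
    "nicotine": 50,
    "caffeine": 366,
    "acetylsalicylic acid": 1_700,
    "sodium chloride": 3_750,
    "glyphosate (pure)": 5_600,
    "ethanol": 13_000,
}

# Rank from most to least acutely toxic (smallest LD50 first)
for name, dose in sorted(ld50_mg_per_kg.items(), key=lambda kv: kv[1]):
    grams = dose * BODY_MASS_KG / 1000  # mg/kg x kg -> mg, then -> g
    print(f"{name}: LD50 {dose} mg/kg, about {grams:g} g at {BODY_MASS_KG} kg")
```

Scaled this way, the glyphosate figure corresponds to roughly 392 g, about a hundredfold more than the nicotine figure, which is the sense in which the text calls its acute toxicity "relatively small."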
Regrettably, there are not yet any studies of the reproductive success of birds breeding on clearcuts recently treated with a herbicide. This is an important deficiency in terms of understanding the ecological effects of herbicide spraying in forestry.

An important controversy related to herbicides focused on their military use during the Vietnam War. During this conflict, the United States Air Force broadcast-sprayed herbicides to deprive its enemy of food production and forest cover. More than 5,600 mi2 (14,503 km2) were sprayed at least once, about one-seventh the area of South Vietnam. More than 55 million lb (25 million kg) of 2,4-D, 43 million lb (21 million kg) of 2,4,5-T, and 3.3 million lb (1.5 million kg) of picloram were used in this military program. The most frequently used herbicide was a 50:50 formulation of 2,4,5-T and 2,4-D known as Agent Orange. The rate of application was relatively large, averaging about 10 times the application rate for silvicultural purposes. About 86% of the spray missions were targeted against forests, and the remainder against cropland.

As was the military intention, these spray missions caused great ecological damage. Opponents of the practice labelled it “ecocide,” i.e., the intentional use of anti-environmental actions as a military tactic. The broader ecological effects included severe damage to mangrove and tropical forests, and a great loss of wildlife habitat. In addition, the Agent Orange used in Vietnam was contaminated by the dioxin isomer known as TCDD, an incidental by-product of the manufacturing process of 2,4,5-T. Using post-Vietnam manufacturing technology, the contamination of 2,4,5-T solutions by TCDD can be kept well below the maximum concentration of 0.1 parts per million (ppm) set by the United States Environmental Protection Agency (EPA). However, the 2,4,5-T used in Vietnam was grossly contaminated with TCDD, with concentrations as large as 45 ppm occurring in Agent Orange, and an average of about 2.0 ppm. Perhaps 243–375 lb (110–170 kg) of TCDD was sprayed with herbicides onto Vietnam. TCDD is well known to be extremely toxic; it can cause birth defects and miscarriage in laboratory mammals, although, as is often the case, toxicity to humans is less well understood. There has been great controversy about the effects on soldiers and civilians exposed to TCDD in Vietnam, but epidemiological studies have been equivocal about the damages. It seems likely that the effects of TCDD added little to human mortality or to the direct ecological effects of the herbicides that were sprayed in Vietnam.

A preferable approach to pesticide use is integrated pest management (IPM).
In the context of IPM, pest control is achieved by employing an array of complementary approaches, including: the use of natural predators, parasites, and other biological controls; the use of pest-resistant varieties of crops; environmental modifications to reduce the suitability of pest habitat; careful monitoring of pest abundance; and a judicious use of pesticides, when necessary, as a component of the IPM strategy. A successful IPM program can greatly reduce, but not necessarily eliminate, the reliance on pesticides. With specific relevance to herbicides, more research into organic systems and into pest-specific procedures is required for the development of IPM systems. Examples of pest-specific practices are the biological control of certain introduced weeds. For example, St. John’s wort (Hypericum perforatum), a serious weed of pastures of the United States Southwest because it is toxic to cattle, was controlled by the introduction in 1943 of two herbivorous leaf beetles; and the prickly pear cactus (Opuntia spp.), which became a serious weed of Australian rangelands after it was introduced as an ornamental plant, has been controlled by the release of the moth Cactoblastis cactorum, whose larvae feed on the cactus.

Unfortunately, effective IPM systems have not yet been developed for most weed problems for which herbicides are now used. Until there are alternative, pest-specific methods to achieve an economically acceptable degree of control of weeds in agriculture and forestry, herbicides will continue to be used for that purpose. See also Agricultural chemicals

[Bill Freedman Ph.D.]

RESOURCES BOOKS Freedman, B. Environmental Ecology. 2nd Edition. San Diego, CA: Academic Press, 1995. McEwen, F. L., and G. R. Stephenson. The Use and Significance of Pesticides in the Environment. New York: Wiley, 1979.

PERIODICALS Pimentel, D., et al. “Environmental and Economic Effects of Reducing Pesticide Use.” Bioscience 41 (1991): 402–409. ———. “Controversy Over the Use of Herbicides in Forestry, With Particular Reference to Glyphosate Usage.” Environmental Carcinogenesis Reviews C8 (1991): 277–286.

Heritage Conservation and Recreation Service
The Heritage Conservation and Recreation Service (HCRS) was created in 1978 as an agency of the U.S. Department of the Interior (Secretarial Executive Order 3017) to administer the National Heritage Program initiative of President Carter. The new agency was an outgrowth of and successor to the former Bureau of Outdoor Recreation. The HCRS resulted from the consolidation of some 30 laws, executive orders, and interagency agreements that provided federal funds to states, cities, and local community organizations to acquire, maintain, and develop historic, natural, and recreation sites. HCRS focused on the identification and protection of the nation’s significant natural, cultural, and recreational resources. It classified and established registers for heritage resources, formulated policies and programs for their preservation, and coordinated federal, state, and local resource and recreation policies and actions. In February 1981 HCRS was abolished as an agency and its responsibilities were transferred to the National Park Service.
Hetch Hetchy Reservoir
The Hetch Hetchy Reservoir, located on the Tuolumne River in Yosemite National Park, was built to provide water and hydroelectric power to San Francisco. Its creation in the early 1900s led to one of the first conflicts between preservationists and those favoring utilitarian use of natural resources. The controversy spanned the presidencies of Roosevelt, Taft, and Wilson.

A prolonged conflict between San Francisco and its only water utility, the Spring Valley Water Company, drove the city to search for an independent water supply. After surveying several possibilities, the city decided to build a dam and reservoir in the Hetch Hetchy Valley because the river there could supply the most abundant and purest water. This option was also the least expensive, since the city planned to use the dam to generate hydroelectric power. It would also provide an abundant supply of irrigation water for area farmers, as well as the recreational potential of a new lake. The city applied to the U.S. Department of the Interior in 1901 for permission to construct the dam, but the request was not approved until 1908. The department then turned the issue over to Congress to work out an exchange of land between the federal government and the city. Congressional debate spanned several years and produced a number of bills. Part of the controversy involved the Right of Way Act of 1901, which gave Congress power to grant rights of way through government lands; some claimed this was designed specifically for the Hetch Hetchy project.

Opponents of the project likened the valley to Yosemite on a smaller scale. They wanted to preserve its high cliff walls, waterfalls, and diverse plant species. One of the best-known opponents, John Muir, described the Hetch Hetchy Valley as “a grand landscape garden, one of Nature’s rarest and most precious mountain temples.” Campers and mountain climbers fought to save the campgrounds and trails that would be flooded.
As the argument continued, often played out in newspapers and other public forums, overwhelming national opinion appeared to favor the preservation of the valley. Despite this public support, a close vote in Congress led to the passage of the Raker Act, allowing the O’Shaughnessy Dam and Hetch Hetchy Reservoir to be constructed. President Woodrow Wilson signed the bill into law on December 19, 1913. The Hetch Hetchy Reservoir was completed in 1923 and still supplies water and electric power to San Francisco. In 1987, Secretary of the Interior Donald Hodel created a brief controversy when he suggested tearing down O’Shaughnessy Dam. See also Economic growth and the environment; Environmental law; Environmental policy [Teresa C. Donkin]

RESOURCES BOOKS Jones, Holway R. John Muir and the Sierra Club: The Battle for Yosemite. San Francisco: Sierra Club, 1965. Nash, Roderick. “Conservation as Anxiety.” In The American Environment: Readings in the History of Conservation. 2nd ed. Reading, Mass.: Addison-Wesley Publishing Company, 1976.

Heterotroph
A heterotroph is an organism that derives its nutritional carbon and energy by oxidizing (i.e., decomposing) organic materials. The higher animals, fungi, actinomycetes, and most bacteria are heterotrophs. These are the biological consumers that eventually decompose most of the organic matter on the earth. The decomposition products are then available for chemical or biological recycling. See also Biogeochemical cycles; Oxidation reduction reactions

High-grading (mining, forestry)
The practice of high-grading can be traced back to the early days of the California gold rush, when miners would sneak into claims belonging to others and steal the most valuable pieces of ore. The practice of high-grading remains essentially unchanged today. An individual or corporation will enter an area and selectively mine or harvest only the most valuable specimens, before moving on to a new area. High-grading is most prevalent in the mining and timber industries. It is not uncommon to walk into a forest, particularly an old-growth forest, and find the oldest and finest specimens marked for harvesting. See also Forest management; Strip mining

High-level radioactive waste
High-level radioactive waste consists primarily of the byproducts of nuclear power plants and defense activities. Such wastes are highly radioactive and often decay very slowly. They may release dangerous levels of radiation for hundreds or thousands of years. Most high-level radioactive wastes have to be handled by remote control by workers who are protected by heavy shielding. They present, therefore, a serious health and environmental hazard. No entirely satisfactory method for disposing of high-level wastes has as yet been devised. Currently, the best approach seems to involve immobilizing the wastes in a glass-like material and then burying them deep underground. See also Low-level radioactive waste; Radioactive decay; Radioactive pollution; Radioactive waste management; Radioactivity

High-solids reactor Solid waste disposal is a serious problem in the United

States and other developed countries. Solid waste can constitute valuable raw materials for commercial and industrial operations, however, and one of the challenges facing scientists is to develop an economically efficient method for utilizing it. Although the concept of bacterial waste conversion is simple, achieving an efficient method for putting the technique into practice is difficult. The main problem is that efficiency of conversion requires increasing the ratio of solids to water in the mixture, and this makes mixing more difficult mechanically. The high-solids reactor was designed by scientists at the Solar Energy Research Institute (SERI) to solve this problem. It consists of a cylindrical tube on a horizontal axis, and an agitator shaft running through the middle of it, which contains a number of Teflon-coated paddles oriented at 90 degrees to each other. The pilot reactors operated by SERI had a capacity of 2.6 gal (10 l). SERI scientists modeled the high solids reactor after similar devices used in the plastics industry to mix highly viscous materials. With the reactor, they have been able to process materials with 30–35% solids content, while existing reactors normally handle wastes with five to eight% solid content. With higher solid content, SERI reactors have achieved a yield of methane five to eight times greater than that obtained from conventional mixers. Researchers hope to be able to process wastes with solid content ranging anywhere from zero to 100%. They believe that they can eventually achieve 80% efficiency in converting biomass to methane. The most obvious application of the high-solids reactor is the processing of municipal solid wastes. Initial tests were carried out with sludge obtained from sewage treatment plants in Denver, Los Angeles, and Chicago. In all cases, conversion of solids in the sludge to methane was successful, and other applications of the reactor are also being considered. 
For example, it can be used to leach out uranium from mine wastes: anaerobic bacteria in the reactor will reduce uranium in the wastes and the uranium will then be absorbed on the bacteria or on ion exchange resins. The use of the reactor to clean contaminated soil is also being considered in the hope is that this will provide a desirable alternative to current processes for cleaning soil, 717

Environmental Encyclopedia 3

Hiroshima, Japan

which create large volumes of contaminated water. See also Biomass fuel; Solid waste incineration; Solid waste recycling and recovery; Solid waste volume reduction; Waste management [David E. Newton]

RESOURCES PERIODICALS “High Solids Reactor May Reduce Capital Costs.” Bioprocessing Technology (June 1990). “SERI Looking for Partners for Solar-Powered High Solids Reactor.” Waste Treatment Technology News (October 1990).

High-voltage power lines see Electromagnetic field

High-yield crops see Borlaug, Norman E.; Consultative Group on International Agricultural Research

Hiroshima, Japan
Hiroshima is a beautiful modern city located near the southwestern tip of the main Japanese island of Honshu. It had been a military center, with the headquarters of the Japanese southern army and a military depot, prior to the end of World War II. The city is now a manufacturing center with a major university and medical school. It is most profoundly remembered because it was the first city to be exposed to the devastation of an atomic bomb.

At 8:15 A.M. on the morning of August 6, 1945, a single B-29 bomber flying in from Tinian Island in the Marianas released the bomb at 2,000 ft (610 m) above the city. The target was a “T”-shaped bridge near the city center. The only surviving building in the city center after the atomic bomb blast was a domed cement building at ground zero, just a few yards from the bridge. An experimental bomb developed by the Manhattan Project had been exploded at Alamogordo, New Mexico, only a few weeks earlier. The Alamogordo bomb had the explosive force of 15,000 tons of TNT. The Hiroshima uranium-235 bomb, with the explosive power of 20,000 tons of TNT, was even more powerful than the New Mexico bomb.

The immediate effect of the bomb was to destroy by blast, winds, and fire an area of 4.4 mi2 (11.4 km2). Two-thirds of the city was destroyed. A portion of Hiroshima was protected from the blast by hills, and this is all that remains of the old city. Destruction of human lives was caused immediately by the blast force of the bomb, or later by burns or radiation sickness. Seventy-five thousand people were killed or fatally wounded; there was an equal number of wounded survivors. Nagasaki, to the south and west of Hiroshima, was bombed on August 9, 1945, with much loss of life. The bombing of these two cities brought World War II to a close.

The lessons that Hiroshima (and Nagasaki) teach are the horrors of war, with its random killing of civilian men, women, and children. That is the major lesson: war is horrible, with its destruction of innocent lives. The why of Hiroshima should be taken in the context of the battle for Okinawa, which occurred only weeks before. American forces suffered 12,000 dead and 36,000 wounded in the battle for that small island 350 mi (563.5 km) from the mainland of Japan. The Japanese were reported to have lost 100,000 men. The determination of the Japanese to defend their homeland was well known, and it was estimated that the invasion of Japan would cost no less than 500,000 American lives. Japanese casualties were expected to be larger. It was the military judgment of the American President that a swift termination of the war would save more lives than it would cost, both American and Japanese. Whether this rationale for the atomic bombing of Hiroshima was correct, i.e., whether more people would have died if Japan had been invaded, will never be known. However, it certainly is a fact that the war came to a swift end after the bombing of the two cities.

The second lesson to be learned from Hiroshima is that radiation exposure is hazardous to human health: radiation damage results in radiation sickness and increased cancer risk. It had been known since the development of x rays at the turn of the century that radiation has the potential to cause cancer. However, the thousands of survivors at Hiroshima and Nagasaki were to become the largest group ever studied for radiation damage.
The Atomic Bomb Casualty Commission, now referred to as the Radiation Effects Research Foundation (RERF), was established to monitor the health effects of radiation exposure and has studied these survivors since the end of World War II. The RERF has reported a 10–15 times excess of all types of leukemia among the survivors compared with populations not exposed to the bomb. The leukemia excess peaked four to seven years after exposure but still persists among the survivors. All forms of cancer tended to develop more frequently in heavily irradiated individuals, especially children under the age of 10 at the time of exposure. Thyroid cancer was also increased in these child survivors of the bomb.

War is destructive to human life. The particular kind of destruction at Hiroshima, due to an atomic bomb, continues to be relentlessly destructive. The city of Hiroshima is known as a Peace City. [Robert G. McKinnell]



A painting by Yasuko Yamagata depicting the Japanese city of Hiroshima after the atomic bomb was dropped on it. (Reproduced by permission.)

Holistic approach
First formulated by Jan Smuts, holism has been traditionally defined as a philosophical theory holding that the determining factors in nature are wholes, which are irreducible to the sum of their parts, and that the evolution of the universe is the record of the activity and making of such wholes. More generally, it is the concept that wholes cannot be analyzed into parts or reduced to discrete elements without unexplainable residuals. Holism may also be defined by what it is not: it is not synonymous with organicism; holism does not require an entity to be alive or even a part of living processes. Nor is holism confined to spiritual mysticism, inaccessible to scientific methods or study.

The holistic approach in ecology and environmental science derives from the idea proposed by Harrison Brown that “a precondition for solving [complex] problems is a realization that all of them are interlocked, with the result that they cannot be solved piecemeal.” For some scholars holism is the rationale for the very existence of ecology. As David Gates notes, “the very definition of the discipline of ecology implies a holistic study.” The holistic approach has been successfully applied to environmental management. The United States Forest Service, for example, has implemented a multi-level approach to management that takes into account the complexity of forest ecosystems, rather than the traditional focus on isolated incidents or problems.

Some people believe that a holistic approach to nature and the world will counter the effects of “reductionism”: excessive individualism, atomization, a mechanistic worldview, objectivism, materialism, and anthropocentrism. Advocates of holism claim that its emphasis on connectivity, community, processes, networks, participation, synthesis, systems, and emergent properties will undo the “ills” of reductionism. Others warn that a balance between reductionism and holism is necessary. American ecologist Eugene Odum mandated that “ecology must combine holism with reductionism if applications are to benefit society.” Parts and wholes, at the macro- and micro-level, must be understood.



The basic lesson of a combined and complementary parts-whole approach is that every entity is both part and whole, an idea reinforced by Arthur Koestler’s concept of a holon. A holon is any entity that is both a part of a larger system and itself a system made up of parts. It is essential to recognize that holism can include the study of any whole, the entirety of any individual in all its ramifications, without implying any organic analogy other than organisms themselves. A holistic approach alone, especially in its extreme form, is unrealistic, condemning scholars to an unproductive wallowing in unmanageable complexity. Holism and reductionism are both needed for accessing and understanding an increasingly complex world. See also Environmental ethics [Gerald L. Young Ph.D.]

RESOURCES BOOKS Bowen, W. “Reductions and Holism.” In Thinking About Nature: An Investigation of Nature, Value and Ecology. Athens: University of Georgia Press, 1988. Johnson, L. E. “Holism.” In A Morally Deep World: An Essay on Moral Significance and Environmental Ethics. Cambridge: Cambridge University Press, 1991. Savory, A. Holistic Resource Management. Covelo, CA: Island Press, 1988.

PERIODICALS Krippner, S. “The Holistic Paradigm.” World Futures 30 (1991): 133–40. Marietta Jr., D. E. “Environmental Holism and Individuals.” Environmental Ethics 10 (Fall 1988): 251–58. McCarty, D. C. “The Philosophy of Logical Wholism.” Synthese 87 (April 1991): 51–123. Van Steenbergen, B. “Potential Influence of the Holistic Paradigm on the Social Sciences.” Futures 22 (December 1990): 1071–83.

Homeostasis
Humans, all other organisms, and even ecological systems live in an environment of constant change. The persistently shifting, modulating, and changing milieu would not permit survival if it were not for the capacity of biological systems to respond to this constant flux by maintaining a relatively stable internal environment. An example taken from mammalian biology is temperature, which appears to be “fixed” at approximately 98.6°F (37°C). While humans can be exposed to extreme summer heat, and arctic mammals survive intense cold, body temperature remains constant within very narrow limits. Homeostasis is the sum total of all the biological responses that provide internal equilibrium and assure the maintenance of conditions for survival. The human species has a greater variety of living conditions than any other organism. The ability of humans to live and reproduce in such diverse circumstances is due to a combination of homeostatic mechanisms coupled with cultural (behavioral) responses.

The scientific concept of homeostasis emerged from two scientists: Claude Bernard, a French physiologist, and Walter Bradford Cannon, an American physician. Bernard contrasted the external environment which surrounds an organism with the internal environment of that organism. He was, of course, aware that the external environment fluctuated considerably, in contrast to the internal environment, which remained remarkably constant. He is credited with the enunciation of the constancy of the internal environment (“La fixité du milieu intérieur...”) in 1859. Bernard believed that the survival of an organism depended upon this constancy, and he observed it not only in temperature control but in the regulation of all of the systems that he studied. The concept of the stable “milieu intérieur” has been accepted and extended to the many organ systems of all higher vertebrates. This precise control of the internal environment is effected through hormones, the autonomic nervous system, endocrine glands, etc.

The term “homeostasis,” derived from the Greek homoios, meaning similar, and stasis, meaning to stand, suggests an internal environment which remains relatively similar or the same through time. The term was devised by Cannon in 1929 and used many times subsequently. Cannon noted that, in addition to temperature, there were complex controls involving many organ systems that maintained internal stability within narrow limits. When those limits are exceeded, there is a reaction in the opposite direction that brings the condition back to normal; the reaction returning the system to normal is referred to as negative feedback. Both Bernard and Cannon were concerned with human physiology. Nevertheless, the concept of homeostasis is applied to all levels of biological organization, from the molecular level to ecological systems, including the entire biosphere.
Engineers design self-controlling machines known as servomechanisms, in which feedback control is achieved by means of a sensing device and an amplifier that drives a servomotor, which in turn runs the operation of the device. Examples of such devices are the thermostats that control furnace heat in a home and the more complicated automatic pilots of aircraft. While human-made servomechanisms have similarities to biological homeostasis, they are not considered here.

As indicated above, temperature is closely regulated in humans and other homeotherms (birds and mammals). The human skin has thermal receptors sensitive to heat or cold. If cold is encountered, the receptors notify an area of the brain known as the hypothalamus via a nerve impulse. The hypothalamus has both a heat-promoting center and a heat-losing center, and, with cold, it is the former which is stimulated. Thyroid-releasing hormone, produced in the hypothalamus, causes the anterior pituitary to release thyroid-stimulating hormone which, in turn, causes the thyroid gland to increase production of thyroxine, which results in increased metabolism and therefore heat. Sympathetic nerves from the hypothalamus stimulate the adrenal medulla to secrete epinephrine and norepinephrine into the blood, which also increases body metabolism and heat. Increased muscle activity will generate heat, and that activity can be either voluntary (stamping the feet, for instance) or involuntary (shivering). Since heat is dissipated via body surface blood vessels, the nervous system causes surface vasoconstriction to decrease that heat loss. Further, the small quantity of blood that does reach the surface of the body, where it is chilled, is reheated by countercurrent heat exchange resulting from blood vessels containing cold blood from the limbs running adjacent to blood vessels from the body core which contain warm blood. The chilled blood is prewarmed prior to returning to the body core. A little-noted response to chilling is the voluntary reaching for a jacket or coat to minimize heat loss.

The body responds with opposite results when excessive heat is encountered. The individual tends to shed unnecessary clothing, and activity is reduced to minimize metabolism. Vasodilation of superficial blood vessels allows for radiation of heat. Sweat is produced, which by evaporation reduces body heat. It is clear that the maintenance of body temperature is closely controlled by a complex of homeostatic mechanisms. Each step in temperature regulation is controlled by negative feedback. As indicated above, with exposure to cold the hypothalamus, through a series of steps, induces the synthesis and release of thyroxine by the thyroid gland.
What was not indicated above was the fact that elevated levels of thyroxine control the level of activity of the thyroid by negative feedback inhibition of thyroid-stimulating hormone. An appropriate level of thyroid hormone is thus maintained. In contrast, with inadequate thyroxine, more thyroid-stimulating hormone is produced. Negative feedback controls assure that any particular step in homeostasis does not deviate too much from the normal.

Historically, biologists have been particularly impressed with mammalian and human homeostasis. Lower vertebrates have received less attention. However, while internal physiology may vary more in a frog than in a human, there are mechanisms which assure the survival of frogs. For instance, when the ambient temperature drops significantly in the autumn in northern latitudes, leopard frogs move into lakes or rivers which do not freeze. Moving into lakes and rivers is a behavioral response to a change in the external environment which results in internal temperature stability. The metabolism and structure of the frog are inadequate to protect the frog from freezing, but the specific heat of the water is such that freezing does not occur except at the
surface of the overwintering lake or river. Even though life at the bottom of a lake with an ice cover moves at a slower pace than during the warm summer months, a functioning circulatory system is essential for survival. In general, frog blood (not unlike crankcase oil prior to the era of multiviscosity oil) increases in viscosity as temperature decreases. Frog blood, however, decreases in viscosity with the prolonged autumnal and winter cold temperatures, thus assuring adequate circulation during the long nights under an ice cover. This is another control mechanism that assures the survival of frogs by maintaining a relatively stable internal environment during the harsh winter. With a return of a warm external environment, northern leopard frogs leave cold water to warm up under the spring sun. Warm temperature causes frog blood viscosity to increase to summer levels. It may be that the behavioral and physiological changes do not prevent oscillations that would be unsuitable for warm-blooded animals but, in the frog, the fluctuations do not interfere with survival, and in biology, that is all that is essential.

There is homeostasis in ecological systems. Populations of animals in complex systems fluctuate in numbers, but the variations in numbers are generally between limits. For example, predators survive in adequate numbers as long as prey are available. If predators become too great in number, the population of prey will diminish. With fewer prey, the numbers of predators plummet through negative feedback, thus permitting recovery of the preyed-upon species. The situation becomes much more complex when other food sources are available to the predator. Many organisms encounter a negative feedback on growth rate with crowding. This density-dependent population control has been studied in larval frogs, as well as many other organisms, where excretory products seem to specifically inhibit the crowded species but not other organisms in the same environment.
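The predator-prey negative feedback described above can be sketched as a simple simulation. The following Python snippet is a minimal, illustrative Lotka-Volterra-style model; all parameter values and starting populations are assumptions chosen for demonstration, not field data.

```python
# Minimal discrete-time sketch of predator-prey negative feedback
# (a Lotka-Volterra-style model). Parameters are illustrative only.

def step(prey, predators, dt=0.01,
         birth=1.0, predation=0.02, efficiency=0.01, death=0.5):
    """Advance one Euler step: prey reproduce, predators consume
    prey, and predators die off when prey are scarce."""
    d_prey = birth * prey - predation * prey * predators
    d_pred = efficiency * prey * predators - death * predators
    return prey + dt * d_prey, predators + dt * d_pred

prey, predators = 60.0, 30.0
prey_values = []
for _ in range(20_000):          # 200 time units
    prey, predators = step(prey, predators)
    prey_values.append(prey)

# Negative feedback keeps the population fluctuating between limits
# rather than exploding or collapsing to zero.
print(f"prey fluctuated between {min(prey_values):.0f} "
      f"and {max(prey_values):.0f}")
```

The coupling terms play the role of the feedback loop: an excess of predators drives prey down, which in turn starves the predator population back toward its former level.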
Even with adequate food, high-density culture of laboratory mice results in negative feedback on reproductive potential, with abnormal gonad development and delayed sexual maturity. Density-independent factors affecting populations are important in population control but would not be considered homeostasis. Drought is such a factor, and its effects can be contrasted with crowding. Populations of tadpoles will drop catastrophically when breeding ponds dry. Instead of fluctuating between limits (with controls), all individuals are affected the same (i.e., they die). The area must be repopulated with immigrants at a subsequent time, and the migration can be considered a population homeostatic control. The inward migration results in maintenance of population within the geographic area and aids in the survival of the species. [Robert G. McKinnell]


RESOURCES

BOOKS
Hardy, R. N. Homeostasis. London: Edward Arnold, Ltd., 1976.
Langley, L. L. Homeostasis. New York: Reinhold Publishing Co., 1965.
Tortora, G. J., and N. P. Anagnostakos. Principles of Anatomy and Physiology. 5th ed. New York: Harper and Row, 1987.

Homestead Act (1862)

The Homestead Act was signed into law in 1862. It offered, on a vast scale, free homesteads on unappropriated public lands. Any citizen (or alien who had filed a declaration of intent to become a citizen) who had reached the age of 21 and was the head of a family could acquire title to a tract of public land of up to 160 acres (65 ha) after living on it and farming it for five years. The only payment required was administrative fees. The settler could also obtain the land without the five-year residence and cultivation requirement by paying $1.25 per acre. With the advent of machinery to mechanize farm labor, 160-acre (65-ha) tracts soon became uneconomical to operate, and Congress modified the original act to allow acquisition of larger tracts. The Homestead Act remained in effect until 1976, when it was repealed by the Federal Land Policy and Management Act; homesteading continued in Alaska, where unappropriated land was still available, until 1986. The Homestead Act was designed to speed development of the United States and to achieve an equitable distribution of wealth. Poor settlers, who lacked the capital to buy land, were now able to start their own farms. Indeed, the act contributed greatly to the growth and development of the country, particularly in the period between the Civil War and World War I, and it did much to speed settlement west of the Mississippi River. In all, well over a quarter of a billion acres of land have been distributed under the Homestead Act and its amendments. However, only a small percentage of the land granted under the act between 1862 and 1900 was in fact acquired by homesteaders. According to estimates, at most one of every six acres, and possibly only one in nine, passed into the hands of family farmers. The railroad companies and land speculators obtained the bulk of the land, sometimes through gross fraud using dummy entrants.
Moreover, the railroads often managed to get the best land, while the homesteaders, ignorant of farming conditions on the Plains, often ended up with tracts least suitable to farming. Speculators frequently encouraged settlement on land that was too dry or had no sources of water for domestic use. When the homesteads failed, many settlers sold the land to speculators.

The environmental consequences of the Homestead Act were many and serious. The act facilitated railroad development, often in excess of transportation needs. In many instances, competing companies built lines to connect the same cities. Railroad development contributed significantly to the destruction of bison herds, which in turn led to the destruction of the way of life of the Plains Indians. Cultivation of the Plains caused wholesale destruction of the vast prairies, so that whole ecological systems virtually disappeared. Overfarming of semi-arid lands led to another environmental disaster, whose consequences were fully experienced only in the 1930s. The great Dust Bowl, with its terrifying dust storms, made huge areas of the country unlivable. The Homestead Act was based on the notion that land held no value unless it was cultivated. It has now become clear that reckless cultivation can be self-destructive. In many cases, unfortunately, the damage can no longer be undone. [William E. Larson and Marijke Rijsberman]

RESOURCES

PERIODICALS
Shimkin, M. N. “Homesteading on the Republican River.” Journal of the West 26 (October 1987): 58–66.

Horizon

Layers in the soil develop because of the additions, losses, translocations, and transformations that take place as the soil ages. The soil layers occur as a result of water percolating through the soil and leaching substances downward. The layers are parallel to the soil surface and are called horizons. Horizons vary from the surface to the subsoil and from one soil to the next because of the different intensities of these processes. Soils are classified into different groups based on the characteristics of their horizons.

Horseshoe crabs

The horseshoe crab (Limulus polyphemus) is the American species of a marine animal that is only a distant relation of crustaceans like crabs and lobsters. Horseshoe crabs are more closely related to spiders and scorpions. The crabs have been called “living fossils” because the genus dates back millions of years, and Limulus has evolved very little in that time. Fossils found in British Columbia indicate that the ancestors of horseshoe crabs were in North America about 520 million years ago. During the late twentieth century, the declining horseshoe crab population concerned environmentalists. Horseshoe crabs are a vital food source for dozens of species of birds that migrate from South America to the Arctic Circle. Furthermore, crabs are collected for medical research. After blood is taken from the crabs, they are returned to the ocean.

American horseshoe crabs live along the Atlantic Ocean coastline. Crab habitat extends south from Maine to the Yucatán in the Gulf of Mexico. Several other crab species are found in Southeast Asia and Japan. The American crab is named for its helmet-like shell, which is shaped like a horseshoe. Limulus has a sharp tail shaped like a spike. The tail helps the crab move through the sand. If the crab tips over, the tail serves as a rudder so the crab can right itself. The horseshoe crab is unusual in that its blood is blue and contains copper, while the blood of most other animals is red and contains iron. Mature female crabs measure up to 24 inches (61 cm) in length; males are about two-thirds the size of females. Horseshoe crabs can live for 19 years, and they reach sexual maturity in 10 years.

The crabs come to shore to spawn in late May and early June. They spawn during the phases of the full and new moons. The female digs nests in the sand and deposits from 200 to 300 eggs in each pit. The male crab fertilizes the eggs with sperm, and the egg clutch is covered with sand. During the spawning season, a female crab can deposit as many as 90,000 eggs. This spawning process coincides with the migration of shorebirds. Flocks of birds like the red knot and the sandpiper eat their fill of crab eggs before continuing their northbound migration.

Through the years, people have found a variety of uses for horseshoe crabs. During the sixteenth century, Native Americans in South Carolina attached the tails to the spears that they used to catch fish. In the nineteenth century, people ground the crabs up for use as fertilizer or food for chickens and hogs. During the twentieth century, researchers learned much about the human eye by studying the horseshoe crab's compound eye.
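The spawning figures above can be cross-checked with quick arithmetic; the egg counts come from the text, while the implied nest count is a derived estimate, not a figure from the source.

```python
# Implied nests per female, from the figures in the text:
# 200-300 eggs per nest, up to ~90,000 eggs per female per season.
eggs_per_season = 90_000
nests_if_large_clutches = eggs_per_season / 300
nests_if_small_clutches = eggs_per_season / 200
print(f"implied nests per female: "
      f"{nests_if_large_clutches:.0f}-{nests_if_small_clutches:.0f}")
```

That is, each female would dig on the order of a few hundred nests over a season, consistent with the intensive beach activity the entry describes.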
Furthermore, researchers discovered that the crab's blood contained a special clotting agent that could be used to test the purity of new drugs and intravenous solutions. The agent, called Limulus amoebocyte lysate, is obtained by collecting horseshoe crabs during the spawning season. Crabs are bled and then returned to the beach. Horseshoe crabs are also used as bait. The harvesting of crabs increased sharply during the 1990s, when people in the fishing industry used crabs as bait to catch eels and conch. The annual number of crabs harvested jumped from the thousands to the millions during the 1990s, according to environmental groups and organizations like the National Audubon Society.

The declining horseshoe crab population could affect millions of migrating birds. The Audubon Society reported seeing fewer birds at the Atlantic beaches where horseshoe crabs spawn.

A horseshoe crab (Limulus polyphemus). (©John M. Burnley, National Audubon Society Collection. Photo Researchers Inc. Reproduced by permission.)

In the spring of 2000, scientists said that birds appeared undernourished. Observers doubted that they would complete their journey to the Arctic Circle. The Audubon Society and environmental groups have campaigned for state and federal regulations to protect horseshoe crabs. By 2002, coastal states and the Atlantic States Marine Fisheries Commission had set limits on the number of crabs that could be harvested. The state of Virginia made bait bags mandatory when fishing with horseshoe crab bait. The mesh bag, made of hard plastic, holds the crab and makes it more difficult for predators to eat, so fewer Limulus crabs are needed as bait. Furthermore, the federal government created a 1,500-square-mile refuge for horseshoe crabs in Delaware Bay. The refuge extends from Ocean City, New Jersey, to north of Ocean City, Maryland. As of March 2002, harvesting was banned in the refuge. People who took crabs from the area faced a fine of up to $100,000, according to the National Marine Fisheries Service. Even as measures like those were enacted, marine biologists said that it could be several decades before the crab population increased. One reason for slow population growth is that it takes crabs 10 years to reach maturity. [Liz Swain]


RESOURCES

BOOKS
Fortey, Richard. Trilobite!: Eyewitness to Evolution. New York: Alfred A. Knopf, 2000.
Tanacredi, John, ed. Limulus in the Limelight: 350 Million Years in the Making and in Peril? New York: Kluwer Academic Publishers, 2001.

ORGANIZATIONS
Atlantic States Marine Fisheries Commission, 1444 Eye Street, NW, Sixth Floor, Washington, D.C. 20005. (202) 289-6400, Fax: (202) 289-6051, Email: [email protected], http://www.asmfc.org
National Audubon Society Horseshoe Crab Campaign, 1901 Pennsylvania Avenue NW, Suite 1100, Washington, D.C. 20006. (202) 861-2242, Fax: (202) 861-4290, Email: [email protected], http://www.audubon.org/campaign/horseshoe/contacts.htm
National Marine Fisheries Service, 1315 East West Highway, SSMC3, Silver Spring, MD 20910. (301) 713-2334, Fax: (301) 713-0596, Email: [email protected], http://www.nmfs.noaa.gov

Hospital wastes see Medical waste

Household waste

Household waste is commonly referred to as garbage or trash. As the population of the world expands, so does the amount of waste produced. Generally, the more automated and industrialized human societies become, the more waste they produce. For example, the industrial revolution introduced new manufactured products and new manufacturing processes that added to household solid waste and industrial waste. Modern consumerism and the excess packaging of many products also contribute significantly to the increasing amount of solid waste.

Much of the trash Americans produce (about 40%) is paper and paper products. Paper accounts for more than 71 million tons of garbage. Yard wastes are the next most common waste, contributing more than 31 million tons of solid waste. Metals account for more than 8% of all household waste, and plastics are close behind with another 8%, or 14 million tons. America's trash also contains about 7% glass and nearly 20 million tons of other materials like rubber, textiles, leather, wood, and inorganic wastes. Much of the waste comes from packaging materials. Other types of waste produced by consumers are durable goods such as tires, appliances, and furniture, while other household solid waste is made up of non-durable goods such as paper, disposable products, and clothing. Many of these items could be recycled and reused, so they can also be considered a non-utilized resource.

In less industrialized times, and even today in many developing countries, households and industries disposed of unwanted materials in bodies of water or in land dumps.
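The composition figures above can be checked for internal consistency. In the sketch below, the tonnages and percentages come from the text; the total size of the waste stream is a derived estimate, not a figure from the source.

```python
# Paper is "about 40%" of U.S. household waste and about 71 million
# tons, which together imply a total stream near 178 million tons.
paper_tons = 71e6
paper_share = 0.40
total_tons = paper_tons / paper_share
print(f"implied total stream: {total_tons / 1e6:.0f} million tons")

# Shares of the other tonnages quoted in the text, under that total:
for name, tons in [("yard waste", 31e6), ("plastics", 14e6)]:
    print(f"{name}: {tons / total_tons:.0%}")
```

The derived shares (yard waste around 17%, plastics around 8%) agree with the percentages quoted in the entry.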

However, this practice creates undesirable effects such as health hazards and foul odors. Open dumps serve as breeding grounds for disease-carrying organisms such as rats and insects. As the first world became more alert to environmental hazards, methods for waste disposal were studied and improved. Today, however, governments, policymakers, and individuals still wrestle with the problem of how to improve methods of waste disposal, storage, and recycling. In 1976, the United States Congress passed the Resource Conservation and Recovery Act (RCRA) in an effort to protect human health and the environment from hazards associated with waste disposal. In addition, the act aims to conserve energy and natural resources and to reduce the amount of waste Americans generate. Further, the RCRA promotes methods to manage waste in an environmentally sound manner. The act covers regulation of solid waste, hazardous waste, and underground storage tanks that hold petroleum products and certain chemicals. Most household solid waste is removed from homes through community garbage collection and then taken to landfills. The garbage in landfills is buried, but it can still produce noxious odors. In addition, rainwater can seep through landfill sites and leach out pollutants from the landfill trash. These are then carried into nearby bodies of water. Pollutants can also contaminate groundwater, which in turn leads to contamination of drinking water. In order to fight this problem, sanitary landfills were developed. Clay or plastic liners are placed in the ground before garbage is buried. This helps prevent water from seeping out of the landfill and into the surrounding environment. In sanitary landfills, each time a certain amount of waste is added to the landfill, it is covered by a layer of soil. At a predetermined height the site is capped and covered with dirt.
Grass and trees can be planted on top of the capped landfill to help prevent erosion and to improve the look of the site. Sanitary landfills are more expensive than open pit dumps, and many communities do not want the stigma of having a landfill near them. These factors make it politically difficult to open new landfills. Landfills are regulated by state and local governments and must meet minimum requirements set by the United States Environmental Protection Agency (EPA). Some household hazardous wastes such as paint, used motor oil, or insecticides cannot be accepted at landfills and must be handled separately.

Incineration (burning) of solid waste offers an alternative to disposal in landfills. Incineration converts large amounts of solid waste to smaller amounts of ash. The ash must still be disposed of, however, and it can contain toxic materials. Incineration releases smoke and other possible pollutants into the air. However, modern incinerators are equipped with smokestack scrubbers that are quite effective in trapping toxic emissions. Many incinerators have the added benefit of generating electricity from the trash they burn.

Composting is a viable alternative to landfills and incineration for some biodegradable solid waste. Vegetable trimmings, leaves, grass clippings, straw, horse manure, wood chippings, and similar plant materials are all biodegradable and can be composted. Compost helps the environment because it reduces the amount of waste going into landfills. Correct composting also breaks down biodegradable material into a nutrient-rich soil additive that can be used in gardens or for landscaping. In this way, nutrients vital to plants are returned to the environment. To successfully compost biodegradable wastes, the process must generate high enough temperatures to kill seeds or organisms in the composted material. If done incorrectly, compost piles can give off foul odors.

Families and communities can help reduce household waste by making some simple lifestyle changes. They can reduce solid waste by recycling, repairing rather than replacing durable goods, buying products with minimal packaging, and choosing packaging made from recycled materials. Reducing packaging material is an example of source reduction. Much of the responsibility for source reduction lies with manufacturers. Businesses need to be encouraged to find smart and cost-effective ways to manufacture and package goods in order to minimize waste and reduce the toxicity of the waste created. Consumers can help by encouraging companies to create more environmentally responsible packaging through their choice of products. For example, consumers successfully pressured McDonald's to change from serving their sandwiches in non-biodegradable Styrofoam boxes to wrapping them in biodegradable paper. Individual households can reduce the amount of waste they send to landfills by recycling.
Paper, aluminum, glass, and plastic containers are the most commonly recycled household materials. Strategies for household recycling vary from community to community. In some areas materials must be separated by type before collection. In others, the separation occurs after collection. Recycling preserves natural resources by providing an alternative supply of raw materials to industries. It also saves energy and eliminates the emissions of many toxic gases and water pollutants. In addition, recycling helps create jobs, stimulates development of more environmentally sound technologies, and conserves resources for future generations. For recycling to be successful, there must be an end market for goods made from recycled materials. Consumers can support recycling by buying “green” products made of recycled materials. Battery recycling is also becoming increasingly common in the United States and is required by law in many
European countries. In 2001, a nonprofit organization called Rechargeable Battery Recycling Corporation (RBRC) began offering American communities cost-free recycling of portable rechargeable batteries such as those used in cell phones, camcorders, and laptop computers. These batteries contain cadmium, which is recycled back into other batteries or used in certain coatings or color pigments. Household waste disposal is an international problem that is being attacked in many ways in many countries. In Tripoli, Libya, a plant exploits household waste, converting it to organic fertilizer. The plant recycles 500 tons of household waste, producing 212 tons of fertilizer a day. In France, a country with less available space for landfills than the United States, incineration is proving a desirable alternative. The French are turning household waste into energy through combustion and are developing technologies to control the residues that occur from incineration. In the United States, education of consumers is a key to reducing the volume and toxicity of household waste. The EPA promotes four basic principles for reducing solid waste: reduce the amount of trash discarded, reuse products and containers, recycle and compost, and reconsider activities that produce waste. [Teresa G. Norris]

RESOURCES

OTHER
“Communities Invited to Recycle Rechargables Cost Free.” Environmental News Network. October 11, 2001 [cited July 2002].

ORGANIZATIONS
U.S. Environmental Protection Agency, Office of Solid Waste, 1200 Pennsylvania Avenue NW, Washington, DC 20460. (703) 412-9810, Toll Free: (800) 424-9346

HRS see Hazard Ranking System

Hubbard Brook Experimental Forest

The Hubbard Brook Experimental Forest is located in West Thornton, New Hampshire. It is an experimental area established in 1955 within the White Mountains National Forest on New Hampshire's central plateau and is administered by the U.S. Forest Service. Hubbard Brook was the site of many important ecological studies, beginning in the 1960s, which established the extent of nutrient losses when all the trees in a watershed are cut. Hubbard Brook is a north temperate watershed covered with a mature forest, and it is still accumulating biomass. In one early study, vegetation cut in a section of
Hubbard Brook was left to decay while nutrient losses were monitored in the runoff. Total nitrogen losses in the first year were twice the amount cycled in the system during a normal year. With the rise of nitrate in the runoff, concentrations of calcium, magnesium, sodium, and potassium rose. These increases caused eutrophication and pollution of the streams fed by this watershed. Once the higher plants had been destroyed, the soil was unable to retain nutrients. Early evidence from the studies indicated that the total losses from the ecosystem due to the clear-cutting amounted to a large fraction of its total nutrient inventory. The site's ability to support complex living systems was reduced. The lost nutrients could accumulate again, but erosion of primary minerals would limit the number of plants and animals sustained in the area.

Another study at the Hubbard Brook site investigated the effects of forest cutting and herbicide treatment on nutrients in the forest. All of the vegetation in one of Hubbard Brook's seven watersheds was cut, and then the area was treated with herbicides. At the time, the conclusions were startling: deforestation resulted in much larger runoffs into the streams. The pH of the drainage stream went from 5.1 to 4.3, along with a change in temperature and electrical conductivity of the stream water. A combination of higher nutrient concentration, higher water temperature, and greater solar radiation due to the loss of forest cover produced an algal bloom, the first sign of eutrophication. This signaled that a change in the ecosystem of the watershed had occurred. It was ultimately demonstrated at Hubbard Brook that the use of herbicides on a cut area resulted in their transfer to the outgoing water.

Hubbard Brook Experimental Forest continues to be an active research facility for foresters and biologists. Most current research focuses on water quality and nutrient exchange.
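Because pH is a logarithmic scale, the reported drop from pH 5.1 to 4.3 represents a substantial change in acidity. A one-line check (pH values from the text):

```python
# pH is the negative base-10 logarithm of hydrogen-ion concentration,
# so a drop from pH 5.1 to pH 4.3 multiplies the hydrogen-ion
# concentration by 10 ** (5.1 - 4.3), roughly six-fold.
h_before = 10 ** -5.1   # mol/L at pH 5.1
h_after = 10 ** -4.3    # mol/L at pH 4.3
factor = h_after / h_before
print(f"H+ concentration increased about {factor:.1f}-fold")
```

In other words, the deforested watershed's stream became roughly six times more acidic, not merely 0.8 "units" worse.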
The Forest Service also maintains an acid rain monitoring station and conducts research on old-growth forests. The results from various studies done at Hubbard Brook have shown that mature forest ecosystems have a greater ability to trap and store nutrients for recycling within the ecosystem. In addition, mature forests offer a higher degree of biodiversity than do forests that are clear-cut. See also Aquatic chemistry; Cultural eutrophication; Decline spiral; Experimental Lakes Area; Nitrogen cycle [Linda Rehkopf]

RESOURCES

BOOKS
Bormann, F. H. Pattern and Process in a Forested Ecosystem: Disturbance Development and the Steady State Based on the Hubbard Brook Ecosystem Study. New York: Springer-Verlag, 1991.


Botkin, D. B. Forest Dynamics: An Ecological Model. New York: Oxford University Press, 1993.

PERIODICALS
Miller, G. “Window Into a Water Shed.” American Forests 95 (May–June 1989): 58–61.

Hudson River

Starting at Lake Tear of the Clouds, a two-acre (0.8-ha) pond in New York's Adirondack Mountains, the Hudson River runs 315 miles (507 km) to the Battery on Manhattan Island's southern tip, where it meets the Atlantic Ocean. Although polluted and extensively dammed for hydroelectric power, the river still contains a wealth of aquatic species, including massive sea sturgeon (Acipenser oxyrhynchus) and short-nosed sturgeon (A. brevirostrum). The upper Hudson is a fast-flowing trout stream, but below the Adirondack Forest Preserve, pollution from municipal sources, paper companies, and industries degrades the water. Stretches of the upper Hudson contain so-called warm-water fish, including northern pike (Esox lucius), chain pickerel (E. niger), smallmouth bass (Micropterus dolomieui), and largemouth bass (M. salmoides). The latter two species swam into the Hudson through the Lake Erie and Lake Champlain canals, which were completed in the early nineteenth century.

The Catskill Mountains dominate the mid-Hudson region, which is rich in fish and wildlife, though dairy farming, a source of runoff pollution, is strong in the region. American shad (Alosa sapidissima), historically the Hudson's most important commercial fish, spawn on the river flats between Kingston and Coxsackie. Marshes in this region support snapping turtles (Chelydra serpentina) and, in the winter, muskrat (Ondatra zibethicus) and mink (Mustela vison). Water chestnuts (Trapa natans) grow luxuriantly in this section of the river.

Deep and partly bordered by mountains, the lower Hudson resembles a fiord. The unusually deep lower river makes it suitable for navigation by ocean-going vessels for 150 miles (241 km) upriver to Albany. Because the river's surface elevation does not drop between Albany and Manhattan, the tidal effects of the ocean are felt all the way upriver to the Federal Lock and Dam above Albany.
These powerful tides make long stretches of the lower Hudson saline or brackish, with saltwater penetrating as far as 60 miles (97 km) upstream from the Battery. The Hudson contains a great variety of botanical species. Over a dozen oaks thrive along its banks, including red oaks (Quercus rubra), black oaks (Q. velutina), pin oaks (Q. palustris), and rock chestnut oak (Q. prinus). Numerous other trees also abound, from mountain laurel (Kalmia latifolia) and red pine (Pinus resinosa) to flowering dogwood (Cornus florida), together with a wide variety of small herbaceous plants.
The Hudson River is comparatively short. More than 80 American rivers are longer, but it plays a major role in New York's economy and ecology. Pollution threats to the river have been caused by the discharge of industrial and municipal waste, as well as pesticides washed off the land by rain. From 1930 to 1975, one chemical company on the river manufactured approximately 1.4 billion pounds of polychlorinated biphenyls (PCBs), and an estimated 10 million pounds a year entered the environment. In all, a total of 1.3 million pounds of PCB contamination allegedly occurred during the years prior to the ban, with the pollution originating from plants at Fort Edward and Hudson Falls. A ban was put in place for a time prohibiting the possession, removal, and eating of fish from the waters of the upper Hudson River. A proposed cleanup called for dredging and sifting 2.65 million cubic yards of sediment along a 40-mile stretch north of Albany, with an anticipated yield of 75 tons of PCBs. In February 2001 the U.S. Environmental Protection Agency (EPA), having invoked the Superfund law, required the chemical company to begin planning the cleanup. The company was given several weeks to present a viable plan, or else face a potential $1.5 billion fine for ignoring the directive, in lieu of paying the cost of cleanup. The cleanup, estimated at $500 million, was presented as the preferred alternative. The engineering phase of the project was expected to take three years of planning and was to be scheduled after the company filed a response to the EPA.
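The dredging figures quoted above imply an average PCB recovery rate per unit of sediment, which can be checked with a quick back-of-the-envelope calculation. This sketch is illustrative only and is not from the encyclopedia; it assumes short tons (2,000 lb), which the source does not specify.

```python
# Back-of-the-envelope check of the proposed Hudson River cleanup figures:
# 2.65 million cubic yards of dredged sediment expected to yield 75 tons
# of PCBs. Assumes short tons (2,000 lb); the source does not specify.

sediment_cubic_yards = 2.65e6   # sediment to be dredged and sifted
pcb_tons = 75                   # anticipated PCB yield
pcb_pounds = pcb_tons * 2000    # short tons to pounds

# Implied average PCB recovery per cubic yard of sediment
pounds_per_cubic_yard = pcb_pounds / sediment_cubic_yards
print(f"{pounds_per_cubic_yard:.4f} lb of PCBs per cubic yard")
```

The result, a few hundredths of a pound per cubic yard, illustrates why the project required sifting enormous volumes of sediment to recover a comparatively small mass of contaminant.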
The company responded within the allotted time frame in order to placate the EPA, although the specifics of a drafted work plan remained undetermined, and the company refused to withdraw a lawsuit filed in November 2000 challenging the constitutionality of the Superfund law that authorized the EPA to take action. The river, meanwhile, was ranked by one environmental watchdog group as the fourth most endangered in the United States, specifically because of the PCB contamination. Environmental groups also demanded that attention be paid to the issues of urban sprawl, noise, and other pollution, while industrialists endorsed proposals for potentially polluting projects as a means of spurring the area's economy. Among these projects, the construction of a cement plant in Catskill, where there is easy access to a limestone quarry, and the development of a power plant along the river in Athens generated controversy, pitting the industrial assets afforded by development along the river against the advantages of a less fouled environment. Additionally, the power plant, which threatened to add four new smokestacks to the skyline and to aggravate pollution, was seen as potentially detrimental to tourism in that area. Also in recent decades, chlorinated hydrocarbons, dieldrin, endrin, DDT, and other pollutants have been linked to the decline in populations of the once common Jefferson salamander (Ambystoma jeffersonianum), fish hawk (Pandion haliaetus), and bald eagle (Haliaeetus leucocephalus). Concerns over the condition of the lower river spread anew following the September 11, 2001, terrorist attacks on New York City. In this coastal tri-state urban area, where anti-dumping laws were put in place in the mid-twentieth century to protect the river from deterioration due to pollution, new threats surfaced: the integrity of some land-based structures, including seawalls and underwater tunnels, was compromised by the attacks, creating the potential for assorted types of leakage into the river. See also Agricultural pollution; Dams; Estuary; Feedlot runoff; Fertilizer runoff; Industrial waste treatment; Sewage treatment; Wastewater [David Clarke]

RESOURCES
BOOKS
Boyle, R. H. The Hudson River: A Natural and Unnatural History. New York: Norton, 1979.
Peirce, N. R., and J. Hagstrom. The Book of America: Inside the Fifty States Today. New York: Norton, 1983.

PERIODICALS
The Scientist, March 19, 2001.

Human ecology
Human ecology may be defined as the branch of knowledge concerned with relationships between human beings and their environments. Among the disciplines contributing seminal work in this field are sociology, anthropology, geography, economics, psychology, political science, philosophy, and the arts. Applied human ecology emerges in engineering, planning, architecture, landscape architecture, conservation, and public health. Human ecology, then, is an interdisciplinary study that applies the principles and concepts of ecology to human problems and the human condition. The notion of interaction—between human beings and the environment and between human beings—is fundamental to human ecology, as it is to biological ecology. Human ecology as an academic inquiry has disciplinary roots extending back as far as the 1920s. However, much work in the decades prior to the 1970s was narrowly drawn and was often carried out by a few individuals whose intellectual legacy remained isolated from the mainstream of their disciplines. The work done in sociology offers an exception to the latter (but not the former) rule; sociological human ecology is traced to the Chicago school and the intellectual lineage of Robert Ezra Park, his student Roderick D. McKenzie, and McKenzie's student Amos Hawley. Through the influence of these men and their school, human ecology, for a time, was narrowly identified with a sociological analysis of spatial patterns in urban settings (although broader questions were sometimes contemplated). Comprehensive treatment of human ecology is first found in the work of Gerald L. Young, who pioneered the study of human ecology as an interdisciplinary field and as a conceptual framework. Young's definitive framework is founded upon four central themes. The first of these is interaction, and the other three are developed from it: levels of organization, functionalism (part-whole relationships), and holism. These four basic concepts form the foundation for a series of field derivatives (niche, community, and ecosystem) and consequent notions (institutions, proxemics, alienation, ethics, world community, and stress/capacitance). Young's emphasis on linkages and process set his approach apart from other synthetic attempts in human ecology, which were largely cumbersome classificatory schemata. These were subject to harsh criticism because they tended to embrace virtually all knowledge, resolve themselves into superficial lists and mnemonic "building blocks," and had little applicability to real-world problems. Generally, comprehensive treatment of human ecology is more advanced in Europe than in the United States. A comprehensive approach to human ecology as an interdisciplinary field and conceptual framework gathered momentum in several independent centers during the 1970s and 1980s.
Among these have been several college and university programs and research centers, including those at the University of Göteborg, Sweden, and, in the United States, at Rutgers University and the University of California at Davis. Interdisciplinary programs at the undergraduate level were first offered in 1972 by the College of the Atlantic (Maine) and The Evergreen State College (Washington). The Commonwealth Human Ecology Council in the United Kingdom, the International Union of Anthropological and Ethnological Sciences' Commission on Human Ecology, the Centre for Human Ecology at the University of Edinburgh, the Institute for Human Ecology in California, and professional societies and organizations in Europe and the United States have been other centers of development for the field. Dr. Thomas Dietz, President of the Society for Human Ecology, defined some of the priority research problems which human ecology addresses in recent testimony before the U.S. House of Representatives Subcommittee on Environment and the National Academy of Sciences Committee on Environmental Research. Among these, Dietz listed global change, values, post-hoc evaluation, and science and conflict in environmental policy. Other human ecologists would include in the list such items as commons problems, carrying capacity, sustainable development, human health, ecological economics, problems of resource use and distribution, and family systems. Problems of epistemology or cognition such as environmental perception, consciousness, or paradigm change also receive attention.
Our Common Future, the report of the United Nations' World Commission on Environment and Development of 1987, has stimulated a new phase in the development of human ecology. A host of new programs, plans, conferences, and agendas have been put forth, primarily to address phenomena of global change and the challenge of sustainable development. These include the Sustainable Biosphere Initiative published by the Ecological Society of America in 1991 and extended internationally; the United Nations Conference on Environment and Development; the proposed new United States National Institutes for the Environment; the Man and the Biosphere Program's Human-Dominated Systems Program; the report of the National Research Council Committee on Human Dimensions of Global Change and the associated National Science Foundation's Human Dimensions of Global Change Program; and green plans published by the governments of Canada, Norway, the Netherlands, the United Kingdom, and Austria. All of these programs call for an integrated, interdisciplinary approach to complex problems of human-environmental relationships. The next challenge for human ecology will be to digest and steer these new efforts and to identify the perspectives and tools they supply. [Jeremy Pratt]

RESOURCES
BOOKS
Jungen, B. "Integration of Knowledge in Human Ecology." In Human Ecology: A Gathering of Perspectives, edited by R. J. Borden, et al. Selected papers from the First International Conference of the Society for Human Ecology, 1986.
———. Origins of Human Ecology. Stroudsburg, PA: Hutchinson & Ross, 1983.

PERIODICALS
Young, G. L. "Human Ecology As An Interdisciplinary Concept: A Critical Inquiry." Advances in Ecological Research 8 (1974): 1–105.
———. "Conceptual Framework For An Interdisciplinary Human Ecology." Acta Oecologiae Hominis 1 (1989): 1–136.

Humane Society of the United States
The largest animal protection organization in the United States, the Humane Society of the United States (HSUS) works to preserve wildlife and wilderness, save endangered species, and promote humane treatment of all animals. Formed in 1954, HSUS specializes in education, cruelty investigations and prosecutions, wildlife and nature preservation, environmental protection, federal and state legislative activities, and other actions designed to protect animal welfare and the environment. Major projects undertaken by HSUS in recent years have included campaigns to stop the killing of whales, dolphins, elephants, bears, and wolves; to help reduce the number of animals used in medical research and to improve the conditions under which they are used; to oppose the use of fur by the fashion industry; and to address the problem of pet overpopulation. The group has worked extensively to ban the use of tuna caught in ways that kill dolphins, largely eliminating the sale of such products in the United States and western Europe. It has tried to stop international airlines from transporting exotic birds into the United States. Other high-priority projects have included banning the international trade in elephant ivory, especially imports into the United States, and securing and maintaining a general worldwide moratorium on commercial whaling.
HSUS's companion animals section works on a variety of issues affecting dogs, cats, birds, horses, and other animals commonly kept as pets, striving to promote responsible pet ownership, particularly the spaying and neutering of dogs and cats to reduce the tremendous overpopulation of these animals. HSUS works closely with local shelters and humane societies across the country, providing information, training, evaluation, and consultation. Several national and international environmental and animal protection groups are affiliated and work closely with HSUS. Humane Society International works abroad to fulfill HSUS's mission and to institute reform and educational programs that will benefit animals.
EarthKind, a global environmental protection group that emphasizes wildlife protection and humane treatment of animals, has been active in Russia, India, Thailand, Sri Lanka, the United Kingdom, Romania, and elsewhere, working to preserve forests, wetlands, wild rivers, natural ecosystems, and endangered wildlife. The National Association for Humane and Environmental Education is the youth education division of HSUS, developing and producing periodicals and teaching materials designed to instill humane values in students and young people, including KIND (Kids in Nature’s Defense) News, a newspaper for elementary school children, and KIND TEACHER, an 80-page annual full of worksheets and activities for use by teachers. The Center for Respect of Life and the Environment works with academic institutions, scholars, religious leaders

and organizations, arts groups, and others to foster an ethic of respect and compassion towards all creatures and the natural environment. Its quarterly publication, Earth Ethics, examines such issues as earth education, sustainable communities, ecological economics, and other values affecting our relationship with the natural world. The Interfaith Council for the Protection of Animals and Nature promotes conservation and education mainly within the religious community, attempting to make religious leaders, groups, and individuals more aware of our moral and spiritual obligations to preserve the planet and its myriad life forms. HSUS has been quite active, hard-hitting, and effective in promoting its animal protection programs, such as leading the fight against the fur industry. It accomplishes its goals through education, lobbying, grassroots organizing, and other traditional, legal means of influencing public opinion and government policies. With over 3.5 million members or “constituents” and an annual budget of over $35 million, HSUS is considered the largest and one of the most influential animal protection groups in the United States and, perhaps, the world. [Lewis G. Regenstein]

RESOURCES
ORGANIZATIONS
The Humane Society of the United States, 2100 L Street, NW, Washington, D.C., USA 20037. (202) 452-1100.

Humanism
A perspective or doctrine that focuses primarily on the interests, capacities, and achievements of human beings. This focus on human concerns has led some to conclude that human beings have rightful dominion over the earth and that their interests and well-being are paramount, taking precedence over all other considerations. Religious humanism, for instance, generally holds that God made human beings in His own image and put them in charge of His creation. Secular humanism views human beings as the source of all value or worth. Some environmentally minded critics, such as Lynn White Jr. and David Ehrenfeld, claim that much environmental destruction can be traced to "the arrogance of humanism."

Human-powered vehicles
Finding easy modes of transportation seems to be a basic human need, but finding easy and clean modes is becoming imperative. Traffic congestion, overconsumption of fossil fuels, and air pollution are all direct results of automotive

[Figure: Innovative bicycle designs. (McGraw-Hill Inc. Reproduced by permission.)]

lifestyles around the world. The logical alternative is human-powered vehicles (HPVs), perhaps best exemplified by the bicycle, the most basic HPV. New high-tech developments in HPVs are not yet ready for mass production, nor are they able to compete with cars. Pedal-propelled HPVs in the air, on land, or under the sea are still in the expensive, design-and-race-for-a-prize category. But the challenge of human-powered transport has inspired a lot of inventive thinking, both amateur and professional.
Bicycles and rickshaws comprise the most basic HPVs. Of these two vehicles, bicycles are clearly the more popular, and production of these HPVs has surpassed production of automobiles in recent years. The number of bicycles in use throughout the world is roughly double that of cars; China alone contains 270 million bicycles, or one-third of the total bicycles worldwide. Indeed, the bicycle has overtaken the automobile as the preferred mode of transportation in many nations. There are many reasons for the popularity of the bike: it fulfills both recreational and functional needs, it is an economical alternative to the automobile, and it does not contribute to the problems facing the environment.

Although the bicycle provides a healthy and scenic form of recreation, people also find it useful for basic transportation. In the Netherlands, bicycle transportation accounts for 30% of work trips and 60% of school trips. One-third of commuting to work in Denmark is by bicycle. In China, the vast majority of all trips are made by bicycle. A surge in bicycle production occurred in 1973, when, in conjunction with rising oil costs, production doubled to 52 million per year. Soaring fuel prices in the 1970s inspired people to find inexpensive, economical alternatives to cars, and many turned to bicycles. Besides being efficient transportation, bikes are simply cheaper to purchase and maintain than cars. There is no need to pay for parking or tolls, no expensive upkeep, and no high fuel costs. The lack of fuel costs leads to another benefit: bicycles do not harm the environment. Cars consume fossil fuels and in so doing release more than two-thirds of the United States' smog-producing chemicals. They are furthermore considered responsible for many other environmental ailments: depletion of the ozone layer

through the release of chlorofluorocarbons from automobile air-conditioning units; cancers caused by toxic emissions; and consumption of the world's limited fuel resources. With human energy as their only requirement, bicycles have none of these liabilities.
Nevertheless, in many cases—such as long trips or travelling in inclement weather—cars are the preferred form of transportation. Bicycles are not the optimal choice in many situations. Thus engineers and designers seek to improve on the bicycle and make machines suitable for transport under many different conditions. They are striving to produce new human-powered vehicles—HPVs that maximize air and sea currents, that have reasonable interior ergonomics, and that can be inexpensively produced. Several machines designed to fit these criteria exist.
As for developments in human-powered aircraft, success is judged on distance and speed, which depend on the strength of the pedaller and the lightness of the craft. The current world record holder is Greek Olympic cyclist Kanellos Kanellopoulos, who flew Daedalus 88, a craft created by engineer John Langford and a team of MIT engineers and funded by American corporations. Kanellopoulos flew Daedalus 88 for 3 hours and 54 minutes across the Aegean Sea between Crete and Santorini, a distance of 74 mi (119 km), in April 1988. The craft averaged 18.5 mph (29 kph) and flew 15 ft (4.6 m) above the water. Upon arrival at Santorini, however, the sun began to heat the black sands and generate erratic shore winds, and Daedalus 88 plunged into the sea. It was a few yards short of its goal, and the tailboom of the 70-lb (32-kg) vehicle was snapped by the wind. But to cheering crowds on the beach, Kanellopoulos rose from the sea with a victory sign and strode to shore.
Students at California Polytechnic State University had been working since 1981 to perfect a human-powered helicopter.
In 1989 they achieved liftoff with Greg McNeil, a member of the United States National Cycling Team, pedalling at an astounding 1.0 hp. The graphite-epoxy, wood, and Mylar craft, Da Vinci III, rose 7 in (17.8 cm) for 6.8 seconds. But the rules for the $10,000 Sikorsky prize, sponsored by the American Helicopter Society, stipulate that the winning craft must rise nearly 10 ft (3 m) and stay aloft one minute.
On land, recumbent vehicles, or recumbents, are wheeled vehicles in which the driver pedals in a semi-recumbent position, contained within a windowed enclosure. The world record was set in 1989 by American Fred Markham at the Michigan International Speedway in an HPV named Goldrush. Markham pedalled at more than 44 mph (72 kph). Unfortunately, the realities of road travel cast a long shadow over recumbent HPVs. Crews discovered that they tended to be unstable in crosswinds, distracted other drivers

and pedestrians, and lacked the speed to correct course safely in the face of oncoming cars and trucks.
In the sea, being able to maneuver at your own pace and remain in control of your vehicle—as well as being able to beat a fast retreat undersea—are the problems faced by HPV submersible engineers. Human-powered subs are not a new idea. The Revolutionary War created a need for a bubble sub that was to plant an explosive in the belly of a British ship in New York Harbor. (The naval officer, breathing one-half hour's worth of air, failed in his night mission, but survived.) The special design problems of modern two-person HP-subs involve controlling buoyancy and ballast, pitch and yaw (nose up/down/sideways), reducing drag, increasing thrust, and positioning the pedaller and the propulsor in the flooded cockpit (called "wet") in ways that maximize air intake from scuba tanks and muscle power from arms and legs. Depending on the design, the humans in HP-subs lie prone, foot to head or side by side, or sit, using their feet to pedal and their hands to control the rudder through the underwater currents. Studies by the United States Navy Experimental Dive Unit indicate that a well-trained athlete can sustain 0.5 hp for 10 minutes underwater.
On the surface of the water, fin-propelled watercraft—lightweight inflatables powered by humans kicking with fins—are ideal for fishermen, for whom maneuverability, not speed, is the goal. Paddling with the legs, which does not disturb fish, leaves the hands free to cast. In most designs, the fisherman sits on a platform between tubes, his feet in the water. Controllability is another matter, however: in open, windy water, the craft is at the mercy of the elements in its current design state. Top speed is about 50 yd (46 m) in three minutes.
Finally, over the surface of the water, the first human-powered hydrofoil, Flying Fish, with national track sprinter Bobby Livingston aboard, broke a world record in September 1989 when it traveled 100 m over Lake Adrian, Michigan, at 16.1 knots (18.5 mph). Pedalled like a bicycle, the vehicle resembled a model airplane with a two-blade propeller and a 6-ft (1.8-m) carbon-graphite wing; Flying Fish sped across the surface of the lake on two pontoons. [Stephanie Ocko and Andrea Gacki]

RESOURCES
BOOKS
Lowe, M. "Bicycle Production Outpaces Autos." In Vital Signs 1992: The Trends That Are Shaping Our Future, edited by L. R. Brown, C. Flavin, and H. Kane. New York: Norton, 1992.


PERIODICALS
Banks, R. "Sub Story." National Geographic World (July 1992): 8–11.
Blumenthal, T. "Outer Limits." Bicycling, December 1989, 36.
Britton, P. "Muscle Subs." Popular Science, June 1989, 126–129.
———. "Technology Race Beneath the Waves." Popular Science, June 1991, 48–54.
Horgan, J. "Heli-Hopper: Human-powered Helicopter Gets Off the Ground." Scientific American 262 (March 1990): 34.
Kyle, C. R. "Limits of Leg Power." Bicycling, October 1990, 100–101.
Langley, J. "Those Flying Men and Their Magnificent Machines." Bicycling, April 1992, 74–76.
"Man-Powered Helicopter Makes First Flight." Aviation Week and Space Technology, December 1989, 115.
Martin, S. "Cycle City 2000." Bicycling, March 1992, 130–131.

Humus
Humus is essentially decomposed organic matter in soil. Humus can vary in color but is often dark brown. Besides containing valuable nutrients, humus provides many other benefits: it stabilizes soil mineral particles into aggregates, improves pore-space relationships and aids air and water movement, increases water-holding capacity, and, by influencing the absorption of hydrogen ions, acts as a pH regulator.

Hunting and trapping
Wild animals are a potentially renewable natural resource. This means that they can be harvested in a sustainable fashion, as long as their birth rate is greater than the rate of exploitation by humans. In the sense meant here, "harvesting" refers to the killing of wild animals as a source of meat, fur, antlers, or other useful products, or as an outdoor sport. The harvesting can involve trapping, or hunting using guns, bows-and-arrows, or other weapons. (Fishing is also a kind of hunting, but it is not dealt with here.) From the ecological perspective, it is critical that the exploitation be undertaken in a sustainable fashion; otherwise, serious damage is caused to the resource and to ecosystems more generally. Unfortunately, there have been numerous examples in which wild animals were harvested at grossly unsustainable rates, causing their populations to decline severely. In a few cases this caused species to become extinct—they no longer occur anywhere on Earth. For example, commercial hunting in North America resulted in the extinctions of the great auk (Pinguinus impennis), passenger pigeon (Ectopistes migratorius), and Steller's sea cow (Hydrodamalis stelleri). Unsustainable commercial hunting also brought other species to the brink of extinction, including the Eskimo curlew (Numenius borealis), northern right whale (Eubalaena glacialis), northern fur seal (Callorhinus ursinus), grey

whale (Eschrichtius robustus), and American bison or buffalo (Bison bison). Fortunately, these and many other examples of overexploitation of wild animals by humans are regrettable cases from the past. Today, the exploitation of wild animals in North America is undertaken with a view to the longer-term conservation of their stocks; that is, an attempt is made to manage the harvesting in a sustainable fashion. This means that trapping and hunting are much more closely regulated than they used to be. If harvests of wild animals are to be undertaken in a sustainable manner, it is critical that harvest levels be determined using the best available understanding of population-level productivity and stock sizes. It is also essential that harvest quotas be respected by trappers and hunters and that illegal exploitation (or poaching) not compromise what might otherwise be a sustainable activity. The challenge of modern wildlife management is to ensure that good conservation science is sensibly integrated with effective monitoring and management of the rates of exploitation.
Ethics of trapping and hunting
From the strictly ecological perspective, sustainable trapping and hunting of wild animals is no more objectionable than the prudent harvesting of timber or agricultural crops. However, people have widely divergent attitudes about the killing of wild (or domestic) animals for meat, sport, or profit. At one end of the ethical spectrum are people who see no problem with killing wild animals as a source of meat or cash. At the other extreme are individuals with a profound respect for the rights of all animals, who believe that killing any sentient creature is ethically wrong. Many of these latter people are animal-rights activists, and some of them are involved in organizations that undertake high-profile protests and other forms of advocacy to prevent or restrict trapping and hunting.
In essence, these people object to the lethal exploitation of wild animals, even under closely regulated conditions that would not deplete their populations. Most people, of course, have attitudes that are intermediate to those just described.
Trapping
The fur trade was once a very important commercial activity during the initial phase of the colonization of North America by Europeans. During those times, as now, furs were a valuable commodity that could be obtained from nature and sold at a great profit in urban markets. In fact, the quest for furs was the most important reason for much of the early exploration of the interior of North America, as fur traders penetrated all of the continent's great rivers seeking new sources of pelts and profit. Most furbearing animals are harvested by a form of hunting known as trapping.

Environmental Encyclopedia 3 Until recently, most trapping involved leg-hold traps, a relatively crude method that results in many animals enduring cruel and lingering deaths. Fortunately, other, more humane alternatives now exist in which most trapped animals are killed quickly and do not suffer unnecessarily. In large part, the movement towards more merciful trapping methods has occurred in response to effective, high-profile lobbying by organizations that oppose trapping, and the trapping industry has responded by developing and using more humane methods of killing wild furbearers. Various species of furbearers are trapped in North America, particularly in relatively remote, wild areas, such as the northern and montane forests of the continental United States, Alaska, and Canada. Among the most valuable furbearing species are beaver (Castor canadensis), muskrat (Ondatra zibethicus), mink (Mustela vison), river otter (Enhydra lutris), bobcat (Lynx rufus), lynx (Lynx canadensis), red fox (Vulpes vulpes), wolf (Canis lupus), and coyote (Canis latrans). The hides of other species are also valuable, such as black bear (Ursus americanus), white-tailed deer (Odocoileus virginianus), and moose (Alces alces), but these species are not hunted primarily for their pelage. Some species of seals are hunted for their fur, although this is largely done by shooting, clubbing, or netting, rather than by trapping. The best examples of this are the harp seal (Phoca groenlandica) of the northwestern Atlantic Ocean and the northern fur seal (Callorhinus ursinus) of the Bering Sea. Many seal pups are killed by commercial hunters in the spring when their coats are still white and soft. This harvest has been highly controversial and is the subject of intense opposition from animal rights groups. Game Mammals Hunting is a popular sport in North America, enjoyed by about 16 million people each year, most of them men. 
In 1991, hunting contributed more than $12 billion to the United States economy, about half of which was spent by big-game hunters. Various species of terrestrial animals are hunted in large numbers, mostly by stalking the animals and shooting them with rifles, although shotguns and bow-and-arrow are sometimes used. Some hunting is done for subsistence purposes; that is, the meat of the animals is used to feed the family or friends of the hunters. Subsistence hunting is especially important in remote areas and for aboriginal hunters. Commercial or market hunts also used to be common, but these are no longer legal in North America (except under exceptional circumstances) because they have generally proven to be unsustainable. However, illegal, semicommercial hunting (or poaching) still takes place in many remote areas where game animals are relatively abundant and where there are local markets for wild meat.
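The sustainability condition stated at the start of this article (harvest kept below what a population's productivity can replace) can be illustrated with a simple numerical sketch. This is not from the encyclopedia: the logistic model and all parameter values below are hypothetical, chosen only to show how a harvest rate above a population's growth capacity drives the stock toward collapse.

```python
# Illustrative sketch only (not from the encyclopedia): a discrete
# logistic population model with proportional harvesting. All parameter
# values are hypothetical, chosen to show the sustainability threshold.

def simulate(r, K, harvest_rate, n0, years):
    """Return the population size after `years` of growth and harvest."""
    n = float(n0)
    for _ in range(years):
        n += r * n * (1 - n / K)   # logistic growth toward capacity K
        n -= harvest_rate * n      # remove a fixed fraction each year
        n = max(n, 0.0)
    return n

# With intrinsic growth rate r = 0.3, taking 10% of the stock per year
# leaves the population stable, while taking 40% per year collapses it.
sustainable = simulate(r=0.3, K=1000, harvest_rate=0.1, n0=500, years=50)
overharvested = simulate(r=0.3, K=1000, harvest_rate=0.4, n0=500, years=50)
print(f"10% harvest after 50 years: {sustainable:.0f} animals")
print(f"40% harvest after 50 years: {overharvested:.0f} animals")
```

Real stock assessments are far more elaborate, but the same threshold logic underlies the harvest quotas described above.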

Hunting and trapping

In addition, many people hunt as a sport, that is, for the excitement and accomplishment of tracking and killing wild animals. In such cases, using the meat of the hunted animals may be only a secondary consideration, and in fact the hunter may seek to retain only the head, antlers, or horns of the prey as a trophy (although the meat may be kept by the hunter’s guide). Big-game hunting is an economically important activity in North America, with large amounts of money spent on the equipment, guides, and transportation necessary to undertake this sport. The most commonly hunted big-game mammal in North America is the white-tailed deer. Other frequently hunted ungulates include the mule deer (Odocoileus hemionus), moose, elk or wapiti (Cervus canadensis), caribou (Rangifer tarandus), and pronghorn antelope (Antilocapra americana). Black bear and grizzly bear (Ursus arctos) are also hunted, as are bighorn sheep (Ovis canadensis) and mountain goat (Oreamnos americanus). Commonly hunted small-game species include various rabbits and hares, such as the cottontail rabbit (Sylvilagus floridanus), snowshoe hare (Lepus americanus), and jackrabbit (Lepus californicus), as well as the grey or black squirrel (Sciurus carolinensis) and woodchuck (Marmota monax). Wild boar (Sus scrofa) are also hunted in some regions; these are feral animals descended from escaped domestic pigs.

Game birds
Various larger species of birds are hunted in North America for their meat and for sport. So-called upland game birds are hunted in terrestrial habitats and include ruffed grouse (Bonasa umbellus), willow ptarmigan (Lagopus lagopus), bobwhite quail (Colinus virginianus), wild turkey (Meleagris gallopavo), mourning dove (Zenaidura macroura), and woodcock (Philohela minor). Several introduced species of upland game birds are also commonly hunted, particularly the ring-necked pheasant (Phasianus colchicus) and the Hungarian or grey partridge (Perdix perdix).
Much larger numbers of waterfowl are hunted in North America, including millions of ducks and geese. The most commonly harvested species of waterfowl are mallard (Anas platyrhynchos), wood duck (Aix sponsa), Canada goose (Branta canadensis), and snow and blue goose (Chen hyperborea), but another 35 or so species in the duck family are also hunted. Other hunted waterfowl include coots (Fulica americana) and moorhens (Gallinula chloropus). [Bill Freedman Ph.D.]

RESOURCES
BOOKS
Halls, L. K., ed. White-tailed Deer: Ecology and Management. Harrisburg: Stackpole Books, 1984.



Novak, M., et al. Wild Furbearer Management and Conservation in North America. North Bay: Ontario Trappers Association, 1987.
Phillips, P. C. The Fur Trade (2 vols.). Norman: University of Oklahoma Press, 1961.
Robinson, W. L., and E. G. Bolen. Wildlife Ecology and Management. 3rd ed. New York: Macmillan, 1996.
Freedman, B. Environmental Ecology. 2nd ed. San Diego: Academic Press, 1995.

Hurricane

Hurricanes, called typhoons or tropical cyclones in the Far East, are intense cyclonic storms that form over warm tropical waters and generally remain active and strong only while over the oceans. Their intensity is marked by a distinct spiraling pattern of clouds, very low atmospheric pressure at the center, and extremely strong winds, blowing at speeds greater than 74 mph (120 kph) within the inner rings of clouds. When hurricanes strike land and move inland, they typically begin to disintegrate at once, but not before bringing widespread destruction of property and loss of life. The radius of such a storm can be 100 mi (160 km) or greater. Thunderstorms, hail, and tornadoes are frequently embedded in hurricanes. Hurricanes occur in every tropical ocean except the South Atlantic, and more frequently from August through October than at any other time of year. The center of a hurricane is called the eye. It is an area of relative calm, few clouds, and higher temperatures, and represents the center of the low-pressure pattern. Hurricanes usually move from east to west near the tropics, but when they migrate poleward into the mid-latitudes they can be caught up in the general west-to-east flow found in that region of the earth. See also Tornado and cyclone

George Evelyn Hutchinson (1903–1991)

American ecologist

Born January 30, 1903, in Cambridge, England, Hutchinson was the son of Arthur Hutchinson, a professor of mineralogy at Cambridge University, and Evaline Demeny Shipley Hutchinson, an ardent feminist. He demonstrated an early interest in flora and fauna and a basic understanding of the scientific method. In 1918, at the age of 15, he wrote a letter to the Entomological Record and Journal of Variation about a grasshopper he had seen swimming in a pond. He described an experiment he performed on the insect and included the specimen for taxonomic identification.

In 1924, Hutchinson earned his bachelor’s degree in zoology from Emmanuel College at Cambridge University, where he was a founding member of the Biological Tea Club. He then served as an international education fellow at the Stazione Zoologica in Naples from 1925 until 1926, when he was hired as a senior lecturer at the University of Witwatersrand in Johannesburg, South Africa. He was apparently fired from this position two years later by administrators who never imagined that in 1977 the university would honor the ecologist by establishing a research laboratory in his name. Hutchinson earned his master’s degree from Emmanuel College in absentia in 1928 and applied to Yale University for a fellowship so he could pursue a doctoral degree. He was instead appointed to the faculty as a zoology instructor. He was promoted to assistant professor in 1931 and became an associate professor in 1941, the year he obtained his United States citizenship. He was made a full professor of zoology in 1945, and between 1947 and 1965 he served as director of graduate studies in zoology. Hutchinson never did receive his doctoral degree, though he amassed an impressive collection of honorary degrees during his lifetime. Hutchinson was best known for his interest in limnology, the science of freshwater lakes and ponds. He spent most of his life writing the four-volume Treatise on Limnology, which he completed just months before his death. The research that led to the first volume—covering geography, physics, and chemistry—earned him a Guggenheim Fellowship in 1957. The second volume, published in 1967, covered biology and plankton. The third volume, on water plants, was published in 1975, and the fourth volume, about invertebrates, appeared posthumously in 1993. The Treatise on Limnology was among the nine books, nearly 150 research papers, and many opinion columns which Hutchinson penned. 
He was an influential writer whose scientific papers inspired many students to specialize in ecology. Hutchinson’s greatest contribution to the science of ecology was his broad approach, which became known as the “Hutchinson school.” His work encompassed disciplines as varied as biochemistry, geology, zoology, and botany. He pioneered the concept of biogeochemistry, which examines the exchange of chemicals between organisms and the environment. His studies in biogeochemistry focused on how phosphates and nitrates move from the earth to plants, then animals, and then back to the earth in a continuous cycle. His holistic approach influenced later environmentalists when they began to consider the global scope of environmental problems. In 1957, Hutchinson published an article entitled “Concluding Remarks,” considered his most inspiring and intriguing work, as part of the Cold Spring Harbor Symposia on Quantitative Biology. Here, he introduced and described

the ecological niche, a concept which has been the source of much research and debate ever since. The article was one of only three in the field of ecology chosen for the 1991 collection Classics in Theoretical Biology. Hutchinson won numerous major awards for his work in ecology. In 1950, he was elected to the National Academy of Sciences. Five years later, he earned the Leidy Medal from the Philadelphia Academy of Natural Sciences. He was awarded the Naumann Medal from the International Association of Theoretical and Applied Limnology in 1959. This is a global award, granted only once every three years, which Hutchinson earned for his contributions to the study of lakes in the first volume of his treatise. In 1962, the Ecological Society of America chose him for its Eminent Ecologist Award. Hutchinson’s research often took him out of the country. In 1932, he joined a Yale expedition to Tibet, where he amassed a vast collection of organisms from high-altitude lakes. He wrote many scientific articles about his work in North India, and the trip also inspired his 1936 travel book, The Clear Mirror. Other research projects drew Hutchinson to Italy, where, in the sediment of Lago di Monterosi, a lake north of Rome, he found evidence of the first case of artificial eutrophication, dating from around 180 B.C. Hutchinson was devoted to the arts and humanities, and he counted several musicians, artists, and writers among his friends. The most prominent of his artistic friends was English author Rebecca West. He served as her literary executor, compiling a bibliography of her work which was published in 1957. He was also the curator of a collection of her papers at Yale’s Beinecke Library. Hutchinson’s writing reflected his diverse interests.
Along with his scientific works and his travel book, he also wrote an autobiography and three books of essays, The Itinerant Ivory Tower (1953), The Enchanted Voyage and Other Studies (1962), and The Ecological Theatre and the Evolutionary Play (1965). For 12 years, beginning in 1943, Hutchinson wrote a regular column titled “Marginalia” for the American Scientist. His thoughtful columns examined the impact on society of scientific issues of the day. Hutchinson’s skill at writing, as well as his literary interests, was recognized by Yale’s literary society, the Elizabethan Club, which twice elected him president. He was also a member of the Connecticut Academy of Arts and Sciences and served as its president in 1946. While Hutchinson built his reputation on his research and writing, he also was considered an excellent teacher. His teaching career began with a wide range of courses including beginning biology, entomology, and vertebrate embryology. He later added limnology and other graduate courses to his areas of expertise. He was personable as well as innovative, giving his students illustrated note sheets, for


example, so they could concentrate on his lectures without worrying about taking their own notes. Leading oceanographer Gordon Riley was among the students whose careers were changed by Hutchinson’s teaching. Riley enrolled in Yale’s doctoral program with the intention of becoming an experimental embryologist. But after one week in Hutchinson’s limnology class, he had decided to do his dissertation research on a pond. Hutchinson loved Yale. He particularly cherished his fellowship in the residential Saybrook College. He was also very active in several professional associations, including the American Academy of Arts and Sciences, the American Philosophical Society, and the National Academy of Sciences. He served as president of the American Society of Limnology and Oceanography in 1947, the American Society of Naturalists in 1958, and the International Association for Theoretical and Applied Limnology from 1962 until 1968. Hutchinson retired from Yale as professor emeritus in 1971, but continued his writing and research for 20 more years, until just months before his death. He produced several books during this time, including the third volume of his treatise, as well as a textbook titled An Introduction to Population Ecology (1978), and memoirs of his early years, The Kindly Fruits of the Earth (1979). He also occasionally returned to his musings on science and society, writing about several topical issues in 1983 for the American Scientist. Here, he examined the question of nuclear disarmament, speculating that “it may well be that total nuclear disarmament would remove a significant deterrent to all war.” In the same article, he also philosophized on differences in behavior between the sexes: “On the whole, it would seem that, in our present state of evolution, the less aggressive, more feminine traits are likely to be of greater value to us, though always endangered by more aggressive, less useful tendencies.
Any such sexual difference, small as it may be, is something on which perhaps we can build.” Several of Hutchinson’s most prestigious honors, including the Tyler Award, came during his retirement. Hutchinson earned the $50,000 award, often called the Nobel Prize for conservation, in 1974. That same year, the National Academy of Sciences gave him the Frederick Gardner Cottrell Award for Environmental Quality. He was awarded the Franklin Medal from the Franklin Institute in 1979, the Daniel Giraud Elliot Medal from the National Academy of Sciences in 1984, and the Kyoto Prize in Basic Science from Japan in 1986. Having once rejected a National Medal of Science because it would have been bestowed on him by President Richard Nixon, he was awarded the medal posthumously by President George Bush in 1991. Hutchinson’s first marriage, to Grace Evelyn Pickford, ended with a divorce in 1933. During the six weeks’ residence


the state of Nevada then required to grant divorces, he studied the lakes near Reno and wrote a major paper on freshwater ecology in arid climates. Later that year, Hutchinson married Margaret Seal, who died in 1983 from Alzheimer’s disease. Hutchinson cared for her at home during her illness. In 1985, he married Anne Twitty Goldsby, whose care enabled him to travel extensively and continue working in spite of his failing health. When she died unexpectedly in December 1990, the ailing widower returned to his British homeland. He died in London on May 17, 1991, and was buried in Cambridge. [Cynthia Washam]

RESOURCES
BOOKS
Hutchinson, George E. A Preliminary List of the Writings of Rebecca West. Yale University Library, 1957.
———. A Treatise on Limnology. Wiley, Vol. 1, 1957; Vol. 2, 1967; Vol. 3, 1975; Vol. 4, 1993.
———. An Introduction to Population Ecology. Yale University Press, 1978.
———. The Clear Mirror. Cambridge University Press, 1937.
———. The Ecological Theater and the Evolutionary Play. Yale University Press, 1965.
———. The Enchanted Voyage and Other Studies. Yale University Press, 1962.
———. The Itinerant Ivory Tower. Yale University Press, 1953.
———. The Kindly Fruits of the Earth. Yale University Press, 1979.

PERIODICALS
Hutchinson, George E. “Marginalia.” American Scientist (November–December 1983): 639–644.
Edmondson, Y. H., ed. “G. Evelyn Hutchinson Celebratory Issue.” Limnology and Oceanography 16 (1971): 167–477.
Edmondson, W. T. “Resolution of Respect.” Bulletin of the Ecological Society of America 72 (1991): 212–216.
Hutchinson, George E. “A Swimming Grasshopper.” Entomological Record and Journal of Variation 30 (1918): 138.
———. “Marginalia.” American Scientist 31 (1943): 270.
———. “Concluding Remarks.” Bulletin of Mathematical Biology 53 (1991): 193–213.
———. “Ianula: An Account of the History and Development of the Lago di Monterosi, Latium, Italy.” Transactions of the American Philosophical Society 64 (1970): part 4.

Hybrid vehicles

The roughly 200 million automobiles and light trucks currently in use in the United States travel approximately 2.4 trillion miles every year and consume almost two-thirds of the U.S. oil supply. They also produce about two-thirds of the carbon monoxide, one-third of the lead and nitrogen oxides, and a quarter of all volatile organic compounds (VOCs). More efficient transportation energy use could have dramatic effects on environmental quality as well as

saving billions of dollars every year in our payments to foreign governments. In response to high gasoline prices in the 1970s and early 1980s, automobile gas-mileage averages in the United States more than doubled, from 13 mpg in 1975 to 28.8 mpg in 1988. Unfortunately, cheap fuel and the popularity of sport utility vehicles (SUVs) and light trucks in the 1990s caused fuel efficiency to slide back below where it was 25 years earlier. By 2002, the average mileage for U.S. cars and trucks was only 27.6 mpg. Amory B. Lovins of the Rocky Mountain Institute in Colorado estimated that raising the average fuel efficiency of the United States car and light truck fleet by one mile per gallon would cut oil consumption by about 295,000 barrels per day. In one year, this would equal the total amount the Interior Department hopes to extract from the Arctic National Wildlife Refuge (ANWR) in Alaska. It isn’t inevitable that we consume and pollute so much. A number of alternative transportation options already are available. Of course, the option with the lowest fossil-fuel consumption is to walk, skate, ride a bicycle, or rely on some other form of human-powered movement. Many people, however, want or need the comfort and speed of a motor vehicle. Several models of battery-powered electric automobiles have been built, but the batteries are heavy, expensive, and require more frequent recharging than most customers will accept. Even though 90% of all daily commutes are less than 50 mi (80 km), most people want the capability to take a long road trip of several hundred miles without needing to stop for fuel or recharging. An alternative that appears to have much more customer appeal is the hybrid gas-electric vehicle. The first hybrid to be marketed in the United States was the two-seat Honda Insight. A 3-cylinder, 1.0-liter gas engine is the main power source for this sleek, lightweight vehicle. A 7-hp (horsepower) electric motor helps during acceleration and hill climbing.
When the small battery pack begins to run down, it is recharged by the gas engine, so the vehicle never needs to be plugged in. More electricity is captured during “regenerative” braking, further increasing efficiency. With a streamlined, lightweight plastic-and-aluminum body, the Insight gets about 75 mpg (31.9 km/l) in highway driving and has low enough emissions to qualify as a “super low emission vehicle.” It meets the strictest air quality standards anywhere in the United States. Quick acceleration and nimble handling make the Insight fun to drive. Current cost is about $20,000. Perhaps the biggest drawback to the Insight is its limited passenger and cargo capacity. Although the vast majority of all motor vehicle trips made in the United States involve only a single driver, most people want the ability to carry more than one passenger or several suitcases at least

occasionally. To meet this need, Honda introduced a hybrid-engine version of its popular Civic line in 2002, with four doors and ample space for four adults plus a reasonable amount of luggage. The 5-speed manual version of the Civic hybrid gets 48 mpg in both city and highway driving. With a history of durability and consumer satisfaction in other Honda models, and a 10-year warranty on its battery and drive train, the hybrid Civic appears to offer the security that consumers will want in adopting this new technology. Toyota also has introduced a hybrid vehicle called the Prius. Similar in size to the Honda Civic, the Prius comes in a four-door model with enough room for the average American family. During most city driving, it depends only on its quiet, emission-free electric motor. The batteries needed to drive the 40-hp electric motor are stacked behind the back seat, leaving a surprisingly large trunk for luggage. The 70-hp, 1.5-liter gas engine kicks in to help accelerate or when the batteries need recharging. Getting about 52 mpg (22 km/l) in city driving, the Prius is one of the most efficient cars on the road and can travel more than 625 mi (1,000 km) without refueling. Some drivers are unnerved by the noiseless electric motor. Sitting at a stoplight, the car makes no sound at all; you might think it was dead, but when the light changes, you glide off silently and smoothly. Introduced in Japan in 1997, the Prius sells in the United States for about the same price as the Honda hybrids. The Sierra Club estimates that in 100,000 mi (160,000 km), a Prius will generate 27 tons of CO2, a Ford Taurus will generate 64 tons, and the Ford Excursion SUV will produce 134 tons. In 1999, the Sierra Club awarded both the Insight and the Prius an “excellence in engineering” award, the first time the organization had ever endorsed commercial products.
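The fuel and emissions figures quoted in this entry can be checked with simple arithmetic. The sketch below is illustrative only: it assumes the standard 42-gallon barrel, a commonly used approximation of roughly 19.6 lb of tailpipe CO2 per gallon of gasoline burned, and the fleet figures given earlier (2.4 trillion miles per year at 27.6 mpg). Because it counts tailpipe CO2 only, it comes in somewhat below the Sierra Club figures, which evidently include more than tailpipe emissions.

```python
BBL_PER_GAL = 1 / 42      # 42 US gallons per barrel of crude oil
LB_CO2_PER_GAL = 19.6     # approximate tailpipe CO2 per gallon of gasoline

def daily_barrels_saved(miles_per_year, mpg_old, mpg_new):
    """Crude-oil savings (barrels per day) from a fleet-wide mpg improvement."""
    gallons_saved = miles_per_year / mpg_old - miles_per_year / mpg_new
    return gallons_saved * BBL_PER_GAL / 365

def tons_co2(miles, mpg):
    """Tailpipe CO2 (short tons) for a given distance and fuel economy."""
    return miles / mpg * LB_CO2_PER_GAL / 2000

# One extra mpg across the U.S. fleet (2.4 trillion miles/year at 27.6 mpg):
print(round(daily_barrels_saved(2.4e12, 27.6, 28.6)))  # about 200,000 barrels/day

# Tailpipe CO2 over 100,000 miles:
print(round(tons_co2(100_000, 52)))  # Prius at 52 mpg: about 19 tons
print(round(tons_co2(100_000, 20)))  # a 20-mpg sedan: 49 tons
```

The first result is the same order of magnitude as the Lovins estimate of 295,000 barrels per day, which was based on somewhat different fleet assumptions.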
Both Ford and General Motors (GM) have announced intentions to build hybrid engines for their popular sport utility vehicles and light trucks. This program may be more about public relations, however, than about saving fuel or reducing pollution. The electrical generators coupled to the engines of these vehicles will produce only 12 volts of power, far less than the 42 volts needed to drive the wheels. Instead, the electricity generated by the gasoline-burning engine will only be used to power accessories such as video recorders, computers, on-board refrigerators, and the like. Having this electrical power available will probably increase fuel consumption rather than reduce it. For uncritical consumers, however, it provides a justification for continuing to drive huge, inefficient vehicles. In 2002, President G. W. Bush announced he was abandoning the $1.5 billion government-subsidized project to develop high-mileage gasoline-fueled vehicles started with great fanfare eight years earlier by the Clinton/Gore administration. Instead, Bush was throwing his support behind a


plan to develop hydrogen-based fuel cells to power the automobiles of the future. Fuel cells use a semi-permeable film or electrolyte that allows the passage of charged atoms, called ions, but is impermeable to electrons, thereby generating an electrical current between an anode and a cathode. A fuel cell run on pure oxygen and hydrogen produces no waste products except drinkable water and radiant heat. Fossil fuels can be used as the source of the hydrogen, but some pollutants (most commonly carbon dioxide) are released in the process of hydrogen generation. Currently, the available fuel cells need to be quite large to provide enough energy for a vehicle. Fuel cell-powered buses and vans, which have space for a large power system, are currently being tested, but a practical family vehicle appears to be years away. While they agree that fuel cells offer a wonderful option for cars of the future, many environmentalists regard putting all our efforts into this one project as misguided at best. It probably will be at least a decade before a fuel-cell vehicle is commercially available. [William P. Cunningham Ph.D.]

RESOURCES
BOOKS
Hodkinson, Ron, and John Fenton. Lightweight Electric/Hybrid Vehicle Design. Warrendale, PA: Society of Automotive Engineers, 2001.
Jurgen, Ronald K., ed. Electric and Hybrid-electric Vehicles. Warrendale, PA: Society of Automotive Engineers, 2002.
Koppel, Tom. Powering the Future: The Ballard Fuel Cell and the Race to Change the World. New York: John Wiley & Sons, 1999.

PERIODICALS
“Dark Days For Detroit—The Big Three’s Gravy Train in Recent Years—Fat Profits from Trucks—is Being Derailed by a New Breed of Hybrid Vehicles from Europe and Japan.” Business Week, January 28, 2002, 61.
Ehsani, M., K. M. Rahman, and H. A. Toliyat. “Propulsion System Design of Electric and Hybrid Vehicles.” IEEE Transactions on Industrial Electronics 44 (1997): 19.
Hermance, David, and Shoichi Sasaki. “Special Report on Electric Vehicles—Hybrid Electric Vehicles Take to the Streets.” IEEE Spectrum 35 (1998): 48.
Jones, M. “Hybrid Vehicles—The Best of Both Worlds?” Chemistry and Industry 15 (1995): 589.
Maggetto, G., and J. Van Mierlo. “Fuel Cells: Systems and Applications—Electric Vehicles, Hybrid Vehicles and Fuel Cell Electric Vehicles: State of the Art and Perspectives.” Annales de Chimie: Science des Matériaux 26 (2001): 9.

Hydrocarbons

A hydrocarbon is any compound composed of elemental carbon and hydrogen; hydrocarbon derivatives may also contain chlorine, oxygen, nitrogen, and other atoms. Hydrocarbons are classified according to the arrangement of carbon atoms and the types of chemical bonds. The major classes include aromatic or carbon-ring


compounds, alkanes (also called aliphatic or paraffin compounds) with straight or branched chains and single bonds, and alkenes and alkynes with double and triple bonds, respectively. Most hydrocarbon fuels are mixtures of many compounds. Gasoline, for example, includes several hundred hydrocarbon compounds, among them paraffins, olefins, and aromatic compounds, and consequently exhibits a host of possible environmental effects. All of the fossil fuels, including crude oil and petroleum, as well as many other compounds important to industry, are hydrocarbons. Hydrocarbons are environmentally important for several reasons. First, hydrocarbons give off greenhouse gases, especially carbon dioxide, when burned, and they are important contributors to smog. In addition, many aromatic hydrocarbons and hydrocarbons containing halogens are toxic or carcinogenic.

Hydrochlorofluorocarbons

The term hydrochlorofluorocarbon (HCFC) refers to halogenated hydrocarbons in which chlorine and/or fluorine atoms replace some of the hydrogen atoms in the molecule. They are chemical cousins of the chlorofluorocarbons (CFCs) but differ from them in containing less chlorine. A special subgroup of the HCFCs is the hydrofluorocarbons (HFCs), which contain no chlorine at all. A total of 53 HCFCs and HFCs are possible. The HCFCs and HFCs have become commercially and environmentally important since the 1980s. Their growing significance has resulted from increasing concern about the damage being done to stratospheric ozone by CFCs. Significant production of the CFCs began in the late 1930s. At first, they were used almost exclusively as refrigerants. Gradually other applications, especially as propellants and blowing agents, were developed. By 1970, the production of CFCs was growing by more than 10% per year, with worldwide production of well over 662 million lb (300 million kg) of one family member alone, CFC-11. Environmental studies began to show, however, that CFCs decompose in the upper atmosphere. Chlorine atoms produced in this reaction attack ozone molecules (O3), converting them to normal oxygen (O2). Since stratospheric ozone protects humans against solar ultraviolet radiation, this finding was a source of great concern. By 1987, 31 nations had signed the Montreal Protocol, agreeing to cut back significantly on their production of CFCs. The question became how nations were to find substitutes for the CFCs. The problem was especially severe in developing nations, where CFCs are widely used in refrigeration and air-conditioning systems. Countries like China and India refused to take part in the CFC-reduction plan unless

Examples of HFCs and HCFCs include:

HFC-23 (CHF3)
HCFC-123 (CHCl2CF3)
HCFC-133b (CH2FCClF2)
HCFC-151a (CH3CHClF)
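These numerical designations pair with the formulas according to the refrigerant industry's standard "rule of 90": adding 90 to the number gives three digits that count the carbon, hydrogen, and fluorine atoms, and any remaining bonding positions on the saturated carbon skeleton are filled by chlorine. A small sketch (the function name is illustrative):

```python
def halocarbon_composition(designation: str) -> dict:
    """Decode an HFC/HCFC number via the 'rule of 90'.

    Adding 90 to the numeric part gives three digits: the counts of
    carbon, hydrogen, and fluorine atoms.  Bonding positions left over
    on a saturated, acyclic CnH(2n+2) skeleton are chlorine.  Trailing
    isomer letters such as 'a' or 'b' are ignored.
    """
    digits = ''.join(ch for ch in designation if ch.isdigit())
    n = int(digits) + 90
    c, h, f = n // 100, (n // 10) % 10, n % 10
    cl = (2 * c + 2) - h - f   # remaining bonds become chlorine
    return {'C': c, 'H': h, 'F': f, 'Cl': cl}

# The pairings listed above:
print(halocarbon_composition('23'))    # CHF3
print(halocarbon_composition('123'))   # CHCl2CF3
print(halocarbon_composition('133b'))  # CH2FCClF2 (C2H2ClF3)
print(halocarbon_composition('151a'))  # CH3CHClF (C2H4ClF)
```

A zero chlorine count marks the compound as an HFC rather than an HCFC, as with HFC-23.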

developed nations helped them switch over to an equally satisfactory substitute. Scientists soon learned that HCFCs were a more benign alternative to the CFCs. They discovered that compounds with less chlorine than the amount present in traditional CFCs were less stable and often decomposed before they reached the stratosphere. By mid 1992, the United States Environmental Protection Agency (EPA) had selected 11 chemicals that they considered to be possible replacements for CFCs. Nine of those compounds are HFCs and two are HCFCs. The HCFC-HFC solution is not totally satisfactory, however. Computer models have shown that nearly all of the proposed substitutes will have at least some slight effect on the ozone layer and the greenhouse effect. In fact, the British government considered banning one possible substitute for CFCs, HCFC-22, almost as soon as the compound was developed. In addition, one of the most promising candidates, HCFC-123, was found to be carcinogenic in rats. Finally, the cost of replacing CFCs with HCFCs and HFCs is expected to be high. One consulting firm, Metroeconomica, has estimated that CFC substitutes may be six to 15 times as expensive as CFCs themselves. See also Aerosol; Air pollution; Air pollution control; Air quality; Carcinogen; Ozone layer depletion; Pollution; Pollution control [David E. Newton]

RESOURCES
PERIODICALS
Johnson, J. “CFC Substitutes Will Still Add to Global Warming.” New Scientist 126 (April 14, 1990): 20.
MacKenzie, D. “Cheaper Alternatives for CFCs.” New Scientist 126 (June 30, 1990): 39–40.
Pool, R. “Red Flag on CFC Substitute.” Nature 352 (July 11, 1991): 352.
Stone, R. “Ozone Depletion: Warm Reception for Substitute Coolant.” Science 256 (April 3, 1992): 22.

Hydrogen

The lightest of all chemical elements, hydrogen has a density about one-fourteenth that of air. It has a number of special chemical and physical properties. For example, hydrogen

has the second lowest boiling and freezing points of all elements. The combustion of hydrogen produces large quantities of heat, with water as the only waste product. From an environmental standpoint, this fact makes hydrogen a highly desirable fuel. Many scientists foresee the day when hydrogen will replace fossil fuels as our most important source of energy.

Hydrogeology

Sometimes called groundwater hydrology or geohydrology, this branch of hydrology is concerned with the relationship between subsurface water and geologic materials. Of primary interest is the saturated zone of subsurface water, called groundwater, which occurs in rock formations and in unconsolidated materials such as sands and gravels. Groundwater is studied in terms of its occurrence, amount, flow, and quality. Historically, much of the work in hydrogeology centered on finding sources of groundwater to supply water for drinking, irrigation, and municipal uses. More recently, groundwater contamination by pesticides, chemical fertilizers, toxic wastes, and petroleum and chemical spills has become a new area of concern for hydrogeologists.

Hydrologic cycle

The natural circulation of water on the earth is called the hydrologic cycle. Water cycles from bodies of water to the atmosphere via evaporation and eventually returns to the oceans as precipitation, runoff from streams and rivers, and groundwater flow. Water molecules are transformed from liquid to vapor and back to liquid within this cycle. On land, water evaporates from the soil or is taken up by plant roots and eventually transpired into the atmosphere through plant leaves; the sum of evaporation and transpiration is called evapotranspiration. Water is recycled continuously. The molecules of water in a glass used to quench your thirst today may at some point have dissolved minerals deep in the earth as groundwater flow, fallen as rain in a tropical typhoon, been transpired by a tropical plant, been temporarily stored in a mountain glacier, or quenched the thirst of people thousands of years ago. The hydrologic cycle has no real beginning or end but is a circulation of water sustained by solar energy and influenced by the force of gravity. Because the supply of water on the earth is fixed, there is no net gain or loss of water over time. On an average annual basis, global evaporation must equal global precipitation. Likewise, for any


body of land or water, changes in storage must equal the total inflow minus the total outflow of water. This is the hydrologic or water balance. At any point in time, water on the earth is either in active circulation or in storage. Water is stored in icecaps, soil, groundwater, the oceans, and other bodies of water. Much of this water is only temporarily stored. Water stored in the atmosphere has a residence time of several days and amounts to only about 0.04% of the total freshwater on the earth. For rivers and streams, residence time is weeks; for lakes and reservoirs, several years; for groundwater, hundreds to thousands of years; for oceans, thousands of years; and for icecaps, tens of thousands of years. As the driving force of the hydrologic cycle, solar radiation provides the energy necessary to evaporate water from the earth’s surface, almost three-quarters of which is covered by water. Nearly 86% of global precipitation originates from ocean evaporation. Energy consumed by the conversion of liquid water to vapor cools the evaporating surface. This same energy, the latent heat of vaporization, is released when water vapor changes back to liquid. In this way, the hydrologic cycle globally redistributes heat energy as well as water. Once in the atmosphere, water moves in response to weather circulation patterns and is often transported great distances from where it was evaporated. In this way, the hydrologic cycle governs the distribution of precipitation and hence the availability of fresh water over the earth’s surface. About 10% of atmospheric water falls as precipitation each day and is simultaneously replaced by evaporation. This 10% is unevenly distributed over the earth’s surface and, to a large extent, determines the types of ecosystems that exist at any location on the earth and likewise governs much of the human activity that occurs on the land. The earliest civilizations on the earth settled in close proximity to fresh water.
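The water balance just described reduces to a single bookkeeping rule: change in storage equals total inflow minus total outflow. A minimal sketch follows, with hypothetical watershed values chosen only for illustration (they are consistent with the evapotranspiration fractions discussed later in this entry, but are not taken from any real record):

```python
def storage_change(inflows, outflows):
    """Hydrologic balance for any body of land or water:
    change in storage = total inflow - total outflow.
    All terms must share one unit (here, mm of water depth per year)."""
    return sum(inflows) - sum(outflows)

# Hypothetical annual budget for a small watershed (illustrative values):
precipitation = 800          # mm/yr, the only inflow
evapotranspiration = 550     # mm/yr (about 69% of precipitation)
streamflow = 230             # mm/yr leaving as surface runoff
groundwater_outflow = 15     # mm/yr leaving as subsurface flow

delta_s = storage_change([precipitation],
                         [evapotranspiration, streamflow, groundwater_outflow])
print(delta_s)  # 5 mm of water added to storage over the year
```

Averaged over the whole globe and a full year, the same rule forces evaporation to equal precipitation, since total planetary storage is fixed.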
Subsequently, and for centuries, humans have been striving to correct, or cope with, this uneven distribution of water. Historically, we have extracted stored water or developed new storages in areas of excess, or during periods of excess precipitation, so that water could be available where and when it is most needed. Understanding processes of the hydrologic cycle can help us develop solutions to water problems. For example, we know that precipitation occurs unevenly over the earth's surface because of many complex factors that trigger precipitation. For precipitation to occur, moisture must be available and the atmosphere must cool to the dew point, the temperature at which air becomes saturated with water vapor. This cooling of the atmosphere occurs along storm fronts or in areas where moist air masses move into mountain ranges and are pushed up into colder air. However, atmospheric particles must be present for the moisture to condense upon, and water droplets must coalesce until they are large enough to fall to the earth under the influence of gravity. Recognizing the factors that cause precipitation has resulted in efforts to create conditions favorable for precipitation over land surfaces via cloud seeding. Limited success has been achieved by seeding clouds with particles, thus promoting the condensation-coalescence process. Precipitation has not always increased with cloud seeding, and the question of whether cloud seeding reduces precipitation in downwind areas is of both economic and environmental concern.

Parts of the world have abundant moisture in the atmosphere, but it occurs as fog because the mechanisms needed to transform this moisture into precipitation do not exist. In dry coastal areas, for example, some areas have no measurable precipitation for years, but fog is prevalent. By placing huge sheets of plastic mesh along coastal areas, fog is intercepted, condenses on the sheets, and provides sufficient drinking water to supply small villages.

Total rainfall alone does not necessarily indicate water abundance or scarcity. The magnitude of evapotranspiration compared to precipitation determines to some extent whether water is abundant or in short supply. On a continental basis, evapotranspiration represents from 56 to 80% of annual precipitation. For individual watersheds within continents, these percentages are more extreme and point to the importance of evapotranspiration in the hydrologic cycle. Weather circulation patterns responsible for water shortages in some parts of the world are also responsible for excessive precipitation, floods, and related catastrophes in other parts of the world. Precipitation that falls on land but is not stored, evaporated, or transpired becomes excess water. This excess water eventually reaches groundwater, streams, lakes, or the ocean by surface and subsurface flow.
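The excess-water bookkeeping described here is a simple subtraction. A minimal illustration with hypothetical values (the 80% figure echoes the upper end of the continental range quoted above):

```python
# Illustrative sketch: precipitation that is neither stored, evaporated, nor
# transpired becomes excess water available for surface and subsurface flow.
# The values below are hypothetical, not measured data.

def excess_water(precip_mm, evapotranspiration_mm, storage_change_mm=0.0):
    """Excess water (mm) left to move over or through the soil."""
    return precip_mm - evapotranspiration_mm - storage_change_mm

# If evapotranspiration returns 80% of a 1,000-mm annual precipitation:
print(excess_water(1000.0, 800.0))  # 200.0 mm becomes excess water
```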
If the soil surface is impervious or compacted, water flows over the land surface and reaches stream channels quickly. When surface flow exceeds a channel's capacity, flash flooding is the result. Excessive precipitation can saturate soils and cause flooding no matter what the pathway of flow. For example, in 1988 catastrophic flooding and mudslides in Thailand left more than 500 people dead or missing; nearly 700 people were injured, 4,952 homes were lost, and 221 roads and 69 bridges were destroyed. A three-day rainfall of nearly 40 in (1,000 mm) caused hillslopes to become saturated. The effects of heavy rainfall were exacerbated by the removal of natural forest cover and conversion to rubber plantations and agricultural crops. Although floods and mudslides occur naturally, many of the pathways of water flow that contribute to such occurrences can be influenced by human activity. Any time vegetative cover is severely reduced and soil exposed to direct rainfall, surface water flow and soil erosion can degrade watershed systems and their aquatic ecosystems.

The implications of global warming or greenhouse effects on the hydrologic cycle raise several questions. The possible changes in frequency and occurrence of droughts and floods are of major concern, particularly given projections of population growth. Global warming can result in some areas becoming drier while others may experience higher precipitation. Globally, increased temperature will increase evaporation from oceans and ultimately result in more precipitation. The pattern of precipitation changes over the earth's surface, however, cannot be predicted at the present time.

The hydrologic cycle influences nutrient cycling of ecosystems, processes of soil erosion and transport of sediment, and the transport of pollutants. Water is an excellent liquid solvent; minerals, salts, and nutrients become dissolved and transported by water flow. The hydrologic cycle is an important driving mechanism of nutrient cycling. As a transporting agent, water moves minerals and nutrients to plant roots. As plants die and decay, water leaches out nutrients and carries them downstream. The physical action of rainfall on soil surfaces and the forces of running water can seriously erode soils and transport sediments downstream. Any minerals, nutrients, and pollutants within the soil are likewise transported by water flow into groundwater, streams, lakes, or estuaries. Atmospheric moisture transports and deposits atmospheric pollutants, including those responsible for acid rain. Sulfur and nitrogen oxides are added to the atmosphere by the burning of fossil fuels. Being an excellent solvent, water in the atmosphere forms acidic compounds that are transported via the atmosphere and deposited great distances from their original site.
Atmospheric pollutants and acid rain have damaged freshwater lakes in the Scandinavian countries and terrestrial vegetation in eastern Europe. In 1983, such pollution caused an estimated $1.2 billion loss of forests in the former West Germany alone. Once pollutants enter the atmosphere and become subject to the hydrologic cycle, problems of acid rain have little chance for resolution. However, programs that reduce atmospheric emissions in the first place provide some hope. An improved understanding of the hydrologic cycle is needed to better manage water resources and our environment. Opportunities exist to improve our global environment, but better knowledge of human impacts on the hydrologic cycle is needed to avoid unwanted environmental effects. See also Estuary; Leaching [Kenneth N. Brooks]


The hydrologic or water cycle. (McGraw-Hill Inc. Reproduced by permission.)

RESOURCES
BOOKS
Committee on Opportunities in the Hydrologic Sciences, Water Science and Technology Board, National Research Council. Opportunities in the Hydrologic Sciences. Washington, DC: National Academy Press, 1991.
Lee, R. Forest Hydrology. New York: Columbia University Press, 1980.
Postel, S. "Air Pollution, Acid Rain, and the Future of Forests." Worldwatch Paper 58. Washington, DC: Worldwatch Institute, 1984.
Van der Leeden, F., F. L. Troise, and D. K. Todd. The Water Encyclopedia. 2nd ed. Chelsea, MI: Lewis Publishers, 1990.

PERIODICALS
Nash, N. C. "Chilean Engineers Find Water for Desert by Harvesting Fog in Nets." New York Times, July 14, 1992, B5.

OTHER
Rao, Y. S. "Flash Floods in Southern Thailand." Tiger Paper 15 (1988): 1–2. Bangkok: Regional Office for Asia and the Pacific (RAPA), Food and Agriculture Organization of the United Nations.


Hydrology

The science and study of water, including its physical and chemical properties and its occurrence on earth. Most commonly, hydrology encompasses the study of the amount, distribution, circulation, timing, and quality of water. It includes the study of rainfall, snow accumulation and melt, water movement over and through the soil, the flow of water in saturated, underground geologic materials (groundwater), the flow of water in channels (called streamflow), evaporation and transpiration, and the physical, chemical, and biological characteristics of water. Solving problems concerned with water excesses, flooding, water shortages, and water pollution is in the domain of hydrologists. With increasing concern about water pollution and its effects on humans and on aquatic ecosystems, the practice of hydrology has expanded into the study and management of the chemical and biological characteristics of water.


Hydroponics

Hydroponics is the practice of growing plants in water as opposed to soil. It comes from the Greek hydro ("water") and ponos ("labor"), implying "water working." The essential


macro- and micro- (trace) nutrients needed by the plants are supplied in the water. Hydroponic methods have been used for more than 2,000 years, dating back to the Hanging Gardens of Babylon. More recently, it has been used by plant physiologists to discover which nutrients are essential for plant growth. Unlike soil, where nutrient levels are unknown and variable, precise amounts and kinds of minerals can be added to deionized water, and removed individually, to find out their role in plant growth and development. During World War II hydroponics was used to grow vegetable crops by U.S. troops stationed on some Pacific islands. Today, hydroponics is becoming a more popular alternative to conventional agriculture in locations with low or inaccessible sources of water or where land available for farming is scarce. For example, islands and desert areas like the American Southwest and the Middle East are prime regions for hydroponics. Plants are typically grown in greenhouses to prevent water loss. Even in temperate areas where fresh water is readily available, hydroponics can be used to grow crops in greenhouses during the winter months. Two methods are traditionally used in hydroponics. The original technique is the water method, where plants are supported from a wire mesh or similar framework so that the roots hang into troughs which receive continuous supplies of nutrients. A more recent modification is the nutrient-film technique (NFT), also called the nutrient-flow method, where the trough is lined with plastic. Water flows continuously over the roots, decreasing the stagnant boundary layer surrounding each root and thus enhancing nutrient uptake. This provides a versatile, lightweight, and inexpensive system. In the second method, plants are supported in a growing medium such as sterile sand, gravel, crushed volcanic rock, vermiculite, perlite, sawdust, peat moss, or rice hulls.
The nutrient solution is supplied from overhead or underneath holding tanks, either continuously or semi-continuously using a drip method. The nutrient solution is usually not reused. On some Caribbean islands like St. Croix, hydroponics is being used in conjunction with intensive fish farms (e.g., tilapia) which use recirculated water, a practice more recently known as aquaponics. This is a "win-win" situation because the nitrogenous wastes, which are toxic to the fish, are passed through large greenhouses with hydroponically grown plants like lettuce. The plants remove the nutrients and the water is returned to the fish tanks. There is a sensitive balance between stocking density of fish and lettuce production. Too high a ratio of lettuce plants to fish results in lower lettuce production due to nutrient limitation. Too low a ratio also results in low vegetable production, but this time as a result of the buildup of toxic chemicals. The optimum yield came from a ratio of 1.9 lettuce plants to 1 fish. One pound (0.45 kg) of feed per day was appropriate to feed 33 lb (15 kg) of tilapia fingerlings, which sustained 189 lettuce plants and produced nearly 3,300 heads of lettuce annually. When integrated systems (fish-hydroponic recirculating units) are compared to separate production systems, the results clearly favor the former. The combined costs and chemical requirements of the separate production systems were nearly two to three times greater than those of the recirculating system to produce the same amount of lettuce and fish. However, there are some drawbacks that must be considered: disease outbreaks in plants and/or fish; the need to carefully maintain proper nutrient (especially trace element), plant, and fish levels; uncertainties in fish and market prices; and the need for highly skilled labor. The integrated method can be adapted to grow other types of vegetables like strawberries, ornamental plants like roses, and other types of animals such as shellfish. Some teachers have even incorporated this technique into their classrooms to illustrate ecological as well as botanical and culture principles. Some proponents of hydroponic gardening make fairly optimistic claims and state that a sophisticated unit is no more expensive than an equivalent parcel of farmed land. They also argue that hydroponic units (commonly called "hydroponicums") require less attention than terrestrial agriculture.
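The reported figures can be checked with back-of-envelope arithmetic. In the sketch below the variable names are ours; only the numbers come from the study described above:

```python
# A back-of-envelope check of the aquaponics figures reported in the text.
# Only the numeric values are from the study; the names are illustrative.

LETTUCE_PER_FISH = 1.9   # optimum ratio: lettuce plants per fish
LETTUCE_PLANTS = 189     # plants sustained by 33 lb (15 kg) of tilapia
HEADS_PER_YEAR = 3300    # approximate annual lettuce production

# Fish implied by the optimum ratio, and harvest cycles implied per year:
fish_supported = LETTUCE_PLANTS / LETTUCE_PER_FISH
crops_per_year = HEADS_PER_YEAR / LETTUCE_PLANTS

print(round(fish_supported))  # about 99 fish
print(round(crops_per_year))  # about 17 harvests of the beds per year
```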
Some examples of different types of "successful" hydroponicums are: a grower in the desert area of southern California who has used the NFT system for over 18 years and grows his plants, without substrate, in water contained in open cement troughs that cover 3 acres (1.2 ha); a hydroponicum in Orlando, Florida, that utilizes the Japanese system of planting seedlings on styrofoam boards that float on the surface of a constantly aerated nutrient bath; an outfit in Queens, New York, that uses the Israeli Ein-Gedi system, which allows plant roots to hang free inside a tube that is sprayed regularly with a nutrient solution, yielding 150,000 lb (68,000 kg) of tomatoes, 100,000 lb (45,500 kg) of cucumbers, and one million heads of lettuce per acre (0.4 ha) each year; and finally, a farmer in Blooming Prairie, Minnesota, who uses the NFT system in a greenhouse to grow Bibb and leafy lettuce year-round so he can sell his produce to area hospitals, some supermarkets, and a few produce warehouses. Most people involved in hydroponics agree that the main disadvantage is the high cost of labor, lighting, water, and energy. Root fungal infections can also be easily spread. Advantages include the ability to grow crops in arid regions or where land is at a premium; more controlled conditions, such as the ability to grow plants indoors and thus minimize pests and weeds; greater planting densities; and a constant supply of nutrients. Hydroponic gardening is becoming more popular for home gardeners. It may also be a viable option for growing crops in some developing countries. Overall, the future looks bright for hydroponics. [John Korstad]

RESOURCES
BOOKS
Resh, H. M. Hydroponic Food Production: A Definitive Guidebook for the Advanced Home Gardener and the Commercial Hydroponic Grower. 5th ed. Santa Barbara: Woodbridge Press, 1995.
Saffell, H. L. How to Start on a Shoestring and Make a Profit with Hydroponics. Franklin, TN: Mayhill Press, 1994.

PERIODICALS
Nicol, E. "Hydroponics and Aquaculture in the High School Classroom." The American Biology Teacher 52 (1990): 182–4.
Rakocy, J. E. "Hydroponic Lettuce Production in a Recirculating Fish Culture System." Island Perspectives 3 (1988–89): 5–10.

Hydropower see Aswan High Dam; Dams (environmental effects); Glen Canyon Dam; James Bay hydropower project; Low-head hydropower; Tellico Dam; Tennessee Valley Authority; Three Gorges Dam

Hydrothermal vents

Hydrothermal vents are hot springs located on the ocean floor. The vents spew out water heated by magma, molten rock from below the earth's crust. Water temperatures higher than 660°F (about 350°C) have been recorded at some vents. Water flowing from vents contains minerals such as iron, copper, and zinc. The minerals fall like rain and settle on the ocean floor. Over time, the mineral deposits build up and form a chimney around the vent. The first hydrothermal vents were discovered in 1977 by scientists aboard the submersible Alvin. The scientists found the vents near the Galápagos Islands in the eastern Pacific Ocean. Other vents were discovered in the Pacific, Atlantic, and Indian oceans. In 2000, scientists discovered a field of hydrothermal vents in the Atlantic Ocean. The area, called the "Lost City," contained chimneys 180 ft (55 m) tall, the largest known. Hydrothermal vents are located at ocean depths of 8,200 to 10,000 ft (2,500 to 3,000 m). The area near a hydrothermal vent is home to unique animals. They exist without sunlight and tolerate mineral levels that would poison animals living on land. These unique animals include 10-ft (3-m) long tube worms, 1-ft (0.3-m) long clams, and shrimp. [Liz Swain]

Hypolimnion: Lakes see Great Lakes



I

IAEA see International Atomic Energy Agency

Ice age

Ice age usually refers to the Pleistocene epoch, the most recent occurrence of continental glaciation, which began several million years ago in Antarctica. Outside Antarctica, it is marked by at least four major advances and retreats of the ice. Ice ages occur during times when more snow falls during the winter than is lost during the summer by melting, evaporation, and the calving of ice into water. Alternating glacial and interglacial stages are best explained by a combination of Earth's orbital cycles and changes in carbon dioxide levels. These cycles operate on time scales of tens of millennia. By contrast, global warming projections involve decades, a far more imminent concern for humankind.

Ice age refugia

The series of ice ages that occurred between 2.4 million and 10,000 years ago had a dramatic effect on the climate and the life forms in the tropics. During each glacial period the tropics became both cooler and drier, turning some areas of tropical rain forest into dry seasonal forest or savanna. For reasons associated with local topography, geography, and climate, some areas of forest escaped the dry periods, and acted as refuges (refugia) for forest biota. During subsequent interglacials, when humid conditions returned to the tropics, the forests expanded and were repopulated by plants and animals from the species-rich refugia. Ice age refugia today correspond to present day areas of tropical forest that typically receive a high rainfall and often contain unusually large numbers of species, including a high proportion of endemic species. These species-rich refugia are surrounded by relatively species-poor areas of

forest. Refugia are also centers of distribution for obligate forest species (such as the gorilla [Gorilla gorilla]) with a present-day narrow and disjunct distribution best explained by invoking past episodes of deforestation and reforestation. The location and extent of the forest refugia have been mapped in both Africa and South America. In the African rain forests there are three main centers of species richness and endemism recognized for mammals, birds, reptiles, amphibians, butterflies, freshwater crabs, and flowering plants. These centers are in Upper Guinea, Cameroon and Gabon, and the eastern rim of the Zaire basin. In the Amazon Basin more than 20 refugia have been identified for different groups of animals and plants in Peru, Colombia, Venezuela, and Brazil. The precise effect of the ice ages on biodiversity in tropical rain forests is currently a matter of debate. Some have argued that the repeated fluctuations between humid and arid phases created opportunities for the rapid evolution of certain forest organisms. Others have argued the opposite: that the climatic fluctuations resulted in a net loss of species diversity through an increase in the extinction rate. It has also been suggested that refugia owe their species richness not to past climate changes but to other underlying causes such as a favorable local climate or soil. The discovery of centers of high biodiversity and endemism within the tropical rain forest biome has profound implications for conservation biology. A "refuge rationale" has been proposed by conservationists, whereby ice age refugia are given high priority for preservation, since this would save the largest number of species (including many unnamed, threatened, and endangered species) from extinction. Since refugia survived the past dry-climate phases, they have traditionally supplied the plants and animals for the restocking of the new-growth forests when wet conditions returned.
Modern deforestation patterns, however, do not take into account forest history or biodiversity, and both forest refugia and more recent forests are being destroyed equally. For the first time in millions of years, future tropical forests which survive the present mass deforestation episode could have no species-rich centers from which they can be restocked. See also Biotic community; Deciduous forest; Desertification; Ecosystem; Environment; Mass extinction
[Neil Cumberlidge Ph.D.]

RESOURCES
BOOKS
Collins, Mark, ed. The Last Rain Forests. London: Mitchell Beazley Publishers, 1990.
Kingdon, Jonathan. Island Africa: The Evolution of Africa's Rare Animals and Plants. Princeton: Princeton University Press, 1989.
Sayer, Jeffrey A., et al., eds. The Conservation Atlas of Tropical Forests. New York: Simon and Schuster, 1992.
Whitmore, T. C. An Introduction to Tropical Rain Forests. Oxford, England: Clarendon Press, 1990.
Wilson, E. O., ed. Biodiversity. Washington, DC: National Academy Press, 1988.

OTHER
"Biological Diversification in the Tropics." Proceedings of the Fifth International Symposium of the Association for Tropical Biology, Caracas, Venezuela, February 8–13, 1979, edited by Ghillean T. Prance. New York: Columbia University Press, 1982.


Impervious material

As used in hydrology, this term refers to rock and soil material that occurs at the earth's surface or within the subsurface and that does not permit water to enter or move through it in any perceptible amounts. These materials normally have small pores or have pores that have become clogged (sealed), which severely restricts water entry and movement. At the ground surface, rock outcrops, road surfaces, or soil surfaces that have been severely compacted would be considered impervious. These areas shed rainfall easily, causing overland flow or surface runoff that picks up and transports soil particles, causing excessive soil erosion. Soils or geologic strata beneath the earth's surface are considered impervious, or impermeable, if the pores are small and/or not connected.


Improvement cutting

Removal of crooked, forked, or diseased trees from a forest in which tree diameters are 5 in (13 cm) or larger. In forests where trees are smaller, the same process is called cleaning or weeding. Both have the objective of improving the species composition, stem quality, and/or growth rate of the forest. Straight, healthy, vigorous trees of the desired species are favored. By discriminating against certain tree species and eliminating trees with cavities or insect problems, improvement cuts can reduce the variety of habitats and thereby diminish biodiversity. An improvement cut is the initial step to prepare a neglected or unmanaged stand for future harvest. See also Clear-cutting; Forest management; Selection cutting


In situ mining
see Bureau of Mines


Inbreeding

Inbreeding occurs when closely related individuals mate with one another. Inbreeding may happen in a small population or due to other isolating factors; the consequence is that little new genetic information is added to the gene pool. Thus recessive, deleterious alleles become more plentiful and evident in the population. Manifestations of inbreeding are known as inbreeding depression. A general loss of fitness often results and may cause high infant mortality and lower birth weights, fecundity, and longevity. Inbreeding depression is a major concern when attempting to protect small populations from extinction.


Incidental catch

see Bycatch

Incineration

As a method of waste management, incineration refers to the burning of waste. It helps reduce the volume of landfill material and can render toxic substances non-hazardous, provided certain strict guidelines are followed. There are two basic types of incineration: municipal and hazardous waste incineration.

Municipal waste incineration
The process of incineration involves the combination of organic compounds in solid wastes with oxygen at high temperature to convert them to ash and gaseous products. A municipal incinerator consists of a series of unit operations: a loading area kept under slightly negative pressure to avoid the escape of odors; a refuse bin loaded by a grappling bucket; a charging hopper leading to an inclined feeder; a furnace of varying type, usually with a horizontal burning grate; and a combustion chamber equipped with a bottom ash and clinker discharge, followed by a gas flue system to an expansion chamber. If byproduct steam is to be produced, either for heating or for power generation, then the downstream flue system includes heat-exchanger tubing as well. After the heat has been exchanged, the flue gas proceeds to a series of gas cleanup


Diagram of a municipal incinerator. (McGraw-Hill Inc. Reproduced by permission.)

systems, which neutralize the acid gases (sulfur dioxide and hydrochloric acid, the latter resulting from burning chlorinated plastic products), followed by gas scrubbers and then solid/gas separation systems, such as baghouses, before discharge to tall stacks. The stack system contains a variety of sensing and control devices to enable the furnace to operate at maximum efficiency consistent with minimal particulate emissions. A continuous log from monitoring systems is also required for compliance with county and state environmental quality regulations. There are several products from a municipal incinerator system: items which are removed before combustion, such as large metal pieces; grate or bottom ash (which is usually water-sprayed after removal from the furnace for safe storage); fly ash (or top ash), which is removed from the flue system and is generally mixed with products from the acid neutralization process; and finally the flue gases, which are expelled to the environment. If the system is operating optimally, the flue gases will meet emission requirements, and the heavy metals from the wastes will be concentrated in the fly ash. (Typically these heavy metals, which originate from volatile metallic constituents, are lead and arsenic.) The fly ash typically is then stored in a suitable landfill to avoid future problems of leaching of heavy metals. Some municipal systems blend the bottom ash with the top ash in the plant in order to reduce the level of heavy metals by dilution. This practice is undesirable from an ultimate environmental viewpoint.

There are many advantages and disadvantages to municipal waste incineration. Some of the advantages are as follows: 1) The waste volume is reduced to a small fraction of the original. 2) Reduction is rapid and does not require semi-infinite residence times in a landfill. 3) For a large metropolitan area, waste can be incinerated on site, minimizing transportation costs. 4) The ash residue is generally sterile, although it may require special disposal methods. 5) By use of gas clean-up equipment, discharges of flue gases to the environment can meet stringent requirements and be readily monitored. 6) Incinerators are much more compact than landfills and can have minimal odor and vermin problems if properly designed. 7) Some of the costs of operation can be reduced by heat-recovery techniques such as the sale of steam to municipalities or electrical energy generation.

There are disadvantages to municipal waste incineration as well. For example: 1) Generally the capital cost is


high and is escalating as emission standards change. 2) Permits are becoming increasingly difficult to obtain. 3) Supplemental fuel may be required to burn municipal wastes, especially if yard waste is not removed prior to collection. 4) Certain items, such as mercury-containing batteries, can produce emissions of mercury which the gas cleanup system may not be designed to remove. 5) Continuous skilled operation and close maintenance of process control are required, especially since stack monitoring equipment reports any failure of the equipment, which could result in a mandated shutdown. 6) Certain materials are not burnable and must be removed at the source. 7) Traffic to and from the incinerator can be a problem unless timing and routing are carefully managed. 8) The incinerator, like a landfill, also has a limited life, although its lifetime can be increased by capital expenditures. 9) Incinerators also require landfills for the ash. The ash usually contains heavy metals and must be placed in a specially designed landfill to avoid leaching.

Hazardous waste incineration
For the incineration of hazardous waste, a greater degree of control, higher temperatures, and a more rigorous monitoring system are required. An incinerator burning hazardous waste must be designed, constructed, and maintained to meet Resource Conservation and Recovery Act (RCRA) standards. An incinerator burning hazardous waste must achieve a destruction and removal efficiency of at least 99.99% for each principal organic hazardous constituent. For certain listed constituents such as polychlorinated biphenyls (PCBs), a destruction and removal efficiency of at least 99.9999% is required. The Toxic Substances Control Act requires certain standards for the incineration of PCBs. For example, the flow of PCBs to the incinerator must stop automatically whenever the combustion temperature drops below the specified value; there must be continuous monitoring of the stack for a list of emissions; and scrubbers must be used for hydrochloric acid control, among other requirements. Recently, medical wastes have been treated by steam sterilization, followed by incineration with treatment of the flue gases with activated carbon for maximum absorption of organic constituents. The latter system is being installed at the Mayo Clinic in Rochester, Minnesota, as a model medical disposal system. See also Fugitive emissions; Solid waste incineration; Solid waste volume reduction; Stack emissions [Malcolm T. Hepworth]
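The destruction and removal efficiency (DRE) cited above is a simple mass-balance ratio, DRE = (W_in − W_out) / W_in × 100, where W is the mass feed rate of a principal organic hazardous constituent. A minimal sketch with hypothetical feed rates (not figures from this article):

```python
# Destruction and removal efficiency as a mass-balance percentage.
# The feed and emission rates below are hypothetical examples.

def dre_percent(mass_in, mass_out):
    """Percent of a constituent destroyed or removed by the incinerator."""
    return (mass_in - mass_out) / mass_in * 100.0

# 1,000 kg/h of a constituent fed, 0.05 kg/h emitted:
dre = dre_percent(1000.0, 0.05)
print(dre >= 99.99)    # True: meets the "four nines" RCRA standard
print(dre >= 99.9999)  # False: would fail the "six nines" PCB standard
```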

RESOURCES
BOOKS
Brunner, C. R. Handbook of Incineration Systems. New York: McGraw-Hill, 1991.
Edwards, B. H., et al. Emerging Technologies for the Control of Hazardous Wastes. Park Ridge, NJ: Noyes Data Corporation, 1983.
Hickman Jr., H. L., et al. Thermal Conversion Systems for Municipal Solid Waste. Park Ridge, NJ: Noyes Publications, 1984.
Vesilind, R. A., and A. E. Rimer. Unit Operations in Resource Recovery Engineering. Englewood Cliffs, NJ: Prentice-Hall, 1981.
Wentz, C. A. Hazardous Waste Management. New York: McGraw-Hill, 1989.

Incineration, solid waste see Solid waste incineration

Indicator organism

Indicator organisms, sometimes called bioindicators, are plant or animal species known to be either particularly tolerant or particularly sensitive to pollution. The health of an organism can often be associated with a specific type or intensity of pollution, and its presence can then be used to indicate polluted conditions relative to unimpacted conditions.

Tubificid worms are an example of organisms that can indicate pollution. Tubificid worms live in the bottom sediments of streams and lakes, and they are highly tolerant of sewage. In a river polluted by wastewater discharge from a sewage treatment plant, it is common to see a large increase in the numbers of tubificid worms in stream sediments immediately downstream. Upstream of the discharge, the numbers of tubificid worms are often much lower or almost absent, reflecting cleaner conditions. The number of tubificid worms also decreases downstream, as the discharge is diluted.

Pollution-intolerant organisms can also be used to indicate polluted conditions. The larvae of mayflies live in stream sediments and are known to be particularly sensitive to pollution. In a river receiving wastewater discharge, mayflies will show the opposite pattern of tubificid worms. The mayfly larvae are normally present in large numbers above the discharge point; they decrease or disappear at the discharge point and reappear further downstream as the effects of the discharge are diluted.

Similar examples of indicator organisms can be found among plants, fish, and other biological groups. Giant reedgrass (Phragmites australis) is a common marsh plant that is typically indicative of disturbed conditions in wetlands. Among fish, disturbed conditions may be indicated by the disappearance of sensitive species like trout which require clear, cold waters to thrive. The usefulness of indicator organisms is limited.
While their presence or absence provides a reliable general picture of polluted conditions, they are often little help in identifying the exact sources of pollution. In the sediments of New York Harbor, for example, pollution-tolerant insect larvae are overwhelmingly dominant. However, it is impossible to attribute the large larval populations to just one of the sources of pollution there, which include ship traffic, sewage and industrial discharge, and storm runoff.

The U.S. Environmental Protection Agency (EPA) is working to find reliable predictors of aquatic ecosystem health using indicator species, and it has developed standards for evaluating the usefulness of species as ecological indicator organisms. A potential indicator species for use in evaluating watershed health must pass four phases of evaluation. First, a potential indicator organism should provide information that is relevant to societal concerns about the environment, not simply academically interesting information. Second, use of a potential indicator organism should be feasible: logistics, sampling costs, and the timeframe for information gathering are legitimate considerations in deciding whether an organism is a potential indicator species. Third, enough must be known about a species before it may be effectively used as an indicator organism; sufficient knowledge regarding its natural variation in response to environmental flux should exist before it is adopted as a true watershed indicator species. Fourth, a useful indicator should provide information that is easily interpreted by policy makers and the public, in addition to scientists.

Additionally, in an effort to make indicator species information more reliable, indicator species indices are being developed. An index is a formula or ratio of one amount to another that is used to measure relative change.
The major advantage of developing an indicator organism index that is somewhat universal to all aquatic environments is that it can be tested statistically. Using statistical methods, it may be determined whether a significant change in an index value has occurred. Furthermore, statistical methods allow a certain level of confidence that the measured values represent what is actually happening in nature. For example, a study was conducted to evaluate the utility of diatoms (a kind of microscopic aquatic algae) as an index of aquatic system health. Diatoms meet all four criteria mentioned above, and various species are found in both fresh and salt water. An index was created from various measurable characteristics of diatoms that could then be evaluated statistically over time and among varying sites. The diatom index proved sensitive enough to reliably distinguish three categories of aquatic ecosystem health, and it showed that values obtained from areas impacted by human activities had greater variability over time than values obtained from less disturbed locations. Many such indices are being developed using different species, and multiple species, in an effort to create reliable information from indicator organisms.

As more is learned about the physiology and life history of indicator organisms and their individual responses to different types of pollution, it may be possible to draw more specific conclusions. See also Algal bloom; Nitrogen cycle; Water pollution

[Terry Watkins]
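The statistical comparison described above can be sketched in a few lines of code. This is a toy illustration, not the actual diatom index: the index formula, the taxa, and all abundance values below are hypothetical.

```python
# Toy biotic index: compare index variability between an impacted site and a
# reference site, as in the diatom-index study described above.
from statistics import mean, pvariance

def biotic_index(counts):
    """Fraction of pollution-sensitive organisms in a sample.

    `counts` maps taxon -> (abundance, sensitive_flag)."""
    total = sum(n for n, _ in counts.values())
    sensitive = sum(n for n, flag in counts.values() if flag)
    return sensitive / total if total else 0.0

# Hypothetical repeated samples over time at two sites: taxon -> (count, sensitive?)
impacted_samples = [
    {"tubificids": (90, False), "mayflies": (5, True)},
    {"tubificids": (40, False), "mayflies": (30, True)},
    {"tubificids": (70, False), "mayflies": (10, True)},
]
reference_samples = [
    {"tubificids": (10, False), "mayflies": (60, True)},
    {"tubificids": (12, False), "mayflies": (55, True)},
    {"tubificids": (9, False), "mayflies": (62, True)},
]

imp = [biotic_index(s) for s in impacted_samples]
ref = [biotic_index(s) for s in reference_samples]

# The study above found that impacted sites show greater index variability
# over time than reference sites; the variance captures that here.
print("impacted:  mean=%.3f  variance=%.4f" % (mean(imp), pvariance(imp)))
print("reference: mean=%.3f  variance=%.4f" % (mean(ref), pvariance(ref)))
```

With these invented numbers, the impacted site's index both averages lower and varies far more between sampling dates, mirroring the pattern the diatom study reported.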

RESOURCES
BOOKS
Browder, J. A., ed. Aquatic Organisms As Indicators of Environmental Pollution. Bethesda, MD: American Water Resources Association, 1988.
Connell, D. W., and G. J. Miller. Chemistry and Ecotoxicology of Pollution. New York: Wiley-Interscience, 1984.

Indigenous peoples

Cultural or ethnic groups living in an area where their culture developed or where their people have existed for many generations. Most of the world's indigenous peoples live in remote forests, mountains, deserts, or arctic tundra, where modern technology, trade, and cultural influence are slow to penetrate. Many had much larger territories historically but have retreated to, or been forced into, small, remote areas by the advance of more powerful groups.

Indigenous groups, also sometimes known as native or tribal peoples, are usually recognized in comparison to a country's dominant cultural group. In the United States the dominant, non-indigenous cultural groups speak English, have historic roots in Europe, and maintain strong economic, technological, and communication ties with Europe, Asia, and other parts of the world. Indigenous groups in the United States, on the other hand, include scores of peoples, from the southern Seminole and Cherokee to the Inuit and Yupik of the Arctic coast. These groups speak hundreds of different languages or dialects, some of which have been on this continent for thousands of years. Their traditional economies were based mainly on small-scale subsistence gathering, hunting, fishing, and farming. Many indigenous peoples around the world continue to engage in these ancient economic practices.

It is often difficult to distinguish who is and who is not indigenous. European-Americans and Asian-Americans are usually not considered indigenous even if they have been here for many generations, because their cultural roots connect to other regions. On the other hand, a German residing in Germany is also not usually spoken of as indigenous, even though by any strict definition he or she is. This is because the term is customarily reserved to denote economic or political minorities: groups that are relatively powerless within the countries where they live.


Historically, indigenous peoples have suffered great losses in both population and territory to the spread of larger, more technologically advanced groups, especially (but not only) Europeans. Hundreds of indigenous cultures have disappeared entirely just in the past century. In recent decades, however, indigenous groups have begun to receive greater international recognition, and they have begun to learn effective means to defend their lands and interests, including attracting international media attention and suing their own governments in court. The main reason for this increased attention and success may be that scientists and economic development organizations have recently become interested in biological diversity and in the loss of the world's rain forests. The survival of indigenous peoples, of the world's forests, and of the world's gene pools is now understood to be deeply interdependent. Indigenous peoples, who know and depend on some of the world's most endangered and biologically diverse ecosystems, are increasingly looked on as a unique source of information, and their subsistence economies are beginning to look like admirable alternatives to large-scale logging, mining, and conversion of jungles to monocrop agriculture.

There are probably between 4,000 and 5,000 different indigenous groups in the world; they can be found on every continent (except Antarctica) and in nearly every country. The total population of indigenous peoples amounts to between 200 million and 600 million (depending upon how groups are identified and their populations counted) out of a world population just over 6.2 billion. Some groups number in the millions; others comprise only a few dozen people. Despite their worldwide distribution, indigenous groups are especially concentrated in a number of "cultural diversity hot spots," including Indonesia, India, Papua New Guinea, Australia, Mexico, Brazil, Zaire, Cameroon, and Nigeria.
Each of these countries has scores, or even hundreds, of different language groups. Neighboring valleys in Papua New Guinea often contain distinct cultural groups with unrelated languages and religions. These regions are also recognized for their unusual biological diversity. Both indigenous cultures and rare species survive best in areas where modern technology does not easily penetrate.

Advanced technological economies involved in international trade consume tremendous amounts of land, wood, water, and minerals. Indigenous groups, by contrast, tend to rely on intact ecosystems and on a tremendous variety of plant and animal species. Because their numbers are relatively small and their technology simple, they usually do little long-lasting damage to their environment despite their dependence on the resources around them.

The remote areas where indigenous peoples and their natural environment survive, however, are also the richest remaining reserves of natural resources in most countries. Frequently state governments claim all timber, mineral, water, and land rights in areas traditionally occupied by tribal groups. In Indonesia, Malaysia, Burma (Myanmar), China, Brazil, Zaire, Cameroon, and many other important cultural diversity regions, timber and mining concessions are frequently sold to large or international companies that can quickly and efficiently destroy an ecological area and its people. Because native peoples usually lack political and economic clout, they have little recourse when they lose their homes. Generally they are relocated, attempts are made to integrate them into mainstream culture, and they join the laboring classes in the general economy.

Indigenous rights have begun to strengthen in recent years. As long as international media continue to give these groups the attention they need, especially in the form of international economic and political pressure on state governments, and as long as indigenous leaders are able to continue developing their own defense strategies and legal tactics, the survival rate of indigenous peoples and their environments may improve significantly.

[Mary Ann Cunningham Ph.D.]

RESOURCES
BOOKS
Redford, K. H., and C. Padoch. Conservation of Neotropical Forests: Working from Traditional Resource Use. New York: Columbia University Press, 1992.

OTHER
Durning, A. T. "Guardians of the Land: Indigenous Peoples and the Health of the Earth." Worldwatch Paper 112. Washington, DC: Worldwatch Institute, 1992.

Indonesian forest fires

For several months in 1997 and 1998, a thick pall of smoke covered much of Southeast Asia. Thousands of forest fires, burning simultaneously on the Indonesian islands of Kalimantan (Borneo) and Sumatra, are thought to have destroyed about 8,000 mi² (20,000 km²) of primary forest, an area about the size of New Jersey. The smoke generated by these fires spread over eight countries and 75 million people, covering an area larger than Europe. Hazy skies and the smell of burning forests could be detected in Hong Kong, nearly 2,000 mi (3,200 km) away. The air quality in Singapore and the city of Kuala Lumpur, Malaysia, just across the Strait of Malacca from Indonesia, was worse than in any industrial region in the world. In towns such as Palembang, Sumatra, and Banjarmasin, Kalimantan, in the heart of the fires, the air pollution index frequently passed 800, twice the level classified in the United States as an air quality emergency hazardous to human health. Automobiles had to drive with their headlights on, even at noon. People groped along smoke-darkened streets, unable to see or breathe normally.

At least 20 million people in Indonesia and Malaysia were treated for illnesses such as bronchitis, eye irritation, asthma, emphysema, and cardiovascular diseases. It is thought that three times as many people, unable to afford medical care, went uncounted. The number of excess deaths from this months-long episode is unknown, but it seems likely to have been in the hundreds of thousands, mostly among the elderly and very young children. Unable to see through the thick haze, several boats collided in the busy Strait of Malacca, and a plane crashed on Sumatra, killing 234 passengers. Cancelled airline flights, aborted tourist plans, lost workdays, medical bills, and ruined crops are estimated to have cost countries in the afflicted area several billion dollars.

Wildlife suffered as well. In addition to the loss of habitat destroyed by fires, breathing the noxious smoke was as hard on wild species as it was on people. At the Pangkalanbuun Conservation Reserve, weak and disoriented orangutans were found suffering from respiratory diseases much like those of humans. Geographical isolation on the 16,000 islands of the Indonesian archipelago has allowed the evolution of the world's richest collection of biodiversity. Indonesia has the second largest expanse of tropical forest and the highest number of endemic species anywhere. This makes the destruction of Indonesian plants, animals, and their habitat of special concern.

The dry season in tropical Southeast Asia has probably always been a time of burning vegetation and smoky skies. Farmers practicing traditional slash and burn agriculture start fires each year to prepare for the next growing season. Because they generally burn only a hectare or two at a time, however, these shifting cultivators often help preserve plant and animal species by opening up space for early successional forest stages.
Globalization and the advent of large commercial plantations, however, have changed agricultural dynamics. There is now an economic incentive for clearing huge tracts of forestland to plant oil palms, export crops such as pineapples and sugar cane, and fast-growing eucalyptus trees. Fire is viewed as the only practical way to remove biomass and convert wild forest into domesticated land. While it can cost the equivalent of $200 to clear a hectare of forest with chainsaws and bulldozers, dropping a lighted match into dry underbrush is essentially free.

In 1997 and 1998, the Indonesian forest was unusually dry. A powerful El Niño/Southern Oscillation weather pattern caused the most severe droughts in 50 years. Forests that ordinarily stay green and moist even during the rainless season became tinder dry. Lightning strikes are thought to have started many forest fires, but many people took advantage of the drought for their own purposes. Although the government blamed traditional farmers for setting most of the fires, environmental groups claimed that the biggest fires were caused by large agribusiness conglomerates with close ties to the government and military. Some of these fires were set to cover up evidence of illegal logging operations. Others were started to make way for huge oil-palm plantations and fast-growing pulpwood trees. Neil Byron of the Center for International Forestry Research was quoted as saying that "fire crews would go into an area and put out the fire, then come back four days later and find it burning again, and a guy standing there with a petrol can." According to the World Wide Fund for Nature, 37 plantations in Sumatra and Kalimantan were responsible for the vast majority of the forest burned on those islands. The plantation owners were politically connected to the ruling elite, however, and none of them was ever punished for violating national forest protection laws.

Indonesia has some of the strongest land-use management laws of any country in the world, but these laws are rarely enforced. In theory, more than 80% of its land is in some form of protected status, either set aside as national parks or classified as selective logging reserves where only a few trees per hectare can be cut. The government claims to have an ambitious reforestation program that replants nearly 1.6 million acres (1 million hectares) of harvested forest annually, but when four times that amount is burned in a single year, there is not much to be done but turn the land over to plantation owners for agricultural use.

Aquatic life is also damaged by these forest fires. Indonesia, Malaysia, and the Philippines have the richest coral reef complexes in the world. More than 150 species of coral live in this area, compared with only about 30 species in the Caribbean. The clear water and fantastic biodiversity of Indonesia's reefs have made them an ultimate destination for scuba divers and snorkelers from around the world.
Unfortunately, soil eroded from burned forests clouds coastal waters and smothers reefs.

Perhaps one of the worst effects of large tropical forest fires is that they may tend to be self-reinforcing. Moist tropical forests store huge amounts of carbon in their standing biomass. When this carbon is converted into CO2 by fire and released to the atmosphere, it acts as a greenhouse gas to trap heat and cause global warming. All the effects of human-caused global climate change are still unknown, but stronger climatic events such as severe droughts may make further fires even more likely. Alarmed by the magnitude of the Southeast Asian fires and the potential they represent for biodiversity losses and global climate change, world leaders have proposed plans for international intervention to prevent a recurrence. Fears about imposing on national sovereignty, however, have made it difficult to come up with a plan for how to cope with this growing threat.

[William P. Cunningham Ph.D.]
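The scale of the carbon release described above can be put in rough numbers. The sketch below combines the burned-area figure reported in this article with an assumed, round-number carbon density for standing tropical forest biomass; that density is a hypothetical illustration value, and only the 44/12 molecular-weight ratio of CO2 to carbon is exact.

```python
# Back-of-the-envelope estimate of CO2 released by burning 20,000 km2 of forest.
burned_area_km2 = 20_000                      # figure cited in the article
burned_area_ha = burned_area_km2 * 100        # 1 km2 = 100 ha
carbon_density_t_per_ha = 150                 # ASSUMED t of carbon per ha of biomass
carbon_released_t = burned_area_ha * carbon_density_t_per_ha
co2_released_t = carbon_released_t * 44 / 12  # molecular weights: CO2 = 44, C = 12
print(f"{co2_released_t / 1e9:.1f} billion tonnes CO2")  # prints 1.1
```

Even with these rough assumptions, the result, on the order of a billion tonnes of CO2, shows why a single severe fire season matters for the global carbon budget.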


RESOURCES
BOOKS
Glover, David, and Timothy Jessup, eds. Indonesia's Fires and Haze: The Cost of Catastrophe. Singapore: International Development Research Centre, 2002.

PERIODICALS
Aditama, Tjandra Yoga. "Impact of Haze from Forest Fire to Respiratory Health: Indonesian Experience." Respirology (2000): 169–174.
Chan, C. Y., et al. "Effects of 1997 Indonesian Forest Fires on Tropospheric Ozone Enhancement, Radiative Forcing, and Temperature Change over the Hong Kong Region." Journal of Geophysical Research-Atmospheres 106 (2001): 14875–14885.
Davies, S. J., and L. Unam. "Smoke-haze from the 1997 Indonesian Forest Fires: Effects on Pollution Levels, Local Climate, Atmospheric CO2 Concentrations, and Tree Photosynthesis." Forest Ecology & Management 124 (1999): 137–144.
Murty, T. S., D. Scott, and W. Baird. "The 1997 El Niño, Indonesian Forest Fires and the Malaysian Smoke Problem: A Deadly Combination of Natural and Man-Made Hazard." Natural Hazards 21 (2000): 131–144.
Tay, Simon. "Southeast Asian Fires: The Challenge Over Sustainable Environmental Law and Sustainable Development." Peace Research Abstracts 38 (2001): 603–751.

Indoor air quality

An assessment of air quality in buildings and homes based on physical and chemical monitoring of contaminants, physiological measurements, and/or psychosocial perceptions. Factors contributing to the quality of indoor air include lighting, ergonomics, thermal comfort, tobacco smoke, noise, ventilation, and psychosocial or work-organizational factors such as employee stress and satisfaction. "Sick building syndrome" (SBS) and "building-related illness" (BRI) are responses to indoor air pollution commonly described by office workers. Most symptoms are nonspecific; they progressively worsen during the week, occur more frequently in the afternoon, and disappear on the weekend.

Poor indoor air quality (IAQ) in industrial settings such as factories, coal mines, and foundries has long been recognized as a health risk to workers and has been regulated by the U.S. Occupational Safety and Health Administration (OSHA). The contaminant levels in industrial settings can be hundreds or thousands of times higher than the levels found in homes and offices. Nonetheless, indoor air quality in homes and offices has become an environmental priority in many countries, and federal IAQ legislation has been introduced in the U.S. Congress for the past several years. However, none has yet passed, and currently the U.S. Environmental Protection Agency (EPA) has no enforcement authority in this area.

Importance of IAQ
The prominence of IAQ issues has risen in part due to well-publicized incidents involving outbreaks of Legionnaires' disease, Pontiac fever, sick building syndrome, multiple chemical sensitivity, and asbestos mitigation in public buildings such as schools. Legionnaires' disease, for example, caused twenty-nine deaths in 1976 in a Philadelphia hotel due to infestation of the building's air conditioning system by a bacterium called Legionella pneumophila. This microbe affects the gastrointestinal tract, kidneys, and central nervous system. It also causes the non-fatal Pontiac fever.

IAQ is important to the general public for several reasons. First, individuals typically spend the vast majority of their time (80-90%) indoors. Second, an emphasis on energy conservation measures, such as reducing air exchange rates in ventilation systems and using more energy efficient but synthetic materials, has increased levels of air contaminants in offices and homes. New "tight" buildings have few cracks and openings, so minimal fresh air enters them. Low ventilation and exchange rates can increase indoor levels of carbon monoxide, nitrogen oxides, ozone, volatile organic compounds, bioaerosols, and pesticides, and maintain high levels of second-hand tobacco smoke generated inside the building. Thus, many contaminants are found indoors at levels that greatly exceed outdoor levels. Third, an increasing number of synthetic chemicals, found in building materials, furnishings, and cleaning and hygiene products, are used indoors. Fourth, studies show that exposure to indoor contaminants such as radon, asbestos, and tobacco smoke poses significant health risks. Fifth, poor IAQ is thought to adversely affect children's development and lower productivity in the adult population. Demands for indoor air quality investigations of "sick" and problem buildings have increased rapidly in recent years, and a large fraction of buildings are known or suspected to have IAQ problems.

Indoor contaminants
Indoor air contains many contaminants at varying but generally low concentration levels.
Common contaminants include radon and radon progeny from the entry of soil gas and groundwater and from concrete and other mineral-based building materials; tobacco smoke from cigarette and pipe smoking; formaldehyde from polyurethane foam insulation and building materials; volatile organic compounds (VOCs) emitted from binders and resins in carpets, furniture, or building materials, as well as VOCs used in dry cleaning processes and as propellants and constituents of personal-use and cleaning products, like hair sprays and polishes; pesticides and insecticides; carbon monoxide, nitrogen oxides, and other combustion products from gas stoves, appliances, and vehicles; asbestos from high temperature insulation; and biological contaminants including viruses, bacteria, molds, pollen, dust mites, and indoor and outdoor biota. Many or most of these contaminants are present at low levels in all indoor environments.


Some major indoor air pollutants. (Wadsworth Inc. Reproduced by permission.)

The quality of indoor air can change rapidly in time and from room to room. There are many diverse sources that emit various physical and chemical forms of contaminants. Some releases are slow and continuous, such as outgassing from building and furniture materials, while others are nearly instantaneous, like the use of cleaners and aerosols. Many building surfaces interact significantly with contaminants through sorption-desorption processes. Building-specific variation in air exchange rates, mixing, filtration, building and furniture surfaces, and other factors alters dispersion mechanisms and contaminant lifetimes.

Most buildings employ filters that can remove particles and aerosols. Filtration systems do not effectively remove very small particles, however, and have no effect on gases, vapors, and odors. Ventilation and air exchange units built into the heating and cooling systems of buildings are designed to diminish levels of these contaminants by dilution. In most buildings, however, ventilation systems are turned off at night after working hours, leading to an increase in contaminants through the night. Though operation and maintenance issues are estimated to cause the bulk of indoor air quality problems, deficiencies in the design of the heating, ventilating, and air conditioning (HVAC) system can cause problems as well. For example, locating a building's fresh air intake near a truck loading dock will bring diesel fumes and other noxious contaminants into the building.

Health impacts
Exposures to indoor contaminants can cause a variety of health problems. Depending on the pollutant and exposure, health problems related to indoor air quality may include non-malignant respiratory effects, including mucous membrane irritation, allergic reactions, and asthma; cardiovascular effects; infectious diseases such as Legionnaires' disease; immunologic diseases such as hypersensitivity pneumonitis; skin irritations; malignancies; neuropsychiatric effects; and other non-specific systemic effects such as lethargy, headache, and nausea. In addition, indoor air contaminants such as radon, formaldehyde, asbestos, and other chemicals are suspected or known carcinogens. There is also growing concern over the possible effects of low-level exposures in suppressing reproductive and growth capabilities and affecting the immune, endocrine, and nervous systems.
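The dilution role of ventilation discussed above can be illustrated with a simple well-mixed, single-zone mass balance, a standard screening model in indoor air work. All the numbers below (room volume, emission rate, air exchange rates) are hypothetical illustration values, not measured data.

```python
# Well-mixed box model: steady-state indoor concentration of a contaminant
# from a continuous source, showing how the air exchange rate dilutes it.
def steady_state_concentration(emission_mg_per_h, volume_m3,
                               ach, outdoor_mg_per_m3=0.0):
    """Steady-state indoor concentration (mg/m3) for a continuous source.

    ach = air changes per hour (the air exchange rate)."""
    return outdoor_mg_per_m3 + emission_mg_per_h / (ach * volume_m3)

# A hypothetical 250 m3 office with a 50 mg/h VOC source:
for ach in (0.25, 0.5, 1.0, 2.0):   # from a "tight" building to well ventilated
    c = steady_state_concentration(50, 250, ach)
    print(f"{ach:4.2f} ACH -> {c:.2f} mg/m3")
```

The model makes the article's point quantitative: halving the air exchange rate doubles the steady-state concentration of any contaminant emitted indoors, which is why "tight" energy-efficient buildings tend to accumulate pollutants.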


Solving IAQ problems
Acute indoor air quality problems can be largely eliminated by identifying, evaluating, and controlling the sources of contaminants. IAQ control strategies include the use of higher ventilation and air exchange rates, the use of lower-emission and more benign constituents in building and consumer products (including product-use restriction regulations), air cleaning and filtering, and improved building practices in new construction. Radon may be reduced by inexpensive subslab ventilation systems. New buildings can implement a day of "bake-out," which heats the building to temperatures over 90°F (32°C) to drive out volatile organic compounds. Filters to remove ozone, organic compounds, and sulfur gases may be used to condition incoming and recirculated air. Copy machines and other emission sources should have special ventilation systems.

Building designers, operators, contractors, maintenance personnel, and occupants are recognizing that healthy buildings result from combined and continued efforts to control emission sources, provide adequate ventilation and air cleaning, and maintain building systems well. Efforts in this direction will greatly enhance indoor air quality.

[Stuart Batterman]

RESOURCES
BOOKS
Godish, T. Indoor Air Pollution Control. Chelsea, MI: Lewis, 1989.
Kay, J. G., et al. Indoor Air Pollution: Radon, Bioaerosols and VOCs. Chelsea, MI: Lewis, 1991.
Samet, J. M., and J. D. Spengler. Indoor Air Pollution: A Health Perspective. Baltimore: Johns Hopkins University Press, 1991.

PERIODICALS
Kreiss, K. "The Epidemiology of Building-Related Complaints and Illnesses." Occupational Medicine: State of the Art Reviews 4 (1989): 575–592.

Industrial waste treatment

Many different types of solid, liquid, and gaseous wastes are discharged by industries. Most industrial waste is recycled, treated and discharged, or placed in a landfill. There is no single means of managing industrial wastes because the nature of the wastes varies widely from one industry to another. One company might generate a waste that can be treated readily and discharged to the environment (direct discharge) or to a sewer, in which case final treatment might be accomplished at a publicly owned treatment works (POTW). Treatment at the company before discharge to a sewer is referred to as pretreatment. Another company might generate a waste that is regarded as hazardous and therefore requires special management procedures for storage, transportation, and final disposal.

The pertinent legislation governing the extent to which wastewaters must be treated before discharge is the 1972 Clean Water Act (CWA). Major amendments to the CWA were passed in 1977 and 1987. The Environmental Protection Agency (EPA) was also charged with the responsibility of regulating priority pollutants under the CWA. The CWA specifies that toxic and nonconventional pollutants are to be treated with the Best Available Technology (BAT). Gaseous pollutants are regulated under the Clean Air Act (CAA), promulgated in 1970 and amended in 1977 and 1990. An important part of the CAA consists of measures to attain and maintain National Ambient Air Quality Standards (NAAQS). Hazardous air pollutant (HAP) emissions are to be controlled through Maximum Achievable Control Technology (MACT), which can include process changes, material substitutions, and/or air pollution control equipment. The "cradle to grave" management of hazardous wastes is to be performed in accordance with the Resource Conservation and Recovery Act (RCRA) of 1976 and the Hazardous and Solid Waste Amendments (HSWA) of 1984.

In 1990, the United States, through the Pollution Prevention Act, adopted a program designed to reduce the volume and toxicity of waste discharges. Pollution prevention (P2) strategies might involve changing process equipment or chemistry, developing new processes, eliminating products, minimizing wastes, recycling water or chemicals, or trading wastes with another company. In 1991, the EPA instituted the 33/50 program, which aimed for an overall 33% reduction of 17 high-priority pollutants by 1992 and a 50% reduction by 1995. Both goals were surpassed. Not only has this program been successful, but it also sets an important precedent because the participating companies volunteered. Additionally, P2 efforts have led industries to think rigorously through product life cycles.
A Life Cycle Analysis (LCA) starts with consideration of acquiring raw materials; moves through the stages of processing, assembly, service, and reuse; and ends with retirement and disposal. The LCA therefore reveals to industry the costs and problems versus the benefits for every stage in the life of a product.

In designing a waste management program for an industry, one must think first in terms of P2 opportunities, identify and characterize the various solid, liquid, and gaseous waste streams, consider relevant legislation, and then design an appropriate waste management system. Treatment systems that rely on physical (e.g., settling, flotation, screening, sorption, membrane technologies, air stripping) and chemical (e.g., coagulation, precipitation, chemical oxidation and reduction, pH adjustment) operations are referred to as physicochemical, whereas systems in which microbes are cultured to metabolize waste constituents are known as biological processes (e.g., activated sludge, trickling filters, biotowers, aerated lagoons, anaerobic digestion, aerobic digestion, composting). Oftentimes, both physicochemical and biological systems are used to treat solid and liquid waste streams. Biological systems might be used to treat certain gas streams, but most waste gas streams are treated physicochemically (e.g., cyclones, electrostatic precipitators, scrubbers, bag filters, thermal methods). Solids and the sludges or residuals that result from treating the liquid and gaseous waste streams are also treated by means of physical, chemical, and biological methods.

In many cases, the systems used to treat wastes from domestic sources are also used to treat industrial wastes. For example, municipal wastewaters often consist of both domestic and industrial waste, so the local POTW may be treating both types. To avoid potential problems caused by the input of industrial wastes, municipalities commonly have pretreatment programs which require that industrial wastes discharged to the sewer meet certain standards. The standards generally include limits for various toxic agents such as metals; organic matter, measured in terms of biochemical oxygen demand (BOD) or chemical oxygen demand (COD); nutrients such as nitrogen and phosphorus; pH; and other contaminants recognized as having the potential to impair the performance of the POTW.

At the other end of the spectrum, there are wastes that need to be segregated and managed separately in special systems. For example, an industry might generate a hazardous waste that needs to be placed in barrels and transported to an EPA-approved treatment, storage, or disposal facility (TSDF). Thus, it is not possible to use a single train of treatment operations for all industrial waste streams, but an effective, generic strategy has been developed in recent years for considering the waste management options available to an industry.
The basis for the strategy is to look for P2 opportunities and to consider the life cycle of a product. An awareness of waste stream characteristics and the potential benefits of stream segregation is then melded with the knowledge of regulatory compliance issues and treatment system capabilities/performance to minimize environmental risks and costs. [Gregory D. Boardman]

RESOURCES
BOOKS
Freeman, H. M. Industrial Pollution Prevention Handbook. New York: McGraw-Hill, 1995.
Haas, C. N., and R. J. Vamos. Hazardous and Industrial Waste Treatment. Englewood Cliffs, NJ: Prentice Hall, 1995.
LaGrega, M. D., P. L. Buckingham, and J. C. Evans. Hazardous Waste Management. New York: McGraw-Hill, 1994.
Metcalf and Eddy, Inc. Wastewater Engineering: Treatment, Disposal and Reuse. Revised by G. Tchobanoglous and F. Burton. New York: McGraw-Hill, 1991.
Nemerow, N. L., and A. Dasgupta. Industrial and Hazardous Waste Treatment. New York: Van Nostrand Reinhold, 1991.
Peavy, H. S., D. R. Rowe, and G. Tchobanoglous. Environmental Engineering. New York: McGraw-Hill, 1995.
Tchobanoglous, G., et al. Integrated Solid Waste Management: Engineering Principles and Management Issues. New York: McGraw-Hill, 1993.

PERIODICALS
Romanow, S., and T. E. Higgins. “Treatment of Contaminated Groundwater from Hazardous Waste Sites—Three Case Studies.” Presented at the 60th Water Pollution Control Federation Conference, Philadelphia (October 5-8, 1987).

Inertia see Resistance (inertia)

Infiltration

In hydrology, infiltration refers either to the maximum rate at which a soil can absorb precipitation (its infiltration capacity, which depends in part on the soil's initial moisture content) or to the portion of precipitation that actually enters the soil. In soil science, the term refers to the process by which water enters the soil, generally by downward flow through all or part of the soil surface. The rate of entry relative to the amount of water being supplied by precipitation or other sources determines how much water enters the root zone and how much runs off the surface. See also Groundwater; Soil profile; Water table
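The partitioning of supplied water into infiltration and runoff can be illustrated with a small calculation. This is a deliberately simplified sketch (the function name and rates are hypothetical, and a real soil's infiltration capacity declines as a storm proceeds and the soil wets up):

```python
def partition_rainfall(supply_rate, infiltration_capacity):
    """Split a water supply rate (e.g., rainfall in mm/hour) into
    the portion entering the soil and the portion running off.
    Water arriving faster than the soil can absorb it becomes
    surface runoff; otherwise all of it infiltrates."""
    infiltrated = min(supply_rate, infiltration_capacity)
    runoff = supply_rate - infiltrated
    return infiltrated, runoff

# A soil able to absorb 10 mm/hour receiving 25 mm/hour of rain:
# 10 mm/hour enters the root zone and 15 mm/hour runs off.
```

When the supply rate falls below the infiltration capacity, the runoff term is zero, which is why gentle rain on dry soil produces little surface flow.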

INFORM

INFORM was founded in 1973 by environmental research specialist Joanna Underwood and two colleagues. Seriously concerned about air pollution, the three scientists decided to establish an organization that would identify practical ways to protect the environment and public health. Since then, their concerns have widened to include hazardous waste, solid waste management, water pollution, and land, energy, and water conservation. The group’s primary purpose is “to examine business practices which harm our air, water, and land resources” and pinpoint “specific ways in which practices can be improved.” INFORM’s research is recognized throughout the United States as instrumental in shaping environmental policies and programs, and legislators, conservation groups, and business leaders cite its work as an authoritative basis for research and conferences. Source reduction has become one of INFORM’s most important projects. A decrease in the amount and/or toxicity of waste entering the waste stream, source reduction includes any activity by an individual, business, or government that lessens the amount of solid waste—or garbage—that would otherwise have to be recycled or incinerated. Source reduction does not include recycling, municipal solid waste composting, household hazardous waste collection, or beverage container deposit and return systems. The first priority in source reduction strategies is elimination; the second, reuse. Public education is a crucial part of INFORM’s program. To this end INFORM has published Making Less Garbage: A Planning Guide for Communities. This book details ways to achieve source reduction, including buying reusable, as opposed to disposable, items; buying in bulk; and maintaining and repairing products to extend their lives. INFORM’s outreach program goes well beyond its source reduction project. The staff of over 25 full-time scientists and researchers and 12 volunteers and interns makes presentations at national and international conferences and local workshops. INFORM representatives have also given briefings and testimony at Congressional hearings and produced television and radio advertisements to increase public awareness of environmental issues. The organization also publishes a quarterly newsletter, INFORM Reports. [Cathy M. Falk]

RESOURCES
ORGANIZATIONS
INFORM, Inc., 120 Wall Street, New York, NY USA 10005 (212) 361-2400, Fax: (212) 361-2412, Email: [email protected]

INFOTERRA (U.N. Environment Program)

INFOTERRA is a global information network headquartered in Kenya and operated by the Earthwatch program of the United Nations Environment Program (UNEP). Under INFOTERRA, participating nations designate institutions to be national focal points, such as the Environmental Protection Agency (EPA) in the United States. Each national institution chosen as a focal point prepares a list of its national environmental experts and selects what it considers the best sources for inclusion in INFOTERRA’s international directory of experts. INFOTERRA initially used its directory only to refer questioners to the nearest appropriate experts, but the organization has evolved into a central information agency. It consults sources, answers public queries for information, and analyzes the replies. INFOTERRA is used by governments, industries, and researchers in 177 countries. [Linda Rehkopf]

RESOURCES
ORGANIZATIONS
UNEP-Infoterra/USA, MC 3404 Ariel Rios Building, 1200 Pennsylvania Avenue, Washington, D.C. USA 20460 Fax: (202) 260-3923, Email: [email protected]

Injection well

Injection wells are used to dispose of waste into the subsurface. These wastes can include brine from oil and gas wells, liquid hazardous wastes, agricultural and urban runoff, municipal sewage, and return water from air-conditioning. Injection wells can also be used to inject fluids that enhance oil recovery, to inject treated water for artificial aquifer recharge, or to enhance a pump-and-treat system. If the wells are poorly designed or constructed, or if the local geology is not sufficiently studied, injected liquids can enter an aquifer and cause groundwater contamination. Injection wells are regulated under the Underground Injection Control Program of the Safe Drinking Water Act. See also Aquifer restoration; Deep-well injection; Drinking-water supply; Groundwater monitoring; Groundwater pollution; Water table

Inoculate

To inoculate is to introduce microorganisms into a new environment. Originally the term referred to the insertion of a bud or shoot of one plant into the stem or trunk of another to develop new strains or hybrids; such hybrid plants might resist botanic disease, yield greater harvests, or tolerate a wider range of climates. With the advent of vaccines to prevent human and animal disease, the term inoculate has also come to mean the injection of a serum to prevent, cure, or confer immunity against disease. Inoculation is of prime importance because introducing specific microorganism species into specific macroorganisms may establish a symbiotic relationship in which each organism benefits. For example, the introduction of mycorrhizal fungi to plants improves the plants’ ability to absorb nutrients from the soil. See also Symbiosis

Insecticide see Pesticide


Integrated pest management

Integrated pest management (IPM) is a relatively new science that aims to give the best possible pest control while minimizing damage to human health and the environment. IPM means either using fewer chemicals more effectively or finding ways, both new and old, that substitute for pesticide use. Technically, IPM is the selection, integration, and implementation of pest control based on predicted economic, ecological, and sociological consequences. IPM seeks maximum use of naturally occurring pest controls, including weather, disease agents, predators, and parasites. In addition, IPM utilizes various biological, physical, and chemical control and habitat modification techniques. Artificial controls are imposed only as required to keep a pest from surpassing intolerable population levels, which are predetermined from assessments of the pest damage potential and the ecological, sociological, and economic costs of the control measures. Farmers have come to understand that the presence of a pest species does not necessarily justify action for its control. In fact, tolerable infestations may actually be desirable, providing food for important beneficial insects.

Why this change in farming practices? The introduction of synthetic organic pesticides such as the insecticide DDT and the herbicide 2,4-D (half the formula of Agent Orange) after World War II began a new era in pest control. These products were followed by hundreds of synthetic organic fungicides, nematicides, rodenticides, and other chemical controls. These chemical materials were initially very effective and very cheap. Synthetic chemicals eventually became the primary means of pest control in productive agricultural regions, providing season-long crop protection against insects and weeds. They were used in addition to fertilizers and other treatments. The success of modern pesticides led to widespread acceptance of and reliance upon them, particularly in the United States.

Of all the chemical pesticides applied worldwide in agriculture, forests, industry, and households, one-third to one-half were used in the United States. Herbicides have increasingly been used to replace hand labor and machine cultivation for control of weeds in crops, in forests, on the rights-of-way of highways, utility lines, and railroads, and in cities. Agriculture consumes perhaps 65% of the total quantity of synthetic organic pesticides used in the United States each year. In addition, chemical companies export an increasingly large amount to Third World countries. Pesticides banned in the United States, such as DDT, EDB, and chlordane, are exported to countries where they are applied to crops that the United States then imports for consumption. For more than a decade, problems with pesticides have become increasingly apparent. Significant groups of pests have evolved with genetic resistance to pesticides. The
increase in resistance among insect pests has been exponential, following extensive use of chemicals over the last forty years. Ticks, insects, and spider mites (nearly 400 species) are now especially resistant, and the creation of new insecticides to combat the problem is not keeping pace with the emergence of new strains of resistant insect pests. Despite the advances in modern chemical control and the dramatic increase in chemical pesticides used on U.S. cropland, annual crop losses from all pests appear to have remained constant or to have increased. Losses caused by weeds have declined slightly, but those caused by insects have nearly doubled. The price of synthetic organic pesticides has increased significantly in recent years, placing a heavy financial burden on those who use large quantities of the materials. As farmers and growers across the United States realize the limitations and human health consequences of using artificial chemical pesticides, interest in the alternative approach of integrated pest management grows.

Integrated pest management aims at management rather than eradication of pest species. Since potentially harmful species will continue to exist at tolerable levels of abundance, the philosophy now is to manage rather than eradicate the pests. The ecosystem is the management unit. (Every crop is in itself a complex ecological system.) Spraying pesticides too often, at the wrong time, or on the wrong part of the crop may destroy the pests’ natural enemies ordinarily present in the ecosystem. Knowledge of the actions, reactions, and interactions of the components of the ecosystem is prerequisite to effective IPM programs. With this knowledge, the ecosystem is manipulated in order to hold pests at tolerable levels while avoiding disruptions of the system. The use of natural controls is maximized.

IPM emphasizes the fullest practical utilization of the existing regulating and limiting factors in the form of parasites, predators, and weather, which check the pests’ population growth. IPM users understand, however, that control procedures may produce unexpected and undesirable consequences. It takes time to change over, and determination to keep up the commitment until the desired results are achieved. An interdisciplinary systems approach is essential. Effective IPM is an integral part of the overall management of a farm, a business, or a forest. For example, timing plays an important role. Certain pests are most prevalent at particular times of the year; by altering the date on which a crop is planted, farmers can avoid serious pest damage. Some farmers plant and harvest simultaneously, since the procedure prevents the pests from migrating to neighboring fields after the harvest. Others may plant several different crops in the same field, thereby reducing the number of pests. The variety of crops harbors greater numbers of natural enemies and makes it more difficult for the pests to locate and colonize their
host plants. In Thailand and China, farmers flood their fields for several weeks before planting to destroy pests. Other farmers turn the soil, so that pests are brought to the surface and die in the sun’s heat. The development of a specific IPM program depends on the pest complex, the resources to be protected, economic values, and the availability of personnel. It also depends upon adequate funding for research and for training farmers. Some of the techniques are complex, and expert advice is needed. However, while it is difficult to establish absolute guidelines, there are general guidelines that can apply to the management of any pest group. Growers must analyze the “pest” status of each of the reputedly injurious organisms and establish economic thresholds for the “real” pests. The economic threshold is defined as the density of a pest population below which the cost of applying control measures exceeds the losses caused by the pest. Economic threshold values are based on assessments of the pest damage potential and the ecological, sociological, and economic costs associated with control measures. A given crop, forest area, backyard, building, or recreational area may be infested with dozens of potentially harmful species at any one time. For each situation, however, there are rarely more than a few pest species whose populations expand to intolerable levels at regular and fairly predictable intervals. These key pests recur regularly at population densities exceeding economic threshold levels and are the focal point for IPM programs. Farmers must also devise schemes for lowering the equilibrium positions of key pests. A key pest will vary in severity from year to year, but its average density, known as the equilibrium position, usually exceeds its economic threshold. IPM efforts manipulate the environment in order to reduce a pest’s equilibrium position to a permanent level below the economic threshold.
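The economic-threshold comparison can be sketched as a simple break-even calculation. This is an illustrative simplification (the function names and numbers are hypothetical, and it assumes crop losses scale linearly with pest density):

```python
def control_is_justified(pest_density, loss_per_pest, control_cost):
    """Apply a control measure only when the losses the pests would
    cause exceed the cost of controlling them."""
    expected_loss = pest_density * loss_per_pest
    return expected_loss > control_cost

def economic_threshold(loss_per_pest, control_cost):
    """The break-even density: below it, treatment costs more than
    the damage it would prevent, so no action is taken."""
    return control_cost / loss_per_pest

# If each pest per plant causes $2 of damage and treatment costs $30
# per plant, the economic threshold is 15 pests per plant: a scouted
# density of 10 does not justify spraying, while a density of 20 does.
```

IPM then seeks to hold a key pest's equilibrium density below this break-even point by nonchemical means, so that treatment is rarely triggered.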
Reducing the equilibrium position can be achieved by the deliberate introduction and establishment of natural enemies (parasites, predators, and diseases) in areas where they did not previously occur. Natural enemies may already occur in the crop in small numbers or can be introduced from elsewhere. Certain microorganisms, when eaten by a pest, will kill it. Newer chemicals show promise as alternatives to synthetic chemical pesticides; these include insect attractant chemicals, weed and insect disease agents, and insect growth regulators or hormones. A pathogen such as Bacillus thuringiensis (Bt) has proven commercially successful. Because certain crops have inbuilt resistance to pests, pest-resistant or pest-free varieties of seed, crop plants, ornamental plants, orchard trees, and forest trees can be used. Growers can also modify the pest environment to increase the effectiveness of the pest’s biological control agents, to destroy its breeding, feeding, or shelter habitat, or otherwise to render it harmless.

Such habitat modification includes crop rotation, destruction of crop harvest residues, soil tillage, and selective burning or mechanical removal of undesirable plant species, as well as pruning, especially for forest pests. While nearly permanent control of key insect and plant disease pests of agricultural crops has been achieved, emergencies will occur, and all IPM advocates acknowledge this. During those times, measures should be applied that cause the least ecological destruction. Growers are urged to utilize the best combination of the three basic IPM components: natural enemies, resistant varieties, and environmental modification. However, there may be times when pesticides are the only recourse. In that case, it is important to coordinate the proper pesticide, dosage, and timing in order to minimize the hazards to nontarget organisms and the surrounding ecosystems. Pest management techniques have been known for many years and were widely used before World War II. They were deemphasized by insect and weed control scientists and by corporate pressures as synthetic chemicals became commercially available after the war. Now there is renewed interest in the early control techniques and in new chemistry. Reports detailing the success of IPM are emerging at a rapid rate as thousands of farmers yearly join the ranks of those who choose to eliminate chemical pesticides. Sustainable agricultural practice increases the richness of the soil by replenishing its reserves of fertility. IPM does not produce secondary problems such as pest resistance or resurgence. It also diminishes soil erosion, increases crop yields, and saves money over the long haul. Organic foods are reported to have better cooking quality, better flavor, and greater longevity in storage. And with less pesticide residue, our food is clearly healthier to eat. See also Sustainable agriculture [Liane Clorfene Casten]

RESOURCES
BOOKS
Baker, R. R., and P. Dunn. New Directions in Biological Control: Alternatives for Suppressing Agricultural Pests and Diseases. New York: Wiley, 1990.
Burn, A. J., et al. Integrated Pest Management. New York: Academic Press, 1988.
DeBach, P., and D. Rosen. Biological Control by Natural Enemies. 2nd ed. Cambridge: Cambridge University Press, 1991.
Pimentel, D. The Pesticide Question: Environment, Economics and Ethics. New York: Chapman & Hall, 1992.

PERIODICALS Bottrell, D. G., and R. F. Smith. “Integrated Pest Management.” Environmental Science & Technology 16 (May 1982): 282A–288A.


Intergenerational justice

One of the key features of an environmental ethic or perspective is its concern for the health and well-being of future generations. Questions about the rights of future people and the responsibilities of those presently living are central to environmental theory and practice and are often asked and analyzed under the term intergenerational justice. Most traditional accounts or theories of justice have focused on relations between contemporaries: What distribution of scarce goods is fairest or optimally just? Should such goods be distributed on the basis of merit or need? These and other questions have been asked by thinkers from Aristotle through John Rawls. Recently, however, some philosophers have begun to ask about just distributions over time and across generations. The subject of intergenerational justice is a key concern for environmentally minded thinkers for at least two reasons. First, human beings now living have the power to permanently alter or destroy the planet (or portions thereof) in ways that will affect the health, happiness, and well-being of people living long after we are all dead. One need only think, for example, of the radioactive wastes generated by nuclear power plants, which will be intensely “hot” and dangerous for many thousands of years. No one yet knows how to safely store such material for a hundred, much less many thousands, of years. Considered from an intergenerational perspective, then, it would be unfair—that is, unjust—for the present generation to enjoy the benefits of nuclear power, passing on to distant posterity the burdens and dangers caused by our (in)action. Second, we not only have the power to affect future generations, but we know that we have it. And with such knowledge comes the moral responsibility to act in ways that will prevent harm to future people.

For example, since we know about the health effects of radiation on human beings, that knowledge imposes upon us a moral obligation not to needlessly expose anyone—now or in the indefinite future—to the harms or hazards of radioactive wastes. Many other examples of intergenerational harm or hazard exist: global warming, topsoil erosion, disappearing tropical rain forests, and depletion and/or pollution of aquifers, among others. But whatever the example, the point of the intergenerational view is the same: the moral duty to treat people justly or fairly applies not only to people now living, but to those who will live long after we are gone. For our actions to produce consequences that may prove harmful to people who have not harmed (and in the nature of the case cannot harm) us is, by any standard, unjust. And yet it seems quite clear that we in the present generation are in many respects acting unjustly toward distant posterity. This is true not only for harms or hazards bequeathed to
future people, but the point applies also to deprivations of various kinds. Consider, for example, the present generation’s profligate use of fossil fuels. Reserves of oil and natural gas are both finite and nonreplaceable; once burned (or turned into plastic or some other petroleum-based material), a gallon of oil is gone forever; every drop or barrel used now is therefore unavailable for future people. As Wendell Berry observed, the claim that fossil fuel energy is cheap rests on a simplistic and morally doubtful assumption about the rights of the present generation: “We were able to consider [fossil fuel energy] “cheap” only by a kind of moral simplicity: the assumption that we had a “right” to as much of it as we could use. This was a “right” made solely by might. Because fossil fuels, however abundant they once were, were nevertheless limited in quantity and not renewable, they obviously did not “belong” to one generation more than another. We ignored the claims of posterity simply because we could, the living being stronger than the unborn, and so worked the “miracle” of industrial progress by the theft of energy from (among others) our children.” And that, Berry adds, “is the real foundation of our progress and our affluence. The reason that we are a rich nation is not that we have earned so much wealth — you cannot, by any honest means, earn or deserve so much. The reason is simply that we have learned, and become willing, to market and use up in our own time the birthright and livelihood of posterity.” These and other considerations have led some environmentally-minded philosophers to argue for limits on presentday consumption, so as to save a fair share of scarce resources for future generations. John Rawls, for instance, constructs a just savings principle according to which members of each generation may consume no more than their fair share of scarce resources. 
The main difficulty in arriving at and applying any such principle lies in determining what counts as a fair share. As the number of generations taken into account increases, the share available to any single generation becomes smaller; and as the number of generations approaches infinity, any one generation’s share approaches zero. Other objections have been raised against the idea of intergenerational justice. These objections can be divided into two groups, which we can call conceptual and technological. One conceptual criticism is that the very idea of intergenerational justice is itself incoherent: the idea of justice is tied to that of reciprocity or exchange; but relations of reciprocity can exist only between contemporaries; therefore the concept of justice is inapplicable to relations between existing people and distant posterity. Future people are in no position to reciprocate; therefore people now living cannot be morally obligated to do anything for them. Another conceptual objection to the idea of intergenerational justice is concerned with rights. Briefly, the objection runs as follows: future people do not (yet) exist; only actually existing people have rights, including the right to be treated justly; therefore future people do not have rights which we in the present have a moral obligation to respect and protect. Critics of this view counter that it not only rests on a too-restrictive conception of rights and justice, but that it also paves the way for grievous intergenerational injustices. Several arguments can be constructed to counter the claim that justice rests on reciprocity (and therefore applies only to relations between contemporaries) and the claim that future people do not have rights, including the right to be treated justly by their predecessors. Regarding reciprocity: since we acknowledge in ethics and recognize in law that it is possible to treat an infant or a severely mentally disabled person justly or unjustly, even though they are in no position to reciprocate, it follows that the idea of justice is not necessarily connected with reciprocity. Regarding the claim that future people cannot be said to have rights that require our recognition and respect: one of the more ingenious arguments against this view consists of modifying John Rawls’s imaginary veil of ignorance. Rawls argues that principles of justice must not be partisan or favor particular people but must be blind and impartial. To ensure impartiality in arriving at principles of justice, Rawls invites us to imagine an original position in which rational people are placed behind a veil of ignorance, unaware of their age, race, sex, social class, economic status, and so on. Unaware of their own particular position in society, rational people would arrive at and agree upon impartial and universal principles of justice.
To ensure that such impartiality extends across generations, one need only thicken the veil by adding the proviso that the choosers be unaware of the generation to which they belong. Rational people would not accept or agree to principles under which predecessors could harm or disadvantage successors. Some critics of intergenerational justice argue in technological terms. They contend that existing people need not restrict their consumption of scarce or nonrenewable resources in order to save some portion for future generations. For, they argue, substitutes for these resources will be discovered or devised through technological innovations and inventions. For example, as fossil fuels become scarcer and more expensive, new fuels—gasohol or fusion-derived nuclear fuel—will replace them. Thus we need never worry about depleting any particular resource, because every resource can be replaced by a substitute that is as cheap, clean, and accessible as the resource it replaces. Likewise, we need not worry about generating nuclear wastes that we do not yet know how to store safely; some solution is bound to be devised sometime in the future.


Environmentally minded critics of this technological line of argument claim that it amounts to little more than wishful thinking. Like Charles Dickens’s fictional character Mr. Micawber, those who place their faith in technological solutions to all environmental problems optimistically expect that “something will turn up.” Just as Mr. Micawber’s faith was misplaced, so too, these critics contend, is the optimism of those who expect technology to solve all problems, present and future. Of course such solutions may be found, but that is a gamble, not a guarantee. To wager with the health and well-being of future people is, environmentalists argue, immoral. There are, of course, many other issues and concerns raised in connection with intergenerational justice. Discussions among and disagreements between philosophers, economists, environmentalists, and others are by no means purely abstract and academic. How these matters are resolved will have a profound effect on the fate of future generations. [Terence Ball]

RESOURCES
BOOKS
Auerbach, B. E. Unto the Thousandth Generation: Conceptualizing Intergenerational Justice. New York: Peter Lang, 1995.
Ball, T. Transforming Political Discourse. Oxford, England: Blackwell, 1988.
Barry, B., and R. I. Sikora, eds. Obligations to Future Generations. Philadelphia: Temple University Press, 1978.
Barry, B. Theories of Justice. Berkeley: University of California Press, 1988.
Berry, W. The Gift of Good Land. San Francisco: North Point Press, 1981.
De-Shalit, A. Why Posterity Matters: Environmental Policies and Future Generations. London and New York: Routledge, 1995.
Fishkin, J., and P. Laslett, eds. Justice Between Age Groups and Generations. New Haven, CT: Yale University Press, 1991.
MacLean, D., and P. G. Brown, eds. Energy and the Future. Totowa, NJ: Rowman & Littlefield, 1983.
Partridge, E., ed. Responsibilities to Future Generations. Buffalo, NY: Prometheus Books, 1981.
Rawls, J. A Theory of Justice. Cambridge, MA: Harvard University Press, 1971.
Wenz, P. S. Environmental Justice. Albany: State University of New York Press, 1988.

Intergovernmental Panel on Climate Change (IPCC)

The Intergovernmental Panel on Climate Change (IPCC) was established in 1988 as a joint project of the United Nations Environment Programme (UNEP) and the World Meteorological Organization (WMO). The primary mission of the IPCC is to bring together the world’s leading experts on the earth’s climate to gather, assess, and disseminate scientific information about climate change, with a view to informing international and national policy makers. The
IPCC has become the highest-profile and best-regarded international agency concerned with the climatic consequences of “greenhouse gases,” such as carbon dioxide and methane, that are a byproduct of the combustion of fossil fuels. The IPCC is an organization that has been and continues to be at the center of a great deal of controversy. The IPCC was established partly in response to Nobel Laureate Mario Molina’s 1985 documentation of the chemical processes that occur when human-made chemicals deplete the earth’s atmospheric ozone shield. Ozone depletion is likely to result in increased levels of ultraviolet radiation reaching the earth’s surface, producing a host of health, agricultural, and environmental problems. Molina’s work helped to persuade most of the industrialized nations to ban chlorofluorocarbons and several other ozone-depleting chemicals. It also established a context in which national and international authorities began to pay serious attention to the global environmental consequences of atmospheric changes resulting from industrialization and reliance on fossil fuels. Continuing to operate under the auspices of the United Nations and headquartered in Geneva, Switzerland, the IPCC is organized into three working groups and a task force, and meets about once a year. The first group gathers scientific data and analyzes the functioning of the climate system, with special attention to the detection of potential changes resulting from human activity. The second group’s assignment is to assess the potential socioeconomic impacts and vulnerabilities associated with climate change; it is also charged with exploring options for humans to adapt to potential climate change. The third group focuses on ways to reduce greenhouse gas emissions and to stop or reduce climate change. The task force is charged with maintaining inventories of greenhouse gas emissions for all countries.
The IPCC has published its major findings in full assessment reports, issued in 1990 and 1995. The Tenth Session of the IPCC (Nairobi, 1994) directed that future full assessments should be prepared approximately every five years; the Third Assessment Report was entitled “Climate Change 2001.” Special reports and technical papers are also published as the panel identifies issues.

The IPCC has drawn a great deal of criticism virtually from its inception. Massive amounts of money are at stake in policy decisions that might seek to limit greenhouse gas emissions, and much of the criticism directed at the IPCC has come from lobbying and research groups funded largely by industries that either produce or use large quantities of fossil fuels. Thus, a lobbying group sponsored by energy, transportation, and manufacturing interests, called the Global Climate Coalition, attacked parts of the 1995 report as unscientific. At the core of the controversy was Chapter Eight of the report, “Detection of Climate Change and Attribution of Causes.” Although the IPCC was careful to hedge its conclusions in various ways, acknowledging difficulties in measurement, disagreements over methodologies for interpreting data, and general uncertainty about its findings, it nevertheless suggested a connection between greenhouse gas emissions and global warming. Not satisfied with such caveats, the Global Climate Coalition charged that the IPCC’s conclusions had been presented as far less debatable than they actually were. This cast a cloud of uncertainty over the report, at least for some United States policymakers.

However, other leaders took the report more seriously. The Second Assessment Report provided important input to the negotiations that led to the development of the Kyoto Protocol in 1997, a treaty aimed at reducing the global output of greenhouse gases. In the summer of 1996, results of new studies of the upper atmosphere were published which provided a great deal of indirect support for the IPCC’s conclusions. Investigators found significant evidence of cooling in the upper atmosphere and warming in the lower atmosphere, an effect especially pronounced in the southern hemisphere. These findings confirmed the predictions of global warming models such as those employed by the IPCC. Perhaps emboldened by this confirmation, but still facing a great deal of political opposition, the IPCC released an unequivocal statement about global warming and its causes in November 1996, declaring that “the balance of evidence suggests that there is a discernible human influence on global climate.” The statement made clear that a preponderance of evidence and a majority of scientific experts indicated that observable climate change was a result of human activity. The IPCC urged that all nations limit their use of fossil fuels and develop more energy-efficient technologies.
These conclusions and recommendations provoked considerable criticism from less-developed countries. Leaders of the less-industrialized areas of the world tend to view potential restrictions on the use of fossil fuels as an unfair hindrance to their efforts to catch up with the United States and Western Europe in industry, transportation, economic infrastructure, and standards of living. The industrialized nations, they point out, were allowed to develop without any such restrictions and now account for the vast majority of the world’s energy consumption and greenhouse gas emissions. These industrialized nations therefore should bear the brunt of any efforts to protect the global climate, substantially exempting the developing world from restrictions on the use of fossil fuels.

The IPCC’s conclusions and recommendations have also drawn strong opposition from industry groups in the United States, such as the American Petroleum Institute, and from conservative Republican politicians. These critics charge that the IPCC’s new evidence is merely fashionable, warmed-over theory, and that no one has yet proven conclusively that climate change is indeed related to human influence. In view of the likely massive economic impact of any aggressive program aimed at the reduction of emissions, they argue, there is no warrant for following the IPCC’s dangerous and ill-considered advice. Under Republican leadership, Congress slashed funds for Environmental Protection Agency and Department of Energy programs concerned with global warming and its causes, as well as funds for researching alternative and cleaner sources of energy. These funding cuts, and the signals they sent, created foreign relations problems for the Clinton Administration. The United States was unable to honor former President Bush’s 1992 pledge (at the Rio de Janeiro Earth Summit) to reduce the country’s emission of carbon dioxide and methane to 1990 levels by the year 2000. Indeed, owing in part to low oil prices and a strong domestic economy, the United States was consuming more energy and emitting more greenhouse gases than ever before by 2000.

In the summer of 2001, the IPCC released its strongest statement to date on the problem of global warming, in its Third Assessment Report. The report, “Climate Change 2001,” provides further evidence for global warming and its cause: the large-scale burning of fossil fuels by humans. The report projects that global mean surface temperatures on earth will increase by 2.5–10.4°F (1.4–5.8°C) by the year 2100, unless greenhouse gas emissions are reduced well below current levels. The report also notes that this trend would represent the fastest warming of the earth in 10,000 years, with possibly dire consequences for human society and the environment. In the early 2000s, the administration of President George W. Bush, a former oilman, was resistant to the ideas of global warming and reducing greenhouse gas emissions.
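A warming projection of this kind is a temperature difference rather than an absolute temperature, so the Fahrenheit and Celsius ranges convert without the 32-degree offset, using ΔC = ΔF × 5/9. A quick check of the Third Assessment Report’s Fahrenheit range, sketched in Python:

```python
# A warming projection is a temperature *difference*, so it converts
# between scales without the 32-degree offset: delta_C = delta_F * 5/9.
def delta_f_to_c(delta_f):
    """Convert a temperature difference from Fahrenheit to Celsius."""
    return delta_f * 5 / 9

# The Third Assessment Report's projected range, 2.5-10.4 degrees F:
low_c = delta_f_to_c(2.5)
high_c = delta_f_to_c(10.4)
print(round(low_c, 1), round(high_c, 1))  # 1.4 5.8
```

Rounded to one decimal place, this recovers the report’s Celsius range of roughly 1.4–5.8°C.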
The administration strongly opposed the Kyoto Protocol and domestic pollution reduction laws, claiming such measures would cost jobs and reduce the standard of living, and that the scientific evidence was inconclusive. In June 2001, a National Academy of Sciences (NAS) panel reported to President Bush that the IPCC’s studies on global warming were scientifically valid. In April 2002, under pressure from the oil industry, the Bush administration forced the removal of IPCC Chairman Robert Watson, an American atmospheric scientist who had been outspoken on the issue of climate change and the need for greenhouse gas reduction in industrialized countries. The IPCC elected Dr. Rajendra K. Pachauri as its next Chairman at its nineteenth session in Geneva. Dr. Pachauri, a citizen of India, is a well-known expert in economics and technology, with a strong commitment to the IPCC process and to scientific integrity. [Lawrence J. Biskowski and Douglas Dupler]

RESOURCES

BOOKS
McKibbin, Warwick J., and Peter Wilcoxen. Climate Change Policy After Kyoto: Blueprint for a Realistic Approach. Washington, DC: The Brookings Institution Press, 2002.

PERIODICALS
McKibben, Bill. “Climate Change 2001: Third Assessment Report.” New York Review of Books, July 5, 2001, 35.
Trenberth, Kevin E. “Stronger Evidence of Human Influences on Climate: The 2001 IPCC Assessment.” Environment, May 2001, 8.

OTHER
Intergovernmental Panel on Climate Change Home Page. [cited July 2002].
Union of Concerned Scientists Global Warming Web Page. [cited July 2002].
World Meteorological Organization Home Page. [cited July 2002].

ORGANIZATIONS
IPCC Secretariat, c/o World Meteorological Organization, 7bis Avenue de la Paix, C.P. 2300, CH-1211 Geneva, Switzerland, 41-22-730-8208, Fax: 41-22-730-8025, Email: [email protected]

Internal costs see Internalizing costs

Internalizing costs

Private market activities can create so-called externalities. A negative externality occurs when a producer does not bear all the costs of an activity in which he or she engages; air pollution is an example. Since external costs do not enter into the calculations producers make, they will make few attempts to limit or eliminate pollution and other forms of environmental degradation. Negative externalities are a type of market defect that economists generally agree it is appropriate to try to correct. Milton Friedman refers to such externalities as “neighborhood effects” (although it must be kept in mind that the effects of some forms of pollution are far from merely local). The classic neighborhood effect is pollution. The premise of a free market is that when two people voluntarily make a deal, they both benefit. If society gives everyone the right to make deals, society as a whole will benefit: it becomes richer from the aggregation of the many mutually beneficial deals that are made. However, what happens if, in making mutually beneficial deals, there is a waste product that the parties release

into the environment and that society must either suffer from or clean up? The two parties to the deal are better off, but society as a whole has to pay the costs. Friedman points out that individual members of a society cannot appropriately charge the responsible parties for external costs or find other means of redress. Friedman’s answer to this dilemma is simple: society, through government, must charge the responsible parties the costs of the cleanup. Whatever damage they generate must be internalized in the price of the transaction.

Polluters can be forced to internalize environmental costs through pollution taxes and discharge fees, a method generally favored by economists. When such taxes are imposed, the market defect (the price of pollution, which is not counted in the transaction) is corrected. The market price then reflects the true social costs of the deal, and the parties have to adjust accordingly. They will have an incentive to decrease harmful activities and to develop less environmentally damaging technology. The drawback of such a system is that society will not have direct control over pollution levels, although it will receive monetary compensation for any losses it sustains. Moreover, to impose a tax or charge on the polluting parties, the government would have to place a monetary value on the damage. In practice, this is difficult to do. How much for a human life lost to pollution? How much for a vista destroyed? How much for a plant or animal species brought to extinction? Finally, the idea that pollution is all right as long as the polluter pays for it is unacceptable to many people.

In fact, the government has tried to control activities with associated externalities through regulation, rather than by supplementing the price system. It has set standards for specific industries and other social entities.
The standards are designed to limit environmental degradation to acceptable levels and are enforced through the Environmental Protection Agency (EPA). They prohibit some harmful activities, limit others, and prescribe alternative behaviors. When market actors do not adhere to these standards, they are subject to penalties. In theory, potential polluters are given incentives to reduce and treat their waste, manufacture less harmful products, develop alternative technologies, and so on. In practice, the system has not worked as well as was hoped in the 1960s and 1970s, when much of the environmental legislation presently in force was enacted. Enforcement has been fraught with political and legal difficulties. Extensions on deadlines are given to cities for not meeting clean air standards and to the automobile industry for not meeting standards on fuel economy of new cars, for instance. It has been difficult to collect fines from industries found to have been in violation. Many cases are tied up in the courts through a lengthy appeals process. Some companies simply declare bankruptcy to evade fines. Others continue
polluting because they find it cheaper to pay fines than to develop alternative production processes.

Alternative strategies presently under debate include setting up a trade in pollution permits. The government would not levy a tax on pollution but would issue a number of permits that together set a maximum acceptable pollution level. Buyers of permits can either use them to cover their own polluting activities or resell them to the highest bidder. Polluters are thus forced to internalize the environmental costs of their activities, so that they have an incentive to reduce pollution, and the price of pollution is determined by the market. The disadvantage of this system is that the government has no control over where pollution takes place. It is conceivable that certain regions would have high concentrations of industries using the permits, which could result in local pollution levels that are unacceptably high. Whether marketable pollution permits address present pollution problems more satisfactorily than does regulation alone remains to be seen. See also Environmental economics [Alfred A. Marcus and Marijke Rijsberman]
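The economic logic of such market-based instruments can be shown with a stylized numerical sketch. The two firms, their emissions, and the dollar figures below are hypothetical illustrations, not data from any actual program; the point is only that when abatement costs differ across polluters, a permit market (or an equivalent tax) meets the same overall limit at lower total cost than a uniform standard:

```python
# Stylized comparison of a uniform emissions standard with a
# market-based instrument (pollution tax or tradable permits).
# All firms and dollar figures are hypothetical.

def total_abatement_cost(cuts, unit_costs):
    """Total cost when each firm cuts the given number of units."""
    return sum(cut * cost for cut, cost in zip(cuts, unit_costs))

# Two firms each emit 100 units; the regulator wants total emissions
# halved (a cut of 100 units overall). Firm A can abate for $10 per
# unit, firm B only for $40 per unit.
unit_costs = [10, 40]

# Uniform standard: each firm must cut 50 units.
uniform_cost = total_abatement_cost([50, 50], unit_costs)  # 2500

# Permit market (or a tax set anywhere between $10 and $40): abatement
# shifts to the low-cost firm. Firm A cuts all 100 units and sells its
# unused permits to firm B; permit payments are transfers between the
# firms, not social costs, so only abatement cost is counted.
market_cost = total_abatement_cost([100, 0], unit_costs)   # 1000

print(uniform_cost, market_cost)  # 2500 1000
```

Both arrangements achieve the same 100-unit reduction; the market-based version simply reallocates who does the cutting, which is why economists generally favor such instruments over uniform standards.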

RESOURCES

BOOKS
Friedman, M. Capitalism and Freedom. Chicago: University of Chicago Press, 1962.
Marcus, A. Business and Society: Ethics, Government, and the World Economy. Homewood, IL: Irwin Press, 1993.

International Atomic Energy Agency

The first decade of research on nuclear weapons and nuclear reactors was characterized by extreme secrecy, and the few nations that had the technology carefully guarded their information. In 1954, however, that philosophy changed, and the United States, in particular, became eager to help other nations use nuclear energy for peaceful purposes. A program called “Atoms for Peace” brought foreign students to the United States for the study of nuclear sciences and provided enriched uranium to countries wanting to build their own reactors, encouraging interest in nuclear energy throughout much of the world. But this program created a problem: it increased the potential for diversion of nuclear information and nuclear materials to the construction of weapons, and the threat of nuclear proliferation grew. The United Nations created the International Atomic Energy Agency (IAEA) in 1957 to address this problem. The agency had two primary objectives: to encourage and assist with the development of peaceful applications of nuclear power throughout the world, and to prevent the diversion of nuclear materials to weapons research and development.


The first decade of IAEA’s existence was not marked by much success. In fact, the United States was so dissatisfied with the agency’s work that it began signing bilateral nonproliferation treaties with a number of countries. Finally, the 1970 Nuclear Non-Proliferation Treaty more clearly designated the IAEA’s responsibilities for the monitoring of nuclear material. Today the agency is an active, autonomous organization within the United Nations system, and its headquarters are in Vienna. The IAEA operates with a staff of more than 800 professional workers, about 1,200 general service workers, and a budget of about $150 million. To accomplish its goal of extending and improving the peaceful use of nuclear energy, IAEA conducts regional and national workshops, seminars, training courses, and committee meetings. It publishes guidebooks and manuals on related topics and maintains the International Nuclear Information System, a bibliographic database on nuclear literature that includes more than 1.2 million records; the database is made available on magnetic tape to its 42 member states. The IAEA also carries out a rigorous program of inspection. In 1987, for example, it made 2,133 inspections at 631 nuclear installations in 52 non-nuclear weapon nations and four nuclear weapon nations. In a typical year, IAEA activities include conducting safety reviews in a number of different countries, assisting in dealing with accidents at nuclear power plants, providing advice to nations interested in building their own nuclear facilities, advising countries on methods for dealing with radioactive wastes, teaching nations how to use radiation to preserve foods, helping universities introduce nuclear science into their curricula, and sponsoring research on the broader applications of nuclear science. [David E. Newton]

RESOURCES

ORGANIZATIONS
International Atomic Energy Agency, P.O. Box 100, Wagramer Strasse 5, A-1400 Vienna, Austria, (431) 2600-0, Fax: (431) 2600-7, Email: [email protected]

International Cleaner Production Cooperative

The International Cleaner Production Cooperative is an Internet resource that was implemented to provide access to globally relevant information about cleaner production and pollution prevention to the international community. The site is hosted by the U.S. Environmental Protection Agency and

gives access to a consortium of World Wide Web sites that provide information to businesses, professionals, and local, regional, national, and international agencies that are striving for cleaner production. The cooperative provides links to people and businesses involved with cleaner production and pollution prevention, and to sources of technical assistance and information on international policy. The United Nations Environment Programme (UNEP) is one of the primary members of the cooperative. [Marie H. Bundy]

International Convention for the Regulation of Whaling (1946)

The International Whaling Commission (IWC) was established in 1949 following the inaugural International Convention for the Regulation of Whaling, which took place in Washington, D.C., in 1946. Many nations have membership in the IWC, whose primary function is to set whaling quotas. The purpose of these quotas is twofold: they are intended to protect whale species from extinction while allowing a limited whaling industry. In recent times, however, the IWC has come under attack. The vast majority of nations in the Commission have come to oppose whaling of any kind and object to the IWC’s practice of establishing quotas. At the same time, some nations, principally Iceland, Japan, and Norway, wish to protect their traditional whaling industries and oppose the quotas set by the IWC. With two such divergent factions opposing the IWC, its future is as doubtful as that of the whales.

Since its inception, the Commission has had difficulty implementing its regulations and gaining approval for its recommendations, while whale populations have continued to dwindle. In its original design, the IWC consisted of two subcommittees, one scientific and the other technical. Any recommendation the scientific committee put forth was subject to review by the politicized technical committee before final approval. The technical committee evaluated each recommendation and changed it if it was not politically or economically viable; essentially, the scientific committee’s recommendations were often rendered powerless. Furthermore, any nation that decided an IWC recommendation was not in its best interest could dismiss it by simply registering an objection. In the 1970s this gridlock and inaction attracted public scrutiny, as people objected to the IWC’s failure to protect the world’s whales. Thus in 1972 the United Nations Conference on the Human Environment voted overwhelmingly for a moratorium on commercial whaling.
Nevertheless, the IWC retained some control over the whaling industry. In 1974 the Commission attempted to bring scientific research to management strategies in its

Environmental Encyclopedia 3 “New Management Procedure.” The IWC assessed whale populations with finer resolution, scrutinizing each species to see if it could be hunted and not die out. It classified whales as either “initial management stocks” (harvestable), “sustained management stocks” (harvestable), or “protection stocks” (unharvestable). While these classifications were necessary for effective management, much was unknown about whale population ecology, and quota estimates contained high levels of uncertainty. Since the 1970s, public pressure has caused many nations in the IWC to oppose whale hunting of any kind. At first, one or two nations proposed a whaling moratorium each year. Both pro- and anti-whaling countries began to encourage new IWC members to vote for their respective positions, thus dividing the Commission. In 1982, the IWC enacted a limited moratorium on commercial whaling, to be in effect from 1986 until 1992. During that time it would thoroughly assess whale stocks and afterward allow whaling to resume for selected species and areas. Norway and Japan, however, attained special permits for whaling for scientific research: they continued to catch approximately 400 whales per year, and the meat was sold to restaurants. Then in 1992—the year when whaling was supposed to have resumed—many nations voted to extend the moratorium. Iceland, Norway, and Japan objected strongly to what they saw as an infringement on their traditional industries and eating customs. Iceland subsequently left the IWC, and Japan and Norway have threatened to follow. These countries intend to resume their whaling programs. Members of the IWC are torn between accommodating these nations in some way and protecting the whales, and amid such controversy it is unlikely that the Commission can continue in its present mission. 
Although the IWC has not been able to marshal its scientific advances or enforce its own regulations in managing whaling, it is broadening its original mission. The Commission may begin to govern the hunting of small cetaceans such as dolphins and porpoises, which are believed to suffer from overhunting. [David A. Duffus and Andrea Gacki]

RESOURCES

BOOKS
Burton, R. The Life and Death of Whales. London: Andre Deutsch Ltd., 1980.
Kellogg, R. The International Whaling Commission. International Technical Conference on Conservation of Living Resources of the Sea. New York: United Nations Publications, 1955.

PERIODICALS
Holt, S. J. “Let’s All Go Whaling.” The Ecologist 15 (1985): 113–124.
Pollack, A. “Commission to Save Whales Endangered, Too.” The New York Times, May 18, 1993, B8.

International Geosphere-Biosphere Programme

International Council for Bird Preservation see BirdLife International

International Geosphere-Biosphere Programme (U.N. Environmental Programme)

Research scientists from all countries have always interacted closely with each other. But in recent decades a new type of internationalism has begun to evolve, in which scientists from all over the world work together on very large projects concerning the planet. An example is research on global change. A number of scientists have come to believe that human activities, such as the use of fossil fuels and the deforestation of tropical rain forests, may be altering the earth’s climate. To test that hypothesis, a huge amount of meteorological data must be collected from around the world, and no single institution can possibly obtain and analyze it all.

A major effort to organize research on important, worldwide scientific questions such as climate change was begun in the early 1980s. Largely through the efforts of scientists from two United States organizations, the National Aeronautics and Space Administration (NASA) and the National Research Council, a proposal was developed for the creation of an International Geosphere-Biosphere Programme (IGBP). The purpose of the IGBP was to help scientists from around the world focus on major issues about which there was still too little information. Funding for its activities comes from national governments, scientific societies, and private organizations. IGBP was not designed to be a new organization, with new staff, new researchers, and new funding problems. Instead, it was conceived as a coordinating program that would call on existing organizations to attack certain problems. The proposal was submitted in September 1986 to the General Assembly of the International Council of Scientific Unions (ICSU), where it received enthusiastic support. Within two years, more than 20 nations had agreed to cooperate with IGBP, forming national committees to work with the international office.
A small office, administered by Harvard oceanographer James McCarthy, was installed at the Royal Swedish Academy of Sciences in Stockholm. IGBP has moved forward rapidly, identifying existing programs that fit the Programme’s goals and developing new research efforts. Because many global processes are gradual, a number of IGBP projects are designed with time frames of ten to twenty years.


By the early 1990s, IGBP had defined a number of projects, including the Joint Global Ocean Flux Study, the Land-Ocean Interactions in the Coastal Zone study, the Biospheric Aspects of the Hydrological Cycle research, Past Global Changes, Global Analysis, Interpretation and Modeling, and Global Change System for Analysis, Research and Training. [David E. Newton]

RESOURCES

BOOKS
Kupchella, C. E. Environmental Science: Living within the System of Nature. Boston: Allyn and Bacon, Inc., 1986.

PERIODICALS
Edelson, E. “Laying the Foundation.” Mosaic (Fall/Winter 1988): 4–11.
Perry, J. S. “International Institutions for the Global Environment.” MTS Journal (Fall 1991): 27–8.

OTHER
International Geosphere-Biosphere Programme. [cited June 2002].

International Institute for Sustainable Development

The International Institute for Sustainable Development (IISD) is a nonprofit organization that serves as an information and resources clearinghouse for policy makers promoting sustainable development. IISD aims to promote sustainable development in decision making worldwide by assisting with policy analysis, providing information about practices, measuring sustainability, and building partnerships to further sustainability goals. It serves businesses, governments, communities, and individuals in both developing and industrialized nations. IISD’s stated aim is to “create networks designed to move sustainable development from concept to practice.” Founded in 1990 and based in Winnipeg, Canada, IISD is funded by foundations, governmental organizations, private sector sources, and revenue from publications and products.

IISD works in seven program areas. The Business Strategies program focuses on improving competitiveness, creating jobs, and protecting the environment through sustainability; projects include several publications and the EarthEnterprise program, which offers entrepreneurial and employment strategies. The Trade and Sustainable Development program works on building positive relationships between trade, the environment, and development; it examines how to make international accords, such as those made by the World Trade Organization, compatible with the goals of sustainable development. The Community Adaptation and Sustainable Livelihoods program identifies adaptive strategies for drylands in Africa and India, and examines the influences of policies and new technology on local ways of life. The Great Plains program works with community, farm, government, and industry groups to assist communities in the Great Plains region of North America with sustainable development; it focuses on government policies in agriculture, such as the Western Grain Transportation Act, as well as loss of transportation subsidies, the North American Free Trade Agreement (NAFTA), soil salination and loss of wetlands, job loss, and technological advances. The Measurement and Indicators program aims to set measurable goals and progress indicators for sustainable development; as part of this, IISD offers information about successful uses of taxes and subsidies to encourage sustainability worldwide. The Common Security program focuses on initiatives of peace and consensus-building. The Information and Communications program offers several publications and Internet sites featuring information on sustainable development issues, terms, events, and media coverage. These include Earth Negotiations Bulletin, which provides on-line coverage of major environmental and development negotiations (especially United Nations conferences), and IISDnet, with information about sustainable development worldwide. IISD also publishes more than 50 books, monographs, and discussion papers, including Sourcebook on Sustainable Development, which lists organizations, databases, conferences, and other resources. IISD produces five journals, including Developing Ideas, published bimonthly both in print and electronically and featuring articles on sustainable development terms, issues, resources, and recent media coverage. Earth Negotiations Bulletin reports on conferences and negotiations meetings, especially United Nations conferences. IISD’s Internet journal, /linkages/journal/, is a bimonthly electronic multi-media subscription magazine focusing on global negotiations.
Its reporting service, Sustainable Developments, reports on environmental and development negotiations for meetings and symposia via the Internet. IISD also operates IISDnet (http://iisd1.iisd.ca/), an Internet information site featuring research, new trends, global activities, contacts, and information on IISD’s activities and projects, including United Nations negotiations on environment and development, corporate environmental reporting, and trade issues. IISD is the umbrella organization for Earth Council, an international nongovernmental organization (NGO) created in 1992 as a result of the United Nations Earth Summit. Earth Council creates measurements for achieving sustainable development and assesses practices and economic measures for their effects on sustainable development. It coordinated the Rio+5 Forum in Rio de Janeiro in March 1997, which assessed progress towards
sustainable development since the Earth Summit in 1992. IISD has produced two publications, Trade and Sustainable Development and Guidelines for the Practical Assessment of Progress Toward Sustainable Development. [Carol Steinfeld]

RESOURCES

ORGANIZATIONS
International Institute for Sustainable Development, 161 Portage Avenue East, 6th Floor, Winnipeg, Manitoba, Canada R3B 0Y4, (204) 958-7700, Fax: (204) 958-7710, Email: [email protected]

International Joint Commission

The International Joint Commission (IJC) is a permanent, independent organization of the United States and Canada formed to resolve transboundary ecological concerns. Founded in 1912 under provisions of the Boundary Waters Treaty of 1909, the IJC was patterned after an earlier organization, the Joint Commission, formed by the United States and Britain. The IJC consists of six commissioners, three appointed by the President of the United States and three by the Governor-in-Council of Canada, plus support personnel. The commissioners and their organizations generally operate free from direct influence or instruction from their national governments. The IJC is frequently cited as an excellent model for international dispute resolution because of its history of successfully and objectively dealing with natural resources and environmental disputes between friendly countries.

The major activities of the IJC have dealt with apportioning, developing, conserving, and protecting the binational water resources of the United States and Canada. Other issues, including transboundary air pollution, have also been addressed by the Commission. The power of the IJC comes from its authority to initiate scientific and socio-economic investigations, conduct quasi-judicial inquiries, and arbitrate disputes. Of special concern to the IJC have been issues related to the Great Lakes. Since the early 1970s, IJC activities have been substantially guided by provisions of the 1972 and 1978 Great Lakes Water Quality Agreements, plus updated protocols. It is widely acknowledged, and well documented, that environmental quality and ecosystem health have been substantially degraded in the Great Lakes. In 1985, the Water Quality Board of the IJC recommended that states and provinces with Great Lakes boundaries make a collective commitment to address this communal problem, especially with respect to pollution.
These governments agreed to develop and implement remedial action plans (RAPs) towards the restoration of environmental health within their political jurisdictions. Forty-three areas of concern have been identified on the basis of environmental pollution, and each of these will be the focus of a remedial action plan. An important aspect of the design and intent of the overall program, and of the individual RAPs, will be developing a process of integrated ecosystem management. Ecosystem management involves systematic, comprehensive approaches toward the restoration and protection of environmental quality. The ecosystem approach involves consideration of interrelationships among land, air, and water, as well as those between the inorganic environment and the biota, including humans. The ecosystem approach would replace the separate, more linear approaches that have traditionally been used to manage environmental problems. These conventional attempts have included directed programs to deal with particular resources such as fisheries, migratory birds, land use, or point sources and area sources of toxic emissions. Although these non-integrated methods have been useful, they have been limited because they have failed to account for important interrelationships among environmental management programs, and among components of the ecosystem. [Bill Freedman Ph.D.]

RESOURCES ORGANIZATIONS International Joint Commission, 1250 23rd Street, NW, Suite 100, Washington, D.C. USA 20440 (202) 736-9000, Fax: (202) 735-9015

International Primate Protection League Founded in 1974 by Shirley McGreal, the International Primate Protection League (IPPL) is a global conservation organization that works to protect nonhuman primates, especially monkeys and apes (chimpanzees, orangutans, gibbons, and gorillas). IPPL has 30,000 members; branches in the United Kingdom, Germany, and Australia; and field representatives in 31 countries. Its advisory board consists of scientists, conservationists, and experts on primates, including the world-renowned primatologist Jane Goodall, whose famous studies and books are considered the authoritative texts on chimpanzees. Her studies have also heightened public interest in and sympathy for chimpanzees and other nonhuman primates.


IPPL runs a sanctuary and rehabilitation center at its Summerville, South Carolina, headquarters, which houses two dozen gibbons and other abandoned, injured, or traumatized primates who are refugees from medical laboratories or abusive pet owners. IPPL concentrates on investigating and fighting the multi-million-dollar commercial trafficking in primates for medical laboratories, the pet trade, and zoos, much of which is illegal trade and smuggling of endangered species protected by international law. IPPL is considered the most active and effective group working to stem the cruel and often lethal trade in primates. IPPL's work has helped to save the lives of literally tens of thousands of monkeys and apes, many of which belong to threatened or endangered species. For example, the group was instrumental in persuading the governments of India and Thailand to ban or restrict the export of monkeys, which were being shipped by the thousands to research laboratories and pet stores across the world. The trade in primates is especially cruel and wasteful, since a common way of capturing them is by shooting the mother, which then enables poachers to capture the infant. Many captured monkeys and apes die en route to their destinations, often transported in sacks or crates, or hidden in other containers. IPPL often undertakes actions and projects that are dangerous and require a good deal of skill. In 1992, its investigations led to the conviction of a Miami, Florida, animal dealer for conspiring to help smuggle six baby orangutans captured in the jungles of Borneo. The endangered orangutan is protected by the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), as well as by the United States Endangered Species Act. In retaliation, the dealer unsuccessfully sued McGreal, as did a multinational corporation she once criticized for its plan to capture chimpanzees and use them for hepatitis research in Sierra Leone.
A more recent victory for IPPL occurred in April 2002. In 1997, Chicago's O'Hare airport received two shipments from Indonesia, each of which contained more than 250 illegally imported monkeys, including dozens of unweaned baby monkeys. After several years of pursuing the issue, the U.S. Fish and Wildlife Service and U.S. federal prosecutors charged the LABS Company (a breeder of monkeys for research based in the United States) and several of its employees, including its former president, with eight felonies and four misdemeanors. IPPL publishes IPPL News several times a year and sends out periodic letters alerting members to events and issues that affect primates. [Lewis G. Regenstein]

Environmental Encyclopedia 3

RESOURCES ORGANIZATIONS International Primate Protection League, P.O. Box 766, Summerville, SC USA 29484 (843) 871-2280, Fax: (843) 871-7988, Email: [email protected]

International Register of Potentially Toxic Chemicals (U.N. Environment Programme) The International Register of Potentially Toxic Chemicals is published by the United Nations Environment Programme (UNEP). Part of UNEP's three-pronged Earthwatch program, the register is an international inventory of chemicals that threaten the environment. Along with the Global Environment Monitoring System and INFOTERRA, the register monitors and measures environmental problems worldwide. Information from the register is routinely shared with agencies in developing countries. Third World countries have long been the toxic dumping grounds for the world, and they still use many chemicals that have been banned elsewhere. Environmental groups regularly send information from the register to toxic chemical users in developing countries as part of their effort to stop the export of toxic pollution. RESOURCES ORGANIZATIONS International Register of Potentially Toxic Chemicals, Chemin des Anémones 15, Genève, Switzerland CH-1219 +41-22-979 91 11, Fax: +41-22-979 91 70, Email: [email protected]

International Society for Environmental Ethics The International Society for Environmental Ethics (ISEE) is an organization that seeks to educate people about environmental ethics and the philosophy of nature. An environmental ethic holds that humans have a moral duty to sustain the natural environment; it attempts to answer how humans should treat other species (plant and animal), use Earth's natural resources, and place value on the aesthetic experiences of nature. The society is an auxiliary organization of the American Philosophical Association, with about 700 members in over 20 countries. Many of ISEE's current members are philosophers, teachers, or environmentalists. The ISEE officers include president Mark Sagoff (Institute for Philosophy and Public Policy, University of Maryland) and vice president John Baird Callicott (professor of philosophy at the University of North Texas). Two other key members are the editors of the ISEE newsletter, Jack Weir and Holmes Rolston, III (professor of philosophy, Colorado State University). All have contributed to the ongoing ISEE Master Environmental Ethics Bibliography. ISEE publishes a quarterly newsletter available to members in print form and maintains an Internet site of back issues. Of special note is the ISEE Bibliography, an ongoing project that contains over 5,000 records from journals such as Environmental Ethics, Environmental Values, and the Journal of Agricultural and Environmental Ethics. Another work in progress, the ISEE Syllabus Project, continues to be developed by Callicott and Robert Hood, a doctoral candidate at Bowling Green State University. They maintain a database of course offerings in environmental philosophy and ethics, based on information from two-year community colleges, four-year state universities, private institutions, and master's- and doctorate-granting universities. ISEE also supports the enviroethics program, which has spurred many Internet discussion groups and is constantly expanding into new areas of communication. [Nicole Beatty]

RESOURCES ORGANIZATIONS International Society for Environmental Ethics, Environmental Philosophy Inc., Department of Philosophy, University of North Texas, P.O. Box 310980, Denton, TX USA 76203-0980

International trade in toxic waste Just as VCRs, cars, and laundry soap are traded across borders, so too is the waste that accompanies their production. In the United States alone, industrial production accounts for at least 500 million lb (230 million kg) of hazardous waste a year. The industries of other developed nations also produce waste. While some of it is disposed of within national borders, a portion is sent to other countries where costs are cheaper and regulations less stringent than in the waste's country of origin. Unlike consumer products, internationally traded hazardous waste has begun to meet local opposition. In some recent high-profile cases, barges filled with waste have traveled the world looking for final resting places. In at least one case, a ship may have dumped about ten tons of toxic municipal incinerator ash in the ocean after being turned away from dozens of ports. In recent years national and international bodies have begun to voice official opposition to this dangerous trade through bans and regulations. The international trade in toxic wastes is, at bottom, waste disposal with a foreign-relations twist. Typically a manufacturing facility generates waste during the production process, and the facility manager pays a waste-hauling firm to dispose of it. If the landfills in the country of origin cost too much, or if no landfills will take the waste, the disposal firm will find a cheaper option, perhaps a landfill in another country. In the United States, the shipper must then notify the Environmental Protection Agency (EPA), which in turn notifies the State Department. After ascertaining that the destination country will indeed accept the waste, American regulators approve the sale. Disposing of the waste in an overseas landfill is only the most obvious example of this international trade. Waste haulers also sell their cargo as raw material for recycling. For example, used lead-acid batteries discarded by American consumers are sent to Brazil, where factory workers extract and resmelt the lead. Though the lead and acid alone would be classified as hazardous, whole batteries are not. Waste haulers can ship these batteries to Mexico, Japan, and Canada, among other countries, without notification. In other cases, waste haulers sell products, like DDT, that have been banned in one country to buyers in another country that has no ban. Whatever the strategy for disposal, waste haulers are most commonly small, independent operators who provide a service to waste producers in industrialized countries. These haulers bring waste to other countries to take advantage of cheaper disposal options and less stringent regulatory climates. Some countries forbid the disposal of certain kinds of waste; countries without such prohibitions will import more waste. Cheap landfills depend on cheap labor and land, and countries with an abundance of both can become attractive destinations. Entrepreneurs or government officials in countries like Haiti, or in regions within countries such as Wales, that lack a strong manufacturing base view waste disposal as a viable, inexpensive business.
Inhabitants may view it as the best way to make money and create jobs. Simply by storing hazardous waste, the country of Guinea-Bissau could have made $120 million, more money than its annual budget. Though the less developed countries (LDCs) predictably receive large amounts of toxic waste, the bulk of the international trade occurs between industrialized nations. Canada and the United Kingdom in particular import large volumes of toxic waste. Canada imports almost 85% of the waste sent abroad by American firms, approximately 150,000 lb (70,000 kg) per year. The bulk of this waste ends up at an incinerator in Ontario or a landfill in Quebec. Because Canada's disposal regulations are less strict than United States laws, the operators of the landfill and incinerator can charge lower fees than similar disposal sites in the United States. A waste hauler's life becomes complicated when the receiving country's government or local activists discover that the waste may endanger health and the environment. Local regulators may step in and forbid the sale. This happened many times in the case of the Khian Sea, a ship that had contracted to dispose of Philadelphia's incinerator ash. The ship was turned away from Haiti, from Guinea-Bissau, from Panama, and from Sri Lanka. For two years, beginning in 1986, the ship carried the toxic ash from port to port looking for a home for its cargo before finally, mysteriously, losing the ash somewhere in the Indian Ocean. This early resistance to toxic-waste dumping has since led to the negotiation of international treaties forbidding or regulating the trade in toxic waste. In 1989, the African, Caribbean, and Pacific (ACP) countries and the countries belonging to the European Economic Community (EEC) negotiated the Lomé IV Convention, which bans shipments of nuclear and hazardous waste from the EEC to the ACP countries. ACP countries further agreed not to import such waste from non-EEC countries. Environmentalists have encouraged the EEC to broaden its commitment to limiting the waste trade. In the same year, under the auspices of the United Nations Environment Programme (UNEP), the Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and Their Disposal was negotiated. It requires shippers to obtain permission from the destination country's government before sending waste to foreign landfills or incinerators. Critics contend that Basel merely formalizes the trade. In 1991, the nations of the Organization of African Unity negotiated another treaty restricting the international waste trade. The Bamako Convention on the Ban of the Import into Africa and the Control of Transboundary Movement and Management of Hazardous Wastes within Africa criminalized the import of all hazardous waste. Bamako further forbade waste traders from importing into Africa materials that had been banned in their country of origin.
Bamako also radically redefined the assessment of what constitutes a health hazard: under the treaty, all chemicals are considered hazardous until proven otherwise. These international strategies find their echoes in national law. Less developed countries have tended to follow the Lomé and Bamako examples. At least eighty-three African, Latin American-Caribbean, and Asian-Pacific countries have banned hazardous waste imports. And the United States, in a policy similar to the Basel Convention, requires hazardous waste shipments to be authorized by the importing country's government. The efforts to restrict the toxic waste trade reflect, in part, a desire to curb environmental inequity. When waste flows from a richer country to a poorer country or region, the inhabitants living near the incinerator, landfill, or recycling facility are exposed to the dangers of toxic compounds. For example, tests of workers in the Brazilian lead resmelting operation found blood-lead levels several times the United States standard. Lead was also found in the water supply of a nearby farm after five cows died. The loose regulations that keep prices low and attract waste haulers mean that there are fewer safeguards for local health and the environment. For example, leachate from unlined landfills can contaminate local groundwater. Jobs in the disposal industry tend to be lower paying than jobs in manufacturing, so the inhabitants of the receiving country receive the wastes of industrialization without the benefits. Stopping the waste trade is a way to force manufacturers to change production processes; as long as cheap disposal options exist, there is little incentive to change. A waste-trade ban makes hazardous waste expensive to discard and will force businesses to search for ways to reduce this cost. Companies that want to reduce their hazardous waste may opt for source reduction, which limits the hazardous components in the production process. This can both reduce production costs and increase output. A Monsanto facility in Ohio saved more than $3 million a year while eliminating more than 17 million lb (8 million kg) of waste; according to officials at the plant, average yield increased by 8%. Measures forced by a lack of disposal options can therefore benefit the corporate bottom line while reducing risks to health and the environment. See also Environmental law; Environmental policy; Groundwater pollution; Hazardous waste siting; Incineration; Industrial waste treatment; Leaching; Ocean dumping; Radioactive waste; Radioactive waste management; Smelter; Solid waste; Solid waste incineration; Solid waste recycling and recovery; Solid waste volume reduction; Storage and transport of hazardous materials; Toxic substance; Waste management; Waste reduction [Alair MacLean]

RESOURCES BOOKS Dorfman, M., W. Muir, and C. Miller. Environmental Dividends: Cutting More Chemical Waste. New York: INFORM, 1992. Moyers, B. D. Global Dumping Ground: The International Traffic in Hazardous Waste. Cabin John, MD: Seven Locks Press, 1990. Vallette, J., and H. Spalding. The International Trade in Wastes: A Greenpeace Inventory. Washington, DC: Greenpeace, 1990.

PERIODICALS Chepesiuk, R. “From Ash to Cash: The International Trade in Toxic Waste.” E Magazine 2 (July-August 1991): 30–37.


International Union for the Conservation of Nature and Natural Resources see IUCN—The World Conservation Union

International Voluntary Standards International Voluntary Standards are industry guidelines or agreements that provide technical specifications so that products, processes, and services can be used worldwide. The need for a set of international standards to be followed and used consistently for environmental management systems was recognized in response to an increased desire by the global community to improve environmental management practices. In the early 1990s, the International Organization for Standardization (ISO), located in Geneva, Switzerland, began developing a strategic plan to promote a common international approach to environmental management. ISO 14000 is the title of a series of voluntary international environmental standards under development by ISO and its 142 member nations, including the United States. Some of the standards developed by ISO include standardized sampling, testing, and analytical methods for use in monitoring environmental variables such as the quality of air, water, and soil. [Marie H. Bundy]

International Whaling Commission see International Convention for the Regulation of Whaling (1946)

International Wildlife Coalition The International Wildlife Coalition (IWC) was established in 1984 by a small group of individuals who came from a variety of environmental and animal rights organizations. Like many NGOs (nongovernmental organizations) that arose in the 1970s and 1980s, their initial work involved the protection of whales. The IWC raised money for whale conservation programs focused on endangered Atlantic humpback whale populations. This was one of the first species for which researchers identified individual animals through tail photographs, and using this technique the IWC developed what is now a common tool: a whale adoption program based on individual animals with human names. From that basis, the fledgling group established itself in an advocacy role with three principles in its mandate: to prevent cruelty to wildlife, to prevent killing of wildlife, and to prevent destruction of wildlife habitat. In light of those principles, the IWC can be characterized as an extended animal rights organization. They maintain the "prevention of cruelty" aspect common to humane societies, perhaps the oldest progenitor of animal rights groups. In standing by an ethic of preventing killing, they stand with animal rights groups, but by protecting habitat they take a more significant step, acting in a broad way to achieve their first two principles. The program thus works at both ends of the spectrum, undertaking wildlife rehabilitation and other programs dealing with individual animals, as well as lobbying and promoting letter-writing campaigns to improve wildlife legislation. For example, they have used their Brazilian office to create pressure to combat the international trade in exotic pets, and their Canadian office to oppose the harp seal hunt and the deterioration of Canada's impending endangered species legislation. Their United States-based operation has built a reputation in the research field, working with government agencies to ensure that whale-watching on the eastern seaboard does not harm the whales. Offices in the United Kingdom are a focus for the IWC's concern over European Community policies, such as lifting the ban on importing fur from animals killed in leghold traps. It has become evident that the diversity within the varied groups that constitute the environmental community is a positive force; however, most conservation NGOs do not cross the gulf between animal rights and habitat conservation. A clear distinction exists between single-animal approaches and broader conservation ideals, as they appeal to different protection strategies and potentially different donors. Although the emotional appeal of releasing porpoises alive from fishing nets outranks backroom lobbying for changes in fishing regulations, the lobbying effort protects more porpoises.
The IWC may be deemed more successful for exploiting a range of targets, or less successful than a dedicated advocacy group applying all its focus to one issue. Either way, it can point to growth from a modest 3,000 supporters at its founding to over 100,000 people supporting the International Wildlife Coalition today. [David Duffus]

RESOURCES ORGANIZATIONS International Wildlife Coalition, 70 East Falmouth Highway, East Falmouth, MA USA 02536 (508) 548-8328, Fax: (508) 548-8542, Email: [email protected]

Intrinsic value Saying that an object has intrinsic value means that, even though it has no specific use, market, or monetary value, it nevertheless can be valuable in and of itself and for its own sake. The Northern spotted owl (Strix occidentalis caurina), for example, has no instrumental or market value; it is not a means to any human end, nor is it sold or traded in any market. But, environmentalists argue, utility and price are not the only measures of worth. Indeed, they say, some of the things humans value most—truth, love, respect—are not for sale at any price, and to try to put a price on them would only tend to cheapen them. Such things have "intrinsic value." Similarly, environmentalists say, the natural environment and its myriad life-forms are valuable in their own right. Wilderness, for instance, has intrinsic value and is worthy of protection for its own sake. To say that something has intrinsic value is not necessarily to deny that it may also have instrumental value for humans and non-human animals alike. Deer, for example, have intrinsic value; but they also have instrumental value as a food source for wolves and other predator species. See also Shadow pricing

Introduced species Introduced species (also called invasive species) are those that have been released by humans into an area to which they are not native. These releases can occur accidentally, from places such as the cargo holds of ships. They can also occur intentionally, and species have been introduced for a range of ornamental and recreational uses, as well as for agricultural, medicinal, and pest control purposes. Introduced species can have dramatically unpredictable effects on the environment and native species. Such effects can include overabundance of the introduced species, competitive displacement, and disease-caused mortality of the native species. Numerous examples of adverse consequences associated with the accidental release of species, or the long-term effects of deliberately introduced species, exist in the United States and around the world. Introduced species can be beneficial as long as they are carefully regulated. Almost all the major varieties of grain and vegetables used in the United States originated in other parts of the world, including corn, rice, wheat, tomatoes, and potatoes. The kudzu vine, which is native to Japan, was deliberately introduced into the southern United States for erosion control and to shade and feed livestock. It is, however, an extremely aggressive and fast-growing species, and it can form continuous blankets of foliage that cover forested hillsides, resulting in malformed and dead trees. Other species introduced as ornamentals have spread into the wild, displacing or outcompeting native species. Several varieties of cultivated roses, such as the multiflora rose, are serious pests and nuisance shrubs in fields and pastures. The purple loosestrife, with its beautiful purple flowers, was originally brought from Europe as a garden ornamental. It has spread rapidly in freshwater wetlands in the northern United States, displacing other plants such as cattails. This is viewed with concern by ecologists and wildlife biologists, since the food value of loosestrife is minimal, while the roots and starchy tubers of cattails are an important food source for muskrats. Common ragweed was accidentally introduced to North America, and it is now a major health irritant for many people. Introduced species are sometimes so successful because human activity has changed the conditions of a particular environment. The Pine Barrens of southern New Jersey form an ecosystem that is naturally acidic and low in nutrients. Bogs in this area support a number of slow-growing plant species that are adapted to these conditions, including peat moss, sundews, and pitcher plants. But urban runoff, which contains fertilizers, and wastewater effluent, which is high in both nitrogen and phosphorus, have enriched the bogs; the waters there have become less acidic and shown a gradual elevation in the concentration of nutrients. These changes in aquatic chemistry have resulted in changes in plant species, and the acidophilous mosses and herbs are being replaced by fast-growing plants that are not native to the Pine Barrens. Zebra mussels were transported by accident from Europe to the United States, and they are causing severe problems in the Great Lakes. They proliferate at a prodigious rate, crowding out native species and clogging industrial and municipal water-intake pipes. Many ecologists fear that shipping traffic will transport the zebra mussel to harbors all over the country; scattered observations of this tiny mollusk have already been made in the lower Hudson River in New York. Although introduced species are usually regarded with concern, they can occasionally be used to some benefit.
The water hyacinth is an aquatic plant of tropical origin that has become a serious clogging nuisance in lakes, streams, and waterways in the southern United States. Numerous methods of physical and chemical removal have been attempted to eradicate or control it, but research has also established that the plant can improve water quality. The water hyacinth has proved useful in the withdrawal of nutrients from sewage and other wastewater, and many constructed wetlands, polishing ponds, and waste lagoons in waste treatment plants now take advantage of this fact by routing wastewater through floating beds of water hyacinth. The reintroduction of native species is extremely difficult, and it is an endeavor that has had low rates of success. Efforts by the Fish and Wildlife Service to reintroduce the endangered whooping crane into native habitat in the southwestern United States were initially unsuccessful because of the fragility of the eggs, as well as the poor parenting skills of birds raised in captivity. The service then devised a strategy of allowing the more common sandhill crane to incubate the eggs of captive whooping cranes in wilderness nests, and the fledglings were then taught survival skills by their surrogate parents. Such projects, however, are extremely time and labor intensive; they are also costly and difficult to implement for large numbers of most species. Because of the difficulty and expense of protecting native species and eradicating introduced species, there are now many international laws and policies that seek to prevent these problems before they begin. Thus, customs agents at ports and airports routinely check luggage and cargo for live plant and animal materials to prevent the accidental or deliberate transport of non-native species. Quarantine policies are also designed to reduce the probability of spreading introduced species, particularly diseases, from one country to another. There are similar concerns about genetically engineered organisms, and many have argued that their creation and release could have the same devastating environmental consequences as some introduced species. For this reason, the use of bioengineered organisms is highly regulated; both the Food and Drug Administration and the Environmental Protection Agency (EPA) impose strict controls on the field testing of bioengineered products, as well as on their cultivation and use. Conservation policies for the protection of native species are now focused on habitats and ecosystems rather than single species. It is easier to prevent the encroachment of introduced species by protecting an entire ecosystem from disturbance, and this is increasingly well recognized both inside and outside the conservation community.
See also Bioremediation; Endangered species; Fire ants; Gypsy moth; Rabbits in Australia; Wildlife management [Usha Vedagiri and Douglas Smith]

RESOURCES BOOKS United States Department of Agriculture. Common Weeds of the United States. New York: Dover Publications, 1971. Forman, R. T. T., ed. Pine Barrens: Ecology and Landscape. New York: Academic Press, 1979.

Inversion see Atmospheric inversion


Iodine 131 A radioactive isotope of the element iodine. During the 1950s and early 1960s, iodine-131 was considered a major health hazard to humans. Along with cesium-137 and strontium-90, it was one of the three most abundant isotopes found in the fallout from the atmospheric testing of nuclear weapons. These three isotopes settled to the earth’s surface and were ingested by cows, ultimately affecting humans by way of dairy products. In the human body, iodine-131, like all forms of that element, tends to concentrate in the thyroid, where it may cause cancer and other health disorders. The Chernobyl nuclear reactor explosion is known to have released large quantities of iodine-131 into the atmosphere. See also Radioactivity

Ion Forms of ordinary chemical elements that have gained or lost electrons from their orbit around the atomic nucleus and thus have become electrically charged. Positive ions (those that have lost electrons) are called cations because, when charged electrodes are placed in a solution containing ions, the positive ions migrate to the cathode (negative electrode). Negative ions (those that have gained extra electrons) are called anions because they migrate toward the anode (positive electrode). Environmentally important cations include the hydrogen ion (H+) and dissolved metals. Important anions include the hydroxyl ion (OH-) as well as many of the dissolved ions of nonmetallic elements. See also Ion exchange; Ionizing radiation

Ion exchange The process of replacing one ion that is attached to a charged surface with another. A very important type of ion exchange is the exchange of cations bound to soil particles. Soil clay minerals and organic matter both have negative surface charges that bind cations. In a fertile soil the predominant exchangeable cations are Ca2+, Mg2+ and K+. In acid soils Al3+ and H+ are also important exchangeable ions. When materials containing cations are added to soil, cations leaching through the soil are retarded by cation exchange.
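The bookkeeping behind this entry is often expressed as base saturation: the fraction of a soil's exchange charge held by the base cations (Ca2+, Mg2+, K+) rather than the acid cations (Al3+, H+). The sketch below illustrates that arithmetic; all of the numbers are invented for illustration, not measured values.

```python
# Hypothetical illustration of cation exchange arithmetic. Exchangeable
# cation amounts are given in cmol of charge per kg of soil, the usual
# unit for cation exchange capacity; every value below is invented.

def base_saturation(exchangeable):
    """Return the fraction of total exchange charge held by base cations."""
    bases = ("Ca2+", "Mg2+", "K+", "Na+")
    total = sum(exchangeable.values())
    base_charge = sum(v for k, v in exchangeable.items() if k in bases)
    return base_charge / total

# A fertile soil: exchange sites dominated by Ca2+, Mg2+, and K+.
fertile = {"Ca2+": 12.0, "Mg2+": 3.0, "K+": 1.0, "Al3+": 0.5, "H+": 0.5}
# An acid soil: Al3+ and H+ occupy much of the exchange complex.
acid = {"Ca2+": 2.0, "Mg2+": 0.5, "K+": 0.3, "Al3+": 5.0, "H+": 2.2}

print(round(base_saturation(fertile), 2))  # high base saturation
print(round(base_saturation(acid), 2))     # low base saturation
```

As the entry notes, the fertile soil's exchange complex is dominated by Ca2+, Mg2+, and K+, which is exactly what a high base saturation expresses.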

Ionizing radiation High-energy radiation, such as x rays and gamma rays, with enough penetrating power to induce ionization in living material. Molecules are bound together with covalent bonds, and generally an even number of electrons binds the atoms together. However, high-energy penetrating radiation can fragment molecules, resulting in atoms with unpaired electrons known as “free radicals.” The ionized “free radicals” are exceptionally reactive, and their interaction with the macromolecules (DNA, RNA, and proteins) of living cells can, at high dosage, lead to cell death. Cell damage (or death) is a function of penetration ability, the kind of cell exposed, the length of exposure, and the total dose of ionizing radiation. Cells that are mitotically active and have a high oxygen content are most vulnerable to ionizing radiation. See also Radiation exposure; Radiation sickness; Radioactivity

Iron minerals The oxides and hydroxides of ferric iron (Fe(III)) are very important minerals in many soils and are important suspended solids in some freshwater systems. Important oxides and hydroxides of iron include goethite, hematite, lepidocrocite, and ferrihydrite. These minerals tend to be very finely divided and are found in the clay-sized fraction of soils; like other clay-sized minerals, they are important adsorbers of ions. At high pH they adsorb hydroxide (OH-) ions, creating negatively charged surfaces that act as cation exchange surfaces. At low pH they adsorb hydrogen (H+) ions, creating anion exchange surfaces. In the pH range between 8 and 9 the surfaces have little or no charge. Iron hydroxide and oxide surfaces strongly adsorb some environmentally important anions, such as phosphate, arsenate, and selenite, and cations such as copper, lead, manganese, and chromium. These ions are not exchangeable, and in environments where iron oxides and hydroxides are abundant, surface adsorption can control the mobility of these strongly adsorbed ions. The hydroxides and oxides of iron are found in the greatest abundance in older, highly weathered landscapes. These minerals are very insoluble, and during soil weathering they form from the iron that is released from the structure of the soil-forming minerals. Thus, iron oxide and hydroxide minerals tend to be most abundant in old landscapes that have not been affected by glaciation, and in landscapes where the rainfall is high and the rate of soil mineral weathering is high. These minerals give the characteristic red (hematite or ferrihydrite) or yellow-brown (goethite) colors to soils that are common in the tropics and subtropics. See also Arsenic; Erosion; Ion exchange; Phosphorus; Soil profile; Soil texture
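The pH-dependent charging described in this entry can be sketched as a classification around a point of zero charge (PZC): positive below it, negative above it. The entry places the near-neutral window for these minerals at roughly pH 8-9; the exact PZC and window width used below are illustrative assumptions, not measured constants.

```python
# Sketch of pH-dependent surface charge on an iron oxide or hydroxide.
# The PZC of 8.5 is an assumed midpoint of the pH 8-9 low-charge window
# described in the entry; real minerals vary.

PZC = 8.5  # assumed point of zero charge

def surface_charge(pH, pzc=PZC, window=0.5):
    """Classify net surface charge of an iron oxide surface at a given pH."""
    if pH < pzc - window:
        return "positive"   # adsorbs H+, acts as an anion exchange surface
    if pH > pzc + window:
        return "negative"   # adsorbs OH-, acts as a cation exchange surface
    return "near zero"      # little or no charge

print(surface_charge(5.0))   # acid soil: anion exchanger
print(surface_charge(8.5))   # little or no charge
print(surface_charge(10.0))  # alkaline: cation exchanger
```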

Irradiation of food see Food irradiation

Irrigation Irrigation is the method of supplying water to land to support plant growth. This technology has had a powerful role in the history of civilization. In arid regions sunshine is plentiful and soil is usually fertile, so irrigation supplies the critical factor needed for plant growth. Yields have been high, but not without costs. Historical problems include salinization and waterlogging; contemporary difficulties include immense costs, the spread of water-borne diseases, and degraded aquatic environments. One geographer described California’s Sierra Nevada as the “mother nurse of the San Joaquin Valley.” Its heavy winter snowpack provides abundant and extended runoff for the rich valley soils below. Numerous irrigation districts, formed to build diversion and storage dams, supply water through gravity-fed canals. The snowmelt is low in nutrients, so salinization problems are minimal. Wealth from the lush fruit orchards has enriched the state. By contrast, the Colorado River, like the Nile, flows mainly through arid lands. Deeply incised in places, the river is also limited for irrigation by the high salt content of desert tributaries. Still, demand for water exceeds supply. Water crossing the border into Mexico is so saline that the federal government has built a desalinization plant at Yuma, Arizona. Colorado River water is imperative to the Imperial Valley, which specializes in winter produce grown in its rich delta soils. To reduce salinization problems, one-fifth of the water used must be drained off into the growing Salton Sea. Salinization and waterlogging have long plagued the Tigris, Euphrates, and Indus River flood plains. Once-fertile areas of Iraq and Pakistan are covered with salt crystals. Half of the irrigated land in the western United States is threatened by salt buildup. Some of the worst problems are degraded aquatic environments.
The Aswan High Dam in Egypt has greatly amplified surface evaporation, reduced nutrients to the land and to fisheries in the delta, and has contributed to the spread of schistosomiasis via water snails in irrigation ditches. Diversion of drainage away from the Aral Sea for cotton irrigation has severely lowered the shoreline, and threatens this water body with ecological disaster. Spray irrigation in the High Plains is lowering the Ogallala Aquifer’s water table, raising pumping costs. Kesterson Marsh in the San Joaquin Valley has become a hazard to wildlife because of selenium poisoning from irrigation drainage. The federal Bureau of Reclamation has invested huge sums in dams and reservoirs in western states. Some question the wisdom of such investments, given the past century of farm surpluses, and argue that water users are not paying the true cost.


A farm irrigation system. (U.S. Geological Survey. Reproduced by permission.)

Irrigation still offers great potential, but only if used with wisdom and understanding. New technologies may yet contribute to the world’s ever-increasing need for food. See also Climate; Commercial fishing; Reclamation [Nathan H. Meleen]

RESOURCES BOOKS Huffman, R. E. Irrigation Development and Public Water Policy. New York: Ronald Press, 1953. Powell, J. W. “The Reclamation Idea.” In American Environmentalism: Readings in Conservation History. 3rd ed., edited by R. F. Nash. New York: McGraw-Hill, 1990. Wittfogel, K. A. “The Hydraulic Civilizations.” In Man’s Role in Changing the Face of the Earth, edited by W. L. Thomas Jr. Chicago: University of Chicago Press, 1956. Zimmerman, J. D. Irrigation. New York: Wiley, 1966.

OTHER U.S. Department of Agriculture. Water: 1955 Yearbook of Agriculture. Washington, DC: U.S. Government Printing Office, 1955.

Island biogeography Island biogeography is the study of past and present animal and plant distribution patterns on islands and the processes
that created those distribution patterns. Historically, island biogeographers mainly studied geographic islands—continental islands close to shore in shallow water and oceanic islands of the deep sea. In the last several decades, however, the study and principles of island biogeography have been extended to ecological islands such as forests and prairie fragments isolated by human development. Biogeographic “islands” may also include ecosystems isolated on mountaintops and landlocked bodies of water such as Lake Malawi in the African Rift Valley. Geographic islands, however, remain the main laboratories for developing and testing the theories and methods of island biogeography. Equilibrium theory Until the 1960s, biogeographers thought of islands as living museums—relict (persistent remnant of an otherwise extinct species of plant or animal) scraps of mainland ecosystems in which little changed—or closed systems mainly driven by evolution. That view began to radically change in 1967 when Robert H. MacArthur and Edward O. Wilson published The Theory of Island Biogeography. In their book, MacArthur and Wilson detail the equilibrium theory of island biogeography—a theory that became the new paradigm of the field. The authors proposed that island ecosystems exist in dynamic equilibrium, with a steady turnover of species. Larger islands—as well as islands closest to a source of immigrants—accommodate the most species in the equilibrium condition, according to their theory. MacArthur and Wilson also worked out mathematical models to demonstrate and predict how island area and isolation dictate the number of species that exist in equilibrium. Dispersion The driving force behind species distribution is dispersion—the means by which plants and animals actively leave or are passively transported from their source area. An island ecosystem can have more than one source of colonization, but nearer sources dominate. 
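MacArthur and Wilson’s mathematical models mentioned above can be sketched numerically. A common textbook simplification uses linear rate curves: immigration declines as the island fills (and with distance from the source), extinction rises with the species count (and falls with island area); the equilibrium richness is where the two curves cross. The rate parameters below are invented for illustration, not taken from this entry.

```python
# A minimal sketch of the MacArthur-Wilson equilibrium, using simple
# linear rate curves with invented parameters. P is the mainland species
# pool, d the distance to the source, A the island area.

def equilibrium_richness(P, d, A, i0=1.0, e0=1.0):
    """Solve I(S) = E(S) with I = (i0/d)*(1 - S/P) and E = (e0/A)*S.

    Setting the rates equal gives S* = P / (1 + e0*d*P / (i0*A)).
    """
    return P / (1.0 + (e0 * d * P) / (i0 * A))

pool = 100  # species in the mainland source pool
near_large = equilibrium_richness(pool, d=1.0, A=50.0)
far_small = equilibrium_richness(pool, d=10.0, A=5.0)

# Larger, nearer islands equilibrate at higher richness, as the theory predicts.
print(round(near_large, 1))
print(round(far_small, 1))
```

The specific curve shapes are an assumption; the qualitative prediction, that large, near islands hold more species at equilibrium than small, remote ones, is the point of the model.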
How readily plants or animals disperse is one of the main reasons equilibrium will vary from species to species. Birds and bats are obvious candidates for anemochory (dispersal by air), but some species normally not associated with flight are also thought to reach islands during storms or even on normal wind currents. Orchids, for example, have hollow seeds that remain airborne for hundreds of kilometers. Some small spiders, along with insects like bark lice, aphids, and ants (collectively known as aerial plankton), often are among the first pioneers of newly formed islands. Whether actively swimming or passively floating on logs or other debris, dispersal by sea is called thalassochory. Crocodiles have been found on Pacific islands 600 miles (950 km) from their source areas, but most amphibians, larger terrestrial reptiles, and, in particular, mammals have difficulty crossing even narrow bodies of water. Thus, thalassochory is the medium of dispersal primarily for fish, plants, and insects. Only small vertebrates such as lizards and snakes are thought to arrive at islands by sea on a regular basis. Zoochory is transport either on or inside an animal. This method is primarily a means of plant dispersal, mostly by birds. Seeds either ride along stuck to feathers or survive passage through a bird’s digestive tract and are deposited in new territory. Anthropochory is dispersal by human beings. Although humans intentionally introduce domestic animals to islands, they also bring unintended invaders, such as rats. Getting to islands is just the first step, however. Plants and animals often arrive to find harsh and alien conditions. They may not find suitable habitats. Food chains they depend on might be missing. Even if they manage to gain a foothold, their limited numbers make them more susceptible to extinction. Chances of success are better for highly adaptable species and those that are widely distributed beyond the island. Wide distribution increases the likelihood that a species on the verge of extinction may be saved by the rescue effect, the replenishing of a declining population by another wave of immigration. Challenging established theories Many biogeographers point out that isolated ecosystems are more than just collections of species that can make it to islands and survive the conditions they encounter there. Several other contemporary theories of island biogeography build on MacArthur and Wilson’s theory; other theories contradict it. Equilibrium theory suggests that species turnover is constant and regular. Evidence collected so far indicates MacArthur and Wilson’s model works well in describing communities of rapid dispersers with a regular turnover, such as insects, birds, and fish. However, this model may not apply to species that disperse more slowly.
Proponents of historical legacy models argue that communities of larger animals and plants (forest trees, for example) take so long to colonize islands that changes in their populations probably reflect sudden climatic or geological upheaval rather than a steady turnover. Other theories suggest that equilibrium may not be dynamic, that there is little or no turnover. Through competition, established species keep out new colonists; the newcomers might occupy the same ecological niches as their predecessors. Established species may also evolve and adapt to close off those niches. Island resources and habitats may also be distinct enough to limit immigration to only a few well-adapted species. Thus, in these later models, dispersal and colonization are not nearly as random as in MacArthur and Wilson’s model. These less random, more deterministic theories of island ecosystems conform to specific assembly rules—a
complex list of factors accounting for the species present in the source areas, the niches available on islands, and competition between species. Some biogeographers suggest that every island—and perhaps every habitat on an island—may require its own unique model. Human disruption of island ecosystems further clouds the theoretical picture. Not only are habitats permanently altered or lost by human intrusion, but anthropochory also reduces an island’s isolation. Thus, finding relatively undisturbed islands to test different theories can be difficult. Since the time of the naturalists Charles Darwin and Alfred Russel Wallace, islands have been ideal “natural laboratories” for studying evolution. Patterns of evolution stand out on islands for two reasons: island ecosystems tend to be simpler than other geographical regions, and they contain greater numbers of endemic species, plant and animal species that occur only in a particular location. Many island endemics are the result of adaptive radiation—the evolution of new species from a single lineage to fill unoccupied ecological niches. Many species from mainland source areas simply never make it to islands, so species that can immigrate find empty ecological niches free of the competition they once faced. For example, monitor lizards immigrating to several small islands in Indonesia found the niche for large predators empty. Monitors on these islands evolved into Komodo dragons, filling the niche. Conservation of biodiversity Theories of island biogeography also have potential applications in the field of conservation. Many conservationists argue that as human activities such as logging and ranching encroach on wild lands, remaining parks and reserves begin to resemble small, isolated islands. According to equilibrium theory, as those patches of wild land grow smaller, they support fewer species of plants and animals.
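The "smaller patches hold fewer species" argument is usually quantified with the species-area relationship S = c * A**z. Because the constant c cancels when comparing a shrunken habitat to its original extent, only the exponent z matters; the value z = 0.25, often quoted for true islands, is used here as an illustrative assumption.

```python
# Sketch of the species-area relationship applied to habitat fragments.
# S = c * A**z, so the retained fraction S2/S1 = (A2/A1)**z; c cancels.
# The exponent z = 0.25 is a commonly quoted illustrative value.

def species_remaining_fraction(area_fraction, z=0.25):
    """Fraction of species retained when habitat shrinks to area_fraction."""
    return area_fraction ** z

# Cutting a reserve to 10% of its original area, with z = 0.25, retains
# roughly half the species rather than 10% of them.
retained = species_remaining_fraction(0.10)
print(round(retained, 2))
```

The sublinear exponent is why conservationists argue for larger reserves: species loss accelerates sharply only as the remaining area becomes very small.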
Some conservationists fear that plant and animal populations in those parks and reserves will sink below minimum viable population levels—the smallest number of individuals necessary to allow the species to continue reproducing. These conservationists suggest that one way to bolster populations is to set aside larger areas and to limit species isolation by connecting parks and preserves with wildlife corridors. Islands with the greatest variety of habitats support the most species; diverse habitats promote successful dispersal, survival, and reproduction. Thus, in attempting to preserve island biodiversity, conservationists focus on several factors: an island’s size (the larger the island, the more habitats it contains), its climate, its geology (soil that promotes or restricts habitats), and its age (which influences whether its habitats are sparse or rich). All of these factors must be addressed to ensure island biodiversity. [Darrin Gunkel]


RESOURCES BOOKS Harris, Larry D. The Fragmented Forest: Island Biogeography Theory and the Preservation of Biotic Diversity. Chicago: University of Chicago Press, 1984. MacArthur, Robert H., and Edward O. Wilson. The Theory of Island Biogeography. Princeton: Princeton University Press, 1967. Quammen, David. The Song of the Dodo: Island Biogeography in an Age of Extinction. New York: Scribner, 1996. Whittaker, Robert J. Island Biogeography: Ecology, Evolution and Conservation. London: Oxford University Press, 1999.

PERIODICALS Grant, P. R. “Competition Exposed by Knight?” Nature 396: 216–217.

OTHER “Island Biogeography.” University of Oxford School of Geography and the Environment. August 7, 2000 [cited June 26, 2002].

ORGANIZATIONS Environmental Protection Agency (EPA), 1200 Pennsylvania Avenue, NW, Washington, DC USA 20460 (202) 260-2090, Email: [email protected],

ISO 14000: International Environmental Management Standards ISO 14000 refers to a series of environmental management standards that were adopted by the International Organization for Standardization (ISO) in 1996 and are beginning to be implemented by businesses across the world. Any person or organization interested in the environment and in the goal of improving the environmental performance of businesses should be interested in ISO 14000. Such interested parties include businesses themselves, their legal representatives, environmental organizations and their members, government officials, and others. What is the ISO and what are ISO standards? The International Organization for Standardization (ISO) is a private (nongovernmental) worldwide organization whose purpose is to promote uniform standards in international trade. Its members are elected representatives from national standards organizations in 111 countries. The ISO covers all fields in which a Member Body suggests that standardization of goods, services, or products is desirable, with the exception of electrical and electronic engineering, which is covered by a different organization called the International Electrotechnical Commission (IEC). However, the ISO and the IEC work closely together. Since the ISO began operations in 1947, its Central Secretariat has been located in Geneva, Switzerland. Between 1951 (when it published its first standard) and 1997, the ISO issued over 8,800 standards. Standards are documents containing technical specifications, rules, guidelines, and definitions to ensure equipment, products, and services perform as specified. Among the best known standards published by the ISO are those that comprise the ISO 9000 series, which was developed between 1979 and 1986 and published in 1987. Because ISO 9000 is a forerunner to ISO 14000, it is important to understand the basic structure and function of ISO 9000. The ISO 9000 series is a set of standards for quality management and quality assurance. The standards apply to processes and systems that produce products; they do not apply to the products themselves. Further, the standards provide a general framework for any industry; they are not industry-specific. A company that has become registered under ISO 9000 has demonstrated that it has a documented system for quality that is in place and consistently applied. ISO 9000 standards apply to all kinds of companies, whether large or small, in services or manufacturing. The latest major set of standards published by the ISO is the ISO 14000 series. The impetus for that series came from the United Nations Conference on Environment and Development (UNCED), which was held in Rio de Janeiro in 1992 and attended by representatives of over one hundred nations. One of the documents resulting from that conference was the Global Environmental Initiative, which prompted the ISO to develop its ISO 14000 series of international environmental standards. The ISO’s goal is to ensure that businesses adopt common internal procedures for environmental controls including, but not limited to, audits. It is important to note that the standards are process standards, not performance standards. The goal is to ensure that businesses are in compliance with their own national and local applicable environmental laws and regulations. The initial standards in the series include numbers 14001, 14004, and 14010-14012, all of them adopted by the ISO in 1996.
Provisions of ISO 14000 Series standards ISO 14000 sets up criteria pursuant to which a company may become registered or certified as to its environmental management practices. Central to the process of registration pursuant to ISO 14000 is a company’s Environmental Management System (EMS). The EMS is a set of procedures for assessing compliance with environmental laws and company procedures for environmental protection, identifying and resolving problems, and engaging the company’s workforce in a commitment to improved environmental performance by the company. The ISO 14000 series can be divided into two groups: guidance documents and specification documents. The series sets out standards against which a company’s EMS will be evaluated. For example, it must include an accurate summary of the legal standards with which the company must comply,
such as permit stipulations, relevant provisions of statutes and regulations, and even provisions of administrative or court-certified consent judgments. To become certified, the EMS must: (1) include an environmental policy; (2) establish plans to meet environmental goals and comply with legal requirements; (3) provide for implementation of the policy and operation under it, including training for personnel, communication, and document control; (4) set up monitoring and measurement devices and an audit procedure to ensure continuing improvement; and (5) provide for management review. The EMS must be certified by a registrar who has been qualified under ISO 13012, a standard that predates the ISO 14000 series. ISO 14004 is a guidance document that gives advice that may be followed but is not required. It includes five principles, each of which corresponds to one of the five areas of ISO 14001 listed above. ISO 14010, 14011, and 14012 are auditing standards. For example, 14010 covers general principles of environmental auditing, and 14011 provides guidelines for auditing of an Environmental Management System (EMS). ISO 14012 provides guidelines for establishing qualifications for environmental auditors, whether those auditors are internal or external. Plans for additional standards within the ISO 14000 Series standards The ISO is considering proposals for standards on training and certifying independent auditors (called registrars) who will certify that ISO 14000-certified businesses have established and adhere to stringent internal systems to monitor and improve their own environmental protection actions. Later the ISO may also establish standards for assessing a company’s environmental performance. Standards may also be adopted for eco-labeling and life cycle assessment of goods involved in international trade.
Benefits and consequences of ISO 14000 Series standards A company contemplating obtaining ISO 14000 registration must evaluate its potential advantages as well as its costs to the company. ISO 14000 certification may bring various rewards to companies. For example, many firms are hoping that, in return for obtaining ISO 14000 certification (and taking the actions required to do so), regulatory agencies such as the U.S. Environmental Protection Agency (EPA) will give them more favorable treatment. For example, leniency might take the form of less stringent filing or monitoring requirements or even less severe sanctions for past or present violations of environmental statutes and regulations. Further, compliance may be merely for good public relations, leading consumers to view the certified company
as a good corporate citizen that works to protect the environment. There is public pressure on companies to demonstrate their environmental stewardship and accountability; obtaining ISO 14000 certification is one way to do so. The costs to the company will depend on the scope of the Environmental Management System (EMS). For example, the EMS might be international, national, or limited to individual plants operated by the company. That decision will affect the costs of the environmental audit considerably. National and international systems may prove to be costly. On the other hand, a company may realize cost savings. For example, an insurance company may give reduced rates on insurance to cover accidental pollution releases to a company that has a proven environmental management system in place. Internally, by implementing an EMS, a company may realize cost savings as a result of waste reduction, use of fewer toxic chemicals, less energy use, and recycling. A major legal concern raised by lawyers studying the ISO 14000 standards relates to the question of confidentiality. There are serious questions as to whether a governmental regulatory agency can require disclosure of information discovered during a self-audit by a company. The use of a third-party auditor during preparation of the EMS process may weaken a company’s argument that information discovered is privileged. ISO 14000 has potential consequences with respect to international law as well as international trade. ISO 14000 is intended to promote a series of universally accepted EMS practices and lead to consistency in environmental standards between and among trading partners. Some developing countries such as Mexico are reviewing ISO 14000 standards and considering incorporating their provisions within their own environmental laws and regulations.
On the other hand, some developing countries have suggested that environmental standards created by ISO 14000 may constitute nontariff barriers to trade, in that the costs of ISO 14000 registration may be prohibitively high for small- to medium-size companies. Companies that have implemented ISO 9000 have learned to view their operations through a “quality of management” lens, and implementation of ISO 14000 may lead to use of an “environmental quality” lens. ISO 14000 has the potential to lead to two kinds of cultural changes. First, within the corporation, it has the potential to lead to consideration of environmental issues throughout the company and its business decisions, ranging from hiring of employees to marketing. Second, ISO 14000 has the potential to become part of a global culture as the public comes to view ISO 14000 certification as a benchmark connoting good environmental stewardship by a company. [Paulette L. Stenzel]

RESOURCES BOOKS Tibor, T., and I. Feldman. ISO 14000: A Guide to the New Environmental Management Standards. Irwin Publishing Company, 1996. von Zharen, W. M. ISO 14000: Understanding the Environmental Standards. Government Institutes, 1996.

PERIODICALS Kass, S. L. “The Lawyer’s Role in Implementing ISO 14000.” Natural Resources & Environment 3, no. 5 (Spring 1997).

Isotope Different forms of atoms of the same element. Atoms consist of a nucleus, containing positively charged particles (protons) and neutral particles (neutrons), surrounded by negatively charged particles (electrons). Isotopes of an element differ only in the number of neutrons in the nucleus and hence in atomic weight. The nuclei of some isotopes are unstable and undergo radioactive decay. An element can have several stable and radioactive isotopes, but most elements have only two or three isotopes that are of any importance. Also, for most elements the radioactive isotopes are only of concern in material exposed to certain types of radiation sources. Carbon has three important isotopes, with atomic weights of 12, 13, and 14. C-12 is stable and represents 98.9% of natural carbon. C-13 is also stable and represents 1.1% of natural carbon. C-14 represents an insignificant fraction of naturally occurring carbon, but it is radioactive and important because its radioactive decay is valuable in the dating of fossils and ancient artifacts. It is also useful in tracing the reactions of carbon compounds in research. See also Nuclear fission; Nuclear power; Radioactivity; Radiocarbon dating
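The dating use of C-14 mentioned above follows directly from the exponential decay law. A sketch, using the standard C-14 half-life of about 5,730 years (a widely quoted value, not stated in this entry): the age of a sample is the number of elapsed half-lives times the half-life, and the number of elapsed half-lives is log2 of the ratio of original to remaining C-14.

```python
import math

# Sketch of radiocarbon dating arithmetic. A living organism holds C-14 at
# the atmospheric ratio; after death the ratio decays following
# N = N0 * (1/2)**(t / t_half). Inverting gives t = t_half * log2(N0 / N).

C14_HALF_LIFE_YEARS = 5730.0  # commonly quoted half-life of carbon-14

def age_from_c14_fraction(fraction_remaining, t_half=C14_HALF_LIFE_YEARS):
    """Return the age in years of a sample from its remaining C-14 fraction."""
    return t_half * math.log2(1.0 / fraction_remaining)

# A sample retaining half its original C-14 is one half-life old.
print(round(age_from_c14_fraction(0.5)))   # 5730
# A quarter remaining means two half-lives have elapsed.
print(round(age_from_c14_fraction(0.25)))  # 11460
```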

Itai-itai disease The symptoms of Itai-Itai disease were first observed in 1913 and characterized between 1947 and 1955; it was 1968, however, before the Japanese Ministry of Health and Welfare officially declared that the disease was caused by chronic cadmium poisoning in conjunction with other factors such as the stresses of pregnancy and lactation, aging, and dietary deficiencies of vitamin D and calcium. The name arose from the cries of pain, “itai-itai” (“ouch-ouch”), uttered by the most seriously stricken victims, older Japanese farm women. Although men, young women, and children were also exposed, 95% of the victims were post-menopausal women over 50 years of age. They usually had given birth to several children and had lived more than 30 years within 2 mi (3 km) of the lower stream of the Jinzu River near Toyama.


The disease started with symptoms similar to rheumatism, neuralgia, or neuritis. Then came bone lesions, osteomalacia, and osteoporosis, along with renal dysfunction and proteinuria. As the disease escalated, pain in the pelvic region caused the victims to walk with a duck-like gait. Next, they were incapable of rising from their beds because even a slight strain caused bone fractures. The suffering could last many years before it finally ended with death. Overall, an estimated 199 victims have been identified, of which 162 had died by December 1992. The number of victims increased during and after World War II as production expanded at the Kamioka Mine owned by the Mitsui Mining and Smelting Company. As 3,000 tons of zinc-lead ore per day were mined and smelted, cadmium was discharged in the wastewater. Downstream, farmers drew water from the Jinzu River for drinking and crop irrigation, along with the fine particles of flotation tailings it carried. As rice plants were damaged near the irrigation inlets, farmers dug small sedimentation pools, but these were ineffective against the nearly invisible poison. Both the number of Itai-Itai disease patients and the damage to the rice crops rapidly decreased after the mining company built a large settling basin to purify the wastewater in 1955. However, even after the discharge into the Jinzu River was halted, the cadmium already in the rice paddy soils was augmented by airborne exhausts. Mining operations in several other Japanese prefectures also produced cadmium-contaminated rice, but afflicted individuals were not certified as Itai-Itai patients. That designation was applied only to those who lived in the Jinzu River area. In 1972 the survivors and their families became the first pollution victims in Japan to win a lawsuit against a major company. They won because in 1939 Article 109 of the Mining Act had imposed strict liability upon mining facilities for damages caused by their activities.
The plaintiffs had only to prove that cadmium discharged from the mine caused their disease, not that the company was negligent. As epidemiological proof of causation sufficed as legal proof in this case, it set a precedent for other pollution litigation as well. Despite legal success and compensation, the problem of contaminated rice continues. In 1969 the government initially set a maximum allowable standard of 0.4 parts per million (ppm) cadmium in unpolished rice. However, because much of the contaminated farmland produced grain in excess of that level, in 1970 under the Foodstuffs Hygiene Law this was raised to 1 ppm cadmium for unpolished rice and 0.9 ppm cadmium for polished rice. To restore contaminated farmland, Japanese authorities instituted a program in which, each year, the most highly contaminated soils in a small area are exchanged for uncontaminated soils. Less contaminated soils are rehabilitated through the addition of lime, phosphate, and a cadmium-sequestering agent, EDTA. By 1990 about 10,720 acres (4,340 ha), or 66.7% of the approximately 16,080 acres (6,510 ha) of the most highly cadmium-contaminated farmland, had been restored. In the remaining contaminated areas where farm families continue to eat homegrown rice, the symptoms are alleviated by treatment with massive doses of vitamins B1, B12, and D, calcium, and various hormones. New methods have also been devised to cause the cadmium to be excreted more rapidly. In addition, the high costs of compensation and restoration are leading to the conclusion that prevention is not only better but cheaper. This is perhaps the most encouraging factor of all. See also Bioaccumulation; Environmental law; Heavy metals and heavy metal poisoning; Mine spoil waste; Smelter; Water pollution [Frank M. D’Itri]

RESOURCES
BOOKS
Kobayashi, J. “Pollution by Cadmium and the Itai-Itai Disease in Japan.” In Toxicity of Heavy Metals in the Environment, Part 1, edited by F. W. Oehme. New York: Marcel Dekker, 1978.
Kogawa, K. “Itai-Itai Disease and Follow-Up Studies.” In Cadmium in the Environment, Part II, edited by J. O. Nriagu. New York: Wiley, 1981.
Tsuchiya, K., ed. Cadmium Studies in Japan: A Review. Tokyo, Japan, and Amsterdam, Netherlands: Kodansha and Elsevier/North-Holland Biomedical Press, 1978.

IUCN—The World Conservation Union

Founded in 1948 as the International Union for the Conservation of Nature and Natural Resources (IUCN), IUCN works with governments, conservation organizations, and industry groups to conserve wildlife and approach the world’s environmental problems using “sound scientific insight and the best available information.” Its membership, currently over 980, comes from 140 countries and includes 56 sovereign states, as well as government agencies and nongovernmental organizations. IUCN exists to serve its members, representing their views and providing them with the support necessary to achieve their goals. Above all, IUCN works with its members “to achieve development that is sustainable and that provides a lasting improvement in the quality of life for people all over the world.” IUCN’s three basic conservation objectives are: (1) to secure the conservation of nature, and especially of biological diversity, as an essential foundation for the future; (2) to ensure that where the earth’s natural resources are used this is done in a wise, equitable, and sustainable way; (3) to guide the development of human communities toward ways of life that are both of good quality and in enduring harmony with other components of the biosphere.

IUCN is one of the few organizations to include both governmental agencies and nongovernmental organizations. It is in a unique position to provide a neutral forum where these organizations can meet, exchange ideas, and build partnerships to carry out conservation projects. IUCN is also unusual in that it both develops environmental policies and implements them through the projects it sponsors. Because IUCN works closely with, and its membership includes, many government scientists and officials, the organization often takes a conservative, pro-management, as opposed to a “preservationist,” approach to wildlife issues. It may encourage or endorse limited hunting and commercial exploitation of wildlife if it believes this can be carried out on a sustainable basis. IUCN maintains a global network of over 5,000 scientists and wildlife professionals who are organized into six standing commissions that deal with various aspects of the union’s work. There are commissions on Ecology, Education, Environmental Planning, Environmental Law, National Parks and Protected Areas, and Species Survival. These commissions create action plans, develop policies, advise on projects and programs, and contribute to IUCN publications, all on an unpaid, voluntary basis. IUCN publishes an authoritative series of “Red Data Books,” describing the status of rare and endangered wildlife. Each volume provides information on the population, distribution, habitat and ecology, threats, and protective measures in effect for listed species. The “Red Data Books” concept was originated in the mid-1960s by the famous British conservationist Sir Peter Scott, and the series now includes a variety of publications on regions and species.
Other titles in the series of “Red Data Books” include Dolphins, Porpoises, and Whales of the World; Lemurs of Madagascar and the Comoros; Threatened Primates of Africa; Threatened Swallowtail Butterflies of the World; Threatened Birds of the Americas; and books on plants and other species of wildlife, including a series of conservation action plans for threatened species. Other notable IUCN works include World Conservation Strategy: Living Resources Conservation for Sustainable Development and its successor document Caring for the Earth—A Strategy for Sustainable Living; and the United Nations List of Parks and Protected Areas. IUCN also publishes books and papers on regional conservation, habitat preservation, environmental law and policy, ocean ecology and management, and conservation and development strategies. [Lewis G. Regenstein]

RESOURCES
ORGANIZATIONS
IUCN—The World Conservation Union Headquarters, Rue Mauverney 28, Gland, Switzerland CH-1196. +41 (22) 999-0000, Fax: +41 (22) 999-0002, Email: [email protected]

Ivory-billed woodpecker

The ivory-billed woodpecker (Campephilus principalis) is one of the rarest birds in the world and is considered by most authorities to be extinct in the United States. The last confirmed sighting of ivory-bills was in Cuba in 1987 or 1988. Though never common, the ivory-billed woodpecker was rarely seen in the United States after the first years of the twentieth century. Some were seen in Louisiana in 1942, and since then occasional sightings have gone unverified. Interest in the bird rekindled in 1999, when a student at Louisiana State University claimed to have seen a pair of ivory-billed woodpeckers in a wilderness preserve. Teams of scientists searched the area for two years. No ivory-billed woodpecker was sighted, though some evidence made it plausible the bird was in the vicinity. By mid-2002, the ivory-billed woodpecker’s return from the brink of extinction remained a tantalizing possibility, but not an established fact. The ivory-billed woodpecker was a huge bird, averaging 19–20 in (48–50 cm) long, with a wingspan of over 30 in (76 cm). The ivory-colored bills of these birds were prized as decorations by Native Americans. The naturalist John James Audubon found ivory-billed woodpeckers in swampy forest edges in Texas in the 1830s. But by the end of the nineteenth century, the majority of the bird’s prime habitat had been destroyed by logging. Ivory-billed woodpeckers required large tracts of land in the bottomland cypress, oak, and black gum forests of the Southeast, where they fed off insect larvae in mature trees. This species was the largest woodpecker in North America, and it preferred the largest of these trees, the same ones targeted by timber companies as the most profitable to harvest. The territory for breeding pairs of ivory-billed woodpeckers consisted of about three square miles of undisturbed, swampy forest, and there was little prime habitat left for them after 1900, for most of these areas had been heavily logged.
By the 1930s, one of the few remaining virgin cypress swamps was the Singer Tract in Louisiana, an 80,000-acre (32,375-ha) swathe of land owned by the Singer Sewing Machine Company. In 1935 a team of ornithologists descended on it to locate, study, and record some of the last ivory-billed woodpeckers in existence. They found the birds and were able to film and photograph them, as well as make the only sound recordings of them in existence. The Audubon Society, the state of Louisiana, and the U.S. Fish and Wildlife Service tried to buy the land from Singer to make it a refuge for the rare birds. But Singer
had already sold timber rights to the land. During World War II, when demand for lumber was particularly high, the Singer Tract was leveled. One of the giant cypress trees that was felled contained the nest and eggs of an ivory-billed woodpecker. Land that had been virgin forest then became soybean fields. Few sightings of these woodpeckers were made in the 1940s, and none exist for the 1950s. But in the early 1960s ivory-billed woodpeckers were reportedly seen in South Carolina, Texas, and Louisiana. Intense searches, however, left scientists with little hope by the end of that decade, as only six birds were reported to exist. Subsequent decades yielded a few individual sightings in the United States, but none were confirmed. In 1985 and 1986, there was a search for the Cuban subspecies of the ivory-billed woodpecker. The first expedition yielded no birds, but trees were found that had apparently been worked by the birds. The second expedition found at least one pair of ivory-billed woodpeckers. Most of the land formerly occupied by the Cuban subspecies was cut over for sugar cane plantations by the 1920s, and surveys in 1956 indicated that this population had declined to about a dozen birds. The last reported sightings of the species occurred in the Sierra de Moa area of Cuba. They are still considered to exist there, but the health of any remaining individuals must be in question, given the inbreeding that must occur with such a low population level and the fact that so little suitable habitat remains. In 1999, a student at Louisiana State University (LSU) claimed to have seen a pair of ivory-billed woodpeckers while he was hunting for turkey in the Pearl River Wildlife Management Area near the Louisiana-Mississippi border. The student, David Kulivan, was a credible witness, and he soon convinced ornithologists at LSU to search for the birds. News of the sighting attracted thousands of amateur and professional birders over the next two years.
Ivory-billed woodpecker (Campephilus). (Photograph by James Tanner. Photo Researchers Inc. Reproduced by permission.)

Scientists from LSU, Cornell University, and the Louisiana Department of Wildlife and Fisheries organized an expedition that included the deployment of high-tech listening devices. Over more than two years, no one else saw the birds, though scientists found trunks stripped of bark, characteristic of the way the ivory-billed woodpecker feeds, and two groups heard the distinct double rapping sound the ivory-billed woodpecker makes when it knocks on a trunk. No one heard the call of the ivory-billed woodpecker, though that sound would have been considered definitive evidence of the bird’s existence. Hope for confirmation of Kulivan’s sighting rested on deciphering the tapes made by a dozen recording devices. This work was being done at Cornell University and was expected to take years. By mid-2002, the search for the ivory-billed woodpecker in Louisiana had wound down, disappointingly inconclusive. While some scientists remained skeptical about the sighting, others believed that the forest in the area may have regrown enough to support an ivory-billed woodpecker population. See also Deforestation; Endangered species; Extinction; International Council for Bird Preservation; Wildlife management [Eugene C. Beckham]

RESOURCES
BOOKS
Collar, N. J., et al. Threatened Birds of the Americas: The ICBP/IUCN Red Data Book. Washington, DC: Smithsonian Institution Press, 1992.
Ehrlich, P. R., D. S. Dobkin, and D. Wheye. The Birder’s Handbook. New York: Simon & Schuster, 1988.
Ehrlich, P. R., D. S. Dobkin, and D. Wheye. Birds in Jeopardy: The Imperiled and Extinct Birds of the United States and Canada, Including Hawaii and Puerto Rico. Stanford: Stanford University Press, 1992.

PERIODICALS
Gorman, James. “Listening for the Call of a Vanished Bird.” New York Times, March 5, 2002, F1.
Graham, Frank Jr. “Is the Ivorybill Back?” Audubon (May/June 2000): 14.
Pianin, Eric. “Scientists Give Up Search for Woodpecker; Some Signs Noted of Ivory-Billed Bird Not Seen Since ’40s.” Washington Post, February 21, 2002, A2.
Tomkins, Shannon. “Dead or Alive?” Houston Chronicle, April 14, 2002, 8.

Izaak Walton League

In 1922, 54 sportsmen and sportswomen—all concerned with the apparent destruction of American fishing waterways—established the Izaak Walton League of America (IWLA). They looked upon Izaak Walton, a seventeenth-century English fisherman and author of The Compleat Angler, as inspiration in protecting the waters of America. The Izaak Walton League has since widened its focus: as a major force in the American conservation movement, IWLA now pledges in its slogan “to defend the nation’s soil, air, woods, water, and wildlife.” When sportsmen and sportswomen formed IWLA in 1922, they worried that American industry would ruin fishing streams. Raw sewage, soil erosion, and rampant pollution threatened water and wildlife.

Environmental Encyclopedia 3 Initially the League concentrated on preserving lakes, streams, and rivers. In 1927, at the request of President Calvin Coolidge, IWLA organized the first national water pollution inventory. Izaak Walton League members (called “Ikes") subsequently helped pass the first national water pollution control act in the 1940s. In 1969 IWLA instituted the Save Our Streams program, and this group mobilized forces to pass the groundbreaking Clean Water Act of 1972. The League did not only concentrate on the preservation of American waters, however. From its 1926 campaign to protect the black bass, to the purchase of a helicopter in 1987 to help game law officers protect waterfowl from poachers in the Gulf of Mexico, IWLA has also been instrumental in the preservation of wildlife. In addition, the League has fought to protect public lands such as the National Elk Refuge in Wyoming, the Everglades National Park, and the Isle Royale National Park. IWLA currently sponsors several environmental programs designed to conserve natural resources and educate the public. The aforementioned Save Our Streams (SOS) program is a grassroots organization designed to monitor water quality in streams and rivers. Through 200 chapters nationwide, SOS promotes “stream rehabilitation” through stream adoption kits and water pollution law training. Another program, Wetlands Watch, allows local groups to purchase, adopt, and protect nearby wetlands. Similarly, the Izaak Walton League Endowment buys land to save it from unwanted development. IWLA’s Uncle Ike Youth Education program aims to educate children and convince them of the necessity of preserving the environment. A last major program from the League is its internationally acclaimed Outdoor Ethics program. Outdoor Ethics works to stop poaching and other illegal and unsportsmanlike outdoor activities by educating hunters, anglers, and others. The League also sponsors and operates regional conservation efforts. 
Its Midwest Office, based in Minnesota,
concentrates on preservation of the Upper Mississippi River region. The Chesapeake Bay Program is a major regional focus. Almost 25% of the “Ikes” live in the region of this estuary, and public education, awards, and local conservation projects help protect Chesapeake Bay. In addition, the Soil Conservation Program focuses on combating soil erosion and groundwater pollution, and the Public Lands Restoration Task Force works out of its headquarters in Portland, Oregon, to strike a balance in the West between preserving forests and the demand for their natural resources. IWLA makes its causes known through a variety of publications. Splash, a product of SOS, enlightens the public as to how to protect streams in America. Outdoor Ethics, a newsletter from the program of the same name, educates recreationists in responsible practices of hunting, boating, and other outdoor activities. The League also publishes a membership magazine, Outdoor America, and the League Leader, a vehicle of information for IWLA’s 2,000 chapter and division officers. IWLA has also produced the longest-running weekly environmental program on television. Entitled Make Peace with Nature, the program has aired on PBS for almost 20 years and presents stories of environmental interest. Having expanded its scope from water to the general environment, IWLA has become a vital force in the national conservation movement. Through its many and varied programs, the League continues to promote constructive and active involvement in environmental problems. [Andrea Gacki]

RESOURCES
ORGANIZATIONS
Izaak Walton League of America, 707 Conservation Lane, Gaithersburg, MD USA 20878. (301) 548-0150, Fax: (301) 548-0146, Toll Free: (800) IKE-LINE, Email: [email protected]



J

Wes Jackson (1936– )
American environmentalist and writer

Wes Jackson is a plant geneticist, writer, and co-founder, with his wife Dana Jackson, of the Land Institute in Salina, Kansas. He is one of the leading critics of conventional agricultural practices, which in his view are depleting topsoil, reducing genetic diversity, and destroying small family farms and rural communities. Jackson is also critical of the culture that provides the pretext and the context within which this destruction occurs and is justified as “necessary,” “efficient,” and “economical.” He contrasts a culture or mind-set that emphasizes humanity’s mastery or dominion over nature with an alternative vision that takes “nature as the measure” of human activity. The former viewpoint can produce temporary triumphs but not long-lasting or sustainable livelihood; only the latter holds out the hope that humans can live with nature, on nature’s terms. Jackson was born in 1936 in Topeka, Kansas, the son of a farmer. Young and restless, Jackson held various jobs—welder, farm hand, ranch hand, teacher—before devoting his time to the study of agricultural practices in the United States and abroad. He attended Kansas Wesleyan University, the University of Kansas, and North Carolina State University, where he earned his doctorate in plant genetics in 1967. According to Jackson, agriculture as we know it is unnatural, artificial, and, by geological time-scales, of relatively recent origin. It requires plowing, which leads to loss of topsoil, which in turn reduces and finally destroys fertility. Large-scale “industrial” agriculture also requires large investments, complex and expensive machinery, fertilizers, pesticides, and herbicides, and leads to a loss of genetic diversity, to soil erosion and compaction, and other negative consequences.
It is also predicated on the planting and harvesting of annual crops—corn, wheat, soybeans—that leave fields uncovered for long periods and thus leave precious topsoil unprotected and vulnerable to erosion by wind and water. For every bushel of corn harvested, a bushel of topsoil is lost. Jackson estimates that America has lost between one-third and one-half of its topsoil since the arrival of the first European settlers.

At the Land Institute, Jackson and his associates are attempting to re-think and revise agricultural practices so as to “make nature the measure” and enable farmers to “meet the expectations of the land,” rather than the other way around. In particular, they are returning to, and attempting to learn from, the native prairie plants and the ecosystems that sustain them. They are also exploring the feasibility of alternative farming methods that might minimize or even eliminate entirely the planting and harvesting of annual crops, favoring instead the use of perennials that protect and bind topsoil. Jackson’s emphasis is not exclusively scientific or technical. Like his long-time friend Wendell Berry, Jackson emphasizes the culture in agriculture. Why humans grow food is not at all mysterious or problematic: we must eat in order to live. But how we choose to plant, grow, harvest, distribute, and consume food is clearly a cultural and moral matter having to do with our attitudes and beliefs. Our contemporary consumer culture is out of kilter, Jackson contends, in various ways. For one, the economic emphasis on minimizing costs and maximizing yields ignores longer-term environmental costs that come with the depletion of topsoil, the diminution of genetic diversity, and the depopulation of rural communities. For another, most Americans have lost (and many have never had) a sense of connectedness with the land and the natural environment; Jackson contends that they are unaware of the mysteries and wonder of birth, death and rebirth, and of cycles and seasons that are mainstays of a meaningful human life.
To restore this sense of mystery and meaning requires what Jackson calls homecoming and “the resettlement of America,” and “becoming native to this place.” More Americans need to return to the land, to repopulate rural communities, and to re-learn the wealth of skills that we have lost or forgotten or never acquired. Such skills are more than matters of method or technique; they also have to do with ways of relating to nature and to each other. Jackson has received several awards, such as the Pew Conservation Scholars award (1990) and a MacArthur Fellowship (1992). Wes Jackson has been called, by critics and admirers alike, a radical and a visionary. Both labels appear to apply. For Jackson’s vision is indeed radical, in the original sense of the term (from the Latin radix, or root). It is a vision not only of “new roots for agriculture” but of new and deeper roots for human relationships and communities that, like protected prairie topsoil, will not easily erode. [Terence Ball]

RESOURCES
BOOKS
Berry, W. “New Roots for Agricultural Research.” In The Gift of Good Land. San Francisco: North Point Press, 1981.
Eisenberg, E. New Roots for Agriculture. San Francisco: Friends of the Earth, 1980.
Jackson, Wes. Altars of Unhewn Stone. San Francisco: North Point Press, 1987.
———. Becoming Native to this Place. Lexington, KY: University Press of Kentucky, 1994.
———, W. Berry, and B. Coleman, eds. Meeting the Expectations of the Land. San Francisco: North Point Press, 1984.

PERIODICALS
Eisenberg, E. “Back to Eden.” The Atlantic (October 1989): 57–89.

James Bay hydropower project

James Bay forms the southern tip of the much larger Hudson Bay in Quebec, Canada. To the east lies the Quebec-Labrador peninsula, an undeveloped area with vast expanses of pristine wilderness. The region is similar to Siberia, covered in tundra and sparse forests of black spruce and other evergreens. It is home to roughly 100 species of birds, 20 species of fish, and dozens of mammals, including muskrat, lynx, black bear, red fox, and the world’s largest herd of caribou. The area has also been home to the Cree and other Native tribes for centuries. Seven rivers drain the wet, rocky region, the largest being the La Grande. In the 1970s, the government-owned Hydro-Quebec electric utility began to divert these rivers, flooding 3,861 square miles (10,000 km²) of land. It built a series of reservoirs, dams, and dikes on the La Grande that generated 10,300 megawatts of power for homes and businesses in Quebec, New York, and New England. With its $16 billion price tag, the project is one of the world’s largest energy projects. The complex generates a total of 15,000 megawatts. A second phase of the project added two more hydroelectric complexes, supplying another 12,000 megawatts of power, the equivalent of more than thirty-five nuclear power plants.

But the project has had many opponents. The Cree and Inuit joined forces with American environmentalists to protest the project. Its environmental impact has had scant analysis; in fact, damage has been severe. Ten thousand caribou drowned in 1984 while crossing one of the newly dammed rivers on their migration route. When the utility flooded land, it destroyed habitat for countless plants and animals. The graves of Cree Indians, who for millennia had hunted, traveled, and lived along the rivers, were inundated. The project also altered the ecology of the James and Hudson bays, disrupting spawning cycles, nutrient systems, and other important maritime resources. Naturally occurring mercury in rocks and soil is released as the land is flooded and accumulates as it passes through the food chain from microscopic organisms, to fish, to humans. A majority of the native people in villages where fish are a main part of the diet show symptoms of mercury poisoning. Despite these problems, Hydro-Quebec pursued the project, partly because of Quebec’s long-standing struggle for independence from Canada. The power is sold to corporate customers, providing income for the province and attracting industry to Quebec. The Cree and environmentalists, joined by New York congressmen, took their fight to court. On Earth Day 1993, they filed suit against the New York Power Authority in United States District Court in New York, challenging the legality of the agreement, which was to go into effect in 1999. Their claim was based on the United States Constitution and the 1916 Migratory Bird Treaty with Canada. In February 2002, nearly 70 percent of Quebec’s James Bay Cree Indians endorsed a $2.25 billion deal with the Quebec government for hydropower development on their land. Approval for the deal ranged from a low of 50 percent to a high of 83 percent among the nine communities involved.
Some Cree spokespersons considered the agreement a vindication of the long campaign, waged since 1975, to have Cree rights respected. Under the deal, the James Bay Cree would receive $16 million in 2002, $30.7 million in 2003, then $46.5 million a year for 48 years. In return, the Cree would drop environmental lawsuits totaling $2.4 billion. The Cree also agreed to hydroelectric plants along the Eastmain River and Rupert River, subject to environmental approval. The deal guarantees the Cree jobs with the hydroelectric authority and gives them more control over logging and other areas of their economy. See also Environmental law; Hetch Hetchy Reservoir; Nuclear energy [Bill Asenjo Ph.D.]

Environmental Encyclopedia 3 RESOURCES BOOKS McCutcheon, S. Electric Rivers: The Story of the James Bay Project. Montreal: Black Rose Books, 1991.

PERIODICALS Associated Press. “James Bay Cree Approve Deal with Quebec on Hydropower Development.” February 05, 2002. Picard, A. “James Bay II.” Amicus Journal 12 (Fall 1990): 10–16.

Japanese logging

In recent decades the timber industry has intensified efforts to harvest logs from tropical, temperate, and boreal forests worldwide to meet an increasing demand for wood and wood products. Japanese companies have been particularly active in logging and importing timber from around the world. Because of wasteful and destructive logging practices that result from efforts to maximize corporate financial gains, those interested in reducing deforestation have raised many concerns about Japan’s logging industry. The world’s forests, especially tropical rain forests, are rich in species, including plants, insects, birds, reptiles, and mammals. Many of these species exist only in very limited areas where conditions are suitable for their existence. These endangered forest dwellers provide unique and irreplaceable genetic material that can contribute to the betterment of domestic plants and animals. The forests are a valuable resource for medically useful drugs. Healthy forests stabilize watersheds by absorbing rainfall and retarding runoff. Matted roots help control soil erosion, preventing the silting of waterways and damage to reefs, fisheries, and spawning grounds. The United Nations Food and Agriculture Organization reports tropical deforestation rates of 42 million acres (over 17 million ha) per year. More than half of the Earth’s primary tropical forest area has vanished, and more than half of the remaining forest has been degraded. While Brazil contains about a third of the world’s remaining tropical rain forest, Southeast Asia is now a major supplier of tropical woods. Burma, Thailand, Laos, Vietnam, Kampuchea, Malaysia, Indonesia, Borneo, New Guinea, and the Philippines contain 20% of the world’s remaining tropical forests. At current rates of deforestation, it is estimated that almost all of Southeast Asia’s primary forests will be gone by the year 2010.
While a number of countries make use of rain forest wood, Japan is the number one importer of tropical timber. Japan’s imports account for about 30% of the world trade in tropical lumber. Japan also imports large amounts of timber from temperate and boreal forests in Canada, Russia, and the United States. These three countries contain most of the remaining
boreal forests, and they supply more than half of the world’s industrial wood. As demand for timber continues to climb, previously undisturbed virgin forests are increasingly being used. To speed harvesting, logging roads are built to provide access, and heavy equipment is brought in to hasten work. In the process, soil is compacted, making plant regrowth difficult or impossible. Although these practices are not limited to one country, Japanese firms have been cited by environmentalists as particularly insensitive to the environmental impact of logging. The globe’s forests are sometimes referred to as the “lungs of the planet,” exchanging carbon dioxide for oxygen. Critics claim that the wood-harvesting industry is destroying this vital natural resource, and in the process endangering the planet’s ability to nurture and sustain life. Widespread destruction of the world’s forests is a growing concern. Large Japanese companies, and companies affiliated with Japanese firms, have logged old-growth forests in several parts of the globe to supply timber for the Japanese forest products industry. Clear-cutting of trees over large areas in tropical rain forests has made preservation of the original flora and fauna impossible. Many species are becoming extinct. Replanting may in time restore the trees, but it will not restore the array of organisms that were present in the original forest. Large-scale logging activities have had a largely negative impact on the local economies in exporting regions because whole logs are shipped to Japan for further processing. Developed countries such as the United States and Canada, which in the past harvested timber and processed it into lumber and other products, have lost jobs to Japan. Indigenous cultures that have thrived in harmony with their forest homelands for eons are displaced and destroyed.
Provision has not been made for the survival of local flora and fauna, and provision for forest re-establishment has thus far proven inadequate. As resentment has grown in impacted areas, and among environmentalists, efforts have emerged to limit or stop large-scale timber harvesting and exporting. Although concern has been voiced over all large-scale logging operations, special concern has been raised over harvesting of tropical timber from previously undisturbed primary forest areas. Tropical rain forests are especially unique and valuable natural resources for many reasons, including the density and variety of species within their borders. The exploitation of these unique ecosystems will result in the extinction of many potentially valuable species of plants and animals that exist nowhere else. Many of these forms of life have not yet been named or scientifically studied. In addition, over-harvesting of tropical rain forests has a negative effect on weather patterns, especially by reducing rainfall.


Japan is a major importer of tropical timber from Malaysia, New Guinea, and the Solomon Islands. Although the number of imported logs has declined in recent years, this has been matched by an increase in imported tropical plywood manufactured in Indonesia and Malaysia. As a result, the total amount of timber removed has remained fairly constant. An environmental group called the Rainforest Action Network (RAN) has raised an alarm concerning recent expansion of logging activity by firms affiliated with Japanese importers. The RAN alleges that: “After laying waste to the rain forests of Asia and the Pacific islands, giant Malaysian logging companies are setting their sights on the Amazon. This past year, some of Southeast Asia’s biggest forestry conglomerates have moved into Brazil, and are buying controlling interests in area logging companies, and purchasing rights to cut down vast rain forest territories for as little as $3 U.S. dollars per acre. In the last few months of 1996 these companies quadrupled their South American interests, and now threaten 15% of the Amazon with immediate logging. According to The Wall Street Journal, up to 30 million acres (12.3 million ha) are at stake. Major players include the WTK Group, Samling, Mingo, and Rimbunan Hijau.” The RAN claims that “the same timber companies in Sarawak, Malaysia, worked with such rapacious speed that they devastated the region’s forest within a decade, displacing traditional peoples and leaving the landscape marred with silted rivers and eroded soil.” One large Japanese firm, the Mitsubishi Corporation, has been targeted for criticism and boycott by the RAN, as one of the world’s largest importers of timber. The boycott is an effort to encourage environmentally conscious consumers to stop buying products marketed by companies affiliated with the huge conglomerate, including automobiles, cameras, beer, cell phones, and consumer electronics equipment.
Through its subsidiaries, Mitsubishi has logged or imported timber from the Philippines, Malaysia, Papua New Guinea, Bolivia, Indonesia, Brazil, Chile, Canada (British Columbia and Alberta), Siberia, and the United States (Alaska, Oregon, Washington, and Texas). The RAN charges that “Mitsubishi Corporation is one of the most voracious destroyers of the world’s rain forests. Its timber purchases have laid waste to forests in the Philippines, Malaysia, Papua New Guinea, Indonesia, Brazil, Bolivia, Australia, New Zealand, Siberia, Canada, and even the United States.” The Mitsubishi Corporation itself does not sell consumer products, but it consists of 190 interlinked companies and hundreds of associated firms that do market to consumers. This conglomerate forms one of the world’s largest industrial and financial powers. The Mitsubishi umbrella includes Mitsubishi Bank, Mitsubishi Heavy Industries, Mitsubishi Electronics, Mitsubishi Motors, and other major components. To force Mitsubishi and other corporations involved with timber harvesting to operate in a more environmentally responsible way and to end “their destructive logging and trading practices,” an international boycott was organized in 1990 by the World Rainforest Movement (tropical forests) and the Taiga Rescue Network (boreal forests). The Mitsubishi Corporation has countered criticism by launching a program “to promote the regeneration of rain forests...in Malaysia that plants seedlings and monitors their development.” In 1990, the corporation formed an Environmental Affairs Department, one of the first of its kind in Japan, to draft environmental guidelines and coordinate corporate environmental activities. In the words of the Mitsubishi Corporation Chairman, “A business cannot continue to exist without the trust and respect of society for its environmental performance.” Mitsubishi Corporation reports that it has launched a program to support experimental reforestation projects in Malaysia, Brazil, and Chile. In Malaysia, the company is working with a local agricultural university, under the guidance of a professor from Japan. About 300,000 seedlings were planted on a barren site in 1991. Within five years, the trees were over 33 feet (10 m) in height, and the corporation claimed that it was “well on the way to establishing techniques for regenerating tropical forest on burnt or barren land using indigenous species.” Similar projects are underway in Brazil and Chile. The company is also conducting research on sustainable management of the Amazon rain forests. In Canada, Mitsubishi Corporation has participated in a pulp project called Al-Pac to start a mill “which will supply customers in North America, Europe, and Asia,” meeting “the strictest environmental standards by employing advanced, environmentally safe technology. Al-Pac harvests around 0.25% of its total area annually and all harvested areas will be reforested.” [Bill Asenjo Ph.D.]

RESOURCES BOOKS Marx, M. J. The Mitsubishi Campaign: First Year Report. San Francisco: Rainforest Action Network, 1993. Mitsubishi Corporation Annual Report 1996. Tokyo: Mitsubishi Corporation, 1996. Wakker, E. “Mitsubishi’s Unsustainable Timber Trade: Sarawak.” In Restoration of Tropical Forest Ecosystems. L. and M. Lohmann, eds. Netherlands: Kluwer Academic Publishers, 1993.

PERIODICALS Marshall, G. “The Political Economy of Logging: The Barnett Inquiry into Corruption in the Papua New Guinea Timber Industry.” The Ecologist 20, no. 5 (1990). Neff, R., and W. J. Holstein. “Mitsubishi Is on the Move.” Business Week, September 24, 1990. World Rainforest Report XII, no. 4 (October–December 1995). San Francisco: Rainforest Action Network.

K

Kaibab Plateau The Kaibab Plateau, a wildlife refuge on the northern rim of the Grand Canyon, has come to symbolize wildlife management gone awry, a classic case of misguided human intervention intended to help wildlife that ended up damaging the animals and the environment. The Kaibab is located on the Colorado River in northwestern Arizona, and is bounded by steep cliffs dropping down to the Kanab Canyon to the west, and the Grand and Marble canyons to the south and southeast. Because of its inaccessibility, according to naturalist James B. Trefethen, the Plateau was considered a “biological island,” and its deer population “evolved in almost complete genetic isolation.” The lush grass meadows of the Kaibab Plateau supported a resident population of 3,000 mule deer (Odocoileus hemionus), renowned for their massive size and the huge antlers of the old bucks. Before the advent of Europeans, Paiute and Navajo Indians hunted on the Kaibab in the fall, stocking up on meat and skins for the winter. In the early 1900s, in an effort to protect and enhance the magnificent deer population of the Kaibab, the federal government prohibited all killing of deer, and even eliminated the predator population in the area. As a result, the deer population exploded, causing massive overbrowsing, starvation, and a drastic decline in the health and population of the herd. In 1893, when the Kaibab and surrounding lands were designated the Grand Canyon National Forest Reserve, hundreds of thousands of sheep, cattle, and horses were grazing on the Plateau, resulting in overgrazing, erosion, and large-scale damage to the land. On November 28, 1906, President Theodore Roosevelt established the one million-acre (400,000-ha) Grand Canyon National Game Preserve, which provided complete protection of the Kaibab’s deer population. 
By then, however, overgrazing by livestock had destroyed much of the native vegetation and changed the Kaibab considerably for the worse. Continued pasturing of over 16,000 horses and cattle degraded the Kaibab even further. The Forest Service carried out President Roosevelt’s directive to emphasize “the propagation and breeding” of the mule deer not only by banning hunting but also by eliminating natural predators. From 1906 to 1931, federal agents poisoned, shot, or trapped 4,889 coyotes (Canis latrans), 781 mountain lions (Puma concolor), 554 bobcats (Felis rufus), and 20 wolves (Canis lupus). Without predators to remove the old, the sick, the unwary, and other biologically unfit animals, and to keep the size of the herd in check, the deer herd began to grow out of control and to lose those qualities that made its members such unique and magnificent animals. After 1906, the deer population doubled within 10 breeding seasons, and then doubled again in only two more years, by 1918. By 1923, the herd had mushroomed to at least 30,000 deer, and perhaps as many as 100,000 according to some estimates. Unable to support the overpopulation of deer, range grasses and land greatly deteriorated, and by 1925, 10,000–15,000 deer were reported to have died from starvation and malnutrition. Finally, after relocation efforts mostly failed to move a significant number of deer off of the Kaibab, hunting was reinstated, and livestock grazing was strictly controlled. By 1931, hunting, disease, and starvation had reduced the herd to under 20,000. The range grasses and other vegetation returned, and the Kaibab began to recover. In 1975 James Trefethen wrote, “the Kaibab today again produces some of the largest and heaviest antlered mule deer in North America.” In the fields of wildlife management and biology, the lessons of the Kaibab Plateau are often cited (as in the writings of naturalist Aldo Leopold) to demonstrate the valuable role of predators in maintaining the balance of nature (such as between herbivores and the plants they consume) and survival of the fittest. 
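The herd’s trajectory described above (roughly 3,000 animals in 1906 growing to perhaps 30,000 by 1923) can be approximated by simple compound growth. A minimal sketch, using an assumed constant annual growth rate chosen only to connect those two endpoints, not a measured value:

```python
# Illustrative compound-growth sketch of the deer irruption.
# The 14.5%/yr rate is an assumption chosen to connect the narrative's
# endpoints (3,000 deer in 1906, ~30,000 by 1923); it is not field data.

def project(pop, annual_rate, years):
    """Project a population growing at a constant annual rate."""
    for _ in range(years):
        pop *= 1.0 + annual_rate
    return pop

herd_1906 = 3_000
rate = 0.145  # doubles the herd roughly every 5 years

herd_1923 = project(herd_1906, rate, 1923 - 1906)
print(round(herd_1923))  # on the order of the 30,000 deer estimated for 1923
```

The actual growth accelerated once predators were gone (doubling in ten breeding seasons, then in two years), so a constant rate is only a crude average of that runaway trend.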
The experience of the Kaibab shows that in the absence of natural predators, prey populations (especially ungulates) tend to increase beyond the carrying capacity of the land, and eventually the



results are overpopulation and malnutrition. See also Predator control; Predator-prey interactions [Lewis G. Regenstein]

RESOURCES BOOKS Leopold, A. A Sand County Almanac. New York: Oxford University Press, 1949. Trefethen, J. B. An American Crusade for Wildlife. New York: Winchester Press, 1975.

PERIODICALS Rasmussen, D. I. “Biotic Communities of Kaibab Plateau, Arizona.” Ecological Monographs 11, no. 3 (1941): 229–275.

Robert Francis Kennedy Jr. (1954 – ) American environmental lawyer Robert “Bobby” Kennedy Jr. had a controversial youth. Kennedy entered a drug rehabilitation program at the age of 28 after being found guilty of drug possession following an arrest in South Dakota. He was sentenced to two years probation and community service. Clearly the incident was a turning point in Bobby’s life. “Let’s just say, I had a tumultuous adolescence that lasted until I was 29,” he told a reporter for New York magazine, which ran a long profile of Kennedy in 1995, entitled “Nature Boy.” The title refers to the passion that has enabled Kennedy to emerge from his bleak years as a strong and vital participant in environmental causes. A Harvard graduate and published author, Kennedy serves as chief prosecuting attorney for a group called the Hudson Riverkeeper (named after the famed New York river) and senior attorney for the Natural Resources Defense Council. Kennedy, who earlier in his career served as assistant district attorney in New York City after passing the bar, is also a clinical professor and supervising attorney at the Environmental Litigation Clinic at Pace University School of Law in New York. While Kennedy appeared to be following in the family’s political footsteps, working, for example, on several political campaigns and serving as a state coordinator for his Uncle Ted’s 1980 presidential campaign, it is in environmental issues that Bobby Jr. has found his calling. He has worked on environmental issues across the Americas and has assisted several indigenous tribes in Latin America and Canada in successfully negotiating treaties protecting traditional homelands. He is also credited with leading the fight to protect New York City’s water supply, a battle which resulted in the New York City Watershed Agreement, regarded as an

international model for combining development and environmental concerns. Opportunity was always around the corner for a young, confident, and intelligent Kennedy. After Harvard, Bobby Jr. earned a law degree at the University of Virginia. In 1978, the subject of Kennedy’s Harvard thesis—a prominent Alabama judge—was nominated to head the FBI. A publisher offered Bobby money to expand his previous research into a book, published in 1978, called Judge Frank M. Johnson, Jr.: A Biography. Bobby did a publicity tour which included TV appearances, but the reviews were mixed. In 1982, Bobby married Emily Black, a Protestant who later converted to Catholicism. Two children followed: Robert Francis Kennedy III and Kathleen Alexandra, named for Bobby’s Aunt Kathleen, who died in a plane crash in 1948. The marriage, however, coincided with Bobby’s descent into drug addiction. In 1992, Bobby and Emily separated, and a divorce was obtained in the Dominican Republic. In 1994, Bobby married Mary Richardson, an architect, with whom he would have two more children. During this time Bobby emerged as a leading environmental activist and litigator. Kennedy is quoted as saying: “To me...this is a struggle of good and evil between short-term greed and a long-term vision of building communities that are dignified and enriching and that meet the obligations of future generations. There are two visions of America. One is that this is just a place where you make a pile for yourself and keep moving. And the other is that you put down roots and build communities that are examples to the rest of humanity.” Kennedy goes on: “The environment cannot be separated from the economy, housing, civil rights. How we distribute the goods of the earth is the best measure of our democracy. It’s not about advocating for fishes and birds. It’s about human rights.” RESOURCES BOOKS Young Kennedys: The New Generation. Avon, 1998.

OTHER Biography Resource Center Online. Biography Resource Center. Farmington Hills, MI: The Gale Group. 2002.

ORGANIZATIONS Riverkeeper, Inc., 25 Wing & Wing, Garrison, NY USA 10524-0130 (845) 424-4149, Fax: (845) 424-4150, Email: [email protected],

Kepone Kepone (C10Cl10O) is an organochlorine pesticide that was manufactured by the Allied Chemical Corporation in Virginia from the late 1940s to the 1970s. Kepone was responsible for human health problems and extensive contamination of the James River and its estuary in the Chesapeake Bay. The episode is a milestone in the development of a public environmental consciousness, and its history is considered by many to be a classic example of negligent corporate behavior and inadequate oversight by state and federal agencies. Kepone is an insecticide and fungicide that is closely related to other chlorinated pesticides such as DDT and aldrin. As with all such pesticides, Kepone causes lethal damage to the nervous systems of its target organisms. A poorly water-soluble substance, it can be absorbed through the skin, and it bioaccumulates in fatty tissues from which it is later released into the bloodstream. It is also a contact poison; when inhaled, absorbed, or ingested by humans, it can damage the central nervous system as well as the liver and kidneys. It can lead to neurological symptoms such as tremors and muscle spasms, and to sterility and cancer. Although the manufacture and use of Kepone is now banned by the Environmental Protection Agency (EPA), organochlorines have long half-lives, and these compounds, along with their residues and degradation products, can persist in the environment over many decades. Allied Chemical first opened a plant to manufacture nitrogen-based fertilizers in 1928 in the town of Hopewell, on the banks of the James River in Virginia. This plant began producing Kepone in 1949. Commercial production subsequently began, although a battery of toxicity tests indicated that Kepone was both toxic and carcinogenic and that it caused damage to the functioning of the nervous, muscular, and reproductive systems in fish, birds, and mammals. It was patented by Allied in 1952 and registered with federal agencies in 1957. 
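The persistence point above — long half-lives letting residues linger for decades — follows from first-order decay arithmetic. A hedged sketch; the 10-year half-life is a hypothetical figure for illustration only, since actual environmental half-lives for organochlorines vary widely with conditions:

```python
# Persistence sketch: first-order decay expressed via half-life.
# The 10-year half-life is assumed for illustration; it is not a
# measured environmental half-life for Kepone.

def fraction_remaining(years, half_life_years):
    """Fraction of an initial deposit remaining after first-order decay."""
    return 0.5 ** (years / half_life_years)

# Even after five half-lives (50 years here), about 3% still remains:
print(round(fraction_remaining(50, 10.0), 4))  # 0.0312
```

This is why a compound deposited in sediments in the 1970s can still be detectable, and biologically active, generations later.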
The demand for the pesticide grew after 1958, and Allied expanded production by entering into a variety of subcontracting agreements with a number of smaller companies, including the Life Science Products Company. In 1970, a series of new environmental regulations came into effect which should have changed the way wastes from the manufacture of Kepone were discharged. The Refuse Act Permit Program and the National Pollutant Discharge Elimination System (NPDES) of the Clean Water Act required all dischargers of effluents into United States waters to register their discharges and obtain permits from federal agencies. At the time these regulations went into effect, Allied Chemical had three pipes discharging Kepone and plastic wastes into the Gravelly Run, a tributary of the James River about 75 mi (120 km) upstream of Chesapeake Bay. A regional sewage treatment plant which would accept industrial wastes was then under construction but not scheduled for completion until 1975. Rather than installing expensive pollution control equipment for the interim period, Allied chose to delay. They adopted a strategy of misinformation, reporting the releases as temporary and unmonitored discharges, and they did not disclose the presence of untreated Kepone and other process wastes in the effluents. The Life Science Products Company also avoided the new federal permit requirements by discharging its wastes directly into the local Hopewell sewer system. These discharges caused problems with the functioning of the biological treatment systems at the sewage plant; the company was required to reduce concentrations of Kepone in its sewage, but it continued its discharges at high concentrations, violating these standards with the apparent knowledge of plant treatment officials. During this same period, an employee of Life Science Products visited a local Hopewell physician, complaining of tremors, weight loss, and general aches and pains. The physician discovered impaired liver and nervous functions, and a blood test revealed an astronomically high level of Kepone—7.5 parts per million. Federal and state officials were contacted, and the epidemiologist for the state of Virginia toured the manufacturing facility at Life Science Products. This official reported that “Kepone was everywhere in the plant,” and that workers wore no protective equipment and were “virtually swimming in the stuff.” Another investigation discovered 75 cases of Kepone poisoning among the workers; some members of their families were also found to have elevated concentrations of the chemical in their blood. Further investigations revealed that the environment around the plant was also heavily contaminated. The soil contained 10,000–20,000 ppm of Kepone. Sediments in the James River, as well as local landfills and trenches around the Allied facilities, were just as badly contaminated. Government agencies were forced to close 100 mi (161 km) of the James River and its tributaries to commercial and recreational fishing and shellfishing. 
In the middle of 1975, Life Science Products finally closed its manufacturing facility. It has been estimated that since 1966, it and Allied together produced 3.2 million lb (1.5 million kg) of Kepone and were responsible for releasing 100,000–200,000 lb (45,360–90,700 kg) into the environment. In 1976, federal prosecutors in the Eastern District of Virginia filed criminal charges against Allied, Life Science Products, the city of Hopewell, and six individuals on 1,097 counts relating to the production and disposal of Kepone. The indictments were based on violations of the permit regulations, unlawful discharge into the sewer systems, and conspiracy related to that discharge. The case went to trial without a jury. The corporations and the individuals named in the charges negotiated lighter fines and sentences by entering pleas of “no contest.” Allied ultimately paid a fine of 13.3 million dollars, although its annual sales reached three billion dollars. Life Science Products


was fined four million dollars, which it could not pay due to lack of assets. Company officers were fined 25,000 dollars each, and the town of Hopewell was fined 10,000 dollars. No one was sentenced to a jail term. Civil suits brought against Allied and the other defendants resulted in a settlement of 5.25 million dollars to pay for cleanup expenses and to repair the damage that had been done to the sewage treatment plant. Allied paid another three million dollars to settle civil suits brought by workers for damage to their health. Environmentalists and many others considered the results of legal action against the manufacturers of Kepone unsatisfactory. Some have argued that these results are typical of environmental litigation. It is difficult to establish criminal intent beyond a reasonable doubt in such cases, and even when guilt is determined, sentencing is relatively light. Corporations are rarely fined in amounts that affect their financial strength, and individual officers are almost never sent to jail. Corporate fines are generally passed along as costs to the consumer, and public bodies are treated even more lightly, since it is recognized that the fines levied on public agencies are paid by taxpayers. Today, the James River has been reopened to fishing for those species that are not prone to the bioaccumulation of Kepone. Nevertheless, sediments in the river and its estuary contain large amounts of deposited Kepone which is released during periods of turbulence. Scientists have published studies which document that Kepone is still moving through the food chain/web and the ecosystem in this area, and Kepone toxicity has been demonstrated in a variety of invertebrate test species. There are still deposits of Kepone in the local sewer pipes in Hopewell; these continue to release the chemical, endangering treatment plant operations and polluting receiving waters. [Usha Vedagiri and Douglas Smith]

RESOURCES BOOKS Goldfarb, W. Kepone: A Case Study. New Brunswick, NJ: Rutgers University, 1977. Sax, N. I. Dangerous Properties of Industrial Materials. 6th ed. New York: Van Nostrand Reinhold, 1984.

Kesterson National Wildlife Refuge One of a dwindling number of freshwater marshes in California’s San Joaquin Valley, Kesterson National Wildlife Refuge achieved national notoriety in 1983 when refuge managers discovered that agricultural runoff was poisoning the area’s birds. Among other elements and agricultural chemicals reaching toxic concentrations in the wetlands,


Breeding populations of American coots were affected by selenium poisoning at Kesterson National Wildlife Refuge. (Photograph by Leonard Lee Rue III. Visuals Unlimited. Reproduced by permission.)

the naturally occurring element selenium was identified as the cause of falling fertility and severe birth defects in the refuge’s breeding populations of stilts, grebes, shovelers, coots, and other aquatic birds. Selenium, lead, boron, chromium, molybdenum, and numerous other contaminants were accumulating in refuge waters because the refuge had become an evaporation pond for tainted water draining from the region’s fields. The soils of the arid San Joaquin Valley are the source of Kesterson’s problems. The flat valley floor is composed of ancient sea bed sediments that contain high levels of trace elements, heavy metals, and salts. But with generous applications of water, this sun-baked soil provides an excellent medium for food production. Perforated pipes buried in the fields drain away excess water—and with it dissolved salts and trace elements—after flood irrigation. An extensive system of underground piping, known as tile drainage, carries wastewater into a network of canals that lead to Kesterson Refuge, an artificial basin constructed by the Bureau of Reclamation to store irrigation runoff from central California’s heavily watered agriculture. Originally, a final drainage canal from Kesterson to San Francisco Bay was planned, but because an outfall point was never agreed upon, contaminated drainage water remained trapped in Kesterson’s 12 shallow ponds. In small doses, selenium and other trace elements are not harmful and can even be dietary necessities. But steady evaporation in the refuge gradually concentrated these contaminants to dangerous levels. Wetlands in California’s San Joaquin Valley were once numerous, supporting huge populations of breeding and migrating birds. In the past half century, drainage and the development of agricultural fields have nearly depleted the area’s marshes. The new ponds and cattail marshes at Kesterson presented a rare opportunity to extend breeding habitat, and the area was declared a national wildlife refuge in 1972, one year after the basins were constructed. Eleven years later, in the spring of 1983, observers discovered that a shocking 60% of Kesterson’s nestlings were grotesquely deformed. High concentrations of selenium were found in their tissues, an inheritance from parent birds who ate algae, plants, and insects—all tainted with selenium—in the marsh. Following extensive public outcry, the local water management district agreed to try to protect the birds. Alternate drainage routes were established, and by 1987 much of the most contaminated drainage had been diverted from the wildlife refuge. The California Water Resources Control Board ordered the Bureau of Reclamation to drain the ponds and clean out contaminated sediments, at a cost of well over $50 million. However, these contaminants, especially in such large volumes and high concentrations, are difficult to contain, and similar problems could quickly emerge again. Furthermore, these problems are widespread. Selenium poisoning from irrigation runoff has been discovered in at least nine other national wildlife refuges, all in the arid West, since it appeared at Kesterson. Researchers continue to work on affordable and effective responses to such contamination in wetlands, an increasingly rare habitat in this country.
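The concentrating effect of evaporation described above is simple mass balance: the dissolved selenium stays behind while the water leaves, so concentration varies inversely with the remaining volume. A hedged sketch with invented numbers, not Kesterson measurements:

```python
# Mass-balance sketch: solute mass is conserved while water evaporates,
# so concentration rises as the remaining volume shrinks.
# The 300 ppb inflow figure is invented purely for illustration.

def concentration_after_evaporation(c0, fraction_evaporated):
    """Concentration after a given fraction of the water volume evaporates."""
    remaining_fraction = 1.0 - fraction_evaporated
    return c0 / remaining_fraction

inflow_selenium_ppb = 300.0
print(round(concentration_after_evaporation(inflow_selenium_ppb, 0.90)))  # 3000
```

Losing 90% of the water multiplies the concentration tenfold, which is why an evaporation basin with no outlet steadily accumulates contaminants even from modestly tainted inflow.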

[Mary Ann Cunningham Ph.D.]

RESOURCES BOOKS Harris, T. Death in the Marsh. Washington, DC: Island Press, 1991.

PERIODICALS Claus, K. E. “Kesterson: An Unsolvable Problem?” Environment 89 (1987): 4–5. Harris, T. “The Kesterson Syndrome.” Amicus Journal 11 (Fall 1989): 4–9. Marshall, E. “Selenium in Western Wildlife Refuges.” Science 231 (1986): 111–12. Tanji, K., A. Läuchli, and J. Meyer. “Selenium in the San Joaquin Valley.” Environment 88 (1986): 6–11.

ORGANIZATIONS Kesterson National Wildlife Refuge, c/o San Luis NWR Complex, 340 I Street, P.O. Box 2176, Los Banos, CA USA 93635 (209) 826-3508,

Ketones Ketones belong to a class of organic compounds known as carbonyls. They contain a carbon atom linked to an oxygen atom with a double bond (C=O). Acetone (dimethyl ketone) is a ketone commonly used in industrial applications. Other ketones include methyl ethyl ketone (MEK), methyl isobutyl ketone (MIBK), methyl amyl ketone (MAK), isophorone, and diacetone alcohol. As solvents, ketones have the ability to dissolve other materials or substances, particularly polymers and adhesives. They are ingredients in lacquers, epoxies, polyurethane, nail polish remover, degreasers, and cleaning solvents. Ketones are also used in industry for the manufacture of plastics and composites and in pharmaceutical and photographic film manufacturing. Because they have high evaporation rates and dry quickly, they are sometimes employed in drying applications. Some types of ketones used in industry, such as methyl isobutyl ketone and methyl ethyl ketone, are considered both hazardous air pollutants (HAP) and volatile organic compounds (VOC) by the EPA. As such, the Clean Air Act regulates their use. In addition to these industrial sources, ketones are released into the atmosphere in cigarette smoke and car and truck exhaust. More “natural” environmental sources such as forest fires and volcanoes also emit ketones. Acetone, in particular, is readily produced in the atmosphere during the oxidation of organic pollutants or natural emissions. Ketones (in the form of acetone, beta-hydroxybutyric acid, and acetoacetic acid) also occur in the human body as a byproduct of the metabolism, or breakdown, of fat. [Paula Anne Ford-Martin]

RESOURCES PERIODICALS Wood, Andrew. “Cleaner Ketone Oxidation.” Chemical Week (Aug 1, 2001).

OTHER U.S. National Library of Medicine. Hazardous Substances Data Bank. [cited May 2002].

ORGANIZATIONS American Chemical Society, 1155 Sixteenth St. NW, Washington, D.C. USA 20036 (202) 872-4600, Fax: (202) 872-4615, Toll Free: (800) 227-5558, Email: [email protected],

Keystone species Keystone species have a major influence on the structure of their ecological community. The profound influence of keystone species occurs because of their position and activity


within the food chain/web. In the sense meant here, a “major influence” means that removal of a keystone species would result in a large change in the abundance, and even the local extirpation, of one or more species in the community. This would fundamentally change the structure of the overall community in terms of species composition, productivity, and other characteristics. Such changes would have substantial effects on all of the species that are present, and could allow new species to invade the community. The original use of the word “keystone” was in architecture. An architectural keystone is a wedge-shaped stone that is strategically located at the summit of an arch. The keystone serves to lock all other elements of the arch together, and it thereby gives the entire structure mechanical integrity. Keystone species play an analogous role in giving structure to the “architecture” of their ecological community. The concept of keystone species was first applied to the role of certain predators (i.e., keystone predators) in their community. More recently, however, the term has been extended to refer to other so-called “strong interactors.” This has been particularly true of keystone herbivores that have a relatively great influence on the species composition and relative abundance of plants in their community. Keystone species directly exert their influence on the populations of species that they feed upon, but they also have indirect effects on species lower in the food web. Consider, for example, a hypothetical case of a keystone predator that regulates the population of a herbivore. This effect will also, of course, indirectly influence the abundance of plant species that the herbivore feeds upon. Moreover, by affecting the competitive relationships among the various species of plants in the community, the abundance of plants that the herbivore does not eat will also be indirectly affected by the keystone predator. 
Although keystone species exert their greatest influence on species with which they are most closely linked through feeding relationships, their influences can ramify throughout the food web. Ecologists have documented the presence of keystone species in many types of communities. The phenomenon does not, however, appear to be universal, in that keystone species have not been identified in many ecosystems.

Predators as keystone species

The term “keystone species” was originally used by the American ecologist Robert Paine to refer to the critical influence of certain predators. His original usage of the concept was in reference to rocky intertidal communities of western North America, in which the predatory starfish Pisaster ochraceus prevents the mussel Mytilus californianus from monopolizing the available space on rocky habitats and thereby eliminating other, less-competitive herbivores and even seaweeds from the community. By feeding on mussels, which are the dominant competitor among the herbivores

in the community, the starfish prevents these shellfish from achieving the dominance that would otherwise be possible. This permits the development of a community that is much richer in species than would occur in the absence of the predatory starfish. Paine demonstrated the keystone role of the starfish by conducting experiments in which the predator was excluded from small areas using cages. When this was done, the mussels quickly became strongly dominant in the community and eliminated virtually all other species of herbivores. Paine also showed that once mussels reached a certain size they were safe from predation by the starfish. This prevented the predator from eliminating the mussel from the community. Sea otters (Enhydra lutris) of the west coast of North America are another example of a keystone predator. This species feeds heavily on sea urchins when these invertebrates are available. By greatly reducing the abundance of sea urchins, the sea otters prevent these herbivores from overgrazing kelps and other seaweeds in subtidal habitats. Therefore, when sea otters are abundant, urchins are not, and this allows luxurious kelp “forests” to develop. In the absence of otters, the high urchin populations can keep the kelp populations low, and the habitat then may develop as a rocky “barren ground.” Because sea otters were trapped very intensively for their fur during the eighteenth and nineteenth centuries, they were extirpated over much of their natural range. In fact, the species had been considered extinct until the 1930s, when small populations were “discovered” off the coast of California and in the Aleutian Islands of Alaska. Thanks to effective protection from trapping, and deliberate reintroductions to some areas, populations of sea otters have now recovered over much of their original range. This has resulted in a natural depletion of urchin populations, and a widespread increase in the area of kelp forests. 
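The otter–urchin–kelp cascade described above can be caricatured with a toy three-level model. Everything here is an assumption: the equations and rates are invented solely to show how removing a top predator releases the grazer and suppresses the plants, so this is a qualitative sketch, not a model of any real kelp forest:

```python
# Toy three-level food chain (kelp -> urchin grazer -> otter).
# All rates are invented to make the trophic cascade visible; they are
# not fitted to any field data.

def mean_kelp(otters_present, steps=20_000, dt=0.01):
    """Average kelp biomass over the second half of a long simulation."""
    kelp, urchin = 1.0, 1.0
    otter = 1.0 if otters_present else 0.0
    total, count = 0.0, 0
    for step in range(steps):
        growth = kelp * (1.0 - kelp)       # logistic kelp growth
        grazing = 0.8 * kelp * urchin      # urchins eat kelp
        predation = 0.8 * urchin * otter   # otters eat urchins
        kelp += dt * (growth - grazing)
        urchin += dt * (0.5 * grazing - 0.1 * urchin - predation)
        if step >= steps // 2:             # average after transients fade
            total += kelp
            count += 1
    return total / count

# Kelp "forest" persists only when the keystone predator suppresses grazing:
print(mean_kelp(otters_present=True) > mean_kelp(otters_present=False))  # True
```

With the predator present the grazer declines and kelp approaches its carrying capacity; without it, grazing holds kelp far below that level — the signature of a keystone predator.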
Herbivores as keystone species
Some herbivorous animals have also been demonstrated to have a strong influence on the structure and productivity of their ecological community. One such example is the spruce budworm (Choristoneura fumiferana), a moth that occasionally irrupts in abundance and becomes an important pest of conifer forests in the northeastern United States and eastern Canada. The habitat of spruce budworm is mature forests dominated by balsam fir (Abies balsamea), white spruce (Picea glauca), and red spruce (P. rubens). This native species of moth is always present in at least small populations, but it sometimes reaches very high populations, which are known as irruptions. When budworm populations are high, many species of forest birds and small mammals occur in relatively large populations that subsist by feeding heavily on larvae of the moth. However, during irruptions of budworm most of the fir and spruce foliage is eaten by the abundant larvae, and after this happens for several years
many of the trees die. Because of the damage caused to mature trees in the forest, the budworm epidemic collapses, and then a successional recovery begins. The plant communities of early succession contain many species of plants that are uncommon in mature conifer forests. Eventually, however, another mature conifer forest redevelops, and the cycle is primed for the occurrence of another irruption of the budworm. Clearly, spruce budworm is a good example of a keystone herbivore, because it has such a great influence on the populations of plant species in its habitat, and also on the many animal species that are predators of the budworm. Another example of a keystone herbivore concerns snow geese (Chen caerulescens) in salt marshes of western Hudson Bay. In the absence of grazing by flocks of snow geese this ecosystem would become extensively dominated by several competitively superior species, such as the saltmarsh grass Puccinellia phryganodes and the sedge Carex subspathacea. However, vigorous feeding by the geese creates bare patches of up to several square meters in area, which can then be colonized by other species of plants. The patchy disturbance regime associated with goose grazing results in the development of a relatively complex community, which supports more species of plants than would otherwise be possible. In addition, by manuring the community with their droppings, the geese help to maintain higher rates of plant productivity than might otherwise occur. In recent years, however, large populations of snow geese have caused severe damage to the salt-marsh habitat by over-grazing. This has resulted in the development of salt-marsh “barrens” in some places, which may take years to recover.
Plants as keystone species
Some ecologists have also extended the idea of keystone species to refer to plant species that are extremely influential in their community.
For example, sugar maple (Acer saccharum) is a competitively superior species that often strongly dominates stands of forest in eastern North America. Under these conditions most of the community-level productivity is contributed by sugar-maple trees. In addition, most of the seedlings and saplings are of sugar maple. This is because few seedlings of other species of trees are able to tolerate the stressful conditions beneath a closed sugar-maple canopy. Other ecologists prefer not to use the idea of keystone species to refer to plants that, because of their competitive abilities, are strongly dominant in their community. Instead, these are sometimes referred to as “foundation-stone species.” This term reflects the facts that strongly dominant plants contribute the great bulk of the biomass and productivity of their community, and that they support almost all herbivores, predators, and detritivores that are present. [Bill Freedman Ph.D.]

RESOURCES BOOKS Begon, M., J. L. Harper, and C. R. Townsend. Ecology: Individuals, Populations and Communities. 3rd ed. London: Blackwell Scientific Publications, 1996. Krebs, C. J. Ecology: The Experimental Analysis of Distribution and Abundance. San Francisco: Harper and Row, 1985. Ricklefs, R. E. Ecology. New York: W. H. Freeman and Co., 1990.

PERIODICALS Paine, R. T. “Intertidal Community Structure: Experimental Studies of the Relationship Between A Dominant Competitor and Its Principal Predator.” Oecologia 15 (1974): 93–120.

Killer bees see Africanized bees

Kirtland’s warbler
Kirtland’s warbler (Dendroica kirtlandii) is an endangered species and one of the rarest members of the North American wood warbler family. Its entire breeding range is limited to a seven-county area of north-central Michigan. The restricted distribution of the Kirtland’s warbler and its specific niche requirements have probably contributed to low population levels throughout its existence, but human activity has had a large impact on its numbers over the past hundred years. The first specimen of Kirtland’s warbler was taken by Samuel Cabot in October 1841, aboard ship in the West Indies during an expedition to the Yucatan. But this specimen went unnoticed until 1865, long after the species had been formally described. Charles Pease is credited with discovering Kirtland’s warbler. He collected a specimen on May 13, 1851, near Cleveland, Ohio, and gave it to his father-in-law, Dr. Jared P. Kirtland, a renowned naturalist. Kirtland sent the specimen to his friend, ornithologist Spencer Fullerton Baird, who described the new species the following year and named it in honor of the naturalist. The wintering grounds of Kirtland’s warbler are the Bahamas, a fact which was well established by the turn of the century, but its nesting grounds went undiscovered until 1903, when Norman Wood found the first nest in Oscoda County, Michigan. Every nest found since then has been within a 60-mi (95-km) radius of this spot. In 1951 the first exhaustive census of singing males was undertaken in an effort to establish the range of Kirtland’s warbler as well as its population level. Assuming that numbers of males and females are approximately equal and that a singing male is defending an active nesting site, the total of 432 in this census indicated a population of 864 birds. Ten years later another census counted 502 singing males, indicating the population was over 1,000 birds. In 1971,
annual counts began, and for the next 20 years these counts revealed that the population had dropped significantly, reaching lows of 167 singing males in 1974 and 1987. In the early 1990s, conservation efforts on behalf of the species began to bear fruit and the population began to recover. By 2001 the annual census counted 1,085 singing males, indicating a total population of over 2,000 birds. The first problem facing this endangered species centers on its specialized nesting and habitat requirements. The Kirtland’s warbler nests on the ground, and its reproductive success is tied closely to its selection of young jack pine trees as nesting sites. When the jack pines are 5–20 ft (1.5–6 m) tall, at an age of 8–20 years, their lower branches are at ground level and provide the cover this warbler needs. The life cycle of the pine, however, is dependent on forest fires, as intense heat is needed to open the cones for seed release. The advent of fire protection in forest management reduced the number of young trees the warblers needed, and the population suffered. Once this relationship was fully understood, jack pine stands were managed for Kirtland’s warbler, as well as for commercial harvest, by instituting controlled burns on a 50-year rotational basis. The second problem is the population pressure brought to bear by a nest parasite, the brown-headed cowbird (Molothrus ater), which lays its eggs in the nests of other songbirds. Originally a bird of the open plains, it did not threaten Kirtland’s warbler until Michigan was heavily deforested, thus providing it with appropriate habitat. Once established in the warbler’s range, it has increasingly pressured the Kirtland’s population. Cowbird chicks hatch earlier than those of other birds, and they compete successfully with the other nestlings for nourishment. Efforts to trap and destroy this nest parasite in the warbler’s range have resulted in improved reproductive success for Kirtland’s warbler.
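The census arithmetic used in this entry — under the assumptions of a 1:1 sex ratio and one active nest per singing male, the total population is simply twice the count of singing males — can be sketched as follows (a minimal illustration, not part of the original survey methodology):

```python
def estimated_population(singing_males):
    """Estimate total Kirtland's warbler population from a census of
    singing males, assuming a 1:1 sex ratio and that each singing male
    is defending an active nesting site."""
    return 2 * singing_males

# Figures from the censuses described in this entry:
print(estimated_population(432))   # 1951 census -> 864 birds
print(estimated_population(1085))  # 2001 census -> 2,170 birds
```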
See also Deforestation; Endangered Species Act; International Council on Bird Preservation; Rare species; Wildlife management [Eugene C. Beckham]

RESOURCES BOOKS Ehrlich, P. R., D. S. Dobkin, and D. Wheye. Birds in Jeopardy: The Imperiled and Extinct Birds of the United States and Canada, Including Hawaii and Puerto Rico. Stanford: Stanford University Press, 1992.

PERIODICALS Weinrich, J. A. “Status of Kirtland’s Warbler, 1988.” Jack-Pine Warbler 67 (1989): 69–72.

OTHER U.S. Fish and Wildlife Service. Kirtland’s Warbler (Dendroica kirtlandii). April 2, 2002 [cited June 19, 2002].


Krakatoa
The explosion of this triad of volcanoes on August 27, 1883, the culmination of a three-month eruptive phase, astonished the world because of its global impact. Perhaps one of the most influential factors, however, was its timing. It happened during a time of major growth in science, technology, and communications, and the world received current news accompanied by the correspondents’ personal observations. The explosion was heard some 3,000 mi (4,828 km) away, on the island of Rodriguez in the Indian Ocean. The glow of sunsets was so vivid three months later that fire engines were called out in New York City and nearby towns. Krakatoa (or Krakatau), located in the Sunda Strait between Java and Sumatra, is part of the Indonesian volcanic system, which was formed by the subduction of the Indian Ocean plate under the Asian plate. A similar explosion occurred in A.D. 416, and another major eruption was recorded in 1680. Now a new volcano is growing out of the caldera, likely building toward some future cataclysm. This immense natural event, perhaps twice as powerful as the largest hydrogen bomb, had an extraordinary impact on the solid earth, the oceans, and the atmosphere, and demonstrated their interdependence. It also made possible the creation of a wildlife refuge and tropical rain forest preserve on the Ujung Kulon Peninsula of southwestern Java. Studies revealed that this caldera, like Crater Lake, Oregon, resulted from Krakatoa’s collapse into the now-empty magma chamber. The explosion produced a 131-ft (40-m) high tsunami, or tidal wave, which carried a steamship nearly 2 mi (3.2 km) inland, and caused most of the fatalities resulting from the eruption. Tide gauges as far away as San Francisco Bay and the English Channel recorded fluctuations. The explosion provided substantial benefits to the young science of meteorology.
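The dual-unit figures quoted in this entry can be spot-checked with the standard conversion factors; a quick sketch (the encyclopedia’s metric values are rounded):

```python
MI_TO_KM = 1.609344   # exact definition of the statute mile in kilometers
FT_TO_M = 0.3048      # exact definition of the foot in meters

def mi_to_km(miles):
    return miles * MI_TO_KM

def ft_to_m(feet):
    return feet * FT_TO_M

print(round(mi_to_km(3000)))   # distance the explosion was heard: ~4828 km
print(round(ft_to_m(131)))     # tsunami height: ~40 m
print(round(mi_to_km(2), 1))   # steamship carried inland: ~3.2 km
```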
Every barometer on Earth recorded the blast wave as it raced toward its antipodal position in Colombia, and then reverberated back and forth in six more recorded waves. The distribution of ash in the stratosphere gave the first solid evidence of rapidly flowing equatorial stratospheric winds, as debris encircled the equator, moving westward, over the next 13 days. Global temperatures were lowered about 0.9°F (0.5°C), and did not return to normal until five years later. An ironic development is that the Ujung Kulon Peninsula was never resettled after the tsunami killed most of the people. Without Krakatoa’s explosion, the population would most likely have grown significantly and much of the habitat there would likely have been altered by agriculture. Instead, the area is now a national park that supports a variety of species, including the Javan rhino (Rhinoceros sondaicus), one of Earth’s rarest and most endangered species. This park has provided a laboratory for scientists to study nature’s
healing process after such devastation. See also Mount Pinatubo, Philippines; Mount Saint Helens, Washington; Volcano [Nathan H. Meleen]

RESOURCES BOOKS Nardo, D. Krakatoa. World Disasters Series. San Diego: Lucent Books, 1990. Simkin, T., and R. Fiske. Krakatau 1883: The Volcanic Eruption and Its Effects. Washington, DC: Smithsonian Books, 1983.

PERIODICALS Ball, R. “The Explosion of Krakatoa.” National Geographic 13 (June 1902): 200–203. Plage, D., and M. Plage. “Return of Java’s Wildlife.” National Geographic 167 (June 1985): 750–71.

Krill
Marine crustaceans in the order Euphausiacea. Krill are zooplankton, and most feed on microalgae by filtering them from the water. In high latitudes, krill may account for a large proportion of the total zooplankton. Krill often occur in large swarms, and in a few species these swarms may reach several hundred square meters in size, with densities of over 60,000 individuals per square meter. This swarming behavior makes them a valuable food source for many species of whales and seabirds. Humans have also begun to harvest krill for use as a dietary protein supplement.

Joseph Wood Krutch (1893–1970)
American literary critic and naturalist

Through much of his career, Krutch was a teacher of criticism at Columbia University and a drama critic for The Nation. But then respiratory problems led him to early retirement in the desert near Tucson, Arizona. He loved the desert and there turned to biology and geology, which he applied to maintain a consistent, major theme found in all of his writings, that of the relation of humans and the universe. Krutch subsequently became an accomplished naturalist. Readers can find the theme of man and universe in Krutch’s early work, for example The Modern Temper (1929), and in his later writings on human-human and human-nature relationships, including natural history—what René Jules Dubos described as “the social philosopher protesting against the follies committed in the name of technological progress, and the humanist searching for permanent values in man’s relationship to nature.” Assuming a pessimistic stance in his early writings, Krutch despaired about lost connections, arguing that for humans to reconnect, they must conceive of themselves, nature, “and the universe in a significant reciprocal relationship.” Krutch’s later writings repudiated much of his earlier despair. He argued against the dehumanizing and alienating forces of modern society and advocated systematically reassembling—by reconnecting to nature—“a world man can live in.” In The Voice of the Desert (1954), for instance, he claimed that “we must be part not only of the human community, but of the whole community.” In such books as The Twelve Seasons (1949) and The Great Chain of Life (1956), he demonstrated that humans “are a part of Nature...whatever we discover about her we are discovering also about ourselves.” This view was based on a solid anti-deterministic approach that opposed mechanistic and behavioristic theories of evolution and biology. His view of modern technology as out of control was epitomized in the automobile.
Driving fast prevented people from reflecting or thinking or doing anything except controlling the monster: “I’m afraid this is the metaphor of our society as a whole,” he commented. Krutch also disliked the proliferation of suburbs, which he labeled “affluent slums.” He argued in Human Nature and the Human Condition (1959) that “modern man should be concerned with achieving the good life, not with raising the [material] standard of living.” An editorial ran in The New York Times a week after Krutch’s death: today’s younger generation, it read, “unfamiliar with Joseph Wood Krutch but concerned about the environment and contemptuous of materialism,” should “turn to a reading of his books with delight to themselves and profit to the world.” [Gerald L. Young Ph.D.]


RESOURCES BOOKS Krutch, J. W. The Desert Year. New York: Viking, 1951. Margolis, J. D. Joseph Wood Krutch: A Writer’s Life. Knoxville: The University of Tennessee Press, 1980. Pavich, P. N. Joseph Wood Krutch. Western Writers Series, no. 89. Boise: Boise State University, 1989.

PERIODICALS Gorman, J. “Joseph Wood Krutch: A Cactus Walden.” MELUS: The Journal of the Society for the Study of the Multi-Ethnic Literature of the United States 11 (Winter 1984): 93–101. Holtz, W. “Homage to Joseph Wood Krutch: Tragedy and the Ecological Imperative.” The American Scholar 43 (Spring 1974): 267–279. Lehman, A. L. “Joseph Wood Krutch: A Selected Bibliography of Primary Sources.” Bulletin of Bibliography 41 (June 1984): 74–80.

Kudzu
Pueraria lobata, or kudzu, also jokingly referred to as “foot-a-night” and “the vine that ate the South,” is a highly aggressive and persistent semi-woody vine introduced to the United States in the late nineteenth century. It has since become a symbol of the problems that introduced exotic species can cause for native ecosystems. Kudzu’s best-known characteristic is its extraordinary capacity for rapid growth, managing as much as 12 in (30.5 cm) a day and 60–100 ft (18–30 m) a season under ideal conditions. When young, kudzu has thin, flexible, downy stems that grow outward as well as upward, eventually covering virtually everything in its path with a thick mat of leaves and tendrils. This lateral growth creates the dramatic effect, common in southeastern states such as Georgia, of telephone poles, buildings, neglected vehicles, and whole areas of woodland being enshrouded in blankets of kudzu. Kudzu’s tendency toward aggressive and overwhelming colonization has many detrimental effects, killing stands of trees by robbing them of sunlight and pulling down or shorting out utility cables. Where stem nodes touch the ground, new roots develop, which can extend 10 ft (3 m) or more underground and eventually weigh several hundred pounds. In the nearly ideal climate of the Southeast, the prolific vine easily overwhelms virtually all native competitors and also infests cropland and yards. A member of the pea family, kudzu is itself native to China and Japan. Introduced to the United States at the Japanese garden pavilion during the 1876 Philadelphia Centennial Exhibition, kudzu’s broad leaves and richly fragrant reddish-purple blooms made it seem highly desirable as an ornamental plant in American gardens. It now ranges along the eastern seaboard from Florida to Pennsylvania, and westward to Texas. Although hardy, kudzu does not tolerate cold weather and prefers acidic, well-drained soils and bright
sunlight. It rarely flowers or sets seed in the northern part of its range and loses its leaves at first frost. For centuries, the Japanese have cultivated kudzu for its edible roots, medicinal qualities, and fibrous leaves and stems, which are suitable for paper production. After its initial introduction as an ornamental, kudzu was also touted as a forage crop and as a cure for erosion in the United States. Kudzu is nutritionally comparable to alfalfa, and its tremendous durability and speed of growth were thought to outweigh the disadvantages its rope-like vines caused for cutting and baling. But its effectiveness as a ground cover, particularly on steeply sloped terrain, is responsible for kudzu’s spectacular spread. By the 1930s, the United States Soil Conservation Service was enthusiastically advocating kudzu as a remedy for erosion, subsidizing farmers as well as highway departments and railroads with as much as $8 an acre to use kudzu for soil retention. The Depression-era Civilian Conservation Corps also facilitated the spread of kudzu, planting millions of seedlings as part of an extensive erosion control project. Kudzu has also had its unofficial champions, the best known of whom is Channing Cope of Covington, Georgia. As a journalist for Atlanta newspapers and a popular radio broadcaster, Cope frequently extolled the virtues of kudzu, dubbing it the “miracle vine” and declaring that it had replaced cotton as “King” of the South. The spread of the vine was precipitous. In the early 1950s, the federal government began to question the wisdom of its support for kudzu. By 1953, the Department of Agriculture stopped recommending the use of kudzu for either fodder or ground cover. In 1982, kudzu was officially declared a weed. Funding is now directed more at finding ways to eradicate kudzu or at least to contain its spread.
Continuous overgrazing by livestock will eventually eradicate a field of kudzu, as will repeated applications of defoliant herbicides. Even so, stubborn patches may take five or more years to be completely removed. Controlled burning is usually ineffective and attempting to dig up the massive root system is generally an exercise in futility, but kudzu can be kept off lawns and fences (as an ongoing project) by repeated mowing and enthusiastic pruning. A variety of new uses are being found for kudzu, and some very old uses are being rediscovered. Kudzu root can be processed into flour and baked into breads and cakes; as a starchy sweetener, it also may be used to flavor soft drinks. Medical researchers investigating the scientific bases of traditional herbal remedies have suggested that isoflavones found in kudzu root may significantly reduce craving for alcohol in alcoholics. Eventually, derivatives of kudzu may also prove to be useful for treatment of high blood pressure. Methane and gasohol have been successfully produced from kudzu, and kudzu’s stems may prove to be an economically viable
source of fiber for paper production and other purposes. The prolific vine has also become something of a humorous cultural icon, with regional picture postcards throughout the South portraying spectacular and only somewhat exaggerated images of kudzu’s explosive growth. Fairs, festivals, restaurants, bars, rock groups, and road races have all borrowed their names and drawn some measure of inspiration from kudzu; poems have been written about it, and kudzu cookbooks and guides to kudzu crafts are readily available in bookstores. [Lawrence J. Biskowski]

RESOURCES PERIODICALS Dolby, V. “Kudzu Grows Beyond Erosion Control to Help Control Alcoholism.” Better Nutrition 58, no. 11 (November 1996): 32. Hipps, C. “Kudzu.” Horticulture 72, no. 6 (June 1994): 36–9. Tenenbaum, D. “Weeds from Hell.” Technology Review 99, no. 6 (August 1996): 32–40.

Kwashiorkor
One of many severe protein-energy malnutrition disorders that are a widespread problem among children in developing countries. The word originates in Ghana, where it means a deposed child, or a child that is no longer suckled. The disease usually affects infants between one and four years of age who have been weaned from breast milk to a high-starch, low-protein diet. The disease is characterized by lethargy, apathy, or irritability. Over time the individual will experience retarded growth processes, both physically and mentally. Approximately 25% of affected children suffer recurrent relapses of kwashiorkor, interfering with their normal growth. Kwashiorkor results in amino acid deficiencies, which inhibit protein synthesis in all tissues. The lack of sufficient plasma proteins, specifically albumin, results in changes in plasma osmotic pressure, ultimately causing generalized edema. The liver swells with stored fat because it no longer produces the proteins needed to transport fats. Kwashiorkor additionally results in reduced bone density and impaired renal function. If treated early in its development, the disease can be reversed with proper dietary therapy and treatment of associated infections. If the condition is not reversed in its early stages, prognosis is poor and physical and mental growth will be severely retarded. See also Sahel; Third World

Kyoto Protocol/Treaty
In the mid-1980s, a growing body of scientific evidence linked man-made greenhouse gas emissions to global warming. In 1990, the United Nations General Assembly issued a report that confirmed this link. The Rio Accord of 1992 resulted from this report. Formally called the United Nations Framework Convention on Climate Change (UNFCCC), the accord was signed by various nations in Rio de Janeiro, Brazil, and committed industrialized nations to stabilizing their emissions at 1990 levels by 2000. In December 1997, representatives of 160 nations met in Kyoto, Japan, in an attempt to produce a new and improved treaty on climate change. Major differences emerged between industrialized and still-developing countries, with the United States perceived, particularly by representatives of the European Union (EU), as not doing its share to reduce emissions, especially those of carbon dioxide. The outcome of this meeting, the Kyoto Protocol to the United Nations Framework Convention on Climate Change, required industrialized nations to reduce their emissions of carbon dioxide, methane, nitrous oxide, hydrofluorocarbons, sulfur hexafluoride, and perfluorocarbons below 1990 levels by 2012. The requirements would be different for each country; reductions would have to begin by 2008 and be met by 2012. There would be no requirements for the developing nations. Whether or not to sign and ratify the treaty was left up to the discretion of each individual country.
Global warming
The organization that provided the research for the Kyoto Protocol was the Intergovernmental Panel on Climate Change (IPCC), set up in 1988 as a joint project of the United Nations Environment Programme (UNEP) and the World Meteorological Organization (WMO). In 2001 the IPCC released a report, “Climate Change 2001.”
Using the latest climatic and atmospheric scientific research available, the report predicted that global mean surface temperatures on earth would increase by 2.5–10.4°F (1.4–5.8°C) by the year 2100, unless greenhouse gas emissions were reduced well below current levels. This warming trend was seen as rapidly accelerating, with possible dire consequences to human society and the environment. These accelerating temperature changes were expected to lead to rising sea levels, melting glaciers and polar ice packs, heat waves, droughts and wildfires, and a profound and deleterious effect on human health and well-being. Some of the effects of these temperature changes may already be occurring. Most of the United States has already experienced increases in mean annual temperature of up to 4°F (2.2°C). Sea ice is melting in both the Antarctic and the Arctic. Ninety-eight percent of the world’s glaciers are
shrinking. The sea level is rising at three times its historic rate. Florida is already feeling the early effects of global warming, with shorelines suffering from erosion, dying coral reefs, saltwater polluting freshwater sources, an increase in wildfires, and higher air and water temperatures. In Canada, forest fires have more than doubled since 1970, water wells are going dry, lake levels are down, and there is less rainfall.
Controversy
Since its inception, the Kyoto Protocol has generated a great deal of controversy. Richer nations have argued that the poorer, less developed nations are getting off easy. The developing nations, on the other hand, have argued that they will never be able to catch up with the richer nations unless they are allowed to develop with the same degree of pollution that let the industrial nations become rich in the first place. Another controversy rages between environmentalists and big business. Environmentalists have argued that the Kyoto Protocol doesn’t go far enough, while petroleum and industry spokespersons have argued that it would be impossible to implement without economic disaster. In the United States, the controversy has been especially intense. The Kyoto Protocol was signed under the administration of President Bill Clinton, but was never ratified by the Republican-dominated U.S. Senate. Then in 2001, President George W. Bush, a former Texas oilman, backed out of the treaty, saying it would cost the U.S. economy $400 billion and 4.9 million jobs. Bush unveiled an alternative proposal to the Kyoto accord that he said would reduce greenhouse gases, curb pollution, and promote energy efficiency. But critics of his plan have argued that by the year 2012 it would actually increase the 1990 levels of greenhouse gas emissions by more than 30%. Soon after the Kyoto Protocol was rejected by the Bush administration, the European Union criticized the action.
In particular, Germany was unable to understand why the Kyoto restrictions would adversely affect the American economy, noting that Germany had been able to reduce its emissions without serious economic problems. The Germans also suggested that President Bush’s program to induce voluntary reductions was politically motivated and was designed to prevent a drop in the unreasonably high level of greenhouse gas production in the United States, a drop that would be politically damaging for the Bush administration. In rejecting the Kyoto Protocol, President Bush claimed that it would place an unfair burden on the United States. He argued that it was unfair that developing countries such as India and China should be exempt. But China had already taken major steps to address climate change. According to a June report by the World Resources Institute, a Washington, D.C.-based environmental think tank, China voluntarily cut its carbon dioxide emissions by 19% between 1997 and 1999. Contrary to Bush’s fears that cutting carbon dioxide output would inevitably damage the United States economy, China’s economy grew by 15% during this same two-year period. Politics has always been at the forefront of this debate. The IPCC has provided assessments of climate change that have helped shape international treaties, including the Kyoto Protocol. However, the Bush administration, acting at the request of ExxonMobil, the world’s largest oil company, and attempting to cast doubts upon the scientific integrity of the IPCC, was behind the ouster in 2002 of IPCC chairperson Robert Watson, an atmospheric scientist who supported implementing actions against global warming. The ability of trees and plants to fix carbon through the process of photosynthesis, a process called carbon (or C) sequestration, results in a large amount of carbon stored in biomass around the world. In the framework of the Kyoto Protocol, C sequestration to mitigate the greenhouse effect in the terrestrial ecosystem has been an important topic of discussion in numerous recent international meetings and reports. To increase C sequestration in soils in dryland and tropical areas, as a contribution to global reductions of atmospheric CO2, the United States has promoted new strategies and new practices in agriculture, pasture use, and forestry, including conservation agriculture and agroforestry. Such practices should be facilitated particularly by the application of Article 3.4 of the Kyoto Protocol, covering the additional activities in agriculture and forestry in the developing countries, and by appropriate policies.
Into the future
In June 2002, the 15 member nations of the European Union formally ratified the Kyoto Protocol. The ratification by the 15 EU countries was a major step toward making the 1997 treaty effective.
Soon after, Japan ratified the treaty, and Russia was expected to follow suit. To take effect, the Kyoto Protocol must be ratified by 55 countries, and these ratifications have to include industrialized nations responsible for at least 55% of the 1990 levels of greenhouse gases. As of 2002, over 70 countries had already ratified, exceeding the minimum number of countries needed. If Russia ratifies the treaty, nations responsible for over 55% of the 1990 levels of greenhouse gas pollution will have done so, and the Kyoto Protocol will take effect. Before the EU ratified the protocol, the vast majority of countries that had ratified were developing countries. With the withdrawal of the United States, responsible for 36.1% of greenhouse gas emissions in 1990, ratification by other industrialized nations became crucial. For example, environmentalists hope that Canada will ratify the treaty, as it has already committed to compliance.
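The two-part entry-into-force rule described above — at least 55 ratifying countries, which must include industrialized nations accounting for at least 55% of 1990 emission levels — can be expressed as a simple check. This is an illustrative sketch only; the country data shown are hypothetical placeholders, not actual treaty figures:

```python
def kyoto_enters_into_force(ratifiers):
    """ratifiers: list of (country, is_industrialized, pct_of_1990_emissions).

    Both conditions must hold: at least 55 ratifying countries, AND the
    ratifying industrialized nations together must account for at least
    55% of 1990 greenhouse gas emission levels.
    """
    enough_countries = len(ratifiers) >= 55
    industrial_share = sum(pct for _, industrialized, pct in ratifiers
                           if industrialized)
    return enough_countries and industrial_share >= 55.0

# Hypothetical example: 70 developing-country ratifications alone satisfy
# the country count but not the industrialized-emissions threshold.
developing_only = [(f"country_{i}", False, 0.0) for i in range(70)]
print(kyoto_enters_into_force(developing_only))  # False
```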


Although the Bush administration opposed the Kyoto Protocol, saying that its own plan of voluntary restrictions would work as well without the loss of billions of dollars and without driving millions of Americans out of work, the EPA, under its administrator Christine Todd Whitman, in 2002 sent a climate report to the United Nations detailing specific, far-reaching, and disastrous effects of global warming upon the American environment and its people. The EPA report also acknowledged that global warming is occurring because of man-made carbon dioxide and other greenhouse gases. However, it offered no major changes in administration policies, instead recommending adapting to what it described as inevitable and catastrophic changes.

Although the United States was still resisting the Kyoto Protocol in mid-2002, and the treaty’s implications for radical and effective action, various states and communities decided to go it alone. Massachusetts and New Hampshire enacted legislation to cut carbon emissions. California was considering legislation limiting emissions from cars and small trucks. Over 100 U.S. cities had already opted to cut carbon emissions. Even the U.S. business community, because of its many overseas operations, was beginning to voluntarily cut back on its greenhouse gas emissions. [Douglas Dupler]

RESOURCES

BOOKS
Brown, Paige. Climate, Biodiversity and Forests: Issues and Opportunities Emerging from the Kyoto Protocol. Washington, DC: World Resources Institute, 1998.
Gelbspan, Ross. The Heat is On: The Climate Crisis, the Cover-up, the Prescription. New York: Perseus Books, 1998.
McKibbin, Warwick J., and Peter Wilcoxen. Climate Change Policy After Kyoto: Blueprint for a Realistic Approach. Washington, DC: The Brookings Institution Press, 2002.
Victor, David G. Collapse of the Kyoto Protocol and the Struggle to Slow Global Warming. Princeton, NJ: Princeton University Press, 2002.

PERIODICALS
Benedick, Richard E. “Striking a New Deal on Climate Change.” Issues in Science and Technology, Fall 2001, 71.
Gelbspan, Ross. “A Modest Proposal to Stop Global Warming.” Sierra, May/June 2001, 63.
McKibben, Bill. “Climate Change 2001: Third Assessment Report.” New York Review of Books, July 5, 2001, 35.
Rennie, John. “The Skeptical Environmentalist Replies.” Scientific American, May 2002, 14.

OTHER
“Guide to the Kyoto Protocol.” Greenpeace International Web Site. 1998 [cited July 2002].
Intergovernmental Panel on Climate Change Web Site. [cited July 2002].
“Kyoto Protocol.” United Nations Convention on Climate Change. 1997 [cited July 1997].
Union of Concerned Scientists Global Warming Web Site. [cited July 2002].

ORGANIZATIONS
IPCC Secretariat, c/o World Meteorological Organization, 7bis Avenue de la Paix, C.P. 2300, CH-1211 Geneva, Switzerland, 41-22-730-8208, Fax: 41-22-730-8025, Email: [email protected]
UNIDO—Climate Change/Kyoto Protocol Activities, UNIDO New York Office, New York, NY USA 10017, (212) 963-6890, Email: [email protected]


L

La Niña

La Niña, Spanish for “the little girl,” is also called a cold episode, “El Viejo” (The Old Man), or anti-El Niño. It is one of two major changes in the Pacific Ocean surface temperature that affect global weather patterns. La Niña and El Niño (“the little boy”) are the extreme phases of the El Niño/Southern Oscillation, a climate cycle that occurs naturally in the eastern tropical Pacific Ocean. The effects of both phases are usually strongest from December to March.

In some ways, La Niña is the opposite of El Niño. For example, La Niña usually brings more rain to Australia and Indonesia, areas that are susceptible to drought during El Niño. La Niña is characterized by unusually cold ocean surface temperatures in the equatorial region. Ordinarily, the sea surface temperature off the western coast of South America ranges from 60–70°F (15–21°C). According to the National Oceanic and Atmospheric Administration (NOAA), the temperature dropped by up to 7°F (4°C) below normal during the 1988–1989 La Niña. In the United States, La Niña usually brings cooler and wetter than normal conditions in the Pacific Northwest and warmer and drier conditions in the Southeast. In contrast, during El Niño, surface water temperatures in the tropical Pacific are unusually warm. Because water temperatures increase around Christmas, people in South America called the condition “El Niño” to honor the Christ Child.

The two weather phenomena are caused by the interaction of the ocean surface and the atmosphere in the tropical Pacific. Changes in the ocean affect the atmosphere and climate patterns around the world, with changes in the atmosphere in turn affecting the ocean temperature and currents. Before the onset of La Niña, there is usually a build-up of cooler than normal subsurface water in the tropical Pacific. The cold water is brought to the surface by atmospheric and ocean waves. Winds and currents push warm water towards Asia.
In addition, the system can drive the polar jet stream (a stream of winds at high altitude) to the north; this affects weather in the United States.
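A detail worth noting in the temperature figures above: an anomaly such as “7°F (4°C) below normal” converts to Celsius with the 5/9 scale factor alone, while an absolute reading such as the 60–70°F surface range also subtracts the 32° offset. A short sketch of the two conversions (function names are ours):

```python
def f_to_c(temp_f):
    """Convert an absolute Fahrenheit reading to Celsius."""
    return (temp_f - 32.0) * 5.0 / 9.0

def df_to_dc(delta_f):
    """Convert a Fahrenheit anomaly (a difference) to Celsius: scale only,
    no 32-degree offset."""
    return delta_f * 5.0 / 9.0

# Normal sea surface range off western South America: about 15.6-21.1°C
low_c, high_c = f_to_c(60), f_to_c(70)

# The "7°F below normal" anomaly of 1988-1989: about 3.9°C, i.e. roughly 4°C
anomaly_c = df_to_dc(7)
```

Applying the absolute formula to an anomaly (which would give about −13.9°C for 7°F) is a common source of error in reported climate figures.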

The effects of La Niña and El Niño are generally seen in the United States during the winter. The two conditions usually occur every three to five years, though the period between episodes may be from two to seven years. The conditions generally last from nine to 12 months, but episodes can last as long as two years. Since 1975, El Niños have occurred twice as frequently as La Niñas. While both conditions are cyclical, a La Niña episode does not always follow an El Niño episode. La Niñas in the twentieth century occurred in 1904, 1908, 1910, 1916, 1924, 1928, 1938, 1950, 1955, 1964, 1970, 1973, 1975, 1988, 1995, and 1998.

Effects of the 1998 La Niña included flooding in Mozambique in 2000 and a record warm winter in the United States. Nationwide temperatures averaged 38.4°F (3.5°C) from December 1999 through February 2000, according to NOAA. In addition, that three-month period was the sixteenth driest winter in the 105 years that records have been kept by the National Climatic Data Center. The 1998 La Niña was diminishing by November 2000; another La Niña had not been projected as of May 2002.

Scientists from NOAA and other agencies use various tools to monitor La Niña and El Niño, including satellites and data buoys that track sea surface temperatures. Tracking the two weather phenomena can help nations prepare for potential disasters such as floods. In addition, knowledge of the systems can help businesses plan for the future. In a 1999 article in Nation’s Restaurant News, writer John T. Barone described the impact that La Niña could have on food and beverage prices, projecting that drought conditions in Brazil could bring an increase in the price of coffee. [Liz Swain]

RESOURCES

BOOKS
Caviedes, Cesar. El Niño in History: Storming Through the Ages. Gainesville, FL: University Press of Florida, 2001.



Glantz, Michael. Currents of Change: Impacts of El Niño and La Niña on Climate and Society. New York: Cambridge University Press, 2001.

PERIODICALS
Barone, John T. “La Niña to Put a Chill in Prices this Winter.” Nation’s Restaurant News (December 6, 1999): 78.
Le Comte, Douglas. “Weather Around the World.” Weatherwise (March 2001): 23.

ORGANIZATIONS National Oceanic and Atmospheric Administration, 14th Street & Constitution Avenue, NW, Room 6013, Washington, DC USA 20230 (202) 482-6090, Fax: (202) 482-3154, Email: [email protected],

La Paz Agreement

The 1983 La Paz Agreement between the United States and Mexico is a pact to protect, conserve, and improve the environment of the border region of both countries. The agreement defined the region as the 62 mi (100 km) to the north and south of the international border. This area includes maritime (sea) boundaries and land in four American states and six Mexican border states. Representatives from the two countries signed the agreement on Aug. 14, 1983, in La Paz, Mexico.

The agreement took effect on Feb. 16, 1984. It established six workgroups, each concentrating on an environmental concern. Representatives from both countries serve on the workgroups, which focus on water, air, hazardous and solid waste, pollution prevention, contingency planning and emergency response, and cooperative enforcement and compliance.

In February of 1992, environmental officials from the two countries released the Integrated Environmental Plan for the Mexican-U.S. Border Area. The subsequent Border XXI Program created nine additional workgroups, focusing on environmental information resources, natural resources, and environmental health. Border XXI involves federal, state, and local governments on both sides of the border. Residents also participate through activities such as public hearings. [Liz Swain]

Lagoon

A lagoon is a shallow body of water separated from a larger, open body of water. It is typically associated with the ocean, as in coastal lagoons and coral reef lagoons. Lagoon can also describe shallow areas of liquid waste material, as in sewage lagoons. Oceanic lagoons can form in several ways. Coastal lagoons are typically found along coastlines where sand bars or barrier islands separate the open ocean from the nearshore body of water. Coral reef lagoons can form in two ways. The first type is found in barrier reefs, such as those in Australia and Belize, where a body of water (the lagoon) is separated from the open ocean by a reef formed many miles offshore. The second type forms in the center of atolls, which are circular or horseshoe-shaped coral reefs growing around the periphery of partially sunken volcanic islands. Some atoll lagoons are more than 30 mi (50 km) across and have breathtaking visibility, thus providing superb sites for SCUBA diving.

Lake Baikal

The Great Lakes are a prominent feature of the North American landscape, but Russia holds the distinction of having the “World’s Great Lake.” Called the “Pearl of Siberia” or the “Sacred Sea” by locals, Lake Baikal is the world’s deepest lake and its largest by volume. It has a surface area of 12,162 sq mi (31,500 sq km), a maximum depth of 5,370 ft (1,637 m), or slightly more than 1 mile, an average depth of 2,428 ft (740 m), and a volume of about 5,500 cu mi (23,000 cu km). It thus contains more water than the combined volume of all of the North American Great Lakes—20 percent of the world’s fresh water (and 80 percent of the fresh water of the former Soviet Union).

Lake Baikal is located in Russia in south-central Siberia near the northern border of Mongolia. Scientists estimate that the lake was formed 25 million years ago by tectonic (earthquake) displacement, creating a crescent-shaped, steep-walled basin 395 mi (635 km) long by 50 mi (80 km) wide and nearly 5.6 mi (9 km) deep. In contrast, the Great Lakes were formed by glacial scouring a mere 10,000 years ago. Although sedimentation has filled in 80 percent of the basin over the years, the lake is believed to be widening and deepening ever so slightly with time because of recurring crustal movements. The area surrounding Lake Baikal is underlain by at least three crustal plates, causing frequent earthquakes. Fortunately, most are too weak to feel.

Like similarly ancient Lake Tanganyika in Africa, the waters of Lake Baikal host a great number of unique species. Of the 1,200 known animal and 600 known plant species, more than 80 percent are endemic to this lake. These include many species of fish and shrimp, and the world’s only freshwater sponges and seals. Called nerpa or nerpy by the natives, these seals (Phoca sibirica) are silvery-gray in color and can grow to 5 ft (1.5 m) long and weigh up to 286 lb (130 kg).
Their diet consists almost exclusively of a strange-looking relict fish called golomyanka (Comephorus baicalensis), rendered translucent by its fat-filled body. Unlike other fish, golomyanka lack scales and swim bladders and give birth to live larvae rather than eggs. The seal population is estimated at 60,000. Commercial hunters are permitted to kill 6,000 each year.

A view of the Strait of Olkhon, on the west coast of Lake Baikal, Russia. (Photograph by Press Agency, Science Photo Library. Photo Researchers Inc. Reproduced by permission.)

Although the waters of Lake Baikal are pristine by the standards of other large lakes, increased pollution threatens its future. Towns along its shores and along the stretches of the Selenga River, the major tributary flowing into Baikal, add human and industrial wastes, some of which are nonbiodegradable and some highly toxic. A hydroelectric dam on the Angara River, the lake’s only outlet, raised the water level and placed spawning areas of some fish below the optimum depth. Most controversial to the people who depend on this lake for their livelihood and pleasure, however, was the construction of a large cellulose plant at the southern end near the city of Baikalsk in 1957. Built originally to manufacture high-quality aircraft tires (ironically, synthetic tires proved superior), today it produces clothing from bleached cellulose and employs 3,500 people. Uncharacteristic public outcry over the years has resulted in the addition of advanced sewage treatment facilities to the plant. Although some people would like to see it shut down, the local (and national) economy has taken precedence.

In 1987 the Soviet government passed legislation protecting Lake Baikal from further destruction. Logging was prohibited anywhere close to the shoreline, and nature reserves and national parks were designated. However, with the recent political turmoil and crippling financial situation in the former Soviet Union, these changes have not been enforced and the lake continues to receive pollutants. Much more needs to be done to assure the future of this magnificent lake. See also Endemic species

[John Korstad]
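The headline volume figures for Lake Baikal can be cross-checked with a simple unit-conversion sketch. The conversion factors are standard; the combined Great Lakes volume used for comparison (roughly 22,700 cu km) is an assumed figure added here for illustration, not a number from this entry:

```python
KM_PER_MI = 1.609344
KM3_PER_MI3 = KM_PER_MI ** 3           # about 4.168 km³ per cubic mile

baikal_km3 = 23000
baikal_mi3 = baikal_km3 / KM3_PER_MI3  # about 5,500 cubic miles

# Assumed comparison figure (not from this entry): combined Great Lakes
great_lakes_km3 = 22700
baikal_exceeds_great_lakes = baikal_km3 > great_lakes_km3

# Maximum depth of 1,637 m is indeed "slightly more than 1 mile" (1,609 m)
max_depth_miles = 1637 / (KM_PER_MI * 1000)
```

The comparison shows why a single lake can plausibly hold 20 percent of the world's fresh surface water: Baikal's 23,000 cu km slightly exceeds the combined Great Lakes.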

RESOURCES

BOOKS
Feshbach, M., and A. Friendly, Jr. Ecocide in the USSR. New York: Basic Books, 1992.
Matthiessen, P. Baikal: Sacred Sea of Siberia. San Francisco: Sierra Club Books, 1992.



PERIODICALS
Belt, D. “Russia’s Lake Baikal, the World’s Great Lake.” National Geographic 181 (June 1992): 2–39.

Lake Erie

Lake Erie is the most productive of the Great Lakes. Located along the southern fringe of the Precambrian Shield of North America, Lake Erie has been ecologically degraded by a variety of anthropogenic stressors, including nutrient loading; extensive deforestation of its watershed, which caused severe siltation and other effects; vigorous commercial fishing; and pollution by toxic chemicals.

The watershed of Lake Erie is much more agricultural and urban in character than are those of the other Great Lakes. Consequently, the dominant sources of phosphorus (the most important nutrient causing eutrophication) to Lake Erie are agricultural runoff and municipal point sources. The total input of phosphorus to Lake Erie (standardized to watershed area) is about 1.3 times larger than to Lake Ontario and more than five times larger than to the other Great Lakes. Because of its large loading rates and concentrations of nutrients, Lake Erie is more productive and has a larger standing crop of phytoplankton, fish, and other biota than the other Great Lakes. During the late 1960s and early 1970s, the eutrophic western basin of Lake Erie had a summer chlorophyll concentration averaging twice as large as that of Lake Ontario and 11 times larger than that of oligotrophic Lake Superior. However, since that time the eutrophication of Lake Erie has been alleviated somewhat, in direct response to decreased phosphorus inputs from sewage and detergents.

A consequence of the eutrophic state of Lake Erie was the development of anoxia (lack of oxygen) in its deeper waters during summer stratification. In the summer of 1953, this condition caused a collapse of the population of benthic mayfly larvae (Hexagenia spp.), a phenomenon that was interpreted in the popular press as the “death” of Lake Erie.
Large changes have also taken place in the fish community of Lake Erie, mostly because of its fishery, the damming of streams required for spawning by anadromous fishes (fish that ascend rivers or streams to spawn), and sedimentation of shallow-water habitat by silt eroded from deforested parts of the watershed. Lake Erie has always had the most productive fishery on the Great Lakes, with fish landings that typically exceed the combined totals of all the other Great Lakes. The peak years of the commercial fishery in Lake Erie were 1935 and 1956 (62 million lb/28 million kg), while the minima were 1929 and 1941 (24 million lb/11 million kg). Overall, the total catch by the commercial fishery has been remarkably stable over time, despite large changes in species, effort, eutrophication, toxic pollution, and other changes in habitat.

The historical pattern of development of the Lake Erie fishery was characterized by an initial exploitation of the most desirable and valuable species. As the populations of these species collapsed because of unsustainable fishing pressure, coupled with habitat deterioration, the fishery diverted to a progression of less-desirable species. The initial fishery focused on lake whitefish (Coregonus clupeaformis), lake trout (Salvelinus namaycush), and lake herring (Leucichthys artedi), all of which rapidly declined to scarcity or extinction. The next target was “second-choice” species, such as blue pike (Stizostedion vitreum glaucum) and walleye (S. v. vitreum), which are now extinct or rare. Today’s fishery is dominated by species of much smaller economic value, such as yellow perch (Perca flavescens), rainbow smelt (Osmerus mordax), and carp (Cyprinus carpio).

In 1989 an invasive species—the zebra mussel (Dreissena polymorpha)—reached Lake Erie and began to have a significant ecological impact on the lake. Zebra mussels are filter feeders, and each adult mussel can filter a liter of water per day, removing every microscopic plant (phytoplankton or algae) and animal (zooplankton) in the process. Zebra mussel densities in Lake Erie have reached such a level that the entire volume of the lake’s western basin is filtered each week. This has increased water clarity up to 600 percent and reduced some forms of phytoplankton in the lake’s food web by as much as 80 percent. In addition, the increased clarity of the water allows light to penetrate deeper, facilitating the growth of rooted aquatic plants and increasing populations of some bottom-dwelling algae and tiny animals. Zebra mussels also concentrate 10 times more toxins than do native mussels, and these contaminants are passed up the food chain to the fish and birds that eat zebra mussels.
Since bioaccumulation of toxins has already led to advisories against eating some species of Great Lakes fish, the contribution of zebra mussels to contaminant cycling in lake species is a serious concern. See also Cultural eutrophication; Water pollution [Bill Freedman Ph.D.]
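The claim that zebra mussels filter the western basin's entire volume each week implies a rough population estimate. The basin volume and bed area used below are assumptions added purely for illustration (neither figure appears in this entry); only the one-liter-per-day filtering rate comes from the text:

```python
LITERS_PER_KM3 = 1e12   # 1 km³ = 1e9 m³, and 1 m³ = 1,000 L

# Assumed figures for illustration only (not from this entry):
basin_volume_km3 = 25   # rough volume of Lake Erie's shallow western basin
basin_area_km2 = 3000   # rough area of its bed

filter_rate_l_per_day = 1.0   # one liter per adult mussel per day (from text)

# Mussels needed to filter the basin's entire volume once per week:
mussels_needed = basin_volume_km3 * LITERS_PER_KM3 / (filter_rate_l_per_day * 7)

# Averaged over the basin floor, that is on the order of a thousand
# mussels per square meter, well within reported colony densities:
density_per_m2 = mussels_needed / (basin_area_km2 * 1e6)
```

Under these assumed numbers the weekly-turnover claim requires a few trillion mussels, a density that reported infestations easily reach.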

RESOURCES

BOOKS
Ashworth, W. The Late, Great Lakes: An Environmental History. New York: Knopf, 1986.
Freedman, B. Environmental Ecology. 2nd ed. San Diego: Academic Press, 1995.

PERIODICALS
Regier, H. A., and W. L. Hartman. “Lake Erie’s Fish Community: 150 Years of Cultural Stresses.” Science 180 (1973): 1248–55.

OTHER
“Zebra Mussels and Other Nonindigenous Species.” Sea Grant Great Lakes Network. August 15, 2001 [cited June 19, 2002].

Lake Tahoe

A beautiful lake 6,200 ft (1,891 m) high in the Sierra Nevada, straddling the California-Nevada state line, Lake Tahoe is a jewel to both nature-lovers and developers. It is the tenth deepest lake in the world, with a maximum depth of 1,600 ft (488 m) and a total volume of 37 trillion gallons. At the south end of the lake sits a dam that controls the top six feet of Lake Tahoe’s water, which flows into the outlet of the Truckee River. The U.S. Bureau of Reclamation controls water diversion into the Truckee, which is used for irrigation, power, and recreational purposes throughout Nevada. Tahoe and Crater Lake are the only two large alpine lakes remaining in the United States.

Visitors have expressed their awe of the lake’s beauty since it was discovered by John C. Frémont in 1844. Mark Twain wrote that it was “the fairest sight the whole Earth affords.” The arrival of Europeans in the Tahoe area was quickly followed by environmental devastation. Between 1870 and 1900, forests around the lake were heavily logged to provide timber for the mine shafts of the Comstock Lode. While this logging dramatically altered the area’s appearance for years, the natural environment eventually recovered, and no long-term logging-related damage to the lake can now be detected.

The same cannot be said for a later assault on the lake’s environment. Shortly after World War II, people began moving into the area to take advantage of the region’s natural wonders—the lake itself and superb snow skiing—as well as the young casino business on the Nevada side of the lake. The 1960 Winter Olympics, held at Squaw Valley, placed Tahoe’s recreational assets in the international spotlight. Lakeside population grew from about 20,000 in 1960 to more than 65,000 today, with an estimated tourist population of 22 million annually. The impact of this rapid population growth soon became apparent in the lake itself.
Early records showed that the lake was once clear enough to allow visibility to a depth of about 130 ft (40 m). By the late 1960s, that figure had dropped to about 100 ft (30 m). Tahoe is now undergoing eutrophication at a fairly rapid rate. Algal growth is being encouraged by sewage and fertilizer produced by human activities. Much of the area’s natural pollution controls, such as trees and plants, have been removed to make room for residential and commercial development. The lack of significant flow into and out of the lake also contributes to a favorable environment for algal growth.

Efforts to protect the pristine beauty of Lake Tahoe go back at least to 1912. Three efforts were made during that decade to have the lake declared a national park, but all failed. By 1958, concerned conservationists had formed the Lake Tahoe Area Council to “promote the preservation and long-range development of the Lake Tahoe basin.” The Council was followed by other organizations with similar objectives, the League to Save Lake Tahoe among them.

An important step in resolving the conflict between preservationists and developers occurred in 1969 with the creation of the Tahoe Regional Planning Agency (TRPA). The agency was the first and only land use commission with authority in more than one state. It consisted of fourteen members, seven appointed by each of the governors of the two states involved, California and Nevada. For more than a decade, the agency attempted to write a land-use plan that would be acceptable to both sides of the dispute. The conflict became more complex when the California Attorney General, John Van de Kamp, filed suit in 1985 to prevent TRPA from granting any further permits for development. Developers were outraged but lost all of their court appeals.

By 2000, the strain of tourism, development, and nonpoint automobile pollution was having a visible impact on Lake Tahoe’s legendary deep blue surface. A study released by the University of California—Davis and the University of Nevada—Reno reported that visibility in the lake had decreased to 70 ft (21 m), an average decline of a foot a year since the 1960s. As part of a renewed effort to reverse Tahoe’s environmental decline, President Clinton signed the Lake Tahoe Restoration Act into law in late 2000, authorizing $300 million towards restoration of water quality in Lake Tahoe over a period of 10 years.
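The reported decline, from about 100 ft of visibility in the late 1960s to 70 ft by 2000, is consistent with the quoted "foot a year" average. A linear interpolation between the two reported measurements sketches the arithmetic (the 1968 start year is our assumption standing in for "the late 1960s"; this is illustrative, not a prediction from the source):

```python
# Two reported clarity measurements:
start_year, start_ft = 1968, 100.0   # "about 100 ft" by the late 1960s
end_year, end_ft = 2000, 70.0        # UC Davis / UNR figure for 2000

# About 0.94 ft/yr, i.e. roughly "a foot a year"
rate_ft_per_year = (start_ft - end_ft) / (end_year - start_year)

def clarity_ft(year):
    """Linear interpolation between the two reported measurements."""
    return start_ft - rate_ft_per_year * (year - start_year)
```

A linear fit through only two points is the crudest possible model, but it matches the article's own characterization of the trend.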
See also Algal bloom; Cultural eutrophication; Environmental degradation; Fish kills; Sierra Club; Water pollution [David E. Newton and Paula Anne Ford-Martin]

RESOURCES

BOOKS
Strong, Douglas. Tahoe: From Timber Barons to Ecologists. Lincoln, NE: Bison Books, 1999.

OTHER
United States Department of Agriculture (USDA) Forest Service. Lake Tahoe Basin Management Unit. [cited July 8, 2002].
University of California-Davis. Tahoe Research Group. [cited July 8, 2002].
United States Geological Survey (USGS). Lake Tahoe Data Clearinghouse. [cited July 8, 2002].



Lake Washington

One of the great messages to come out of the environmental movement of the 1960s and 1970s is that, while humans can cause pollution, they can also clean it up. Few success stories illustrate this point as clearly as that of Lake Washington. Lake Washington lies in the western part of Washington State, along the eastern edge of the city of Seattle. It is 24 miles (39 km) from north to south, and its width varies from 2–4 miles (3–6 km).

For the first half of the twentieth century, Lake Washington was clear and pristine, a beautiful example of the Northwest’s spectacular natural scenery. Its shores were occupied by extensive wooded areas and a few small towns with populations of no more than 10,000. The lake’s purity was not threatened by Seattle, which dumped most of its wastes into Elliott Bay, an arm of Puget Sound.

This situation changed rapidly during and after World War II. In 1940, the spectacular Lake Washington Bridge was built across the lake, joining its two facing shores with each other and with Seattle. Population along the lake began to boom, reaching more than 50,000 by 1950. The consequences of these changes for the lake are easy to imagine. Many of the growing communities dumped their raw sewage directly into the lake or, at best, passed their wastes through only preliminary treatment stages. By one estimate, 20 million gallons (76 million liters) of wastes were being dumped into the lake each day. On average these wastes still contained about half of their pollutants when they reached the lake. In less than a decade, the effect of these practices on lake water quality was easy to observe. Water clarity was reduced from at least 15 ft (4.6 m) to 2.5 ft (0.8 m), and levels of dissolved oxygen were so low that some species of fish disappeared.

In 1956, W. T. Edmondson, a zoologist and pollution authority, and two colleagues reported their studies of the lake.
They found that eutrophication of the lake was taking place very rapidly as a result of the dumping of domestic wastes into its water. Solving this problem was especially difficult because water pollution is a regional issue over which each individual community had relatively little control. The solution appeared to be the creation of a new governmental body that would encompass all of the Lake Washington communities, including Seattle. In 1958, a ballot measure establishing such an agency, known as Metro, was passed in Seattle but defeated in its suburbs. Six months later, the Metro concept was redefined to include the issue of sewage disposal only. This time it passed in all communities.

Metro’s approach to the Lake Washington problem was to construct a network of sewer lines and sewage treatment plants that directed all sewage away from the lake and delivered it instead to Puget Sound. The lake’s pollution problems were solved within a few years. By 1975 the lake was back to normal: water clarity returned to 15 ft (4.6 m), and levels of phosphorus and nitrogen in the lake decreased by more than 60 percent. Lake Washington’s biological oxygen demand (BOD), a critical measure of water purity, decreased by 90 percent, and fish species that had disappeared were once again found in the lake. See also Aquatic chemistry; Cultural eutrophication; Waste management; Water quality standards

[David E. Newton]
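The clarity figures above can be restated as light-attenuation coefficients using a common limnological rule of thumb, k ≈ 1.7 / Secchi depth. The 1.7 constant is a textbook approximation, not a value from this entry, so the sketch below is illustrative only:

```python
FT_PER_M = 3.28084

def attenuation_k(secchi_ft):
    """Approximate vertical light-attenuation coefficient (per meter) from
    a Secchi-disk depth, via the rule of thumb k = 1.7 / depth_in_meters."""
    secchi_m = secchi_ft / FT_PER_M
    return 1.7 / secchi_m

k_polluted = attenuation_k(2.5)   # worst recorded clarity: about 2.2 per meter
k_restored = attenuation_k(15)    # after the cleanup: about 0.37 per meter
```

Because the 1.7 constant cancels, the ratio of the two coefficients is exactly the clarity ratio, 15 / 2.5 = 6: light was extinguished roughly six times faster per meter at the lake's worst than after restoration.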

RESOURCES

BOOKS
Edmondson, W. T. “Lake Washington.” In Environmental Quality and Water Development, edited by C. R. Goodman, et al. San Francisco: W. H. Freeman, 1973.
———. The Uses of Ecology: Lake Washington and Beyond. Seattle: University of Washington Press, 1991.

OTHER
Li, Kevin. “The Lake Washington Story.” King County Web Site. May 2, 2001 [cited June 19, 2002].

Lakes see Experimental Lakes Area; Lake Baikal; Lake Erie; Lake Tahoe; Lake Washington; Mono Lake; National lakeshore

Land degradation see Desertification

Land ethic

Land ethic refers to an approach to issues of land use that emphasizes conservation and respect for our natural environment. Rejecting the belief that all natural resources should be available for unchecked human exploitation, a land ethic advocates land use without undue disturbance of the complex, delicately balanced ecological systems of which humans are a part. Land ethic, environmental ethics, and ecological ethics are sometimes used interchangeably.

Discussions of land ethic, especially in the United States, usually begin with a reference of some kind to Aldo Leopold. Many participants in the debate over land and resource use admire Leopold’s prescient and pioneering quest and date the beginnings of a land ethic to his A Sand County Almanac, published in 1949. However, Leopold’s earliest formulation of his position may be found in “A Conservation Ethic,” a benchmark essay on ethics published in 1933.

Even recognizing Leopold’s remarkable early contribution, it is still necessary to place his pioneering work in a larger context. Land ethic is not a radically new invention of the twentieth century but has many ancient and modern antecedents in the Western philosophical tradition. The Greek philosopher Plato, for example, wrote that morality is “the effective harmony of the whole”—not a bad statement of an ecological ethic. Reckless exploitation has at times been justified as enjoying divine sanction in the Judeo-Christian tradition (man was made master of the creation, authorized to do with it as he saw fit). However, most Christian thought through the ages has interpreted the proper human role as one of careful husbandry of resources that do not, in fact, belong to humans. Thomas Huxley in the nineteenth century and Julian Huxley in the twentieth worked on relating evolution and ethics. The mathematician and philosopher Bertrand Russell wrote that “man is not a solitary animal, and so long as social life survives, self-realization cannot be the supreme principle of ethics.” Albert Schweitzer became famous—at about the same time that Leopold formulated a land ethic—for teaching reverence for life, and not just human life. Many nonwestern traditions also emphasize harmony and a respect for all living things. Such a context implies that a land ethic cannot easily be separated from age-old thinking on ethics in general. See also Land stewardship

[Gerald L. Young and Marijke Rijsberman]

RESOURCES

BOOKS
Bormann, F. H., and S. R. Kellert, eds. Ecology, Economics, Ethics: The Broken Circle. New Haven, CT: Yale University Press, 1991.
Kealey, D. A. Revisioning Environmental Ethics. Albany: State University of New York Press, 1989.
Leopold, A. A Sand County Almanac. New York: Oxford University Press, 1949.
Nash, R. F. The Rights of Nature: A History of Environmental Ethics. Madison: University of Wisconsin Press, 1989.
Rolston, H. Environmental Ethics. Philadelphia: Temple University Press, 1988.
Turner, F. “A New Ecological Ethics.” In Rebirth of Value. Albany: State University of New York Press, 1991.

OTHER
Callicott, J. Baird. “The Land Ethic: Key Philosophical and Scientific Challenges.” October 15, 1998 [cited June 19, 2002].

Land Institute Founded in 1976 by Wes and Dana Jackson, the Land Institute is both an independent agricultural research station


and a school devoted to exploring and developing alternative agricultural practices. Located on the Smoky Hill River near Salina, Kansas, the Institute attempts—in Wes Jackson’s words—to “make nature the measure” of human activities so that humans “meet the expectations of the land,” rather than abusing the land for human needs. This requires a radical rethinking of traditional and modern farming methods. The aim of the Land Institute is to find “new roots for agriculture” by reexamining its traditional assumptions.

In traditional tillage farming, furrows are dug into the topsoil and seeds are planted. This leaves precious topsoil exposed to erosion by wind and water. Topsoil loss can be minimized, but not eliminated, by contour plowing, the use of windbreaks, and other means. Although critical of traditional tillage agriculture, Jackson is even more critical of the methods and machinery of modern industrial agriculture, which in effect trades topsoil for high crop yields (roughly one bushel of topsoil is lost for every bushel of corn harvested). It also relies on plant monocultures—genetically uniform strains of corn, wheat, soybeans, and other crops. These crops are especially susceptible to disease and insect infestations and require extensive use of pesticides and herbicides which, in turn, kill useful creatures (for example, worms and birds), pollute streams and groundwater, and produce other destructive side effects.

Although spectacularly successful in the short run, such an agriculture is both unsustainable and self-defeating. Its supposed strengths—its productivity, its efficiency, its economies of scale—are also its weaknesses. Short-term gains in production do not, Jackson argues, justify the longer-term depletion of topsoil, the diminution of genetic diversity, and such social side effects as the disappearance of small family farms and the abandonment of rural communities.
If these trends are to be questioned—much less slowed or reversed—a practical, productive, and feasible alternative agriculture must be developed. To develop such a workable alternative is the aim of the Land Institute. The Jacksons and their associates are attempting to devise an alternative vision of agricultural possibilities. This begins with the important but oft-neglected truism that agriculture is not self-contained but is intertwined with and dependent on nature.

The Institute explores the feasibility of alternative farming methods that might minimize or even eliminate the planting and harvesting of annual crops, turning instead to “herbaceous perennial seed-producing polycultures” that protect and bind topsoil. Food grains would be grown in pasture-like fields and intermingled with other plants that would replenish lost nitrogen and other nutrients, without relying on chemical fertilizers. Covered by a rooted living net of diverse plant life, the soil would at no time be exposed to erosion and would be aerated and rejuvenated by natural



means. And the farmer, in symbiotic partnership, would take nature as the measure of his methods and results. The experiments at the Land Institute are intended to make this vision into a workable reality. It is as yet too early to tell exactly what these continuing experiments might yield. But the re-visioning of agriculture has already begun and continues at the Land Institute. [Terence Ball]

RESOURCES
ORGANIZATIONS
The Land Institute, 2440 E. Water Well Road, Salina, KS USA 67401 (785) 823-5376, Fax: (785) 823-8728, Email: thelandweb@landinstitute.org

Land reform

Land reform is a social and political restructuring of agricultural systems through the redistribution of land. Successful land reform policies take into account the political, social, and economic structure of the area. In agrarian societies, large landowners typically control the wealth and the distribution of food. Land reform policies in such societies allocate land to small landowners, to farm workers who own no land, to collective farm operations, or to state farm organizations. The exact nature of the allocation depends on the motivation of those initiating the changes. In areas where absentee ownership of farmland is common, land reform has become a popular method for returning the land to local ownership. Land reforms generally favor the family-farm concept, rather than absentee landholding.

Land reform is often undertaken as a means of achieving greater social equality, but it can also increase agricultural productivity and benefit the environment. A tenant farmer may have a more emotional and protective relation to the land he works, and he may be more likely to make agricultural decisions that benefit the ecosystem. Such a farmer might, for instance, opt for natural pest control. An absentee owner often does not have the same interest in land stewardship.

Land reform does have negative connotations and is often associated with the state collective farms under communism. Most proponents of land reform, however, do not consider these collective farms good examples, and they argue that successful land reform balances the factors of production so that the full agricultural capabilities of the land can be realized. Reforms should always be designed to increase the efficiency and economic viability of farming.

Land reform is usually more successful if it is enacted with agrarian reforms, which may include the use of agricultural extension agents, agricultural cooperatives, favorable labor legislation, and increased public services for farmers, such as health care and education. Without these measures land reform usually falls short of redistributing wealth and power, or fails to maintain or increase production. See also Agricultural pollution; Sustainable agriculture; Sustainable development

[Linda Rehkopf]

RESOURCES
BOOKS
Mengisteab, K. Ethiopia: Failure of Land Reform and Agricultural Crisis. Westport, CT: Greenwood Publishing Group, 1990.
PERIODICALS
Perney, L. “Unquiet on the Brazilian Front.” Audubon 94 (January-February 1992): 26–9.

Land stewardship

Little has been written explicitly on the subject of land stewardship. Much of the literature that does exist is limited to a biblical or theological treatment of stewardship. However, literature on the related ideas of sustainability and the land ethic has expanded dramatically in recent years, and these concepts are at the heart of land stewardship.

Webster’s and the Oxford English Dictionary both define a “steward” as an official in charge of a household, church, estate, or governmental unit, or one who makes social arrangements for various kinds of events; a manager or administrator. Similarly, stewardship is defined as doing the job of a steward or, in ecclesiastical terms, as “the responsible use of resources,” meaning especially money, time, and talents, “in the service of God.” Intrinsic in those restricted definitions is the idea of responsible caretakers, of persons who take good care of the resources in their charge, including natural resources. “Caretaking” universally includes caring for the material resources on which people depend, and by extension, the land or environment from which those resources are extracted. Any concept of steward or stewardship must include the notion of ensuring the essentials of life, all of which derive from the land.

While there are few works written specifically on land stewardship, the concept is embedded implicitly and explicitly in the writings of many articulate environmentalists. For example, Wendell Berry, a poet and essayist, is one of the foremost contemporary spokespersons for stewardship of the land. In his books, Farming: A Handbook (1970), The Unsettling of America (1977), The Gift of Good Land (1981), and Home Economics (1987), Berry shares his wisdom on caring for the land and the necessity of stewardship. He finds a mandate for good stewardship in religious traditions, including Judaism and Christianity: “The divine mandate to use the world justly and charitably, then, defines every person’s moral predicament as that of a steward.” Berry, however, does not leave stewardship to divine intervention. He describes stewardship as “hopeless and meaningless unless it involves long-term courage, perseverance, devotion, and skill” on the part of individuals, and not just farmers. He suggests that when we lost the skill to use the land properly, we lost stewardship.

Berry does not limit his notion of stewardship to a biblical or religious one. He lays down seven rules of land stewardship—rules of “living right.” These are:

• using the land will lead to ruin of the land unless it “is properly cared for”;
• if people do not know the land intimately, they cannot care for it properly;
• motivation to care for the land cannot be provided by “general principles or by incentives that are merely economic”;
• motivation to care for the land, to live with it, stems from an interest in that land that “is direct, dependable, and permanent”;
• motivation to care for the land stems from an expectation that people will spend their entire lives on the land, and even more so if they expect their children and grandchildren to also spend their entire lives on that same land;
• the ability to live carefully on the land is limited; owning too much acreage, for example, decreases the quality of attention needed to care for the land;
• a nation will destroy its land and therefore itself if it does not foster rural households and communities that maintain people on the land as outlined in the first six rules.

Stewardship implies at the very least, then, an attempt to reconnect to a piece of land. Reconnecting means getting to know that land as intimately as possible. This does not necessarily imply ownership, although enlightened ownership is at the heart of land stewardship.
People who own land have some control of it, and effective stewardship requires control, if only in the sense of enough power to prevent abuse. But ownership obviously does not guarantee stewardship—great and widespread abuses of land are perpetrated by owners. Absentee ownership, for example, often means a lack of connection, a lack of knowledge, and a lack of caring. And public ownership too often means non-ownership, leading to the “Tragedy of the Commons.” Land ownership patterns are critical to stewardship, but no one type of ownership guarantees good stewardship. Berry argues that true land stewardship usually begins with one small piece of land, used or controlled or owned by an individual who lives on that land. Stewardship, however, extends beyond any one particular piece of land. It implies


knowledge of, and caring for, the entire system of which that land is a part, a knowledge of a land’s context as well as its content. It also requires understanding the connections between landowners or land users and the larger communities of which they are a part. This means that stewardship depends on interconnected systems of ecology and economics, of politics and science, of sociology and planning. The web of life that exists interdependent with a piece of land mandates attention to a complex matrix of connections. Stewardship means keeping the web intact and functional, or at least doing so on enough land over a long enough period of time to sustain the populations dependent on that land.

Berry and many other critics of contemporary land-use patterns and policies claim that little attention is being paid to maintaining the complex communities on which sustenance, human and otherwise, depends. Until holistic, ecological knowledge becomes more of a basis for economic and political decision-making, they assert, stewardship of the critical land base will not become the norm. See also Environmental ethics; Holistic approach; Land use; Sustainable agriculture; Sustainable biosphere; Sustainable development

[Gerald L. Young Ph.D.]

RESOURCES
BOOKS
Byron, W. J. Toward Stewardship: An Interim Ethic of Poverty, Power and Pollution. New York: Paulist Press, 1975.
de Jouvenel, B. “The Stewardship of the Earth.” In The Fitness of Man’s Environment. New York: Harper & Row, 1968.
Knight, Richard L., and Peter B. Landres, eds. Stewardship Across Boundaries. Washington, DC: Island Press, 1998.
Paddock, J., N. Paddock, and C. Bly. Soil and Survival: Land Stewardship and the Future of American Agriculture. San Francisco: Sierra Club Books, 1986.

Land Stewardship Project

The Land Stewardship Project (LSP) is a nonprofit organization based in Minnesota and committed to promoting an ethic of environmental and agricultural stewardship. The group believes that the natural environment is not an exploitable resource but a gift given to each generation for safekeeping. To preserve and pass on this gift to future generations is, for the LSP, both a moral and a practical imperative.

Founded in 1982, the LSP is an alliance of farmers and city-dwellers dedicated both to preserving the small family farm and to practicing sustainable agriculture. Like Wendell Berry and Wes Jackson (with whom they are affiliated), the LSP is critical of conventional agricultural



practices that emphasize plant monocultures, large acreage, intensive tillage, extensive use of herbicides and pesticides, and the economies of scale that these practices make possible. The group believes that agriculture conducted on such an industrial scale is bound to be destructive not only of the natural environment but of family farms and rural communities as well. The LSP accordingly advocates the sort of smaller-scale agriculture that, in Berry’s words, “depletes neither soil, nor people, nor communities.”

The LSP sponsors legislative initiatives to save farmland and wildlife habitat, to limit urban sprawl and protect family farms, and to promote sustainable agricultural practices. It supports educational and outreach programs to inform farmers, consumers, and citizens about agricultural and environmental issues. The LSP also publishes a quarterly Land Stewardship Letter and distributes videotapes about sustainable agriculture and other environmental concerns.

[Terence Ball]

RESOURCES ORGANIZATIONS The Land Stewardship Project, 2200 4th Street, White Bear Lake, MN USA 55110 (651) 653-0618, Fax: (651) 653-0589, Email: [email protected],

Land trusts

A land trust is a private, legally incorporated, nonprofit organization that works with property owners to protect open land through direct, voluntary land transactions. Land trusts come in many varieties, but their intent is consistent: they are developed for the purpose of holding land against a development plan until the public interest can be ascertained and served. Some land trusts hold land open until public entities can purchase it. Some purchase land and manage it for the common good. In some cases land trusts buy development rights to preserve the land area for future generations while leaving the current use in the hands of private interests, with written documentation as to how the land can be used. This same technique can be used to adjust land use so that some part of a parcel is preserved while another part of the same parcel can be developed, all based on land sensitivity.

There is a hierarchy of land trusts. Some trusts protect areas as small as neighborhoods, forming to address one land use issue, after which they disband. More often, land trusts are local in nature but have a global perspective with regard to their goals for future land protection. The big national trusts are names that we all recognize, such as the Conservation Fund, The Nature Conservancy, the American Farmland Trust, and the Trust for Public Land. The Land Trust Alliance coordinates the activities of many land trusts. Currently, there are over 1,200 local and regional land trusts in the United States. Some of these trusts form as a direct response to citizen concerns about the loss of open space. Most land trusts evolve out of citizens’ concerns over the future of their state, town, and neighborhood. Many are preceded by failures on the part of local governments to respond to stewardship mandates by the voters.

Land trusts work because they are built by concerned citizens and funded by private donations with the express purpose of securing the sustainability of an acceptable quality of life. Also, land trusts are effective because they purchase land (or development rights) from local people for local needs. Transactions are often carried out over a kitchen table with neighbors discussing priorities. In some cases the trust’s board of directors might be engaged in helping a citizen to draw up a will leaving farmland or potential recreation land to the community. This home rule concept is the backbone of the land trust movement.

Additionally, land trusts gain strength from public/private partnerships that emerge as a result of shared objectives with governmental agencies. If the work of the land trust is successful, part of the outcome is an enhanced ability to cooperate with local government agencies. Agencies learn to trust the land trust staff and begin to rely on the special expertise that grows within a land trust organization. In some cases the land trust gains both opportunities and resources as a result of its partnership with governmental agencies. This public/private partnership benefits citizens as projects come together and land use options are retained for current and future generations.

Flexibility is an important and essential quality of a land trust that enables it to be creative.
Land trusts can have revolving accounts, or lines of credit, from banks that allow them to move quickly to acquire land. Compensation to landowners who agree to work with the trust may come in the form of extended land use for the ex-owner, land trades, tax compensation, and other compensation packages. Often some mix of protection and compensation packages will be created that a governmental agency simply does not have the ability to implement. A land trust’s flexibility is its most important attribute. Where a land trust can negotiate land acquisition based on a discussion among the board members, a governmental agency would go through months or even years of red tape before an offer to buy land for the public domain could be made. This quality of land trusts is one reason why many governmental agencies have built relationships with land trusts in order to protect land that the agency deems sensitive and important.

There are some limiting factors constraining what land trusts can do. For the more localized trusts, limited volunteer staff and extremely limited budgets cause fundraising to

become a time-consuming activity. Staff turnover can be frequent, so that a knowledge base is difficult to maintain. In some circumstances influential volunteers can capture a land trust organization and follow their own agenda rather than letting the agenda be set by affected stakeholders.

Training is needed for those committed to working within the legal structure of land trusts. The national Trust for Public Land has established training opportunities to better prepare local land trust staff for the complex negotiations that are needed to protect public lands. Staff who work with local citizenry to protect local needs must be aware of the costs and benefits of land preservation mechanisms. Lease-purchase agreements, limited partnerships, and fee simple transactions all require knowledge of real estate law. Operating within enterprise zones and working with economic development corporations requires knowledge of state and federal programs that provide money for projects on the urban fringe. In some cases urban renewal work reveals open space within the urban core that can be preserved for community gardens or parks if that land can be secured using HUD funds or other government financing mechanisms. A relatively new source of funding for land acquisition is mitigation funds. These funds are usually generated as a result of settlements with industry or governmental agencies as compensation for negative land impacts. Distinguishing among financing mechanisms requires specialized knowledge that land trust staff need to have available within their ranks in order to move quickly to preserve open space and enhance the quality of life for urban dwellers.

On the other hand, some land trusts in rural areas are interested in conserving farmland using preserves that allow farmers to continue to farm while protecting the rural character of the countryside.
Like their urban counterparts, these farmland preserve programs are complex, and if they are to be effective the trust needs to employ its solid knowledge of economic trends and resources.

The work that land trusts do is varied. In some cases a land trust incorporates as a result of a local threat, such as a pipeline or railway coming through an area. In some cases a trust forms to counter an undesirable land use such as a landfill or a low-level radioactive waste storage facility. In other instances, a land trust comes together to take advantage of a unique opportunity, such as a family wanting to sell some pristine forest close to town or an industry deciding to relocate, leaving a lovely waterfront location with promise as a riverfront recreation area. It is rare that a land trust forms without a focused need. However, after the initial project is completed, its success breeds self-confidence in those who worked on the project, and new opportunities or challenges may sustain the goals of the fledgling organization.

There are many examples of land trusts, and the few highlighted here may help to enhance understanding of the


value of land trust activities and to offer guidance to local groups wanting to preserve land. One outstanding example of land trust activity is the Rails-to-Trails program in Michigan. Under this program, abandoned railroad rights-of-way are preserved to create green belts for recreational use through agreements with the railroad companies.

The Trust for Public Land (TPL) has assisted many local land trusts to implement a wide variety of land acquisition projects. One such complex agreement took place in Tucson, Arizona. In this case, the Tucson city government wanted to acquire seven parcels of land that totaled 40 acres (16 ha). For financial reasons the city was not able to acquire the land. At that point the Trust for Public Land was asked to become a private nonprofit partner and to work with the city to acquire the land. TPL used its creative expertise to help each of the landowners make mutually beneficial arrangements with the city so that a large urban park could become a reality. In some cases the TPL offered a life tenancy to the current owners in exchange for a reduced land price. In another case they offered a five-year tenancy and a job as caretaker in exchange for a reduced purchase price. As the community worked on the future of the park, another landowner who owned a contiguous parcel stepped forward with an offer to sell. Each of these transactions was successful because the land trust was flexible, considerate of the landowners and up front about the goals of its work, and responsive to its public partner, the city government.

Our current land trust effort in the United States has affected the way we protect our sensitive lands, reclaim damaged lands, and respond to local needs. Land trusts conserve land, guide future planning, educate local citizens and government agencies to a new way of doing business, and do it all with a minimum amount of confrontation and legal interaction.
These private, non-profit organizations have stepped in and filled a niche in the environmental conservation movement started in the 1970s and have gotten results through a system of cooperative and well-informed action. [Cynthia Fridgen]

RESOURCES
BOOKS
Diamond, H. L., and P. F. Noonan. Land Use in America: The Report of the Sustainable Use of Land Project. Lincoln Institute of Land Policy. Washington, DC: Island Press, 1996.
Endicott, E., ed. Land Conservation Through Public/Private Partnerships. Lincoln Institute of Land Policy. Washington, DC: Island Press, 1993.
Platt, R. H. Land Use and Society: Geography, Law, and Public Policy. Washington, DC: Island Press, 1996.

OTHER
Land Trust Alliance. 2002 [June 20, 2002].
Trust for Public Land. 2002 [June 20, 2002].




Land use

Land is any part of the earth’s surface that can be owned as property. Land comprises a particular segment of the earth’s crust and can be defined in specific terms. The location of land is extremely important in determining land use and land value. Land is limited in supply, and, as our population increases, we have less land to support each person. Land nurtures the plants and animals that provide our food and shelter. It is the watershed or reservoir for our water supply. Land provides the minerals we utilize, the space on which we build our homes, and the site of many recreational activities. Land is also the depository for much of the waste created by modern society.

The growth of human population provides only a partial explanation for the increased pressure on land resources. Economic development and a rise in the standard of living have brought about more demands for the products of the land. This demand now threatens to erode the land resource. We are terrestrial in our activities, and as our needs have diversified, so has land use. Conflicts among competing land uses have created the need for land-use planning. Previous generations used and misused the land as though the supply were inexhaustible. Today, goals and decisions about land use must link information from the physical and biological sciences with current social values and political realities.

Land characteristics and ownership provide a basis for the many uses of land. Some land uses are classified as irreversible, for example, when the application of a particular land use changes the original character of the land to such an extent that reversal to its former use is impracticable. Reversible land uses do not change the soil cover or landform, and the land manager has many options when overseeing reversible land uses.
A framework for land-use planning requires the recognition that plans, policies, and programs must consider physical and biological, economic, and institutional factors. The physical framework of land focuses on the inanimate resources of soil, rocks and geological features, water, air, sunlight, and climate. The biological framework involves living things such as plants and animals. A key feature of the physical and biological framework is the need to maintain healthy ecological relationships. The land can support many human activities, but there are limits. Once those limits are exceeded, resources can be destroyed, and replacing them will be difficult.

The economic framework for land use requires that operators of land be provided sufficient returns to cover the cost of production. Surpluses of returns above costs must be realized by those who make the production decisions and

World land use. (McGraw-Hill Inc. Reproduced by permission.)

by those who bear the production costs. The economic framework provides the incentive to use the land in a way that is economically feasible. The institutional framework requires that programs and plans be acceptable within the working rules of society. Plans must also have the support of current governments.

A basic concept of land use is the question of who has the right to decide the use of a given tract of land. Legal decisions have provided the framework for land resource protection. Attitudes play an important role in influencing land use decisions, and changes in attitudes will often bring changes in our institutional framework.

Recent trends in land use in the United States show that substantial areas have shifted to urban and transportation uses, state and national parks, and wildlife refuges since 1950. The use of land has become one of our most serious environmental concerns. Today’s land use decisions will determine the quality of our future lifestyles and environment. The land use planning process is one of the most complex and least understood domestic concerns facing the nation. Additional changes in the institutional framework governing land use are necessary to allow society to protect the most limited resource on the planet—the land we live on.

[Terence H. Cooper]

RESOURCES
BOOKS
Beatty, M. T. Planning the Uses and Management of Land. Series in Agronomy, no. 21. Madison, WI: American Society of Agronomy, 1979.
Davis, K. P. Land Use. New York: McGraw-Hill, 1976.
Fabos, J. G. Land-Use Planning: From Global to Local Challenge. New York: Chapman and Hall, 1985.
Lyle, John T., and Joan Woodward. Design for Human Ecosystems: Landscape, Land Use and Natural Resources. Washington, DC: Island Press, 1999.



McHarg, I. L. Design With Nature. New York: John Wiley and Sons, 1995.
Silber, Jane, and Chris Maser. Land-Use Planning for Sustainable Development. Boca Raton: CRC Press, 2000.

Landfill

Surface waters, oceans, and landfills have traditionally been the main repositories for society’s solid and hazardous waste. Landfills are located in excavated areas, such as sand and gravel pits, or in valleys that are near waste generators. They have been cited as sources of surface water and groundwater contamination and are believed to pose a significant health risk to humans, domestic animals, and wildlife. Despite these adverse effects and the attendant publicity, landfills are likely to remain a major waste disposal option for the immediate future.

Among the reasons that landfills remain a popular alternative are their simplicity and versatility. For example, they are not sensitive to the shape, size, or weight of a particular waste material. Since they are constructed of soil, they are rarely affected by the chemical composition of a particular waste component or by any collective incompatibility of co-mingled wastes. By comparison, composting and incineration require uniformity in the form and chemical properties of the waste for efficient operation. Landfills have also been a relatively inexpensive disposal option, but this situation is rapidly changing. Shipping costs, rising land prices, and new landfill construction and maintenance requirements all contribute to increasing costs. About 57% of the solid waste generated in the United States is still dumped in landfills.

In a sanitary landfill, refuse is compacted each day and covered with a layer of dirt. This procedure minimizes odor and litter and discourages the insect and rodent populations that may spread disease. Although this method does help control some of the pollution generated by the landfill, the fill dirt also occupies up to 20% of the landfill space, reducing its waste-holding capacity. Sanitary landfills traditionally have not been enclosed in a waterproof lining to prevent leaching of chemicals into groundwater, and many cases of groundwater pollution have been traced to landfills.
Historically, landfills were placed in a particular location more for convenience of access than for any environmental or geological reason. Now more care is taken in the siting of new landfills. For example, sites located on faulted or highly permeable rock are passed over in favor of sites with a less permeable foundation. Rivers, lakes, floodplains, and groundwater recharge zones are also avoided. It is believed that the care taken in the initial siting of a landfill will reduce the necessity for future clean-up and site rehabilitation.

Due to these and other factors, it is becoming increasingly difficult to find suitable locations for new landfills.

A secure landfill. (McGraw-Hill Inc. Reproduced by permission.)

Easily accessible open space is becoming scarce, and many communities are unwilling to accept the siting of a landfill within their boundaries. Many major cities have already exhausted their landfill capacity and must export their trash, at significant expense, to other communities or even to other states and countries.

Although a number of significant environmental issues are associated with the disposal of solid waste in landfills, the disposal of hazardous waste in landfills raises even greater environmental concerns. A number of urban areas contain hazardous waste landfills. Love Canal is, perhaps, the most notorious example of the hazards associated with these landfills. This Niagara Falls, New York, neighborhood was built over a dump containing 20,000 metric tons of toxic chemical waste. Increased levels of cancer, miscarriages, and birth defects among those living in Love Canal led to the eventual evacuation of many residents. The events at Love Canal were also a major impetus behind the passage of the Comprehensive Environmental Response, Compensation and Liability Act in 1980, designed to clean up such sites. The U.S. Environmental Protection Agency estimates that there may be as many as 2,000 hazardous waste disposal sites in this country that pose a significant threat to human health or the environment.

Love Canal is only one example of the environmental consequences that can result from disposing of hazardous waste in landfills. However, techniques now exist to create secure landfills that are an acceptable disposal option for hazardous waste in many cases. The bottom and sides of a secure landfill contain a cushion of recompacted clay that is flexible and resistant to cracking if the ground shifts. This clay layer is impermeable to groundwater and safely contains the waste. A layer of gravel containing a grid of perforated drain pipes is laid over the clay. These pipes collect any seepage that escapes from the waste stored in the landfill.

Over the gravel bed a thick polyethylene liner is positioned. A layer of soil or sand covers and cushions this plastic liner, and the wastes, packed in drums, are placed on top of this layer. When the secure landfill reaches capacity, it is capped by a cover of clay, plastic, and soil, much like the bottom layers. Vegetation is planted to stabilize the surface and make the site more attractive. Sump pumps collect any fluids that filter through the landfill, either from rainwater or from waste leakage. This liquid is purified before it is released. Monitoring wells around the site ensure that the groundwater does not become contaminated. In some areas where the water table is particularly high, above-ground storage may be constructed using similar techniques. Although such facilities are more conspicuous, they have the advantage of being easier to monitor for leakage.

Although technical solutions have been found to many of the problems associated with secure landfills, several nontechnical issues remain. One of these issues concerns the transportation of hazardous waste to the site. Some states do not allow hazardous waste to be shipped across their territory because they are worried about the possibility of accidental spills. If hazardous waste disposal is concentrated in only a few sites, then a few major transportation routes will carry large volumes of this material. Citizen opposition to hazardous waste landfills is another issue. Given the past record of corporate and governmental irresponsibility in dealing with hazardous waste, it is not surprising that community residents greet proposals for new landfills with the NIMBY (Not In My BackYard) response. However, the waste must go somewhere. These and other issues must be resolved if secure landfills are to be a viable long-term solution to hazardous waste disposal.

See also Groundwater monitoring; International trade in toxic waste; Storage and transportation of hazardous materials

[George M. Fell and Christine B. Jeryan]

RESOURCES
BOOKS
Bagchi, A. Design, Construction and Monitoring of Landfills. 2nd ed. New York: Wiley, 1994.
Neal, H. A. Solid Waste Management and the Environment: The Mounting Garbage and Trash Crisis. Englewood Cliffs, NJ: Prentice-Hall, 1987.
Noble, G. Siting Landfills and Other LULUs. Lancaster, PA: Technomic Publishing, 1992.
Requirements for Hazardous Waste Landfill Design, Construction and Closure. Cincinnati: U.S. Environmental Protection Agency, 1989.

PERIODICALS
“Experimental Landfills Offer Safe Disposal Options.” Journal of Environmental Health 51 (March-April 1989): 217–18.


Loupe, D. E. “To Rot or Not; Landfill Designers Argue the Benefits of Burying Garbage Wet vs. Dry.” Science News 138 (October 6, 1990): 218–19+.
Wingerter, E. J., et al. “Are Landfills and Incinerators Part of the Answer? Three Viewpoints.” EPA Journal 15 (March-April 1989): 22–26.

Landscape ecology

Landscape ecology is an interdisciplinary field that emerged from several intellectual traditions in Europe and North America. An identifiable landscape ecology started in central Europe in the 1960s and in North America in the late 1970s and early 1980s. It became more visible with the establishment, in 1982, of the International Association of Landscape Ecology, with the publication of a major text in the field, Landscape Ecology, by Richard Forman and Michel Godron in 1984, and with the publication of the first issue of the association’s journal, Landscape Ecology, in 1987.

The phrase “landscape ecology” was first used in 1939 by the German geographer Carl Troll. He suggested that the “concept of landscape ecology is born from a marriage of two scientific outlooks, the one geographical (landscape), the other biological (ecology).” Troll coined the term landscape ecology to denote “the analysis of a physico-biological complex of interrelations, which govern the different area units of a region.” He believed that “landscape ecology...must not be confined to the large scale analysis of natural regions. Ecological factors are also involved in problems of population, society, rural settlement, land use, transport, etc.”

Landscape has long been a unit of analysis and a conceptual centerpiece of geography, with scholars such as Carl Sauer and J. B. Jackson adept at “reading the landscape,” including both the natural landscape of landforms and vegetation, and the cultural landscape as marked by human actions and as perceived by human minds.

Zev Naveh has been working on his own version of landscape ecology in Israel since the early 1970s. Like Troll, Naveh includes humans in his conception, and in fact enlarges landscape ecology to a global human ecosystem science, sort of a “bio-cybernetic systems approach to the landscape and the study of its use by [humans].” He sees landscape ecology first as a holistic approach to biosystems theory, the centerpiece being “recognition of the total human ecosystem as the highest level of integration,” and, second, as playing a central role in cultural evolution and as a “basis for interdisciplinary, task-oriented, environmental education.”

Landscape architecture is also to some extent landscape ecology, since landscape architects design complete vistas, from their beginnings and at various scales. This concern with designing and creating complete landscapes from bare ground can certainly be considered ecological, as it includes creating or adapting local land forms, planting appropriate vegetation, and designing and building various kinds of “furniture” and other artifacts on site. The British Landscape Institute and the British Ecological Society held a joint meeting in 1983, recognizing “that the time for ecology to be harnessed for the service of landscape design has arrived.” The meeting produced the twenty-fourth symposium of the British Ecological Society, titled Ecology and Design in Landscape.

Landscape planning can also to some degree be considered landscape ecology, especially in the ecological approach to landscape planning developed by Ian McHarg and his students and colleagues, and the LANDEP, or Landscape Ecological Planning, approach designed by Ladislav Miklos and Milan Ruzicka. Both of these ecological planning approaches are complex syntheses of spatial patterns, ecological processes, and human needs and wants.

Building on all of these traditions, yet slowly finding its own identity, landscape ecology is considered by some as a sub-domain of biological ecology and by others as a discipline in its own right. In Europe, landscape ecology continues to be an extension of the geographical tradition that is preoccupied with human-landscape interactions. In North America, landscape ecology has emerged as a branch of biological ecology, more concerned with landscapes as clusters of interrelated natural ecosystems. The European form of landscape ecology is applied to land and resource conservation, while in North America it focuses on fundamental questions of spatial pattern and exchange. Both traditions can address major environmental problems, especially the extinction of species and the maintenance of biological diversity.

The term landscape, despite the varied traditions and emerging disciplines described above, remains somewhat indeterminate, depending on the criteria set by individual researchers to establish boundaries.
Some consensus exists on its general definition in the new landscape ecology, as described in the composite form attempted here: a terrestrial landscape is miles- or kilometers-wide in area; it contains a cluster of interacting ecosystems repeated in somewhat similar form; and it is a heterogeneous mosaic of interconnected land forms, vegetation types, and land uses. As Risser and his colleagues emphasize, this interdisciplinary area focuses explicitly on spatial patterns: “Specifically, landscape ecology considers the development and dynamics of spatial heterogeneity, spatial and temporal interactions and exchanges across heterogeneous landscapes, influences of spatial heterogeneity on biotic and abiotic processes, and management of spatial heterogeneity.” Instead of trying to identify homogeneous ecosystems, landscape ecology focuses particularly on the heterogeneous patches and mosaics created by human disruption of natural systems, by the intermixing of cultural and natural landscape patterns. The real rationale for a landscape ecology perhaps should be this acknowledgment of the heterogeneity of contemporary landscape patterns, and the need to deal with the patchwork mosaics and intricate matrices that result from long-term human disturbance, modification, and utilization of natural systems.

Typical questions asked by landscape ecologists include these formulated by Risser and his colleagues: “What formative processes, both historical and present, are responsible for the existing pattern in a landscape?” “How are fluxes of organisms, of material, and of energy related to landscape heterogeneity?” “How does landscape heterogeneity affect the spread of disturbances?” While the first question is similar to ones long asked in geography, the other two are questions traditional to ecology, but distinguished here by the focus on heterogeneity.

Richard Forman, a prominent figure in the evolving field of landscape ecology, thinks the field has matured enough for general principles to have emerged; not ecological laws as such, but principles backed by enough evidence and examples to be true for 95 percent of landscape analyses. His 12 principles are organized by four categories: landscapes and regions; patches and corridors; mosaics; and applications. The principles outline expected or desirable spatial patterns and relationships, and how those patterns and relationships affect system functions and flows, organismic movements and extinctions, resource protection, and optimal environmental conditions.
Forman claims the principles “should be applicable for any environmental or societal landuse objective,” and that they are useful in more effectively “growing wood, protecting species, locating houses, protecting soil, enhancing game, protecting water resources, providing recreation, locating roads, and creating sustainable environments.”

Perhaps Andre Corboz provided the best description when he wrote of “the land as palimpsest”: landscape ecology recognizes that humans have written large on the land, and that behind the current writing visible to the eye, there is earlier writing as well, which also tells us about the patterns we see. Landscape ecology also deals with gaps in the text and tries to write a more complete accounting of the landscapes in which we live and on which we all depend.

[Gerald L. Young Ph.D.]

RESOURCES
BOOKS
Farina, Almo. Landscape Ecology in Action. New York: Kluwer, 2000.
Forman, R. T. T., and M. Godron. Landscape Ecology. New York: Wiley, 1986.
Risser, P. G., J. R. Karr, and R. T. T. Forman. Landscape Ecology: Directions and Approaches. Champaign: Illinois Natural History Survey, 1983.
Tjallingii, S. P., and A. A. de Veer, eds. Perspectives in Landscape Ecology: Contributions to Research, Planning and Management of Our Environment. Wageningen, The Netherlands: Pudoc, 1982. (Proceedings of the International Congress Organized by the Netherlands Society for Landscape Ecology, Veldhoven, The Netherlands, 6-11 April, 1981.)



Troll, C. Landscape Ecology. Delft, The Netherlands: The ITC-UNESCO Centre for Integrated Surveys, 1966.
Turner, Monica, R. H. Gardner, and R. V. O’Neill. Landscape Ecology in Theory and Practice: Patterns and Processes. New York: Springer Verlag, 2001.
Zonneveld, I. S., and R. T. T. Forman, eds. Changing Landscapes: An Ecological Perspective. New York: Springer-Verlag, 1990.

PERIODICALS
Forman, R. T. T. “Some General Principles of Landscape and Regional Ecology.” Landscape Ecology (June 1995): 133–142.
Golley, F. B. “Introducing Landscape Ecology.” Landscape Ecology 1, no. 1 (1987): 1–3.
Naveh, Z. “Landscape Ecology as an Emerging Branch of Human Ecosystem Science.” Advances in Ecological Research 12 (1982): 189–237.

Landslide

A general term for the discrete downslope movement of rock and soil masses under gravitational influence along a failure zone. The term “landslide” can refer to the resulting landform as well as to the process of movement. Many types of landslides occur, and they are classified by several schemes, according to a variety of criteria. Landslides are categorized most commonly on the basis of geometric form, but also by size, shape, rate of movement, and water content or fluidity. Translational, or planar, failures, such as debris avalanches and earth flows, slide along a fairly straight failure surface which runs approximately parallel to the ground surface. Rotational failures, such as rotational slumps, slide along a spoon-shaped failure surface, leaving a hummocky appearance on the landscape. Rotational slumps commonly transform into earthflows as they continue downslope. Landslides are usually triggered by heavy rain or melting snow, but major earthquakes can also cause landslides.

Land-use control

Land-use control is a relatively new concept. For most of human history, it was assumed that people could do whatever they wished with their own property. However, societies have usually recognized that the way an individual uses private property can sometimes have harmful effects on neighbors. Land-use planning has reached a new level of sophistication in developed countries over the last century. One of the first restrictions on land use in the United States, for example, was a 1916 New York City law limiting the size of skyscrapers because of the shadows they might cast on adjacent property.

Within a decade, the federal government began to act aggressively on land control measures. It passed the Mineral Leasing Act of 1920 in an attempt to control the exploitation of oil, natural gas, phosphate, and potash.

It adopted the Standard State Zoning Enabling Act of 1922 and the Standard City Planning Enabling Act of 1928 to promote the concept of zoning at state and local levels. Since the 1920s, every state and most cities have adopted zoning laws modeled on these two federal acts. Often detailed, exhaustive, and complex zoning regulations now control the way land is used in nearly every governmental unit. They specify, for example, whether land can be used for single-dwelling construction, multiple-dwelling construction, farming, industrial (heavy or light) development, commercial use, recreation, or some other purpose. Requests to use land for purposes other than that for which it is zoned require a variance or conditional use permit, a process that is often long, tedious, and confrontational.

Many types of land require special types of zoning. For example, coastal areas are environmentally vulnerable to storms, high tides, flooding, and strong winds. The federal government passed laws in 1972 and 1980, the National Coastal Zone Management Acts, to help states deal with the special problem of protecting coastal areas. Although initially slow to make use of these laws, states are becoming more aggressive about restricting the kinds of construction permitted along seashore areas.

Areas with special scenic, historic, or recreational value have long been protected in the United States. The nation’s first national park, Yellowstone National Park, was created in 1872. Not until 44 years later, however, was the National Park Service created to administer Yellowstone and other parks established since 1872. Today, the National Park Service and other governmental agencies are responsible for a wide variety of national areas such as forests, wild and scenic rivers, historic monuments, trails, battlefields, memorials, seashores and lakeshores, parkways, recreational areas, and other areas of special value.

Land-use control does not necessarily restrict usage.
Individuals and organizations can be encouraged to use land in certain desirable ways. An enterprise zone, for example, is a specifically designated area in which certain types of business activities are encouraged. The tax rate might be reduced for businesses locating in the area, or the government might relax certain regulations there.

Successful land-use control can result in new towns or planned communities, designed and built from the ground up to meet certain pre-determined land-use objectives. One of the most famous examples of a planned community is Brasilia, the capital of Brazil. The site for a new capital— an undeveloped region of the country—was selected and a totally new city was built in the 1950s. The federal government moved to the new city in 1960, and it now has a population of more than 1.5 million.

See also Bureau of Land Management; Riparian rights

[David E. Newton]

RESOURCES
BOOKS
Becker, Barbara, Eric D. Kelly, and Frank So. Community Planning: An Introduction to the Comprehensive Plan. Washington, DC: Milldale Press, 2000.
Newton, D. E. Land Use, A–Z. Hillside, NJ: Enslow Press, 1991.
Platt, Rutherford H. Land Use and Society: Geography, Law, and Public Policy. Washington, DC: Island Press, 1996.

Latency

Latency refers to the period of time it takes for a disease to manifest itself within the human body. It is the state of seeming inactivity that occurs between the instant of stimulation or initiating event and the beginning of response. The latency period differs dramatically for each stimulation, and as a result, each disease has its unique time period before symptoms occur.

When pathogens gain entry into a potential host, the body may fail to maintain adequate immunity and thus permits progressive viral or bacterial multiplication. This time lapse is also known as the incubation period. Each disease has definite, characteristic limits for a given host. During the incubation period, dissemination of the pathogen takes place and leads to the inoculation of a preferred or target organ. Proliferation of the pathogen, either in a target organ or throughout the body, then creates an infectious disease. Botulism, tetanus, gonorrhea, diphtheria, staphylococcal and streptococcal disease, pneumonia, and tuberculosis are among the diseases that take varied periods of time before the symptoms are evident. In the case of the childhood diseases—measles, mumps, and chicken pox—the incubation period is 14–21 days.

In the case of cancer, the latency period for a small group of transformed cells to result in a tumor large enough to be detected is usually 10–20 years. One theory postulates that every cancer begins with a single cell or small group of cells. The cells are transformed and begin to divide. Twenty years of cell division ultimately results in a detectable tumor. It is theorized that very low doses of a carcinogen could be sufficient to transform one cell into a cancerous tumor.

In the case of AIDS, an eight- to eleven-year latency period passes before the symptoms appear in adults. The length of this latency period depends upon the strength of the person’s immune system. If a person suspects he or she has been infected, early blood tests showing HIV antibodies or antigens can indicate the infection within three months of the stimulation. The three-month period before the appearance of HIV antibodies or antigens is called the “window period.” In many cases, doctors may fail to diagnose the disease at first, since AIDS symptoms are so general they may be confused with the symptoms of other, similar diseases. Childhood AIDS symptoms appear more quickly since young children have immune systems that are less fully developed.

[Liane Clorfene Casten]
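The 10–20 year cancer latency figure is consistent with simple doubling arithmetic: a tumor becomes detectable at roughly one billion cells (about a cubic centimeter), which is about 30 doublings from a single transformed cell. A minimal sketch of that arithmetic follows; the 150-day doubling time is an illustrative assumption, not a figure from this article, and real tumor doubling times vary widely:

```python
import math

CELLS_AT_DETECTION = 1e9   # ~1 cm^3 tumor; a common rule of thumb
DOUBLING_TIME_DAYS = 150   # assumed for illustration only

# Doublings needed for one transformed cell to reach a detectable mass
doublings = math.log2(CELLS_AT_DETECTION)
latency_years = doublings * DOUBLING_TIME_DAYS / 365.25

print(f"~{doublings:.0f} doublings, ~{latency_years:.0f} years of latency")
```

With the assumed 150-day doubling time this gives roughly 30 doublings and about 12 years; slower-growing tumors give the upper end of the 10–20 year range.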

Lawn treatment

Lawn treatment in the form of pesticides and inorganic fertilizers poses a substantial threat to the environment. Homeowners in the United States use approximately three times more pesticides per acre than the average farmer, adding up to some 136 million pounds (61.7 million kg) annually. Home lawns occupy more acreage in the United States than any agricultural crop, and a majority of the wildlife pesticide poisonings tracked by the Environmental Protection Agency (EPA) annually are attributed to chemicals used in lawn care.

The use of grass fertilizer is also problematic when it runs off into nearby waterways. Lawn grass in almost all climates in the United States requires watering in the summer, accounting for some 40 to 60 percent of the average homeowner’s water use annually. Much of the water sprinkled on lawns is lost as runoff. When this runoff carries fertilizer, it can cause excess growth of algae in downstream waterways, clogging the surface of the water and depleting the water of oxygen for other plants and animals. Herbicides and pesticides are also carried into downstream water, and some of these are toxic to fish, birds, and other wildlife.

Turf grass lawns are ubiquitous in all parts of the United States, regardless of the local climate. From Alaska to Arizona to Maine, homeowners surround their houses with grassy lawns, ideally clipped short, brilliantly green, and free of weeds. In almost all cases, the grass used is a hybrid of several species of grass from Northern Europe. These grasses thrive in cool, moist summers. In general, the United States experiences hotter, drier summers than Northern Europe. Moving from east to west across the country, the climate becomes less and less like the one in which the common turf grass evolved. The ideal American lawn is based primarily on English landscaping principles, and it does not look like an English lawn unless it is heavily supported with water.
The prevalence of lawns is a relatively recent phenomenon in the United States, dating to the late nineteenth century. When European settlers first came to this country, they found indigenous grasses that were not as nutritious for livestock and died under the trampling feet of sheep and cows. Settlers replaced native grasses with English and European grasses as fodder for grazing animals. In the late eighteenth century, American landowners began surrounding their estates with lawn grass, a style made popular earlier in England. The English lawn fad was fueled by eighteenth-century landscaper Lancelot “Capability” Brown, who removed whole villages and stands of mature trees and used sunken fences to achieve uninterrupted sweeps of green parkland. Both in England and the United States, such lawns and parks were mowed by hand, requiring many laborers, or they were kept cropped by sheep or even deer. Small landowners meanwhile used the land in front of their houses differently. The yard might be of stamped earth, which could be kept neatly swept, or it may have been devoted to a small garden, usually enclosed behind a fence.

The trend for houses set back from the street behind a stretch of unfenced lawn took hold in the mid-nineteenth century with the growth of suburbs. Frederick Law Olmsted, the designer of New York City’s Central Park, was a notable suburban planner, and he fueled the vision of the English manor for the suburban home. The unfenced lawns were supposed to flow from house to house, creating a common park for the suburb’s residents. These lawns became easier to maintain with the invention of the lawn mower. This machine debuted in England as early as 1830, but became popular in the United States after the Civil War. The first patent for a lawn sprinkler was granted in the United States in 1871. These developments made it possible for middle-class homeowners to maintain lush lawns themselves.

Chemicals for lawn treatment came into common use after World War II. Herbicides such as 2,4-D were used against broadleaf weeds. The now-banned DDT was used against insect pests. Homeowners had previously fertilized their lawns with commercially available organic formulations like dried manure, but after World War II inorganic, chemical-based fertilizers became popular for both agriculture and lawns and gardens.
Lawn care companies such as Chemlawn and Lawn Doctor originated in the 1960s, an era when homeowners were confronted with a bewildering array of chemicals deemed essential to a healthy lawn. Rachel Carson’s 1962 book Silent Spring raised an alarm about the prevalence of lawn chemicals and their environmental costs. Carson explained how the insecticide DDT builds up in the food chain, passing from insects and worms to fish and small birds that feed on them, ultimately endangering large predators like the eagle. DDT was banned in 1972, and some lawn care chemicals were restricted. Nevertheless, the lawn care industry continued to prosper, offering services such as combined seeding, herbicide, and fertilizer at several intervals throughout the growing season. Lawn care had grown to a $25 billion industry in the United States by the 1990s.

Even as the perils of particular lawn chemicals became clearer, it was difficult for homeowners to give them up. Statistics from the United States National Cancer Institute show that the incidence of childhood leukemia is 6.5% greater in families that use lawn pesticides than in those who do not. In addition, 32 of the 34 most widely used lawn care pesticides have not been tested for health and environmental issues. Because common lawn grasses grow poorly in many areas of the United States, they do not thrive without extra water and fertilizer. Lawns are vulnerable to insect pests, which can be controlled with pesticides, and if a weed-free lawn is the aim, herbicides are less labor-intensive than digging out dandelions one by one.

Some common pesticides used on lawns are acephate, bendiocarb, and diazinon. Acephate is an organophosphate insecticide, which works by damaging the insect’s nervous system. Bendiocarb is a carbamate insecticide, sold under several brand names, which works in the same way. Organophosphate and carbamate insecticides were first developed in the 1940s and 1950s, respectively. These will kill many insects, not only pests such as leafminers, thrips, and cinch bugs, but also beneficial insects, such as bees. Bendiocarb is also toxic to earthworms, a major food source for some birds. Birds too can die from direct exposure to bendiocarb, as can fish. Both these chemicals can persist in the soil for weeks.

Diazinon is another common pesticide used by homeowners on lawns and gardens. It is toxic to humans, birds, and other wildlife, and it has been banned for use on golf courses and turf farms. Nevertheless, homeowners may use it to kill pest insects such as fire ants. Harmful levels of diazinon were found in metropolitan storm water systems in California in the early 1990s, leached there from orchard run-off. Diazinon is responsible for about half of all reported wildlife poisonings involving lawn and garden chemicals.

Common lawn and garden herbicides appear to be much less toxic to humans and animals than pesticides. The herbicide 2,4-D, one of the earliest herbicides used in this country, can cause skin and eye irritation to people who apply it, and it is somewhat toxic to birds. It can be toxic to fish in some formulations.
Although contamination with 2,4-D has been found in some urban waterways, it has only been in trace amounts not thought to be harmful to humans. Glyphosate is another common herbicide, sold under several brand names, including the well-known Roundup. It is considered non-toxic to humans and other animals. Unlike 2,4-D, which kills broadleaf plants, glyphosate is a broad-spectrum herbicide used to control a great variety of annual, biennial, and perennial grasses, sedges, broadleaf weeds, and woody shrubs.

Common lawn and garden fertilizers are generally not toxic unless ingested in sufficient doses, yet they can have serious environmental effects. Run-off from lawns can carry fertilizer into nearby waterways. The nitrogen and phosphorus in the fertilizer stimulates plant growth, principally algae and microscopic plants. These tiny plants bloom, die, and decay. Bacteria that feed off plant decay then also undergo a surge in population. The overabundant bacteria consume oxygen, leading to oxygen-depleted water. This condition is called hypoxia. In some areas, fertilized run-off from lawns is as big a problem as run-off from agricultural fields. Lawn fertilizer is thought to be a major culprit in pollution of the Everglades in Florida. In 2001 the Minnesota legislature debated a bill to limit homeowners’ use of phosphorus in fertilizers because of problems with algae blooms on the state’s lakes.

There are several viable alternatives to the use of chemicals for lawn care. Lawn care companies often recommend multiple applications of pesticides, herbicides, and fertilizers, but an individual lawn may need such treatment only on a reduced schedule. Some insects such as thrips and mites are susceptible to insecticidal soaps and oils, which are not long-lasting in the environment. These could be used in place of diazinon, acephate, and other pesticides. Weeds can be pulled by hand, or left alone. Homeowners can have their lawn evaluated and their soil tested to determine how much fertilizer is needed. Slow-release fertilizers or organic fertilizers such as compost or seaweed emulsion do not give off such a large concentration of nutrients at once, so these are gentler on the environment.

Another way to cut back on the excess water and chemicals used on lawns is to reduce the size of the lawn. The lawn can be bordered with shrubbery and perennial plants, leaving just enough open grass as needed for recreation. Another alternative is to replace non-native turf grass with a native grass. Some native grasses stay green all summer, can be mown short, and look very much like a typical lawn. Native buffalo grass (Buchloe dactyloides) has been used successfully for lawns in the South and Southwest. Other native grass species are adapted to other regions. Another example is blue grama grass (Bouteloua gracilis), native to the Great Plains. This grass is tolerant of extreme temperatures and very little rainfall.
Some native grasses are best left unmowed, and in some regions homeowners have replaced their lawns with native grass prairies or meadows. In some cases, homeowners have done away with their lawns altogether, using stone or bark mulch instead, or planting a groundcover such as ivy or wild ginger. These plants might grow between trees, shrubs, and perennials, creating a very different look than the traditional green carpet.

For areas with water shortages, or for those who are concerned about conserving natural resources, xeriscape landscaping should be considered. Xeriscape comes from the Greek word xeros, meaning dry. Xeriscaping takes advantage of plants that thrive in desert conditions, such as cacti and grasses like Mexican feather grass and blue oat grass. Xeriscaping can also include rock gardening as part of the overall landscape plan.

[Angela Woodward]

RESOURCES

BOOKS
Bormann, F. Herbert, Diana Balmori, and Gordon T. Geballe. Redesigning the American Lawn. New Haven and London: Yale University Press, 1993.
Jenkins, Virginia Scott. The Lawn: A History of an American Obsession. Washington and London: Smithsonian Institution Press, 1994.
Stein, Sara. Planting Noah’s Garden: Further Adventures in Backyard Ecology. Boston: Houghton Mifflin Co., 1997.
Wasowski, Andy, and Sally Wasowski. The Landscaping Revolution. Chicago: Contemporary Books, 2000.

PERIODICALS
Bourne, Joel. “The Killer in Your Yard.” Audubon (May-June 2000): 108.
“Easy Lawns.” Brooklyn Botanic Garden Handbook 160 (Fall 1999).
Simpson, Sarah. “Shrinking the Dead Zone.” Scientific American (July 2001): 18.
Stewart, Doug. “Our Love Affair with Lawns.” Smithsonian (April 1999): 94.
Xeriscaping Tips Page. 2002 [cited June 18, 2002].

LDC see Less developed countries

LD50

LD50 is the dose of a chemical that is lethal to 50 percent of a test population. It is therefore a measure of a particular median response which, in this case, is death. The term is most frequently used to characterize the response of animals such as rats and mice in acute toxicity tests. The term is generally not used in connection with aquatic or inhalation toxicity tests because it is difficult, if not impossible, to determine the dosage an animal receives in such tests; results are most commonly represented in terms of lethal concentrations (LC), which refer to the concentration of the substance in the air or water surrounding an animal.

In LD testing, dosages are generally administered by means of injection, food, water, or forced feeding. Injections are used when an animal is to receive only one or a few dosages; greater numbers of injections would disturb the animal and perhaps generate some false-positive responses. Food or water may serve as a good medium for administering a chemical, but the amount of food or water wasted must be carefully noted. Developing a healthy diet for an animal which is compatible with the chemical to be tested can be as much art as science. The chemical may interact with the foods and become more or less toxic, or it may be objectionable to the animal due to taste or odor. Rats are often used in toxicity tests because they do not have the ability to vomit; the investigator therefore has the option of gavage, a way to force-feed rats with a stomach tube or other device when a chemical smells or tastes bad.
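The median-response idea can be illustrated with a short sketch: given group mortality data, a common simple estimate of the LD50 interpolates on log-dose between the two dose groups that bracket a 50 percent response. The data and function below are hypothetical and purely illustrative, not from any actual toxicity test.

```python
import math

def estimate_ld50(groups):
    """Estimate the LD50 by linear interpolation on log10(dose) between
    the two dose groups that bracket a 50 percent response.
    `groups` is a list of (dose_mg_per_kg, fraction_dead) pairs,
    sorted by increasing dose."""
    for (d1, p1), (d2, p2) in zip(groups, groups[1:]):
        if p1 <= 0.5 <= p2:
            t = (0.5 - p1) / (p2 - p1)  # fractional distance to the 50% point
            return 10 ** (math.log10(d1) + t * (math.log10(d2) - math.log10(d1)))
    raise ValueError("50 percent response is not bracketed by the data")

# Hypothetical acute-toxicity data: dose in mg per kg of body mass,
# and the fraction of the test group that died at that dose.
data = [(10, 0.0), (50, 0.2), (250, 0.7), (1250, 1.0)]
ld50 = estimate_ld50(data)  # roughly 131 mg/kg for this made-up data

# Because the dose is normalized per unit body mass, the same figure
# scales to animals of different weights (extrapolation caveats apply):
dose_for_300g_rat = ld50 * 0.3  # total mg for a hypothetical 0.3-kg rat
```

In practice toxicologists fit probit or logistic dose-response models rather than interpolating between two points, but the normalization to mg/kg and the focus on the median response are the same.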



[Figure: A chart illustrating the LD50 on a dose-response curve. (McGraw-Hill Inc. Reproduced by permission.)]

Toxicity and LD50 are inversely proportional: high toxicity is indicated by a low LD50 and vice versa. LD50 is a particular type of effective dose (ED) for 50 percent of a population (ED50). The midpoint (or effect on half of the population) is generally used because some individuals in a population may be highly resistant to a particular toxicant, making the dosage at which all individuals respond a misleading data point. Effects other than death, such as headaches or dizziness, might be examined in some tests, so EDs would be reported instead of LDs. One might also wish to report the response of some other percent of the test population, such as the 20 percent response (LD20 or ED20) or 80 percent response (LD80 or ED80).

The LD is expressed in terms of the mass of test chemical per unit mass of the test animals. In this way, dose is normalized so that the results of tests can be analyzed consistently and perhaps extrapolated to predict the response of animals that are heavier or lighter. Extrapolation of such data is always questionable, especially when extrapolating from animal response to human response, but the system appears to be serving us well. However, it is important to note that sometimes better dose-response relations and extrapolations can be derived through normalizing dosages based on surface area or the weight of target organs.

See also Bioassay; Dose response; Ecotoxicology; Hazardous material; Toxic substance

[Gregory D. Boardman]

RESOURCES

BOOKS
Casarett, L. J., J. Doull, and C. D. Klaassen, eds. Casarett and Doull’s Toxicology: The Basic Science of Poisons. 6th ed. New York: McGraw Hill, 2001.
Hodgson, E., R. B. Mailman, and J. E. Chambers. Dictionary of Toxicology. 2nd ed. New York: John Wiley and Sons, 1997.
Lu, F. C. Basic Toxicology: Fundamentals, Target Organs, and Risk Assessment. 3rd ed. Hebron, KY: Taylor & Francis, 1996.
Rand, G. M., ed. Fundamentals of Aquatic Toxicology: Effects, Environmental Fate, and Risk Assessment. 2nd ed. Hebron, KY: Taylor & Francis, 1995.

Leachate see Contaminated soil; Landfill

Leaching

The process by which soluble substances are dissolved out of a material. When rain falls on farmlands, for example, it dissolves weatherable minerals, pesticides, and fertilizers as it soaks into the ground. If enough water is added to the soil to fill all the pores, then water carrying these dissolved materials moves to the groundwater—the soil becomes leached. In soil chemistry, leaching refers to the process by which nutrients in the upper layers of soil are dissolved out and carried into lower layers, where they can be a valuable nutrient for plant roots. Leaching also has a number of environmental implications. For example, toxic chemicals and radioactive materials stored in sealed containers underground may leach out if the containers break open over time. See also Landfill; Leaking underground storage tank

Lead

Lead is one of the oldest metals known to humans; Egyptians used lead compounds to glaze pottery as far back as 7000 B.C. The toxic effects of lead also have been known for many centuries. In fact, the Romans limited the amount of time slaves could work in lead mines because of the element’s harmful effects. Some consequences of lead poisoning are anemia, headaches, convulsions, and damage to the kidneys and central nervous system. The widespread use of lead in plumbing, gasoline, and lead-acid batteries, for example, has made it a serious environmental health problem. Bans on the use of lead in motor fuels and paints attempt to deal

with this problem. See also Heavy metals and heavy metal poisoning; Lead shot

Lead management

Lead, a naturally occurring bluish gray metal, is extensively used throughout the world in the manufacture of storage batteries, chemicals including paint and gasoline, and various metal products including sheet lead, solder, pipes, and ammunition. Due to its widespread use, large amounts of lead exist in the environment, and substantial quantities of lead continue to be deposited into air, land, and water. Lead is a poison that has many adverse effects, and children are especially susceptible. At present, the production, use, and disposal of lead are regulated with demonstrably effective results. However, because of its previous widespread use and persistence in the environment, lead exposure is a pervasive problem that affects many populations. Effective management of lead requires an understanding of its effects, blood action levels, sources of exposure, and policy responses, topics reviewed in that order.

Effects of Lead
Lead is a strong toxicant that adversely affects many systems in the body. Severe lead exposures can cause brain and kidney damage in adults and children, coma, convulsions, and death. Lower levels, e.g., lead concentrations in blood (PbB) below 50 µg/dL, may impair hemoglobin synthesis, alter the central and peripheral nervous systems, cause hypertension, affect male and female reproductive systems, and damage the developing fetus. These effects depend on the level and duration of exposure and on the distribution and kinetics of lead in the body. Most lead is deposited in bone, and some of this stored lead may be released long after exposure due to a serious illness, pregnancy, or other physiological event. Lead has not been shown to cause cancer in humans; however, tumors have developed in rats and mice given large doses of lead, and thus several United States agencies consider lead acetate and lead phosphate human carcinogens.

Children are particularly susceptible to lead poisoning. PbB levels as low as 10 µg/dL are associated with decreased intelligence and slowed neurological development.
Low PbB levels also have been associated with deficits in growth, vitamin metabolism, and effects on hearing. The neurological effects of lead on children are profound and are likely persistent. Unfortunately, childhood exposures to chronic but low lead levels may not produce clinical symptoms, and many cases go undiagnosed and untreated. In recent years, the number of children with elevated blood lead levels has declined substantially. For example, the average PbB level has decreased from over 15 µg/dL in the

Lead management 1970s to about 5 ␮g/dL in the 1990s. As described later, these decreases can be attributed to the reduction or elimination of lead in gasoline, food can and plumbing solder, and residential paint. Still, childhood lead poisoning remains the most widespread and preventable childhood health problem associated with environmental exposures, and childhood lead exposure remains a public health concern since blood levels approach or exceed levels believed to cause effects. Though widely perceived as a problem of inner city minority children, lead poisoning affects children from all areas and from all socioeconomic groups. The definition of a PbB level that defines a level of concern for lead in children continues to be an important issue in the United States. The childhood PbB concentration of concern has been steadily lowered by the Centers for Disease Control (CDC) from 40 ␮g/dL in 1970 to 10 ␮g/ dL in 1991. The Environmental Protection Agency lowered the level of concern to 10 ␮g/dL ("10-15 and possibly lower") in 1986, and the Agency for Toxic Substances and Disease Registry (ATSDR) also identified 10 ␮g/dL in its 1988 Report to Congress on childhood lead poisoning. In the workplace, the medical removal PbB concentration is 50 ␮g/dL for three consecutive checks and 60 ␮g/ dL for any single check. Blood level monitoring is triggered by an air lead concentration above 30 ␮g/m3. A worker is permitted to return to work when his blood lead level falls below 40 ␮g/dL. In 1991, the National Institute for Occupational Safety and Health (NIOSH) set a goal of eliminating occupational exposures that result in workers having PbB levels greater than 25 ␮g/dL. Exposure and Sources Lead is a persistent and ubiquitous pollutant. Since it is an elemental pollutant, it does not dissipate, biodegrade, or decay. Thus, the total amount of lead pollutants resulting from human activity increases over time, no matter how little additional lead is added to the environment. 
Lead is a multi-media pollutant, i.e., many sources contribute to the overall problem, and exposures from air, water, soil, dust, and food pathways may be important. For children, an important source of lead exposure is swallowing nonfood items (e.g., chips of lead-containing paint), an activity known as pica, which is most prevalent in 2 and 3 year-olds. Children who put toys or other items in their mouths may also swallow lead if lead-containing dust and dirt are on these items. Touching dust and dirt containing lead is commonplace, but relatively little lead passes through the skin. The most important source of high-level lead exposure in the United States is household dust derived from deteriorated lead-based paint. Numerous homes contain lead-based paint and continue to be occupied by families with small children, including 21 million pre-1940 homes and rental units which,


over time, are rented to different families. Thus, a single house with deteriorated lead-based paint can be the source of exposure for many children. In addition to lead-based paint in houses, other important sources of lead exposure include (1) contaminated soil and dust from deteriorated paints originally applied to buildings, bridges, and water tanks; (2) drinking water into which lead has leached from lead, bronze, or brass pipes and fixtures (including lead-soldered pipe joints) in houses, schools, and public buildings; (3) occupational exposures in smelting and refining industries, steel welding and cutting operations, battery manufacturing plants, gasoline stations, and radiator repair shops; (4) airborne lead from smelters and other point sources of air pollution, including vehicles burning leaded fuels; (5) hazardous waste sites which contaminate soil and water; (6) food cans made with lead-containing solder and pottery made with lead-containing glaze; and (7) food consumption if crops are grown using fertilizers that contain sewage sludge or if much lead-containing dust is deposited onto crops.

In the atmosphere, the use of leaded gasoline has been the single largest source of lead (90%) since the 1920s, although the use of leaded fuel has been greatly curtailed and gasoline contributions are now greatly reduced (35%). As discussed below, leaded fuel and many other sources have been greatly reduced in the United States, although drinking water and other sources remain important in some areas. A number of other countries, however, continue to use leaded fuel and other lead-containing products.

Government Responses
Many agencies are concerned with lead management. Lead agencies in the United States include the Environmental Protection Agency, the Centers for Disease Control, the U.S. Department of Health and Human Services, the Department of Housing and Urban Development, the Food and Drug Administration, the Consumer Product Safety Commission, the National Institute for Occupational Safety and Health, and the Occupational Safety and Health Administration. These agencies have taken many actions to reduce lead exposures, several of which have been very successful. General types of actions include: (1) restrictions or bans on the use of many products containing lead where risks from these products are high and where substitute products are available, e.g., interior paints, gasoline fuels, and solder; (2) recycling and safer ultimate disposal strategies for products where risks are lower, or for which technically and economically feasible substitutes are not available, e.g., lead-acid automotive batteries, lead-containing wastes, pigments, and used oil; (3) emission controls for lead smelters, primary metal industries, and other industrial point sources, including the use of the best practicable control technology (BPCT) for new lead smelting and processing facilities and

reasonably available control technologies (RACT) for existing facilities; and (4) education and abatement programs where exposure is based on past uses of lead.

The current goals of the Environmental Protection Agency (EPA) strategy are to reduce lead exposures to the fullest extent practicable, to significantly reduce the incidence of PbB levels above 10 µg/dL in children, and to reduce lead exposures that are anticipated to pose risks to children, the general public, or the environment. Several specific actions of this and other agencies are discussed below.

The Residential Lead-based Paint Hazard Reduction Act of 1992 (Title X) provides the framework to reduce hazards from lead-based paint exposure, primarily in housing. It establishes a national infrastructure of trained workers, training programs and proficient laboratories, and a public education program to reduce hazards from lead exposure in paint in the nation’s housing stock. Earlier, to help protect small children who might swallow chips of paint, the Consumer Product Safety Commission (CPSC) restricted the amount of lead in most paints to 0.06 percent by weight. CDC further suggests that inside and outside paint used in buildings where people live be tested for lead. If the level of lead is high, the paint should be removed and replaced with a paint that contains an allowable level of lead. CPSC published a consumer safety alert/brochure on lead paint in the home in 1990, and has evaluated lead test kits for safety, efficacy, and consumer-friendliness. These kits are potential screening devices that may be used by the consumer to detect lead in paint and other materials.
Title X also requires EPA to promulgate regulations that ensure personnel engaged in abatement activities are trained, to certify training programs, to establish standards for abatement activities, to promulgate model state programs, to establish a laboratory accreditation program, to establish an information clearinghouse, and to disclose lead hazards at property transfer. The Department of Housing and Urban Development (HUD) has begun activities that include updating regulations dealing with lead-based paint in HUD programs and federal property; providing support for local screening programs; increasing public education; supporting research to reduce the cost and improve the reliability of testing and abatement; increasing state and local support; and providing more money to support abatement in low and moderate income households. HUD estimated that the total cost of testing and abatement in high-priority hazard homes will be $8 to 10 billion annually over 10 years, although costs could be substantially lowered by integrating abatement with other renovation activities.

CPSC, EPA, and states are required by the Lead Contamination Control Act of 1988 to test drinking water in schools for lead and to remove lead if levels are too high.

Drinking water coolers must also be lead-free, and any that contain lead must be removed. EPA regulations limit lead in drinking water to 0.015 mg/L.

To manage environmental exposures resulting from inhalation, EPA regulations limit lead to 0.1 and 0.05 g/gal (0.026 and 0.013 g/L) in leaded and unleaded gasoline, respectively. Also, the National Ambient Air Quality Standards set a maximum lead concentration of 1.5 µg/m3 as a three-month average, although typical levels are far lower, 0.1 or 0.2 µg/m3.

To identify and mitigate sources of lead in the diet, the Food and Drug Administration (FDA) has undertaken efforts that include voluntary discontinuation of lead solder in food cans by the domestic food industry, and elimination of lead in glazing on ceramic ware. Regulatory measures are being introduced for wine, dietary supplements, crystal ware, food additives, and bottled water.

For workers in lead-using industries, the Occupational Safety and Health Administration (OSHA) has established environmental and biological standards that include maximum air and blood levels. This monitoring must be conducted by the employer, and elevated PbB levels may require the removal of an individual from the work place (levels discussed previously). The Permissible Exposure Limit (PEL) restricts air concentrations of lead to 50 µg/m3, and, if 30 µg/m3 is exceeded, employers must implement a program that includes medical surveillance, exposure monitoring, training, regulated areas, respiratory protection, protective work clothing and equipment, housekeeping, hygiene facilities and practices, signs and labels, and record keeping. In the construction industry, the PEL is 200 µg/m3.
The National Institute for Occupational Safety and Health (NIOSH) recommends that workers not be exposed to levels of more than 100 µg/m3 for up to 10 hours, and NIOSH has issued a health alert to construction workers regarding possible adverse health effects from long-term and low-level exposure. NIOSH has also published alerts and recommendations for preventing lead poisoning during blasting, sanding, cutting, burning, or welding of bridges and other steel structures coated with lead paint.

Finally, lead screening for children has recently increased. The CDC recommends that screening (testing) for lead poisoning be included in health care programs for children under 72 months of age, especially those under 36 months of age. For a community with a significant number of children having PbB levels between 10-14 µg/dL, community-wide lead poisoning prevention activities should be initiated. For individual children with PbB levels between 15-19 µg/dL, nutritional and educational interventions are recommended. PbB levels exceeding 20 µg/dL should trigger investigations of the affected individual’s environment and medical evaluations. The highest levels, above 45 µg/


dL, require both medical and environmental interventions, including chelation therapy. CDC also conducts studies to determine the impact of interventions on children’s blood lead levels.

These regulatory activities have resulted in significant reductions in average levels of lead exposure. Nevertheless, lead management remains an important public health problem.

[Stuart Batterman]
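The tiered screening responses described above can be summarized as a simple decision rule. The sketch below is illustrative only; the function name is hypothetical, and the handling of values exactly at 15, 20, and 45 µg/dL is a simplifying assumption (the article's wording is "exceeding" and "above" for the upper tiers).

```python
def screening_action(pbb_ug_dl):
    """Map a child's blood lead level (PbB, in µg/dL) to the CDC
    response tier described in this article. Boundary handling is
    a simplifying assumption for illustration."""
    if pbb_ug_dl >= 45:
        return "medical and environmental interventions, including chelation therapy"
    if pbb_ug_dl >= 20:
        return "environmental investigation and medical evaluation"
    if pbb_ug_dl >= 15:
        return "nutritional and educational interventions"
    if pbb_ug_dl >= 10:
        return "community-wide lead poisoning prevention activities"
    return "below the level of concern"
```

Laid out this way, the tiers make clear that the 10 µg/dL level of concern triggers a community-level response, while individual interventions escalate only at higher blood levels.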

RESOURCES

BOOKS
Breen, J. J., and C. R. Stroup, eds. Lead Poisoning: Exposure, Abatement, Regulation. Lewis Publishers, 1995.
Kessel, I., J. T. O’Connor, and J. W. Graef. Getting the Lead Out: The Complete Resource for Preventing and Coping with Lead Poisoning. Rev. ed. Cambridge, MA: Fisher Books, 2001.
Pueschel, S. M., J. G. Linakis, and A. C. Anderson. Lead Poisoning in Childhood. Baltimore: Paul H. Brookes Publishing Co., 1996.

OTHER
Farley, Dixie. “Dangers of Lead Still Linger.” FDA Consumer, January-February 1998 [cited July 2002].

Lead shot

Lead shot refers to the small pellets that are fired by shotguns while hunting waterfowl or upland fowl, or while skeet shooting. Most lead shot misses its target and is dissipated into the environment. Because the shot is within the particle-size range that is favored by medium-sized birds as grit, it is often ingested and retained in the gizzard to aid in the mechanical abrasion of plant seeds, the first step in avian digestion. However, the shot also abrades during this process, releasing toxic lead that can poison the bird. It has been estimated that as much as 2–3 percent of the North American waterfowl population, or several million birds, may die from shot-caused lead poisoning each year. This problem will decrease in intensity, however, because lead shot is now being substantially replaced by steel shot in North America. See also Heavy metals and heavy metal poisoning

Leafy spurge

Leafy spurge (Euphorbia esula L.), a perennial plant from Europe and Asia, was introduced to North America through imported grain products by 1827. It is 12 in (30.5 cm) to 3 ft (1 m) in height. Stems, leaves, and roots contain a milky white latex with toxic cardiac glycosides that is distasteful to cattle, which will not eat the plant. Considered a noxious, or destructive, weed in southern Canada and the northern



Great Plains of the United States, it crowds out native rangeland grasses, reducing the number of cattle that can graze the land. It is responsible for losses of approximately 35–45 million dollars per year to the United States cattle and hay industries. Its aggressive root system makes controlling its spread difficult: roots spread vertically to 15 ft (5 m), with up to 300 root buds, and horizontally to nearly 30 ft (9 m), and the plant regenerates from small portions of root. Tilling, burning, and herbicide use are ineffective control methods because the roots are not damaged, and these treatments may prevent immediate regrowth of the desired species. The introduction of specific herbivores of the leafy spurge from its native range, including certain species of beetles and moths, may be an effective means of control, as may be certain pathogenic fungi. Studies also indicate that sheep and Angora goats will eat it. To control this plant’s rampant spread in North America, a combination of methods seems most effective.

[Monica Anderson]

League of Conservation Voters

In 1970 Marion Edey, a House committee staffer, founded the League of Conservation Voters (LCV) as the non-partisan political action arm of the United States’ environmental movement. LCV works to establish a pro-environment—or “green”—majority in Congress and to elect environmentally conscious candidates throughout the country. Through campaign donations, volunteers, endorsements, pro-environment advertisements, and annual publications such as the National Environmental Scorecard, the League raises voter awareness of the environmental positions of candidates and elected officials. Technically it has no formal membership, but the League’s supporters—who make donations and purchase its publications—number 100,000. The board of directors comprises 24 prominent environmentalists associated with such organizations as the Sierra Club, the Environmental Defense Fund, and Friends of the Earth. Because these organizations would endanger their charitable tax status if they participated directly in the electoral process, environmentalists developed the League.

Since 1970 LCV has influenced many elections. From its first effort in 1970—wherein LCV successfully prevented Rep. Wayne Aspinall of Colorado from obtaining a Democratic nomination—the League has grown to be a significant force in American politics. In the 1989–90 elections LCV supported 120 pro-environment candidates and spent approximately $250,000 on their campaigns. In 1990 the League developed new endorsement tactics. First

it invented the term “greenscam” to identify candidates who only appear green. Next LCV produced two generic television advertisements for candidates. One advertisement, entitled “Greenscam,” attacked the aforementioned candidates; the other, entitled “Decisions,” was an award-winning, positive advertisement in support of pro-environment candidates.

By the 2000 campaign the League had attained an unprecedented degree of influence in the electoral process. That year LCV raised and donated $4.1 million in support of both Democratic and Republican candidates in a variety of ways. In endorsing a candidate the League no longer simply contributes money to a campaign. It provides “in-kind” assistance—for example, it places a trained field organizer on a staff, creates radio and television advertisements, or develops grassroots outreach programs and campaign literature.

In addition to supporting specific candidates, LCV holds all elected officials accountable for their track records on environmental issues. The League’s annual publication National Environmental Scorecard lists the voting records of House and Senate members on environmental legislation. Likewise, the Presidential Scorecard identifies the positions that presidential candidates have taken. Through these publications and direct endorsement strategies, the League continues to apply pressure in the political process and elicit support for the environment.

[Andrea Gacki]

RESOURCES

ORGANIZATIONS
League of Conservation Voters, 1920 L Street, NW, Suite 800, Washington, D.C. USA 20036. (202) 785-8683; Fax: (202) 835-0491

Louis Seymour Bazett Leakey (1903–1972)
African-born English paleontologist and anthropologist

Louis Seymour Bazett Leakey was born on August 7, 1903, in Kabete, Kenya. His parents, Mary Bazett (d. 1948) and Harry Leakey (1868–1940), were Church of England missionaries at the Church Missionary Society, Kabete, Kenya. Louis spent his childhood in the mission, where he learned the Kikuyu language and customs (he later compiled a Kikuyu grammar book). As a child, while pursuing his interest in ornithology—the study of birds—he often found stone tools washed out of the soil by the heavy rains, which Leakey believed were of prehistoric origin. Stone tools were primary evidence of the presence of humans at a particular site, as toolmaking was believed at the time to be practiced only by

humans and was, along with an erect posture, one of the chief characteristics used to differentiate humans from nonhumans. Scientists at the time, however, did not consider East Africa a likely site for finding evidence of early humans; the discovery of Pithecanthropus in Java in 1894 (the so-called Java Man, now considered to be an example of Homo erectus) had led scientists to assume that Asia was the continent from which human forms had spread.

Shortly after the end of World War I, Leakey was sent to a public school in Weymouth, England, and later attended St. John’s College, Cambridge. Suffering from severe headaches resulting from a sports injury, he took a year off from his studies and joined a fossil-hunting expedition to Tanganyika (now Tanzania). This experience, combined with his studies in anthropology at Cambridge (culminating in a degree in 1926), led Leakey to devote his time to the search for the origins of humanity, which he believed would be found in Africa. Anatomist and anthropologist Raymond A. Dart’s discovery of early human remains in South Africa was the first concrete evidence that this view was correct. Leakey’s next expedition was to northwest Kenya, near Lakes Nakuru and Naivasha, where he uncovered materials from the Late Stone Age; at Kariandusi he discovered a 200,000-year-old hand ax.

In 1928 Leakey married Henrietta Wilfrida Avern, with whom he had two children: Priscilla, born in 1930, and Colin, born in 1933; the couple was divorced in the mid-1930s. In 1931 Leakey made his first trip to Olduvai Gorge—a 350-mi (564-km) ravine in Tanzania—the site that was to be his richest source of human remains. He had been discouraged from excavating at Olduvai by Hans Reck, a German paleontologist who had fruitlessly sought evidence of prehistoric humans there.
Leakey’s first discoveries at that site consisted of both animal fossils, important in the attempts to date the particular stratum (or layer of earth) in which they were found, and, significantly, flint tools. These tools, dated to approximately one million years ago, were conclusive evidence of the presence of hominids—a family of erect primate mammals that use only two feet for locomotion—in Africa at that early date; it was not until 1959, however, that the first fossilized hominid remains were found there. In 1932, near Lake Victoria, Leakey found remains of Homo sapiens (modern man), the so-called Kanjera skulls (dated to 100,000 years ago) and Kanam jaw (dated to 500,000 years ago); Leakey’s claims for the antiquity of this jaw made it a controversial find among other paleontologists, and Leakey hoped he would find other, independent, evidence for the existence of Homo sapiens from an even earlier period—the Lower Pleistocene. In the mid-1930s, a short time after his divorce from Wilfrida, Leakey married his second wife, Mary Douglas


Nicol; she was to make some of the most significant discoveries of Leakey’s team’s research. The couple eventually had three children: Philip, Jonathan, and Richard E. Leakey. During the 1930s, Leakey also became interested in the study of the Paleolithic period in Britain, both regarding human remains and geology, and he and Mary Leakey carried out excavations at Clacton in southeast England.

Until the end of the 1930s, Leakey concentrated on the discovery of stone tools as evidence of human habitation; after this period he devoted more time to the unearthing of human and prehuman fossils. His expeditions to Rusinga Island, at the mouth of the Kavirondo Gulf in Kenya, during the 1930s and early 1940s produced a large number of finds, especially of remains of Miocene apes. One of these apes, which Leakey named Proconsul africanus, had a jaw lacking the so-called simian shelf that normally characterized the jaws of apes; this was evidence that Proconsul represented a stage in the progression from ancient apes to humans. In 1948 Mary Leakey found a nearly complete Proconsul skull, the first fossil ape skull ever unearthed; this was followed by the unearthing of several more Proconsul remains.

Louis Leakey began his first regular excavations at Olduvai Gorge in 1952; however, the Mau Mau (an anti-white secret society) uprising in Kenya in the early 1950s disrupted his paleontological work and induced him to write Mau Mau and the Kikuyu, in an effort to explain the rebellion from the perspective of a European with an insider’s knowledge of the Kikuyu. A second work, Defeating Mau Mau, followed in 1954.

During the late 1950s, the Leakeys continued their work at Olduvai. In 1959, while Louis was recuperating from an illness, Mary Leakey found substantial fragments of a hominid skull that resembled the robust australopithecines—African hominids possessing small brains and near-human dentition—found in South Africa earlier in the century.
Louis Leakey, who quickly reported the find to the journal Nature, suggested that this represented a new genus, which he named Zinjanthropus boisei, the genus name meaning “East African man,” and the species name commemorating Charles Boise, one of Leakey’s benefactors. This species, now called Australopithecus boisei, was later believed by Leakey to have been an evolutionary dead end, existing contemporaneously with Homo rather than representing an earlier developmental stage. In 1961, at Fort Ternan, Leakey’s team located fragments of a jaw that Leakey believed were from a hitherto unknown genus and species of ape, one he designated as Kenyapithecus wickeri, and which he believed was a link between ancient apes and humans, dating from 14 million years ago; it therefore represented the earliest hominid. In 1967, however, an older skull, one that had been found two decades earlier on Rusinga Island and which Leakey had originally given the name Ramapithecus africanus, was found to have hominid-like lower dentition; he renamed it Kenyapithecus africanus, and Leakey believed it was an even earlier hominid than Kenyapithecus wickeri. Leakey’s theories about the place of these Lower Miocene fossil apes in human evolution have been among his most widely disputed.

Louis Leakey. (The Library of Congress.)

During the early 1960s, a member of Leakey’s team found fragments of the hand, foot, and leg bones of two individuals, in a site near where Zinjanthropus had been found, but in a slightly lower and, apparently, slightly older layer. These bones appeared to be of a creature more like modern humans than Zinjanthropus, possibly a species of Homo that lived at approximately the same time, with a larger brain and the ability to walk fully upright. As a result of the newly developed potassium-argon dating method, it was discovered that the bed from which these bones had come was 1.75 million years old. The bones were, apparently, the evidence for which Leakey had been searching for years: skeletal remains of Homo from the Lower Pleistocene. Leakey designated the creature whose remains these were as Homo habilis (“man with ability”), a creature who walked upright and had dentition resembling that of modern humans, hands capable of toolmaking, and a large cranial capacity. Leakey saw this hominid as a direct ancestor of Homo erectus and modern humans. Not unexpectedly, Leakey was attacked by other scholars, as this identification of the fragments moved the origins of the genus Homo back substantially further in time. Some scholars felt that the new remains were those of australopithecines, if relatively advanced ones, rather than very early examples of Homo.

Health problems during the 1960s curtailed Leakey’s field work; it was at this time that his Centre for Prehistory and Paleontology in Nairobi became the springboard for the careers of such researchers as Jane Goodall and Dian Fossey in the study of nonhuman primates. A request came in 1964 from the Israeli government for assistance with the technical as well as the fundraising aspects involved in the excavation of an early Pleistocene site at Ubeidiya. This produced evidence of human habitation dating back 700,000 years, the earliest such find outside Africa. During the 1960s, others, including Mary Leakey and the Leakeys’ son Richard, made significant finds in East Africa; Leakey turned his attention to the investigation of a problem that had intrigued him since his college days: the determination of when humans had reached the North American continent. Concentrating his investigation in the Calico Hills in the Mojave Desert, California, he sought evidence in the form of stone tools of the presence of early humans, as he had done in East Africa. The discovery of some pieces of chalcedony (translucent quartz) that resembled manufactured tools in sediment dated from 50,000 to 100,000 years old stirred an immediate controversy; at that time, scientists believed that humans had settled in North America approximately 20,000 years ago. Many archaeologists, including Mary Leakey, criticized Leakey’s California methodology—and his interpretations of the finds—as scientifically unsound, but Leakey, still charismatic and persuasive, was successful in obtaining funding from the National Geographic Society and, later, several other sources.
Human remains were not found in conjunction with the supposed stone tools, and many scientists have not accepted these “artifacts” as anything other than rocks. Shortly before Louis Leakey’s death, Richard Leakey showed his father a skull he had recently found near Lake Rudolf (now Lake Turkana) in Kenya. This skull, removed from a deposit dated to 2.9 million years ago, had a cranial capacity of approximately 800 cubic centimeters, putting it within the range of Homo and apparently vindicating Leakey’s long-held belief in the extreme antiquity of that genus; it also appeared to substantiate Leakey’s interpretation of the Kanam jaw. Leakey died of a heart attack in early October, 1972, in London.

Some scientists have questioned Leakey’s interpretations of his discoveries. Other scholars have pointed out that two of the most important finds associated with him were actually made by Mary Leakey, but became widely known when they were interpreted and publicized by him; Leakey had even encouraged criticism through his tendency to publicize his somewhat sensationalistic theories before they had been sufficiently tested. Critics have cited both his tendency toward hyperbole and his penchant for claiming that his finds were the “oldest,” the “first,” the “most significant”; in a 1965 National Geographic article, for example, Melvin M. Payne pointed out that Leakey, at a Washington, D.C., press conference, claimed that his discovery of Homo habilis had made all previous scholarship on early humans obsolete. Leakey has also been criticized for his eagerness to create new genera and species for new finds, rather than trying to fit them into existing categories. Leakey, however, recognized the value of publicity for the fundraising efforts necessary for his expeditions. He was known as an ambitious man, with a penchant for stubbornly adhering to his interpretations, and he used the force of his personality to communicate his various finds and the subsequent theories he devised to scholars and the general public. Leakey’s response to criticism was that scientists have trouble divesting themselves of their own theories in the light of new evidence. “Theories on prehistory and early man constantly change as new evidence comes to light,” Leakey remarked, as quoted by Payne in National Geographic. “A single find such as Homo habilis can upset long-held—and reluctantly discarded—concepts. A paucity of human fossil material and the necessity for filling in blank spaces extending through hundreds of thousands of years all contribute to a divergence of interpretations. But this is all we have to work with; we must make the best of it within the limited range of our present knowledge and experience.” Much of the controversy derives from the lack of consensus among scientists about what defines “human”; to what extent are toolmaking, dentition, cranial capacity, and an upright posture defining characteristics, as Leakey asserted?
Louis Leakey’s significance revolves around the ways in which he changed views of early human development. He pushed back the date when the first humans appeared to a time earlier than had been believed on the basis of previous research. He showed that human evolution began in Africa rather than Asia, as had been maintained. In addition, he created research facilities in Africa and stimulated explorations in related fields, such as primatology (the study of primates). His work is notable as well for the sheer number of finds—not only of the remains of apes and humans, but also of the plant and animal species that comprised the ecosystems in which they lived. These finds of Leakey and his team filled numerous gaps in scientific knowledge of the evolution of human forms. They provided clues to the links between prehuman, apelike primates, and early humans, and demonstrated that human evolution may have followed more than one parallel path, one of which led to modern humans, rather than a single line, as earlier scientists had maintained. [Michael Sims]

RESOURCES

BOOKS

Cole, S. Leakey’s Luck: The Life of Louis Seymour Bazett Leakey, 1903–1972. Harcourt, 1975.
Isaac, G., and E. R. McCown, eds. Human Origins: Louis Leakey and the East African Evidence. Benjamin-Cummings, 1976.
Johanson, D. C., and M. A. Edey. Lucy: The Beginnings of Humankind. Simon & Schuster, 1981.
Leakey, M. Disclosing the Past. Doubleday, 1984.
Leakey, R. One Life: An Autobiography. Salem House, 1984.
Malatesta, A., and R. Friedland. The White Kikuyu: Louis S. B. Leakey. McGraw-Hill, 1978.

Mary Douglas Nicol Leakey (1913 – 1996)

English paleontologist and anthropologist

For many years Mary Leakey lived in the shadow of her husband, Louis Leakey, whose reputation, coupled with the prejudices of the time, led him to be credited with some of his wife’s discoveries in the field of early human archaeology. Yet she established a substantial reputation in her own right and came to be recognized as one of the most important paleoanthropologists of the twentieth century. It was Mary Leakey who was responsible for some of the most important discoveries made by Louis Leakey’s team. Although her close association with Louis Leakey’s work on Paleolithic sites at Olduvai Gorge—a 30-mi (48-km) ravine in Tanzania—led to her being considered a specialist in that particular area and period, she in fact worked on excavations dating from as early as the Miocene Age (an era dating to approximately 18 million years ago) to those as recent as the Iron Age of a few thousand years ago.

Mary Leakey was born Mary Douglas Nicol on February 6, 1913, in London. Her mother was Cecilia Frere, the great-granddaughter of John Frere, who had discovered prehistoric stone tools at Hoxne, Suffolk, England, in 1797. Her father was Erskine Nicol, a painter who himself was the son of an artist, and who had a deep interest in Egyptian archaeology. When Mary was a child, her family made frequent trips to southwestern France, where her father took her to see the Upper Paleolithic cave paintings. She and her father became friends with Elie Peyrony, the curator of the local museum, and there she was exposed to the vast collection of flint tools dating from that period of human prehistory.
She was also allowed to accompany Peyrony on his excavations, though the archaeological work was not conducted in what would now be considered a scientific way—artifacts were removed from the site without careful study of the place in the earth where each had been found, obscuring valuable data that could be used in dating the artifact and analyzing its context. On a later trip, in 1925, she was taken to Paleolithic caves by the Abbe Lemozi of France, parish priest of Cabrerets, who had written papers on cave art. After her father’s death in 1926, Mary Nicol was taken to Stonehenge and Avebury in England, where she began to learn about the archaeological activity in that country and, after meeting the archaeologist Dorothy Liddell, to realize the possibility of archaeology as a career for a woman. By 1930 Mary Nicol had undertaken coursework in geology and archaeology at the University of London and had participated in a few excavations in order to obtain field experience. One of her lecturers, R. E. M. Wheeler, offered her the opportunity to join his party excavating St. Albans, England, the ancient Roman site of Verulamium; although she only remained at that site for a few days, finding the work there poorly organized, she began her career in earnest shortly thereafter, excavating Neolithic (New Stone Age) sites in Henbury, Devon, where she worked between 1930 and 1934. Her main area of expertise was stone tools, and she was exceptionally skilled at making drawings of them.

During the 1930s Mary met Louis Leakey, who was to become her husband. Leakey was by this time well known because of his finds of early human remains in East Africa; it was at Mary and Louis’s first meeting that he asked her to help him with the illustrations for his 1934 book, Adam’s Ancestors: An Up-to-Date Outline of What Is Known about the Origin of Man. In 1934 Mary Nicol and Louis Leakey worked at an excavation in Clacton, England, where the skull of a hominid—a member of the family of erect primate mammals that use only two feet for locomotion—had recently been found and where Louis was investigating Paleolithic geology as well as fauna and human remains. The excavation led to Mary Leakey’s first publication, a 1937 report in the Proceedings of the Prehistoric Society.
By this time, Louis Leakey had decided that Mary should join him on his next expedition to Olduvai Gorge in Tanganyika (now Tanzania), which he believed to be the most promising site for discovering early Paleolithic human remains. On the journey to Olduvai, Mary stopped briefly in South Africa, where she spent a few weeks with an archaeological team and learned more about the scientific approach to excavation, studying each find in situ—paying close attention to the details of the geological and faunal material surrounding each artifact. This knowledge was to assist her in her later work at Olduvai and elsewhere. At Olduvai, among her earliest discoveries were fragments of a human skull; these were some of the first such remains found at the site, and it would be twenty years before any others would be found there. Mary Nicol and Louis Leakey returned to England. Leakey’s divorce from his first wife was made final in the mid-1930s, and he and Mary Nicol were then married; the couple returned to Kenya in January of 1937. Over the next few years, the Leakeys excavated Neolithic and Iron Age sites at Hyrax Hill, Njoro River Cave, and the Naivasha Railway Rock Shelter, which yielded a large number of human remains and artifacts. During World War II, the Leakeys began to excavate at Olorgesailie, southwest of Nairobi, but because of the complicated geology of that site, the dating of material found there was difficult. It did prove to be a rich source of material, however; in 1942 Mary Leakey uncovered hundreds, possibly thousands, of hand axes there.

Her first major discovery in the field of prehuman fossils was that of most of the skull of a Proconsul africanus on Rusinga Island, in Lake Victoria, Kenya, in 1948. Proconsul was believed by some paleontologists to be a common ancestor of apes and humans, an animal whose descendants developed into two branches on the evolutionary tree: the Pongidae (great apes) and the Hominidae (who eventually evolved into true humans). Proconsul lived during the Miocene Age, approximately 18 million years ago. This was the first time a fossil ape skull had ever been found—only a small number have been found since—and the Leakeys hoped that this would be the ancestral hominid that paleontologists had sought for decades. The absence of a “simian shelf,” a reinforcement of the jaw found in modern apes, is one of the features of Proconsul that led the Leakeys to infer that this was a direct ancestor of modern humans. Proconsul is now generally believed to be a species of Dryopithecus, closer to apes than to humans. Many of the finds at Olduvai were primitive stone hand axes, evidence of human habitation; it was not known, however, who had made them. Mary’s concentration had been on the discovery of such tools, while Louis’s goal had been to learn who had made them, in the hope that the date for the appearance of toolmaking hominids could be moved back to an earlier point.
In 1959 Mary unearthed part of the jaw of an early hominid she designated Zinjanthropus (meaning “East African Man”) and whom she referred to as “Dear Boy”; the early hominid is now considered to be a species of Australopithecus—apparently related to the two kinds of australopithecine found in South Africa, Australopithecus africanus and Australopithecus robustus—and given the species designation boisei in honor of Louis Leakey’s sponsor Charles Boise. By means of potassium-argon dating, recently developed, it was determined that the fragment was 1.75 million years old, and this realization pushed back the date for the appearance of hominids in Africa. Despite the importance of this find, however, Louis Leakey was slightly disappointed, as he had hoped that the excavations would unearth not another australopithecine, but an example of Homo living at that early date. He was seeking evidence for his theory that more than one hominid form lived at Olduvai at the same time; these forms were the australopithecines, who eventually died out, and some early form of Homo, which survived—owing to toolmaking ability and larger cranial capacity—to evolve into Homo erectus and, eventually, the modern human. Leakey hoped that Mary Leakey’s find would prove that Homo existed at that early level of Olduvai. The discovery he awaited did not come until the early 1960s, with the identification of a skull found by their son Jonathan Leakey that Louis designated as Homo habilis (“man with ability”). He believed this to be the true early human responsible for making the tools found at the site.

In her autobiography, Disclosing the Past, released in 1984, Mary Leakey reveals that her professional and personal relationship with Louis Leakey had begun to deteriorate by 1968. As she increasingly began to lead the Olduvai research on her own, and as she developed a reputation in her own right through her numerous publications of research results, she believed that her husband began to feel threatened. Louis Leakey had been spending a vast amount of his time in fundraising and administrative matters, while Mary was able to concentrate on field work. As Louis began to seek recognition in new areas, most notably in excavations seeking evidence of early humans in California, Mary stepped up her work at Olduvai, and the breach between them widened. She became critical of his interpretations of his California finds, viewing them as evidence of a decline in his scientific rigor. During these years at Olduvai, Mary made numerous new discoveries, including the first Homo erectus pelvis to be found.

Mary Leakey continued her work after Louis Leakey’s death in 1972. From 1975 she concentrated on Laetoli, Tanzania, which was a site earlier than the oldest beds at Olduvai. She knew that the lava above the Laetoli beds was dated to 2.4 million years ago, and the beds themselves were therefore even older; in contrast, the oldest beds at Olduvai were two million years old.
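The potassium-argon dates that recur in these articles rest on the radioactive decay of potassium-40, a small fraction of which turns into argon-40 trapped in the rock. The sketch below is illustrative only: the decay constants are standard geochronology values, and the function names are invented for this example; none of it comes from the encyclopedia entry itself.

```python
# Illustrative K-Ar age arithmetic; constants are standard
# geochronology values, not figures from this encyclopedia entry.
import math

# Decay constants for 40K, per year: total decay, and the branch
# that produces 40Ar (the rest decays to 40Ca).
LAMBDA_TOTAL = 5.543e-10
LAMBDA_AR = 0.581e-10


def k_ar_age(ar40_k40_ratio):
    """Age in years from a measured radiogenic 40Ar/40K ratio."""
    return (1.0 / LAMBDA_TOTAL) * math.log(
        1.0 + (LAMBDA_TOTAL / LAMBDA_AR) * ar40_k40_ratio
    )


def expected_ratio(age_years):
    """Inverse: the 40Ar/40K ratio a sample of a given age should show."""
    return (LAMBDA_AR / LAMBDA_TOTAL) * (
        math.exp(LAMBDA_TOTAL * age_years) - 1.0
    )


# A rock as young as the Zinjanthropus bed (1.75 million years)
# holds only a trace of radiogenic argon relative to its potassium:
ratio = expected_ratio(1.75e6)
print(ratio)                      # on the order of 1e-4
print(k_ar_age(ratio) / 1e6)      # recovers 1.75 (million years)
```

The tiny argon fraction for beds only a couple of million years old is why precise argon measurement, which became practical in the 1950s, mattered so much for the Olduvai and Laetoli dates quoted here.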
Potassium-argon dating has since shown the upper beds at Laetoli to be approximately 3.5 million years old. In 1978 members of her team found two trails of hominid footprints in volcanic ash dated to approximately 3.5 million years ago; the form of the footprints gave evidence that these hominids walked upright, thus moving the date for the development of an upright posture back significantly earlier than previously believed. Mary Leakey considered these footprints to be among the most significant finds with which she was associated.

In the late 1960s Mary Leakey received an honorary doctorate from the University of the Witwatersrand in South Africa, an honor she accepted only after university officials had spoken out against apartheid. Among her other honorary degrees are a D.S.Sc. from Yale University and a D.Sc. from the University of Chicago. She received an honorary D.Litt. from Oxford University in 1981. She also received the Gold Medal of the Society of Women Geographers.

Louis Leakey was sometimes faulted for being too quick to interpret the finds of his team and for his propensity for developing sensationalistic, publicity-attracting theories. In her later years Mary Leakey was critical of the conclusions reached by her husband—as well as by some others—but she did not add her own interpretations to the mix. Instead, she was always more concerned with the act of discovery itself; she wrote that it was more important for her to continue the task of uncovering early human remains to provide the pieces of the puzzle than it was to speculate and develop her own interpretations. Her legacy lies in the vast amount of material she and her team unearthed; she left it to future scholars to deduce its meaning. [Michael Sims]

RESOURCES

BOOKS

Cole, S. Leakey’s Luck: The Life of Louis Seymour Bazett Leakey, 1903–1972. Harcourt, 1975.
Isaac, G., and E. R. McCown, eds. Human Origins: Louis Leakey and the East African Evidence. Benjamin-Cummings, 1976.
Johanson, D. C., and M. A. Edey. Lucy: The Beginnings of Humankind. Simon & Schuster, 1981.
Leakey, L. By the Evidence: Memoirs, 1932–1951. Harcourt, 1974.
Leakey, R. One Life: An Autobiography. Salem House, 1984.
Malatesta, A., and R. Friedland. The White Kikuyu: Louis S. B. Leakey. McGraw-Hill, 1978.
Moore, R. E. Man, Time, and Fossils: The Story of Evolution. Knopf, 1961.
Reader, J. Missing Links. Little, Brown, 1981.

Richard Erskine Frere Leakey (1944 – )

African-born English paleontologist and anthropologist

Richard Erskine Frere Leakey was born on December 19, 1944, in Nairobi, Kenya. Continuing the work of his parents, Leakey has pushed the date for the appearance of the first true humans back even further than they had, to nearly three million years ago. This represents nearly a doubling of the previous estimates. Leakey also has found more evidence to support his father’s still controversial theory that there were at least two parallel branches of human evolution, of which only one was successful. The abundance of human fossils uncovered by Richard Leakey’s team has provided an enormous number of clues as to how the various fossil remains fit into the puzzle of human evolution. The team’s finds have also helped to answer, if only speculatively, some basic questions: When did modern humans’ ancestors split off from the ancient apes? On what continent did this take place? At what point did they develop the characteristics now considered as defining human attributes? What is the relationship among and the chronology of the various genera and species of the fossil remains that have been found?


While accompanying his parents on an excavation at Kanjera near Lake Victoria at the age of six, Richard Leakey made his first discovery of fossilized animal remains, part of an extinct variety of giant pig. Richard Leakey, however, was determined not to “ride upon his parents’ shoulders,” as Mary Leakey wrote in her autobiography, Disclosing the Past. Several years later, as a young teenager in the early 1960s, Richard demonstrated a talent for trapping wildlife, which prompted him to drop out of high school to lead photographic safaris in Kenya.

His paleontological career began in 1963, when he led a team of paleontologists to a fossil-bearing area near Lake Natron in Tanganyika (now Tanzania), a site that was later dated to approximately 1.4 million years ago. A member of the team discovered the jaw of an early hominid—a member of the family of erect primate mammals that use only two feet for locomotion—called an Australopithecus boisei (then named Zinjanthropus). This was the first discovery of a complete Australopithecus lower jaw and the only Australopithecus skull fragment found since Mary Leakey’s landmark discovery in 1959. Jaws provide essential clues about the nature of a hominid, both in terms of its structural similarity to other species and, if teeth are present, its diet. Richard Leakey spent the next few years occupied with more excavations, the most important result of which was the discovery of a nearly complete fossil elephant. In 1964 Richard married Margaret Cropper, who had been a member of his father’s team at Olduvai the year before. It was at this time that he became associated with his father’s Centre for Prehistory and Paleontology in Nairobi. In 1968, at the age of 23, he became administrative director of the National Museum of Kenya. While his parents had mined with great success the fossil-rich Olduvai Gorge, Richard Leakey concentrated his efforts in northern Kenya and southern Ethiopia.
In 1967 he served as the leader of an expedition to the Omo Delta area of southern Ethiopia, a trip financed by the National Geographic Society. In a site dated to approximately 150,000 years ago, members of his team located portions of two fossilized human skulls believed to be from examples of Homo sapiens, or modern humans. While the prevailing view at the time was that Homo sapiens emerged around 60,000 years ago, these skulls were dated at 130,000 years old.

While on an airplane trip, Richard Leakey flew over the eastern portion of Lake Rudolf (now Lake Turkana) on the Ethiopia-Kenya border, and he noticed from the air what appeared to be ancient lake sediments, a kind of terrain that he felt looked promising as an excavation site. He used his next National Geographic Society grant to explore this area. The region was Koobi Fora, a site that was to become Richard Leakey’s most important area for excavation. At Koobi Fora his team uncovered more than four hundred hominid fossils and an abundance of stone tools, such tools being a primary indication of the presence of early humans. Subsequent excavations near the Omo River in Kenya, from 1968, unearthed more examples of early humans, the first found being another Australopithecus lower jaw fragment. At the area of Koobi Fora known as the KBS tuff (tuff being volcanic ash; KBS standing for the Kay Behrensmeyer Site, after a member of the team) stone tools were found. Preliminary dating of the site placed the area at 2.6 million years ago; subsequent tests over the following few years determined the now generally accepted age of 1.89 million years.

In July of 1969, Richard Leakey came across a virtually complete Australopithecus boisei skull—lacking only the teeth and lower jaw—lying in a river bed. A few days later a member of the team located another hominid skull nearby, comprising the back and base of the cranium. The following year brought the discovery of many more fossil hominid remains, at the rate of nearly two per week. Among the most important finds was the first hominid femur to be found in Kenya, which was soon followed by several more. It was at about this time that Leakey obtained a divorce from his first wife, and in October of 1970, he married Meave Gillian Epps, who had been on the 1969 expedition.

In 1972, Richard Leakey’s team uncovered a skull that appeared to be similar to the one identified by his father and called Homo habilis (“man with ability”). This was the early human that Louis Leakey maintained had achieved the toolmaking skills that precipitated the development of a larger brain capacity and led to the development of the modern human—Homo sapiens. This skull was more complete and apparently somewhat older than the one Louis Leakey had found and was thus the earliest example of the species Homo yet discovered. They labeled the new skull, which was found below the KBS tuff, “Skull 1470,” and this proved to be among Richard Leakey’s most significant discoveries.
The fragments consisted of small pieces of all sides of the cranium, and, unusually, the facial bones, enough to permit a reasonably complete reconstruction. Larger than the skulls found in 1969 and 1970, this example had approximately twice the cranial capacity of Australopithecus and more than half that of a modern human—nearly 800 cubic centimeters. At the time, Leakey believed the fragments to be 2.9 million years old (although a more recent dating of the site would place them at less than 2 million years old). Basing his theory in part on these data, Leakey developed the view that these early hominids may have lived as early as 2.5 or even 3.5 million years ago and gave evidence to the theory that Homo habilis was not a descendant of the australopithecines, but a contemporary.

By the late 1960s, relations between Richard Leakey and his father had become strained, partly because of real or imagined competition within the administrative structure of the Centre for Prehistory, and partly because of some divergences in methodology and interpretation. Shortly before Louis Leakey’s death, however, the discovery of Skull 1470 by Richard Leakey’s team allowed Richard to present his father with apparent corroboration of one of his central theories.

Richard Leakey did not make his theories of human evolution public until 1974. At this time, scientists were still grappling with Louis Leakey’s interpretation of his findings that there had been at least two parallel lines of human evolution, only one of which led to modern humans. After Louis Leakey’s death, Richard Leakey reported that, based on new finds, he believed that hominids diversified between 3 and 3.5 million years ago. Various lines of australopithecines and Homo coexisted, with only one line, Homo, surviving. The australopithecines and Homo shared a common ancestor; Australopithecus was not ancestral to Homo. As did his father, Leakey believes that Homo developed in Africa, and it was Homo erectus who, approximately 1.5 million years ago, developed the technological capacity to begin the spread of humans beyond their African origins. In Richard Leakey’s scheme, Homo habilis developed into Homo erectus, who in turn developed into Homo sapiens, the present-day human. As new finds are made, new questions arise. Are newly discovered variants proof of a plurality of species, or do they give evidence of greater variety within the species that have already been identified? To what extent is sexual dimorphism responsible for the apparent differences in the fossils? In some scientific circles, the discovery of fossil remains at Hadar in Ethiopia by archaeologist Donald Carl Johanson and others, along with the more recent revised dating of Skull 1470, cast some doubt on Leakey’s theory in general and on his interpretation of Homo habilis in particular.
Johanson believed that the fossils he found at Hadar and the fossils Mary Leakey found at Laetoli in Tanzania, and which she classified as Homo habilis, were actually all australopithecines; he termed them Australopithecus afarensis and claimed that this species is the common ancestor of both the later australopithecines and Homo. Richard Leakey has rejected this argument, contending that the australopithecines were not ancestral to Homo and that an earlier common ancestor would be found, possibly among the fossils found by Mary Leakey at Laetoli. The year 1975 brought another significant find by Leakey’s team at Koobi Fora: the team found what was apparently the skull of a Homo erectus, according to Louis Leakey’s theory a descendant of Homo habilis and probably dating to 1.5 million years ago. This skull, labeled “3733,” represents the earliest known evidence for Homo erectus in Africa.

Richard Leakey began to suffer from health problems during the 1970s, and in 1979 he was diagnosed with a serious kidney malfunction. Later that year he underwent a kidney transplant operation, his younger brother Philip being the donor. During his recuperation Richard completed his autobiography, One Life, which was released in 1984, and following his recovery, he renewed his search for the origins of the human species.

The summer of 1984 brought another major discovery: the so-called Turkana boy, a nearly complete skeleton of a Homo erectus, missing little but the hands and feet, and offering, for the first time, the opportunity to view many bones of this species. It was shortly after the unearthing of Turkana boy—whose skeletal remains indicate that he was a twelve-year-old youngster who stood approximately five-and-a-half feet tall—that the puzzle of human evolution became even more complicated. The discovery of a new skull, called the Black Skull, with an Australopithecus boisei face but a cranium that was quite apelike, introduced yet another complication, possibly a fourth branch in the evolutionary tree. Leakey became the Director of the Wildlife Conservation and Management Department for Kenya (Kenya Wildlife Service) in 1989 and in 1999 became head of the Kenyan civil service. [Michael Sims]

RESOURCES
BOOKS
Leakey, M. Disclosing the Past: An Autobiography. Doubleday, 1984.
Leakey, R., and R. Lewin. Origins Reconsidered: In Search of What Makes Us Human. Doubleday, 1992.
Reader, J. Missing Links. Little, Brown, 1981.

Leaking underground storage tank

Leaking underground storage tanks (LUST) that hold toxic substances have come under new regulatory scrutiny in the United States because of the health and environmental hazards posed by the materials that can leak from them. These storage tanks typically hold petroleum products and other toxic chemicals beneath gas stations and other petroleum facilities. An estimated 63,000 of the nation’s underground storage tanks have been shown to leak contaminants into the environment or are considered liable to leak at any time.

One reason for the instability of underground storage tanks is their construction. Only five percent of underground storage tanks are made of corrosion-protected steel, while 84 percent are made of bare steel, which corrodes easily. The remaining 11 percent are made of fiberglass.

Hazardous materials seeping from some of the nation’s six million underground storage tanks can contaminate aquifers, the water-bearing rock units that supply much of the earth’s drinking water. An aquifer, once contaminated, can be ruined as a source of fresh water. In particular, benzene has been found

Environmental Encyclopedia 3


to be a contaminant of groundwater as a result of leaks from underground gasoline storage tanks. Benzene and other volatile organic compounds have been detected in bottled water despite manufacturers’ claims of purity.

According to the Environmental Protection Agency (EPA), more than 30 states have reported groundwater contamination from petroleum products leaking from underground storage tanks. States have also reported water contamination from radioactive waste leaching from storage containment facilities. Other reported pollution problems include leaking hazardous substances that are corrosive, explosive, readily flammable, or chemically reactive. While water pollution may be the most visible consequence of leaks from underground storage tanks, fires and explosions are real dangers in some areas.

The EPA is charged with exploring, developing, and disseminating technologies and funding mechanisms for cleanup. The primary job itself, however, is left to state and local governments. Actual cleanup is sometimes funded by the Leaking Underground Storage Tank trust fund established by Congress in 1986. Under the Superfund Amendment and Reauthorization Act, owners and operators of underground storage tanks are required to take corrective action to prevent leakage.

See also Comprehensive Environmental Response, Compensation and Liability Act; Groundwater monitoring; Groundwater pollution; Storage and transport of hazardous materials; Toxic Substances Control Act [Linda Rehkopf]

RESOURCES
BOOKS
Epstein, L., and K. Stein. Leaking Underground Storage Tanks—Citizen Action: An Ounce of Prevention. New York: Environmental Information Exchange (Environmental Defense Fund), 1990.

PERIODICALS
Breen, B. “A Mountain and a Mission.” Garbage 4 (May-June 1992): 52–57.
Hoffman, R. D. R. “Stopping the Peril of Leaking Tanks.” Popular Science 238 (March 1991): 77–80.

OTHER
U.S. Environmental Protection Agency. Office of Underground Storage Tanks (OUST). June 13, 2002 [June 21, 2002].

Aldo Leopold (1887 – 1948)

American conservationist, ecologist, and writer

Leopold was a noted forester, game manager, conservationist, college professor, and ecologist. Yet he is known worldwide for A Sand County Almanac, a little book considered an important, influential work to the conservation movement

of the twentieth century. In it, Leopold established the land ethic, guidelines for respecting the land and preserving its integrity.

Leopold grew up in Iowa, in a house overlooking the Mississippi River, where he learned hunting from his father and an appreciation of nature from his mother. He received a master’s degree in forestry from Yale and spent his formative professional years working for the United States Forest Service in the American Southwest.

In the Southwest, Leopold began slowly to consider preservation as a supplement to Gifford Pinchot’s “conservation as wise use—greatest good for the greatest number” land management philosophy that he learned at Yale and in the Forest Service. He began to formulate arguments for the preservation of wilderness and the sustainable development of wild game. Formerly a hunter who encouraged the elimination of predators to save the “good” animals for hunters, Leopold became a conservationist who remembered with sadness the “dying fire” in the eyes of a wolf he had killed. In the Journal of Forestry, he began to speculate that perhaps Pinchot’s principle of highest use itself demanded “that representative portions of some forests be preserved as wilderness.”

Leopold must be recognized as one of a handful of originators of the wilderness idea in American conservation history. He was instrumental in the founding of the Wilderness Society in 1935, which he described in the first issue of Living Wilderness as “one of the focal points of a new attitude—an intelligent humility toward man’s place in nature.” In a 1941 issue of that same journal, he asserted that wilderness also has critical practical uses “as a base-datum of normality, a picture of how healthy land maintains itself,” and that wilderness was needed as a living “land laboratory.” This thinking led to the first large area designated as wilderness in the United States.
In 1924, some 574,000 acres (232,000 ha) of the Gila National Forest in New Mexico were officially designated a wilderness area. Four years earlier, the much smaller Trappers Lake valley in Colorado had become the first area designated “to be kept roadless and undeveloped.”

Aldo Leopold is also widely acknowledged as the founder of wildlife management in the United States. His classic text on the subject, Game Management (1933), is still in print and widely read. Leopold set out to write a general management framework, drawing upon and synthesizing species monographs and local manuals. “Details apply to game alone, but the principles are of general import to all fields of conservation,” he wrote. He wanted to coordinate “science and use” in his book and felt strongly that land managers could either try to apply such principles or be reduced to “hunting rabbits.” Here can be found early uses of concepts still central to conservation and management, such as limiting factor, niche, saturation point, and carrying capacity. Leopold later became the first professor of game

management in the United States at the University of Wisconsin.

Leopold’s A Sand County Almanac, published in 1949, a year after his death, is often described as “the bible of the environmental movement” of the second half of the twentieth century. The Almanac is a beautifully written source of solid ecological concepts such as trophic linkages and biological community. The book extends basic ecological concepts, forming radical ideas to reformulate human thinking and behavior. It exhibits an ecological conscience, a conservation aesthetic, and a land ethic. Leopold advocated his concept of ecological conscience to fill a perceived gap in conservation education: “Obligations have no meaning without conscience, and the problem we face is the extension of the social conscience from people to land.” Less well known is his attention to the aesthetics of land: according to the Almanac, an acceptable land aesthetic emerges only from learned and sensitive perception of the connections and needs of natural communities. The last words of the Almanac are that a true conservation aesthetic is developed “not of building roads into lovely country, but of building receptivity into the still unlovely human mind.”

Leopold derived his now famous land ethic from an ecological conception of community. All ethics, he maintained, “rest upon a single premise: that the individual is a member of a community of interdependent parts.” He argued that “the land ethic simply enlarges the boundaries of the community to include soils, waters, plants, and animals, or collectively: the land.” Perhaps the most widely quoted statement from the book argues that “a thing is right when it tends to preserve the integrity, stability, and beauty of the biotic community. It is wrong when it tends otherwise.” Leopold’s land ethic was first proposed in a Journal of Forestry article in 1933 and later expanded in the Almanac.
It is a plea to care for land and its biological complex, instead of considering it a commodity. As Wallace Stegner noted, Leopold’s ideas were heretical in 1949, and to some people still are. “They smack of socialism and the public good,” he wrote. “They impose limits and restraints. They are anti-Progress. They dampen American initiative. They fly in the face of the faith that land is a commodity, the very foundation stone of American opportunity.” As a result, Stegner and others do not think Leopold’s ethic has had much influence on public thought, though the book has been widely read. Leopold recognized this. “The case for a land ethic would appear hopeless but for the minority which is in obvious revolt against these ‘modern’ trends,” he commented. Nevertheless, the land ethic is alive and still flourishing in an ever-growing minority. Even Stegner argued that “Leopold’s land ethic is not a fact but [an on-going] task.” Leopold did not shrink from that task, being actively involved in many


Aldo Leopold examining a gray partridge. (Photograph by Robert Oetking. University of Wisconsin-Madison Archives. Reproduced by permission.)

conservation associations, teaching management principles and the land ethic to his classes, bringing up all five of his children to become conservationists, and applying his beliefs directly to his own land, a parcel of “logged, fire-swept, overgrazed, barren” land in Sauk County, Wisconsin.

As his work has become more recognized and more influential, many labels have been applied to Leopold by contemporary writers. He is a “prophet” and “intellectual touchstone” to Roderick Nash; a “founding genius” to J. Baird Callicott; “an American Isaiah” to Stegner; the “Moses of the new conservation impulse” to Donald Fleming. In a sense, he may have been all of these, but more than anything else, Leopold was an applied ecologist who tried to put into practice the principles he learned from the land. [Gerald R. Young, Ph.D.]

RESOURCES
BOOKS
Callicott, J. B., ed. Companion to A Sand County Almanac: Interpretive and Critical Essays. Madison: University of Wisconsin Press, 1987.
Flader, S. L., and J. B. Callicott, eds. The River of the Mother of God and Other Essays by Aldo Leopold. Madison: University of Wisconsin Press, 1991.




Fritzell, P. A. “A Sand County Almanac and The Conflicts of Ecological Conscience.” In Nature Writing and America: Essays Upon a Cultural Type. Ames: Iowa State University Press, 1990.
Leopold, A. Game Management. New York: Charles Scribner’s Sons, 1933.
———. A Sand County Almanac. New York: Oxford University Press, 1949.
Meine, C. Aldo Leopold: His Life and Work. Madison: University of Wisconsin Press, 1988.
Oelschlaeger, M. “Aldo Leopold and the Age of Ecology.” In The Idea of Wilderness: From Prehistory to the Age of Ecology. New Haven, CT: Yale University Press, 1991.
Strong, D. H. “Aldo Leopold.” In Dreamers and Defenders: American Conservationists. Lincoln: University of Nebraska Press, 1988.

Less developed countries

Less developed countries (LDCs) have lower levels of economic prosperity, health care, and education than most other countries. Development, or improvement in economic and social conditions, encompasses various aspects of general welfare, including infant survival, expected life span, nutrition, literacy rates, employment, and access to material goods. LDCs are identified by their relatively poor ratings in these categories. In addition, most LDCs are marked by high population growth, rapidly expanding cities, low levels of technological development, and weak economies dominated by agriculture and the export of natural resources. Because of their limited economic and technological development, LDCs tend to have relatively little international political power compared to more developed countries (MDCs) such as Japan, the United States, and Germany.

A variety of standard measures, or development indices, are used to assess development stages. These indices are generalized statistical measures of quality of life for individuals in a society. Multiple indices are usually considered more accurate than a single number such as Gross National Product, because such figures tend to give imprecise and simplistic impressions of conditions in a country. One of the most important of the multiple indices is the infant mortality rate. Because children under five years old are highly susceptible to common diseases, especially when they are malnourished, infant mortality is a key to assessing both nutrition and access to health care. Expected life span, the average age adults are able to reach, is used as a measure of adult health. Daily calorie and protein intake per person are collective measures that reflect the ability of individuals to grow and function effectively.
Literacy rates, especially among women, who are normally the last to receive an education, indicate access to schools and preparation for technologically advanced employment. Fertility rates measure the number of children produced per family or per woman in a population and are regarded as an important indicator of the confidence parents

have in their children’s survival. High birth rates are associated with unstable social conditions because a country with a rapidly growing population often cannot provide its citizens with food, water, sanitation, housing space, jobs, and other basic needs. Rapidly growing populations also tend to undergo rapid urbanization. People move to cities in search of jobs and educational opportunities, but in poor countries the cost of providing basic infrastructure in an expanding city can be debilitating. As most countries develop, they pass from a stage of high birth rates to one of low birth rates, as child survival becomes more certain and a family’s investment in educating and providing for each child increases.

Most LDCs were colonies under foreign control during the past 200 years. Colonial powers tended to undermine social organization, local economies, and natural resource bases, and many recently independent states are still recovering from this legacy. Thus, much of Africa, which provided a wealth of natural resources to Europe between the seventeenth and twentieth centuries, now lacks the effective and equitable social organization necessary for continuing development. Similarly, much of Central America (colonized by Spain beginning in the sixteenth century) and portions of South and Southeast Asia (colonized by England, France, the Netherlands, and others) remain less developed despite their wealth of natural resources.

The development processes necessary to improve standards of living in LDCs may involve more natural resource extraction, but usually the most important steps involve carefully choosing the goods to be produced, decreasing corruption among government and business leaders, and easing the social unrest and conflicts that prevent development from proceeding. All of these are extraordinarily difficult, but they are essential for countries trying to escape from poverty.
See also Child survival revolution; Debt for nature swap; Economic growth and the environment; Indigenous peoples; Shanty towns; South; Sustainable development; Third World; Third World pollution; Tropical rain forest; World Bank [Muthena Naseri]

RESOURCES
BOOKS
Gill, S., and D. Law. The Global Political Economy. Baltimore: Johns Hopkins University Press, 1991.
World Bank. World Development Report: Development and the Environment. Oxford, England: Oxford University Press, 1992.
World Bank. World Development Report 2000/2001: Attacking Poverty. Oxford, England: Oxford University Press, 2000.

Leukemia

Leukemia is a disease of the blood-forming organs. Primary tumors are found in the bone marrow and lymphoid tissues,

Environmental Encyclopedia 3 specifically the liver, spleen, and lymph nodes. The characteristic common to all types of leukemia is the uncontrolled proliferation of leukocytes (white blood cells) in the blood stream. This results in a lack of normal bone marrow growth, and bone marrow is replaced by immature and undifferentiated leukocytes or “blast cells.” These immature and undifferentiated cells then migrate to various organs in the body, resulting in the pathogenesis of normal organ development and processing. Leukemia occurs with varying frequencies at different ages, but it is most frequent among the elderly who experience 27,000 cases a year in the United States to 2,200 cases a year for younger people. Acute lymphoblastic leukemia, most common in children, is responsible for two-thirds of all cases. Acute nonlymphoblastic leukemia and chronic lymphocytic leukemia are most common among adults— they are responsible for 8,000 and 9,600 cases a year respectively. The geographical sites of highest concentration are the United States, Canada, Sweden, and New Zealand. While there is clear evidence that some leukemias are linked to genetic traits, the origins of this disease in most cases is mysterious. It seems clear, however, that environmental exposure to radiation, toxic substances, and other risk factors plays an important role in many leukemias. See also Cancer; Carcinogen; Radiation exposure; Radiation sickness

Lichens

Lichens are composed of fungi and algae. Varying in color from pale whitish green to brilliant red and orange, lichens usually grow attached to rocks and tree trunks and appear as thin, crusty coatings, as networks of small, branched strands, or as flattened, leaf-like forms. Some common lichens are reindeer moss and the red “British soldiers.” There are approximately 20,000 known lichen species. Because they often grow under cold, dry, inhospitable conditions, they are usually the first organisms to colonize barren rock surfaces.

The fungus and the alga form a symbiotic relationship within the lichen. The fungus forms the body of the lichen, called the thallus. The thallus attaches itself to the surface of a rock or tree trunk, and the fungal cells take up water and nutrients from the environment. The algal cells grow within the thallus and perform photosynthesis, as other green plant cells do, to form carbohydrates.

Lichens are essential in providing food for other organisms, breaking down rocks, and initiating soil building. They are also important indicators and monitors of the effects of air pollution. Since lichens grow attached to rock and tree surfaces, they are fully exposed to airborne pollutants, and chemical analysis of lichen tissues can be used to measure the quantity


of pollutants in a particular area. For example, sulfur dioxide, a common emission from power plants, is a major air pollutant. Many studies show that as the concentration of sulfur dioxide in the air increases, the number of lichen species decreases. The disappearance of lichens from an area may be indicative of other, more widespread biological impacts.

Sometimes lichens are the first organisms to transfer contaminants to the food chain. Lichens are abundant throughout vast regions of the arctic tundra and form the main food source for caribou (Rangifer tarandus) in winter. The caribou are hunted and eaten by northern Alaskan Eskimos in spring and early summer. When the effects of radioactive fallout from weapons testing in the arctic tundra were studied, it was discovered that lichens absorbed virtually all of the radionuclides that were deposited on them. Strontium-90 and cesium-137 were two of the major radionuclide contaminants. As caribou grazed on the lichens, these radionuclides were absorbed into the caribous’ tissues. At the end of the winter, caribou flesh contained three to six times as much cesium-137 as it did in the fall. When the caribou flesh was consumed by the Eskimos, the radionuclides were transferred to them as well.

See also Indicator organism; Symbiosis [Usha Vedagiri]

RESOURCES
BOOKS
Connell, D. W., and G. J. Miller. Chemistry and Ecotoxicology of Pollution. New York: Wiley, 1984.
Smith, R. L., and T. M. Smith. Ecology and Field Biology. 6th ed. Upper Saddle River, NJ: Prentice Hall, 2002.
Weier, T. E., et al. Botany: An Introduction to Plant Biology. New York: Wiley, 1982.

Life cycle assessment

Life cycle assessment (or LCA) refers to a process in industrial ecology by which the products, processes, and facilities used to manufacture specific products are each examined for their environmental impacts. A balance sheet is prepared for each product that considers: the use of materials; the consumption of energy; the recycling, re-use, and/or disposal of non-used materials and energy (in a less enlightened context, these are referred to as “wastes”); and the recycling or re-use of products after their commercial life has passed. By taking a comprehensive, integrated look at all of these aspects of the manufacturing and use of products, life cycle assessment finds ways to increase efficiency, to re-use, reduce, and recycle materials, and to lessen the overall environmental impacts of the process.
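The balance-sheet idea can be illustrated with a small computation. The stages, quantities, and recycling credit below are invented for illustration only; a real LCA works from measured inventory data for each product.

```python
# Toy life-cycle balance sheet for one hypothetical product. Each stage
# records its energy use (MJ) and the net material drawn from the
# environment (kg); negative entries are credits from recycling or re-use.
stages = [
    {"stage": "raw material extraction", "energy_mj": 120.0, "material_kg": 4.0},
    {"stage": "manufacture",             "energy_mj": 300.0, "material_kg": 0.5},
    {"stage": "use",                     "energy_mj": 900.0, "material_kg": 0.0},
    {"stage": "end of life (recycling)", "energy_mj": -60.0, "material_kg": -2.5},
]

total_energy = sum(s["energy_mj"] for s in stages)      # net energy consumed
total_material = sum(s["material_kg"] for s in stages)  # material not recovered
```

Totaling across stages, rather than looking at manufacture alone, is what lets the assessment show where efficiency gains or recycling credits matter most.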


Lignite see Coal

Limits to Growth (1972) and Beyond the Limits (1992)

Published at the height of the oil crisis in the 1970s, the Limits to Growth study is credited with lifting environmental concerns to an international and global level. Its fundamental conclusion is that if rapid growth continues unabated in the five key areas of population, food production, industrialization, pollution, and consumption of nonrenewable natural resources, the planet will reach the limits of growth within 100 years. The most probable result will be a “rather sudden and uncontrollable decline in both population and industrial capacity.”

The study grew out of an April 1968 meeting of 30 scientists, educators, economists, humanists, industrialists, and national and international civil servants who had been brought together by Dr. Aurelio Peccei, an Italian industrial manager and economist. Peccei and the others met at the Accademia dei Lincei in Rome to discuss the “present and future predicament of man,” and from their meeting came the Club of Rome. Early meetings of the club resulted in a decision to initiate the Project on the Predicament of Mankind, intended to examine the array of problems facing all nations. Those problems ranged from poverty amidst plenty and environmental degradation to the rejection of traditional values and various economic disturbances.

In the summer of 1970, Phase One of the project took shape during a series of meetings in Bern, Switzerland, and Cambridge, Massachusetts. At a two-week meeting in Cambridge, Professor Jay Forrester of the Massachusetts Institute of Technology (MIT) presented a global model for analyzing the interacting components of world problems. Professor Dennis Meadows led an international team in examining the five basic components, mentioned above, that determine growth on this planet and its ultimate limits. The team’s research culminated in the 1972 publication of the study, which touched off intense controversy and further research.
Underlying the study’s dramatic conclusions is the central concept of exponential growth, which occurs when a quantity increases by a constant percentage of the whole in a constant time period. “For instance, a colony of yeast cells in which each cell divides into two cells every ten minutes is growing exponentially,” the study explains. The model used to capture the dynamic quality of exponential growth is a System Dynamics model, developed over a 30-year period at MIT, which recognizes that the structure of any


system determines its behavior as much as any individual parts of the system. The components of a system are described as “circular, interlocking, sometimes time-delayed.” Using this model (called World3), the study ran scenarios—what-if analyses—to reach its view of how the world will evolve if present trends persist.

“Dynamic modeling theory indicates that any exponentially growing quantity is somehow involved with a positive feedback loop,” the study points out. “In a positive feedback loop a chain of cause-and-effect relationships closes on itself, so that increasing any one element in the loop will start a sequence of changes that will result in the originally changed element being increased even more.” In the case of world population growth, births per year act as a positive feedback loop. For instance, in 1650, world population was half a billion and was growing at a rate of 0.3 percent a year. In 1970, world population was 3.6 billion and was growing at a rate of 2.1 percent a year. Both the population and the rate of population growth have been increasing exponentially.

But in addition to births per year, the dynamic system of population growth includes a negative feedback loop: deaths per year. Positive feedback loops create runaway growth, while negative feedback loops regulate growth and hold a system in a stable state. For instance, a thermostat regulates temperature: when a room reaches a certain temperature, the thermostat shuts off the heating system until the temperature falls enough to restart it. With population growth, both birth and death rates were relatively high and irregular before the Industrial Revolution. But with the spread of medicines and longer life expectancies, the death rate has fallen while the birth rate has remained high. Given these trends, the study projected that world population would reach roughly seven billion within 30 years.
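The feedback-loop arithmetic sketched above is simple to reproduce. The growth rates are the study's figures; the code itself is only an illustration of the birth/death loop structure, not the World3 model.

```python
import math

def step_population(pop, birth_rate, death_rate):
    """One year of the population system: births are the positive
    feedback loop, deaths the negative one."""
    return pop + pop * birth_rate - pop * death_rate

def doubling_time(net_rate):
    """Years for a quantity growing at a fixed net annual rate to double."""
    return math.log(2) / math.log(1 + net_rate)

# The study's figures: 3.6 billion people in 1970, growing 2.1% a year
# (the net rate is folded into the birth loop here for brevity).
pop = 3.6e9
for _ in range(30):
    pop = step_population(pop, 0.021, 0.0)
# pop is now roughly 6.7 billion. At 2.1% a year the doubling time is
# about 33 years, versus roughly 231 years at 1650's 0.3% rate.
```

Running the loop shows why a rising growth rate is so consequential: the doubling time shrinks from centuries to a single generation.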
This same dynamic of positive and negative feedback loops applies to the other components of the world system. The growth in world industrial capital, with the positive input of investment, creates rising industrial output, such as houses, automobiles, textiles, consumer goods, and other products. On the negative feedback side, depreciation, or the capital discarded each year, draws down the level of industrial capital. This feedback is “exactly analogous to the death rate loop in the population system,” the study notes. And, as with world population, the positive feedback loop is “strongly dominant,” creating steady growth in worldwide industrial capital and in the use of raw materials needed to create products.

This system, in which exponential growth is occurring and positive feedback loops outstrip negative ones, will push the world to the limits of exponential growth. The study asks what will be needed to sustain world economic and population growth until and beyond the year 2000 and concludes that two main categories of ingredients can be

Environmental Encyclopedia 3 defined. First, there are physical necessities that support all physiological and industrial activity: food, raw materials, fossil and nuclear fuels, and the ecological systems of the planet that absorb waste and recycle important chemical substances. Arable land, fresh water, metals, forests, and oceans are needed to obtain those necessities. Second, there are social necessities needed to sustain growth, including peace, social stability, education, employment, and steady technological progress. Even assuming that the best possible social conditions exist for the promotion of growth, the earth is finite and therefore continued exponential growth will reach the limits of each physical necessity. For instance, about 1 acre (0.4 ha) of arable land is needed to grow enough food per person. With that need for arable land, even if all the world’s arable land were cultivated, current population growth rates will still create a “desperate land shortage before the year 2000,” the study concludes. The availability of fresh water is another crucial limiting factor, the study points out. “There is an upper limit to the fresh water runoff from the land areas of the earth each year, and there is also an exponentially increasing demand for that water.” This same analysis is applied to nonrenewable resources, such as metals, coal, iron, and other necessities for industrial growth. World demand is rising steadily and at some point demand for each nonrenewable resource will exceed supply, even with recycling of these materials. For instance, the study predicts that even if 100 percent recycling of chromium from 1970 onward were possible, demand would exceed supply in 235 years. 
Similarly, while it is not known how much pollution the world can take before vital natural processes are disrupted, the study cautions that the danger of reaching those limits is especially great because there is usually a long delay between the time a pollutant is released and the time it begins to negatively affect the environment.

While the study foretells worldwide collapse if exponential growth trends continue, it also argues that the necessary steps to avert disaster are known and well within human capabilities. Current knowledge and resources could guide the world to a sustainable equilibrium society, provided that a realistic long-term goal and the will to achieve that goal are pursued.

The sequel to the 1972 study, Beyond the Limits, was not sponsored by the Club of Rome, but it was written by three of the original authors. While the basic analytical framework remains the same in the later work—drawing upon the concepts of exponential growth and feedback loops to describe the world system—its conclusions are more severe. No longer does the world only face the potential of “overshooting” its limits. “Human use of many essential resources and generation of many kinds of pollutants have


already surpassed rates that are physically sustainable,” according to the 1992 study. “Without significant reductions in material and energy flows, there will be in the coming decades an uncontrolled decline in per capita food output, energy use, and industrial output.” However, like its predecessor, the later study sounds a note of hope, arguing that decline is not inevitable. Avoiding disaster requires comprehensive reforms in the policies and practices that perpetuate growth in material consumption and population, as well as a rapid, drastic increase in the efficiency with which materials and energy are used.

Both the earlier and the later study were received with great controversy. Economists and industrialists charged that the earlier study ignored the fact that technological innovation could stretch the limits to growth through greater efficiency and diminishing pollution levels. When the sequel was published, some critics charged that the World3 model could have been refined to include more realistic distinctions between nations and regions, rather than treating all trends on a world scale. Different continents, rich and poor nations, North, South, and East, various regions—all are different, but those differences are ignored in the model, thereby making it unrealistic, these critics argued, even though modeling techniques have evolved significantly since World3 was first developed.

See also Sustainable development [David Clarke]

RESOURCES
BOOKS
Meadows, D., et al. The Limits to Growth: A Report for The Club of Rome’s Project on the Predicament of Mankind. New York: Universe Books, 1972.
Meadows, D., D. L. Meadows, and J. Randers. Beyond the Limits: Confronting Global Collapse, Envisioning a Sustainable Future. Post Mills, VT: Chelsea Green, 1992.

Limnology Derived from the Greek word limne, meaning marsh or pond, the term limnology was first used in reference to lakes by F. A. Forel (1841–1912) in 1892 in a paper titled “Le Léman: Monographie limnologique,” a study of what we now call Lake Geneva in Switzerland. Limnology, also known as aquatic ecology, refers to the study of freshwater communities within continental boundaries. It can be subdivided into the study of lentic (standing water habitats such as lakes, ponds, bogs, swamps, and marshes) and lotic (running water habitats such as rivers, streams, and brooks) environments. Collectively, limnologists study the morphological, physical, chemical, and biological aspects of these habitats.

Environmental Encyclopedia 3
Raymond L. Lindeman

(1915 – 1942)

American ecologist Few scholars or scientists, even those much published and long-lived, leave singular, indelible imprints on their disciplines. Yet Raymond Lindeman, who lived just 26 years and published only six articles, was described shortly after his death by G. E. Hutchinson as “one of the most creative and generous minds yet to devote itself to ecological science.” The last of those six papers, “The Trophic-Dynamic Aspect of Ecology,” published posthumously in 1942, continues to be considered one of the foundational papers in ecology: an article “path-breaking in its general analysis of ecological succession in terms of energy flow through the ecosystem,” based on an idea that Edward Kormondy has called “the most significant formulation in the development of modern ecology.”

Immediately after completing his doctorate at the University of Minnesota, Lindeman accepted a one-year Sterling fellowship at Yale University to work with G. Evelyn Hutchinson, the dean of American limnologists. He had published chapters of his thesis one by one and at Yale worked to revise the final chapter, refining it with ideas drawn from Hutchinson’s lecture notes and from their discussions about the ecology of lakes. Lindeman submitted the manuscript to Ecology with Hutchinson’s blessings, but it was rejected based on reviewers’ claims that it was speculation far beyond the data presented from research on three lakes, including Lindeman’s own doctoral research on Cedar Bog Lake. After input from several well-known ecologists, further revisions, and with further urging from Hutchinson, the editor finally overrode the reviewers’ comments and accepted the manuscript; it was published in the October 1942 issue of Ecology, a few months after Lindeman died in June of that year.
The important advances made by Lindeman’s seminal article included his use of the ecosystem concept, which he was convinced was “of fundamental importance in interpreting the data of dynamic ecology,” and his explication of the idea that “all function, and indeed all life” within ecosystems depends on the movement of energy through such systems by way of trophic relationships. His use of ecosystem went beyond the little attention paid to it by Hutchinson and beyond Tansley’s labeling of the unit seven years earlier to open up “new directions for the analysis of the functioning of ecosystems.” More than half a century after Lindeman’s article, and despite recent revelations on the uncertainty and unpredictability of natural systems, a majority of ecologists probably still accept ecosystem as the basic unit in ecology and, in those systems, energy exchange as the basic process.

Lindeman was able to effectively demonstrate a way to bring together or synthesize two quite separate traditions

in ecology: autecology, dependent on physiological studies of individual organisms and species, and synecology, focused on studies of communities, aggregates of individuals. He believed, and demonstrated, that ecological research would benefit from a synthesis of these organism-based approaches and a focus on the energy relationships that tied organism and environment into one unit—the ecosystem—suggesting as a result that the biotic and the abiotic could not realistically be disengaged, especially in ecology. Half a decade or so of work on cedar bog lakes and half a dozen articles would seem a thin stem on which to base a legacy. But it really boils down to Lindeman’s synthesis of that work in that one singular, seminal paper, in which he created one of the significant stepping stones from a mostly descriptive discipline toward a more sophisticated and modern theoretical ecology. [Gerald J. Young Ph.D.]
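The trophic-dynamic idea can be caricatured numerically: energy entering one trophic level passes to the next at low efficiency (Lindeman's own Cedar Bog Lake estimates were on the order of ten percent). The figures below are illustrative assumptions, not his data:

```python
# Energy flow up a food chain at a constant trophic (Lindeman) efficiency.
# The 1000-unit input and the 10% efficiency are illustrative assumptions,
# not Lindeman's Cedar Bog Lake measurements.

def energy_by_level(input_energy, efficiency, levels):
    """Energy available at each trophic level, starting with producers."""
    flow = [input_energy]
    for _ in range(levels - 1):
        flow.append(flow[-1] * efficiency)
    return flow

print([round(e, 6) for e in energy_by_level(1000.0, 0.10, 4)])
# [1000.0, 100.0, 10.0, 1.0] -- a thousandfold loss over three transfers,
# one reason food chains rarely support many levels
```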

RESOURCES PERIODICALS Cook, Robert E. “Raymond Lindeman and the Trophic-Dynamic Concept in Ecology.” Science 198, no. 4312 (October 1977): 22–26. Lindsey, Alton A. “The Ecological Way.” Naturalist-Journal of the Natural History Society of Minnesota 31 (Spring 1980): 1–6. Reif, Charles B. “Memories of Raymond Laurel Lindeman.” The Bulletin of the Ecological Society of America 67, no. 1 (March 1986): 20–25.

Liquid metal fast breeder reactor The liquid metal fast breeder reactor (LMFBR) is a nuclear reactor that has been modified to increase the efficiency at which non-fissionable uranium-238 is converted to fissionable plutonium-239, which can be used as fuel in the production of nuclear power. The reactor uses “fast” rather than “slow” neutrons to strike a uranium-238 nucleus, resulting in the formation of plutonium-239. In a second modification, it uses a liquid metal, usually sodium, rather than neutron-absorbing water as a more efficient coolant. Since the reactor produces new fuel as it operates, it is called a breeder reactor. The main appeal of breeder reactors is that they provide an alternative way of obtaining fissionable materials. The supply of natural uranium in the earth’s crust is fairly large, but it will not last forever. Plutonium-239 from breeder reactors might become the major fuel used in reactors built a few hundred or thousand years from now. However, the potential of LMFBRs has not yet been realized. One serious problem involves the use of liquid sodium as coolant. Sodium is a highly corrosive metal, and in an LMFBR it is converted into a radioactive form, sodium-24. Accidental release of the coolant from such a plant could, therefore, constitute a serious environmental hazard.


Liquid metal fast breeder reactor. (McGraw-Hill Inc. Reproduced by permission.)

In addition, plutonium itself is difficult to work with. It is one of the most toxic substances known to humans, and its half-life of 24,000 years means that its release presents long-term environmental problems. Small-scale pilot LMFBR reactors have been tested in the United States, France, Great Britain, and Germany since 1966, and all have turned out to be far more expensive than had been anticipated. The major United States research program, based at Clinch River, Tennessee, began in 1970. In 1983, the U.S. Congress refused to continue funding the project due to its slow and unsatisfactory progress. See also Nuclear fission; Nuclear Regulatory Commission; Radioactivity; Radioactive waste management [David E. Newton]
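The long-term hazard implied by a 24,000-year half-life can be made concrete with the standard exponential-decay relation, N(t)/N0 = (1/2)^(t/t-half); the sample times below are arbitrary:

```python
# Fraction of a radionuclide remaining after t years, using the decay law
# N(t)/N0 = 0.5 ** (t / half_life). The 24,000-year half-life is the figure
# cited in the entry for plutonium-239; the sample times are arbitrary.

PU239_HALF_LIFE = 24_000  # years

def fraction_remaining(years, half_life=PU239_HALF_LIFE):
    return 0.5 ** (years / half_life)

for years in (24_000, 100_000, 240_000):
    print(f"after {years:>7,} years: {fraction_remaining(years):.4%}")
```

Even after ten half-lives, roughly 240,000 years, about a tenth of one percent of the original plutonium remains.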

RESOURCES BOOKS Cochran, Thomas B. The Liquid Metal Fast Breeder Reactor: An Environmental and Economic Critique. Baltimore: Johns Hopkins University Press, 1974. Mitchell III, W., and S. E. Turner. Breeder Reactors. Washington, DC: U.S. Atomic Energy Commission, 1971.

OTHER International Nuclear Information System. Links to Fast Reactor Related Sites. June 7, 2002 [June 21, 2002].

Liquified natural gas Natural gas is a highly desirable fuel in many respects. It burns with the release of a large amount of energy, producing almost entirely carbon dioxide and water as waste products. Except for possible greenhouse effects of carbon dioxide, these compounds produce virtually no environmental hazard. Transporting natural gas through transcontinental pipelines is inexpensive and efficient where topography allows the laying of pipes. Oceanic shipping is difficult, however, because of the flammability of the gas and the high volumes involved. The most common way of dealing with these problems is to condense the gas first and then transport it in the form of liquified natural gas (LNG). But LNG must be maintained at temperatures of about -260°F (-160°C) and protected from leaks and flames during loading and unloading. See also Fossil fuels



Lithology Lithology is the study of rocks, emphasizing their macroscopic physical characteristics, including grain size, mineral composition, and color. Lithology and its related field, petrography (the description and systematic classification of rocks), are subdisciplines of petrology, which also considers microscopic and chemical properties of minerals and rocks as well as their origin and decay.

Littoral zone In marine systems, littoral zone is synonymous with intertidal zone and refers to the area on marine shores that is periodically exposed to air during low tide. The freshwater littoral zone is that area near the shore characterized by submerged, floating, or emergent vegetation. The width of a particular littoral zone may vary from several miles to a few feet. These areas typically support an abundance of organisms and are important feeding and nursery areas for fishes, crustaceans, and birds. The distribution and abundance of individual species in the littoral zone is dependent on predation and competition as well as tolerance of physical factors. See also Neritic zone; Pelagic zone

Loading The term loading has a wide variety of specialized meanings in various fields of science. In general, all refer to the addition of something to a system, just as loading a truck means filling it with objects. In the science of acoustics, for example, loading refers to the process of adding materials to a speaker in order to improve its acoustical qualities. In environmental science, loading is used to describe the contribution made to any system by some component. One might analyze, for example, how an increase in chlorofluorocarbon (CFC) loading in the stratosphere might affect the concentration of ozone there.

Logging Logging is the systematic process of cutting down trees for lumber and wood products. The method of logging called clearcutting, in which entire areas of forests are cleared, is the most prevalent practice used by lumber companies. Clearcutting is the cheapest and most efficient way to harvest a forest’s available resources. This practice drastically alters the forest ecosystem, and many plants and animals are displaced or destroyed by it. After clearcutting is performed on forests, forestry management techniques may be introduced in order to manage the growth of new trees on the cleared land.

Selective logging is an alternative to clearcutting. In selective logging, only certain trees in a forest are chosen to be logged, usually on the basis of their size or species. By taking a smaller percentage of trees, the forest is protected from destruction, and fragile plants and animals in the forest ecosystem are more likely to survive. New, innovative techniques offer alternatives for preserving the forest. For example, the Shelterwood Silvicultural System harvests mature trees in phases. First, part of the original stand is removed to promote growth of the remaining trees. After this occurs, regeneration naturally follows using seeds provided by the remaining trees. Once regeneration has occurred, the remaining mature trees are harvested.

Early logging equipment included long, two-man straight saws and teams of animals to drag trees away. After World War II, technological advances made logging easier. The bulldozer and the helicopter allowed loggers to enter new and previously untouched areas. The chainsaw allowed loggers to cut down many more trees each day. Today, enormous machines known as feller-bunchers take the place of human loggers. These machines use a hydraulic clamp that grasps the individual tree and huge shears that cut through it in one swift motion.

High demands for lumber and forest products have caused prolific and widespread commercial logging. Certain methods of timber harvesting allow for subsequent regeneration, while others cause deforestation, or the irreversible creation of a non-forest condition. Deforestation significantly changed the landscape of the United States. Some observers remarked as early as the mid-1700s upon the rapid changes made to the forests from the East Coast to the Ohio River Valley.
Often, the lumber in forests was burned away so that the early settlers could build farms upon the rich soil that had been created by the forest ecosystem. The immediate results of deforestation are major changes to Earth’s landscapes and diminishing wildlife habitats. Longer-range results of deforestation, including unrestrained commercial logging, may include damage to Earth’s atmosphere and the unbalancing of living ecosystems. Forests help to remove carbon dioxide from the air. Through the process of photosynthesis, forests release oxygen into the air. A single acre of temperate forest releases more than six tons of oxygen into the atmosphere every year. In the last 150 years, deforestation, together with the burning of fossil fuels, has raised the amount of carbon dioxide in the atmosphere by more than 25%. It has been theorized that this has contributed to global warming, the gradual increase in Earth’s surface temperature caused by the accumulation of heat-trapping gases. Human beings are still learning how to measure their need for wood against their need for a viable environment for themselves and other life forms.

Although human activity, especially logging, has decimated many of the world’s forests and the life within them, some untouched forests still remain. These forests are known as old-growth or ancient-growth forests. Old-growth forests are at the center of a heated debate between environmentalists, who wish to preserve them, and the logging industry, which continually seeks new and profitable sources of lumber and other forest products. Very little of the original uncut North American forest still remains. It has been estimated that the United States has lost over 96% of its old-growth forests. This loss continues as logging companies become more attracted to ancient-growth forests, which contain larger, more profitable trees. A majority of old-growth forests in the United States are in Alaska and the Pacific Northwest. On the global level, barely 20% of the old-growth forests still remain, and the South American rainforests account for a significant portion of these. About 1% of the Amazon rainforest is deforested each year. At the present rate of logging around the world, old-growth forests could be gone within the first few decades of the twenty-first century unless effective conservation programs are instituted. As technological advancements of the twentieth century dramatically increased the efficiency of logging, there was also a growth in understanding about the contribution of the forest to the overall health of the environment, including the effect of logging upon that health. Ecologists, who are scientists that study the complex relationships within natural systems, have determined that logging can affect the health of air, soil, water, plant life, and animals. For instance, clearcutting was at one time considered a healthy forestry practice, as proponents claimed that clearing a forest enabled the growth of new plant life, sped the process of regeneration, and prevented fires.
The American Forest Institute, an industry group, ran an ad in the 1970s that stated, “I’m clearcutting to save the forest.” Ecologists have come to understand that clearcutting old-growth forests has a devastating effect on plant and animal life, and affects the health of the forest ecosystem from its rivers to its soil. Old-growth trees, for example, provide an ecologically diverse habitat including woody debris and fungi that contribute to nutrient-rich soil. Furthermore, many species of plants and wildlife, some still undiscovered, are dependent upon old-growth forests for survival. The huge canopies created by old-growth trees protect the ground from water erosion when it rains, and their roots help to hold the soil together. This in turn maintains the health of rivers and streams, upon which fish and other aquatic life depend. In the Pacific Northwest, for example, ecologists have connected the health of the salmon population with the health of the forests and the logging practices therein. Ecologists now understand that clearcutting and the planting of new trees, no matter how scientifically managed, cannot replace the wealth of biodiversity maintained by old-growth forests.

The pace of logging is dictated by the consumer demand for lumber and wood products. In the United States, for instance, the average size of new homes doubled between 1970 and 2000, and the forests ultimately bear the burden of the increasing consumption of lumber. In the face of widespread logging, environmentalists have become more desperate to protect ancient forests. There is a history of controversy between the timber industry and environmentalists regarding the relationship between logging and the care of forests. On the one hand, the logging industry has seen forests as a source of wealth, economic growth, and jobs. On the other hand, environmentalists have viewed these same forests as a source of recreation, spiritual renewal, and as living systems that maintain the overall environmental health. In the 1980s, a controversy raged between environmentalists and the logging industry over the protection of the northern spotted owl, a threatened species of bird whose habitat is the old-growth forest of the Pacific Northwest. Environmentalists appealed to the Endangered Species Act of 1973 to protect some of these old-growth forests. In other logging controversies, some environmentalists chained themselves to old-growth trees to prevent their destruction, and one activist, Julia Butterfly Hill, lived in an old-growth California redwood tree for two years in the 1990s to prevent it from being cut down. The clash between environmentalists and the logging industry may become more intense as the demand for wood increases and supplies decrease. However, in recent years these opposing views have been tempered by discussion of concepts such as responsible forest management to create sustainable growth, in combination with preservation of protected areas. Most of the logging in the United States occurs in the national forests. From the point of view of the U.S.
Forest Service, logging provides jobs, helps manage the forest in some respects, prevents logging in other parts of the world, and helps eliminate the danger of forest fires. To meet the demands of the logging industry, the national forests have been developed with a labyrinth of logging roads and contain vast areas that have been devastated by clearcutting. In the United States there are enough logging roads in the National Forests to circle the earth 15 times, roads that speed up soil erosion that then washes away fertile topsoil and pollutes streams and rivers. The Roadless Initiative was established in 2001 to protect 60 million acres (24 million ha) of national forests. The initiative was designed by the Clinton administration to discourage logging and taxpayer-supported road building on public lands. The goal was to establish total and permanent protection for designated roadless areas. Advocates of the initiative contended that roadless areas encompassed
some of the best wildlife habitats in the nation, while Forest Service officials argued that banning road building would significantly reduce logging in these areas. Under the initiative, more than half of the 192 million acres (78 million ha) of national forest would still remain available for logging and other activities. This initiative was considered one of the most important environmental protection measures of the Clinton administration.

Illegal logging has become a problem with the growing worldwide demand for lumber. For example, the World Bank predicted that if Indonesia did not halt all current logging, it would lose its entire forest within the next 10 to 15 years. Estimates indicate that up to 70% of the wood harvested in Indonesia comes from illegal logging practices. Much of the timber being taken is sent to the United States. Indigenous peoples of Indonesia are being displaced from their traditional territories. Wildlife, including endangered tigers, elephants, rhinos, and orangutans, are also being displaced and may be threatened with extinction. In 2002 Indonesia placed a temporary moratorium on logging in an effort to stop illegal logging.

Other countries around the world were addressing logging issues in the early twenty-first century. In China, 160 million acres (65 million ha) out of 618 million acres (250 million ha) were put under state protection. Loggers turned in their tools to become forest rangers, working for the government in order to safeguard trees from illegal logging. China has set aside millions of acres of forests for protection, particularly those forests that are crucial sources of fresh water. China also announced that it was planning to further reduce its timber output in order to restore and enhance the life-sustaining abilities of its forests. [Douglas Dupler]
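The paired acre/hectare figures and the logging-road comparison in this entry can be checked with standard conversions. The Earth-circumference value of 24,901 miles is an added reference figure, not from the entry:

```python
# Check the entry's acre -> hectare pairs against the international
# definition, 1 acre = 0.40468564224 ha. The entry rounds to whole
# millions of hectares. The 24,901-mile equatorial circumference is an
# added reference value, not a figure from the entry.

ACRE_TO_HA = 0.40468564224
EARTH_CIRCUMFERENCE_MI = 24_901

pairs = [(60e6, 24e6), (192e6, 78e6), (160e6, 65e6), (618e6, 250e6)]
for acres, quoted_ha in pairs:
    computed = acres * ACRE_TO_HA
    print(f"{acres / 1e6:.0f}M acres = {computed / 1e6:.1f}M ha "
          f"(entry: {quoted_ha / 1e6:.0f}M ha)")

# Logging roads said to circle the earth 15 times:
print(f"{15 * EARTH_CIRCUMFERENCE_MI:,} miles of road")  # 373,515 miles
```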

RESOURCES BOOKS Dietrich, William. The Final Forest: The Battle for the Last Great Trees of the Pacific Northwest. New York: Simon & Schuster, 1992. Durbin, Kathie. Tree Huggers: Victory, Defeat and Renewal in the Northwest Ancient Forest Campaign. Seattle, WA: Mountaineers, 1996. Hill, Julia Butterfly. The Legacy of Luna: The Story of a Tree, a Woman, and the Struggle to Save the Redwoods. San Francisco: Harper, 2000. Luoma, Jon R. The Hidden Forest. New York: Henry Holt and Co., 1999. Nelson, Sharlene P., and Ted Nelson. Bull Whackers to Whistle Punks: Logging in the Old West. New York: Watts, 1996.

PERIODICALS Alcock, James. “Amazon Forest Could Disappear, Soon.” Science News, July 14, 2001. De Jong, Mike. “Optimism Over Lumber.” Maclean’s, November 29, 2001, 16. Kerasote, Ted. “The Future of our Forests.” Audubon, January/February 2001, 44. Murphy, Dan. “The Rise of Robber Barons Speeds Forest Decline.” Christian Science Monitor, August 14, 2001, 8.

OTHER American Lands Home Page. [cited July 2002]. Global Forest Watch Home Page. [cited July 2002]. SmartWood Program of the Rainforest Alliance. [cited July 2002].

Logistic growth Assuming the rate of immigration is the same as emigration, population size increases when births exceed deaths. As population size increases, population density increases, and the supply of limited available resources per organism decreases. There is thus less food and less space available for each individual. As food, water, and space decline, fewer births or more deaths may occur, and this imbalance continues until the number of births is equal to the number of deaths at a population size that can be sustained by the available resources. This equilibrium level is called the carrying capacity for that environment. A temporary and rapid increase in population may be due to a period of optimum growth conditions including physical and biological factors. Such an increase may push a population beyond the environmental carrying capacity. This sudden burst will be followed by a decline, and the population will maintain a steady fluctuation around the carrying capacity. Other population controls, such as predators and weather extremes (drought, frost, and floods), keep populations below the carrying capacity. Some environmentalists believe that the human population has exceeded the earth’s carrying capacity. Logistic growth, then, refers to growth rates that are regulated by internal and external factors that establish an equilibrium with environmental resources. The sigmoid (idealized S-shaped) curve illustrates this logistic growth where environmental factors limit population growth. In this model, a low-density population begins to grow slowly, then goes through an exponential or geometric phase, and then levels off at the environmental carrying capacity. See also Exponential growth; Growth limiting factors; Sustainable development; Zero population growth [Muthena Naseri]
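The sigmoid trajectory of logistic growth can be sketched with a discrete version of the logistic equation, dN/dt = rN(1 − N/K); the growth rate, carrying capacity, and starting size below are illustrative values:

```python
# Discrete-time sketch of logistic growth, dN/dt = r*N*(1 - N/K).
# The growth rate r, carrying capacity K, and starting size are
# illustrative values, not data from the entry.

def logistic_trajectory(n0, r, k, steps):
    """Population sizes under logistic growth, one Euler step at a time."""
    sizes = [float(n0)]
    for _ in range(steps):
        n = sizes[-1]
        sizes.append(n + r * n * (1 - n / k))  # growth slows as n nears k
    return sizes

traj = logistic_trajectory(n0=10, r=0.3, k=1000, steps=60)

# The curve is sigmoid: slow start, near-exponential middle, then a
# plateau at the carrying capacity.
print(round(traj[5]))    # still small: early growth is slow in absolute terms
print(round(traj[-1]))   # ~1000: the population levels off at K
```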

844

Dr. Bjørn Lomborg (1965 – ) Danish political scientist In 2001, Cambridge University Press published The Skeptical Environmentalist: Measuring the Real State of the World by

Environmental Encyclopedia 3 the Danish statistician Bjørn Lomborg. The book triggered a firestorm of criticism, with many well-known scientists denouncing it as an effort to “confuse legislators and regulators, and poison the well of public environmental information.” In January 2002, Scientific American published a series of articles by five distinguished environmental scientists contesting Lomborg’s claims. To some observers, the ferocity of the attack was surprising. Why so much furor over a book that claims to have good news about our environmental condition? Lomborg portrays himself as an “left-wing, vegetarian, Greenpeace member,” but says he worries about the unrelenting “doom and gloom” of mainstream environmentalism. He describes what he regards as an all-pervasive ideology that says, among other things, “Our resources are running out. The population is ever growing, leaving less and less to eat. The air and water are becoming ever more polluted. The planet’s species are becoming extinct in vast numbers. The forests are disappearing, fish stocks are collapsing, and coral reefs are dying.” This ideology has pervaded the environmental debate so long, Lomborg says, “that blatantly false claims can be made again and again, without any references, and yet still be believed.” In fact, Lomborg tells us, these allegations of the collapse of ecosystems are “simply not in keeping with reality. We are not running out of energy or natural resources. There will be more and more food per head of the world’s population. Fewer and fewer people are starving. In 1900 we lived for an average of 30 years; today we live 67. According to the UN we have reduced poverty more in the last 50 years than in the preceding 500, and it has been reduced in practically every country.” He goes on to challenge conventional scientific assessment of global warming, forest losses, fresh water scarcity, energy shortages, and a host of other environmental problems. 
Is Lomborg being deliberately (and some would say, hypocritically) optimistic, or are others being unreasonably pessimistic? Is this simply a case of regarding the glass as half full versus half empty? The inspiration to look at environmental statistics, Lomborg says, was a 1997 interview with the controversial economist Dr. Julian L. Simon in Wired magazine. Simon, who died in 1998, spent a good share of his career arguing that the “litany” of the Green movement—human overpopulation leading to starvation and resource shortages—was premeditated hyperbole and fear mongering. The truth, Simon claimed, is that the quality of human life is improving, not declining. Lomborg felt sure that Simon’s allegations were “simple American right-wing propaganda.” It should be a simple matter, he thought, to gather evidence to show how wrong Simon was. Back at his university in Denmark, Lomborg set out with 10 of his sharpest students to study Simon’s
claims. To their surprise, the group found that while not everything Simon said was correct, his basic conclusions seemed sound. When Lomborg began to publish these findings in a series of newspaper articles in the London Guardian in 1998, he stirred up a hornet’s nest. Some of his colleagues at the University of Aarhus set up a website to denounce the work. When the whole book came out, their fury only escalated. Altogether, between 1998 and 2002, more than 400 articles appeared in newspapers and popular magazines either attacking or defending Lomborg and his conclusions. In general, the debate divides between mostly conservative supporters on one side and progressive environmental activists and scientists on the other. The Wall Street Journal described the Skeptical Environmentalist as “superbly documented and readable.” The Economist called it “a triumph.” A review in the Daily Telegraph (London) declared it “the most important book on the environment ever written.” A review in the Washington Post said it is a “richly informative, lucid book, a magnificent achievement.” And The Economist, which started the debate by publishing his first articles, announced that “this is one of the most valuable books on public policy—not merely on environmental policy—to have been written in the past ten years.” Among most environmentalists and scientists, on the other hand, Lomborg has become anathema. A widely circulated list of “Ten things you should know about the Skeptical Environmentalist” charged that the book is full of pseudo-scholarship, statistical fallacies, distorted quotations, inaccurate or misleading citations, misuse of data, interpretations that contradict well-established scientific work, and many other serious errors. This list accuses Lomborg of having no professional credentials or training—and having done no professional research—in ecology, climate science, resource economics, environmental policy, or other fields covered by his book.
In essence, they complain, “Who is this guy, and how dare he say all this terrible stuff?” Harvard University Professor E. O. Wilson, one of the world’s most distinguished biologists, deplores what he calls “the Lomborg scam,” and says that he and his kind “are the parasite load on scholars who earn success through the slow process of peer review and approval.” It often seems that more scorn and hatred is focused on those, like Lomborg, who are viewed as turncoats and heretics, than on those who are actually out despoiling the environment and squandering resources. Perhaps the most withering criticism of Lomborg comes from his reporting of statistics and research results. Stephen Schneider, a distinguished climate scientist from Stanford University, for instance, writes in Scientific American: “most of [Lomborg’s] nearly 3,000 citations are to secondary literature and media articles. Moreover, even when cited, the peer-reviewed articles come elliptically from those studies
that support his rosy view that only the low end of the uncertainty ranges [of climate change] will be plausible. IPCC authors, in contrast, were subjected to three rounds of review by hundreds of outside experts. They didn’t have the luxury of reporting primarily from the part of the community that agrees with their individual views.” Lomborg also criticizes extinction rate estimates as much too large, citing evidence from places like Brazil’s Atlantic Forest, where about 90% of the forest has been cleared without large numbers of recorded extinctions. Thomas Lovejoy, chief biodiversity adviser to the World Bank, responds, “First, this is a region with very few field biologists to record either species or their extinction. Second, there is abundant evidence that if the Atlantic forest remains as reduced and fragmented as it is, it will lose a sizable fraction of the species that at the moment are able to hang on.” Part of the problem is that Lomborg is unabashedly anthropocentric. He dismisses the value of biodiversity, for example. As long as there are plants and animals to supply human needs, what does it matter if a few non-essential species go extinct? In Lomborg’s opinion, poverty, hunger, and human health problems are much more important problems than endangered species or possible climate change. He isn’t opposed to reducing greenhouse gas emissions, for instance, but argues that rather than spend billions of dollars per year to try to meet Kyoto standards, we could provide a healthy diet, clean water, and basic medical services to everyone in the world, thereby saving far more lives than we might do by reducing global climate change. Furthermore, Lomborg believes, solar energy will probably replace fossil fuels within 50 years anyway, making worries about increasing CO2 concentrations moot.
Lomborg infuriates many environmentalists by being intentionally optimistic, cheerfully predicting that progress in population control, use of renewable energy, and unlimited water supplies from desalination technology will spread to the whole world, thus avoiding crises in resource supplies and human impacts on our environment. Others, particularly Lester Brown of the Worldwatch Institute and Professor Paul Ehrlich of Stanford University, according to Lomborg, seem to deliberately adopt worst-case scenarios. Protagonists on both sides of this debate use statistics selectively and engage in deliberate exaggeration to make their points. As Stephen Schneider, one of the most prominent anti-Lomborgians, said in an interview in Discover in 1989, “[We] are not just scientists but human beings as well. And like most people we’d like to see the world a better place. To do that we need to get some broad-based support, to capture the public’s imagination. That, of course, entails getting loads of media coverage. So we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have. Each of us has to decide what the right balance is between being effective and being honest.”

As is often the case in complex social issues, there is truth and error on both sides of this debate. It takes good critical thinking skills to make sense of the flurry of charges and countercharges. In the end, what you believe depends on your perspective and your values. Future events will show whether Bjørn Lomborg or his critics are correct in their interpretations and predictions. In the meantime, it’s probably healthy to have the vigorous debate engendered by strongly held beliefs and articulate partisans from many different perspectives.

In November 2001, Lomborg was selected a Global Leader for Tomorrow by the World Economic Forum, and in February 2002, he was named director of Denmark’s national Environmental Assessment Institute. In addition to the use of statistics in environmental issues, his professional interests are the simulation of strategies in collective action dilemmas, the simulation of party behavior in proportional voting systems, and the use of surveys in public administration. [William Cunningham Ph.D.]

RESOURCES

BOOKS

Lomborg, Bjørn. The Skeptical Environmentalist: Measuring the Real State of the World. Cambridge: Cambridge University Press, 2001.

PERIODICALS

Bell, Richard C. “Media Sheep: How Did The Skeptical Environmentalist Pull the Wool over the Eyes of So Many Editors?” Worldwatch 15, no. 2 (2002): 11–13.

Dutton, Denis. “Greener Than You Think.” Washington Post, October 21, 2001.

Schneider, Stephen. “Global Warming: Neglecting the Complexities.” Scientific American 286 (2002): 62–65.

Wade, Nicholas. “Bjørn Lomborg: A Chipper Environmentalist.” The New York Times, August 7, 2001.

OTHER

Anti-Lomborgian Web Site. December 2001 [cited July 9, 2002].

Bjørn Lomborg Home Page. 2002 [cited July 9, 2002].

Regis, Ed. “The Doomslayer: The environment is going to hell, and human life is doomed to only get worse, right? Wrong. Conventional wisdom, meet Julian Simon, the Doomslayer.” Wired, February 1997 [cited July 9, 2002].

Wilson, E. O. “Vanishing Point: On Bjørn Lomborg and Extinction.” Grist, December 12, 2001 [cited July 9, 2002].

World Resources Institute and World Wildlife Fund. Ten Things Environmental Educators Should Know About The Skeptical Environmentalist. January 2002 [cited July 9, 2002].


London Dumping Convention see Convention on the Prevention of Marine Pollution by Dumping of Waste and Other Matter (1972)

Barry Holstun Lopez

(1945 – )

American environmental writer Barry Lopez has often called his own nonfiction writing natural history, and he is often categorized as a nature writer. This partially describes his work, but limiting him to that category is misleading, partly because his work transcends the kinds of subjects implicit in that classification. He could as well, for example, be called a travel writer, though that label also does not completely describe his work. He has in addition published a number of unusual works of fiction, and even has one children’s book (Crow and Weasel) to his credit.

Barry Lopez was born in Port Chester, New York, but he spent several early childhood years in rural southern California and, at the age of 10, returned to the East to grow up in New York City. He earned a BA degree from the University of Notre Dame, followed by an MAT from the University of Oregon in 1968. His initial goal was to teach, but in the late 1960s he set out to become a professional writer, and since 1970 he has earned his living writing (as well as by lecturing and giving readings).

Lopez’s nonfiction writing transcends the category of natural history because his real topic, as Rueckert suggests, is the search for human relationships with nature, relationships that are “dignified and honorable.” Natural history as a category of literature implies a focus on primeval nature undisturbed by human activity, or at least on nature as it exists in its own right rather than as humans relate to it. Lopez is instead a practitioner of what some have called “the new naturalism—a search for the human as mirrored in nature.” His focus, then, is human ecology: the interactions of human beings with the world around them, especially the natural world.
Even his most “natural” book of natural history, Of Wolves and Men, is not just about the natural history of wolves but about how that species’ existence or “being” in the wild is affected by human perceptions and actions; it is as well about the image of the wolf in human minds. His works of fiction can be called unusual, partly because they are often presented as brief notes or sketches, and partly because they frequently blend legends, factual observations of nature, and personal meditations. Everything Lopez writes, however, is in the form of a story, whether fiction, natural history, folklore, or travel writing. His storytelling makes all of his writing enjoyable to read, easy to absorb, and often memorable.


Lopez, as a storyteller, occupies the space between truth-teller and mystic, between natural scientist and folklorist. He has written of wolves and humans, and then of people with blue skins who could not speak and, apparently, subsisted only on clean air. He writes of the land in reality and the land in our imaginations, frequently in the same text. His writings on natural history provide the reader with great detail about the places in the world he describes, but his fiction can force the shock of recognition of places in the mind. In 1998, Lopez was a finalist for the National Magazine Award in Fiction for “The Letters of Heaven,” and in 1999 he received a Lannan residency fellowship.

Barry Lopez is in part a naturalist, in the best sense of that word. He is also something of an anthropologist. He is a student of folklore and mythology. He travels widely, but he studies his own home place and local environment intently. And, of course, he is a writer. [Gerald L. Young Ph.D.]

RESOURCES

BOOKS

Lopez, Barry. About This Life: Journeys on the Threshold of Memory. Random, 1998.

———. Arctic Dreams: Imagination and Desire in a Northern Landscape. New York: Scribner, 1986.

———. Crossing Open Ground. New York: Scribner, 1988.

———. Desert Notes: Reflections in the Eye of a Raven. Kansas City: Sheed, Andrews & McMeel.

———. Lessons from the Wolverine. Illustrated by Tom Pohrt. University of Georgia Press, 1997.

———. Light Action in the Caribbean. Knopf, 2000.

Rueckert, W. H. “Barry Lopez and the Search for a Dignified and Honorable Relationship With Nature.” In Earthly Words: Essays on Contemporary American Nature and Environmental Writers, edited by J. Cooley. Ann Arbor: University of Michigan Press, 1994.

PERIODICALS

Paul, S. “Barry Lopez.” In Hewing to Experience: Essays and Reviews on Recent American Poetry and Poetics, Nature and Culture. Iowa City: University of Iowa Press, 1989.

Los Angeles Basin

The second most populous city in the United States, Los Angeles has perhaps the most fascinating environmental history of any urban area in the country. The Los Angeles Basin, into which more than 80 communities of Los Angeles County are crowded, is a trough-shaped region bounded on three sides by the Santa Monica, Santa Susana, San Gabriel, San Bernardino, and Santa Ana Mountains. On its fourth side, the county looks out over the Pacific Ocean.

The earliest settlers arrived in the Basin in 1769, when the Spaniard Gaspar de Portolá and his expedition set up camp along what is now known as the Los Angeles River. The site was eventually given the name El Pueblo de la Reyna de Los Angeles (the Town of the Queen of the Angels). For the first century of its history, Los Angeles grew very slowly. Its population in 1835 was only 1,250. By the end of the century, however, the first signs of a new trend appeared. In response to the promises of sunshine, warm weather, and “easy living,” immigrants from the East Coast began to arrive in the Basin. Its population more than quadrupled between 1880 and 1890, from 11,183 to 50,395. The rush was on, and it has scarcely abated today. The metropolitan population grew from 102,000 in 1900 to 1,238,000 in 1930, to 3,997,000 in 1950, to 9,838,861 in 2000.

The pollution facing Los Angeles today results from a complex mix of natural factors and intense population growth. The first reports of Los Angeles’s famous photochemical smog go back to 1542. The “many smokes” described by Juan Cabrillo in that year were not the same as today’s smog, but they occurred because of the geographic and climatic conditions that are responsible for modern environmental problems. The Los Angeles Basin has one of the highest probabilities of experiencing thermal inversions of any area in the United States. An inversion is an atmospheric condition in which a layer of cool air becomes trapped beneath a layer of warm air. That situation is the reverse of the normal atmospheric condition, in which a warm layer near the ground is covered by a cooler layer above it. The warm air tends to rise and the cool air tends to sink, so natural mixing occurs. When a thermal inversion occurs, in contrast, the denser cool air remains near the ground while the less dense warm air stays above it. Smoke and other pollutants released into a thermal inversion are unable to rise and tend to be trapped in the cool lower layer.
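The inversion condition described above can be stated very simply: an inversion exists wherever temperature rises, rather than falls, with altitude. The short sketch below illustrates the idea; it is not from the encyclopedia, and the sounding data are invented for demonstration.

```python
# Illustrative sketch: finding thermal-inversion layers in a vertical
# temperature profile. Normally temperature falls with altitude; an
# inversion is any layer where it rises instead (warm air atop cool air).

def inversion_layers(profile):
    """profile: list of (altitude_m, temp_c) pairs, sorted by altitude.
    Returns (bottom, top) altitude pairs of layers where temperature
    increases with height."""
    layers = []
    for (z1, t1), (z2, t2) in zip(profile, profile[1:]):
        if t2 > t1:  # temperature rising with height: inversion layer
            layers.append((z1, z2))
    return layers

# An invented morning sounding over a basin: cool surface air
# capped by a warmer layer aloft between 250 m and 500 m.
sounding = [(0, 12.0), (250, 10.5), (500, 14.0), (1000, 11.0), (1500, 8.0)]
print(inversion_layers(sounding))  # -> [(250, 500)]
```

Pollutants released below 500 m in such a profile would accumulate beneath the warm cap rather than mixing upward.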
Furthermore, horizontal movements of air, which might clear out pollution in other areas, are blocked by the mountains surrounding Los Angeles County. The lingering haze of the “many smokes” described by Cabrillo could have been nothing more than the smoke from campfires trapped by inversions that must have existed even in 1542. As population and industrial growth occurred in Los Angeles during the second half of the twentieth century, the amount of pollutants trapped in thermal inversions also grew. By the 1960s, Los Angeles had become a classic example of how modern cities were being choked by their own wastes.

The geographic location of the Los Angeles Basin contributes another factor to Los Angeles’s special environmental problems. Sunlight warms the Basin for most of the year and attracts visitors and new residents. Solar energy also fuels reactions between components of Los Angeles’s polluted air, producing chemicals even more toxic than those from which they came. The complex mixture of noxious compounds produced in Los Angeles has been given the name smog, reflecting the combination of human (smoke) and natural (fog) factors that make it possible. Smog, whose principal harmful component is ground-level ozone, can cause a myriad of health problems, including breathing difficulties, coughing, chest pains, and congestion. It may also exacerbate asthma, heart disease, and emphysema.

As Los Angeles grew in area and population, conditions that guaranteed a continuation of smog increased. The city and surrounding environs eventually grew to cover 400 square miles (1,036 square kilometers), a widespread community held together by freeways and cars. A major oil company bought the city’s public transit system, then closed it down, ensuring the wide use of automobile transportation. Thus, gases produced by the combustion of gasoline added to the city’s increasing pollution levels.

Los Angeles and the State of California have been battling air pollution for over 20 years. California now has some of the strictest emission standards of any state in the nation, and Los Angeles has begun to develop mass transit systems once again. For an area that has long depended on the automobile, however, the transition to public transportation has not been an easy one. But some measurable progress has been made in controlling ground-level ozone. In 1976, smog exceeded the state standard of 0.09 ppm on a staggering 237 days of the year. By 2001, that number had dropped to 121 days. Still, much work remains to be done; in 2000, 2001, and 2002 Los Angeles topped the American Lung Association’s annual list of most ozone-polluted cities and counties.
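The exceedance figures quoted above (237 days in 1976, 121 in 2001) are simple tallies of days on which peak ozone surpassed the 0.09 ppm state standard. A minimal sketch of that calculation follows; the readings are invented for demonstration, not real monitoring data.

```python
# Illustrative sketch: counting "smog days" against the 0.09 ppm
# state ozone standard cited in the text. Data values are invented.

STANDARD_PPM = 0.09

def exceedance_days(daily_peak_ozone):
    """daily_peak_ozone: iterable of one peak ozone reading (ppm) per day.
    Returns the number of days on which the standard was exceeded."""
    return sum(1 for reading in daily_peak_ozone if reading > STANDARD_PPM)

# One invented week of peak readings (ppm)
week = [0.07, 0.11, 0.09, 0.13, 0.08, 0.10, 0.05]
print(exceedance_days(week))  # -> 3
```

Note that a reading exactly at 0.09 ppm does not count as an exceedance under this tally; only readings above the standard do.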
Another of Los Angeles’s population-induced problems is its enormous demand for water. As early as 1900, it was apparent that the Basin’s meager water resources would be inadequate to meet the needs of the growing urban area. The city turned its sights on the Owens Valley, 200 mi (322 km) to the northeast in the Sierra Nevada. After a lengthy dispute, the city won the right to tap the water resources of this distant valley. A 200-mile water diversion public works project, the Los Angeles Aqueduct, was completed in 1913. This development did not satisfy the area’s growing need for water, however, and in the 1930s a second canal was built. This canal, the Colorado River Aqueduct, carries water from the Colorado River to Los Angeles over a distance of 444 mi (714 km). Even this proved to be inadequate, and the search for additional water sources has gone on almost without stop. In fact, one of the great ongoing debates in California is between legislators from Northern California, where the state’s major water resources are located, and their counterparts from Southern California, where the majority of the state’s people live. Since the latter contingent is larger in number, it has won many of the battles so far over distribution of the state’s water resources.

Of course, Los Angeles has also experienced many of the same problems as urban areas in other parts of the world, regardless of its special geographical character. For example, the Basin was at one time a lush agricultural area, with some of the best soil and growing conditions found anywhere. From 1910 to 1950, Los Angeles County was the wealthiest agricultural region in the nation. But as urbanization progressed, more and more farmland was sacrificed for commercial and residential development. During the 1950s, an average of 3,000 acres (1,215 hectares) of farmland per day was taken out of production and converted to residential, commercial, industrial, or transportation use.

One of the mixed blessings faced by residents of the Los Angeles Basin is the existence of large oil reserves in the area. On the one hand, the oil and natural gas contained in these reserves is a valuable natural resource. On the other hand, the presence of working oil wells in the middle of a modern metropolitan area creates certain problems. One is aesthetic, as busy pumps in the midst of barren or scraggly land contrast with sleek new glass and steel buildings. Another petroleum-related difficulty is land subsidence. As oil and gas are removed from underground, the land above them begins to sink. This phenomenon was first observed as early as 1937. Over the next two decades, subsidence reached 16 ft (5 m) at the center of the Wilmington oil fields. Horizontal shifting of up to 9 ft (2.74 m) was also recorded. Estimates of eventual subsidence of up to 45 ft (14 m) spurred the county to begin remedial measures in the 1950s.
These measures included the construction of levees to prevent seawater from flowing into the subsided area and the repressurizing of oil zones by water injection. These measures have been largely successful, at least to the present time, in halting the subsidence of the oil field. Los Angeles’s annual ritual of pumping water into underground aquifers for storage in anticipation of the long, dry summer season has also been responsible for elevation shifts in the region. Researchers with the United States Geological Survey (USGS) observed that the ground surface of a 20 by 40 km (12 by 25 mi) area of Los Angeles rises and falls approximately 4 in (10–11 cm) annually in conjunction with the water storage activities.

As if population growth itself were not enough, the Basin poses its own set of natural challenges to the community. The area has a typical Mediterranean climate, with long hot summers and short winters with little rain. Summers are also the occasion of Santa Ana winds, severe windstorms in which hot air sweeps down out of the mountains and across the Basin. Urban and forest fires that originate during a Santa Ana wind not uncommonly go out of control, causing enormous devastation to both human communities and the natural environment.

The Los Angeles Basin also sits within a short distance of one of the most famous fault systems in the world, the San Andreas Fault. Other minor faults spread out around Los Angeles on every side. Earthquakes are common in the Basin, and the most powerful earthquake in Southern California history, an estimated 8.25 magnitude, struck in 1857. That quake destroyed the military base at Fort Tejon, although only two lives were lost in the disaster; the tiny community of Los Angeles lay some 60 mi (97 km) from the epicenter. Like San Franciscans, residents of the Los Angeles Basin live not wondering whether another earthquake will occur, but only when “The Big One” will hit. See also Air quality; Atmospheric inversion; Environmental Protection Agency (EPA); Mass transit; Oil drilling

[David E. Newton and Paula Anne Ford-Martin]

RESOURCES

BOOKS

Davis, Mike. Ecology of Fear: Los Angeles and the Imagination of Disaster. New York: Vintage Books, 1999.

Gumprecht, Blake. The Los Angeles River: Its Life, Death, and Possible Rebirth. Baltimore: Johns Hopkins University Press, 2001.

PERIODICALS

Hecht, Jeff. “Finding Fault.” New Scientist 171 (August 2001): 8.

OTHER

American Lung Association. State of the Air 2002 Report. [cited July 9, 2002].

South Coast Air Quality Management District. Smog Levels. [cited July 9, 2002].

United States Geological Survey (USGS). Earthquake Hazards Program: Northern California. [cited July 9, 2002].

Love Canal

Probably the most infamous of the nation’s hazardous waste sites, the Love Canal neighborhood of Niagara Falls, New York, was largely evacuated of its residents in 1980 after testing revealed high levels of toxic chemicals and genetic damage. Between 1942 and 1953, the Olin Corporation and the Hooker Chemical Corporation buried over 20,000 tons of deadly chemical waste in the canal, much of which is known to be capable of causing cancer, birth defects, miscarriages, and other health disorders. In 1953, Hooker deeded the land to the local board of education but did not clearly warn of the deadly nature of the chemicals buried there, even when homes and playgrounds were built in the area.

Weeds grow around boarded up homes in Love Canal, New York in 1980. (Corbis-Bettmann. Reproduced by permission.)

The seriousness of the situation became apparent in 1976, when years of unusually heavy rains raised the water table and flooded basements. As a result, houses began to reek of chemicals, and children and pets experienced chemical burns on their feet. Plants, trees, gardens, and even some pets died. Soon neighborhood residents began to experience an extraordinarily high number of illnesses, including cancer, miscarriages, and deformities in infants. Alarmed by the situation, and frustrated by inaction on the part of local, state, and federal governments, a 27-year-old housewife named Lois Gibbs began to organize her neighbors. In 1978 they formed the Love Canal Homeowners Association and began a two-year fight to have the government relocate them to another area.

In August 1978 the New York State Health Commissioner recommended that pregnant women and young children be evacuated from the area, and subsequent studies documented the extraordinarily high rate of birth defects, miscarriages, genetic damage, and other health effects. In 1979, for example, of 17 pregnant women in the neighborhood, only two gave birth to normal children. Four had miscarriages, two suffered stillbirths, and nine had babies with defects. Eventually, the state of New York declared the area “a grave and imminent peril” to human health. Several hundred families were moved out of the area, and the others were advised to leave. The school was closed and barbed wire placed around it. In October 1980 President Jimmy Carter declared Love Canal a national disaster area. In the end, some 60 families decided to remain in their homes, rejecting the government’s offer to buy their properties. The cost of the cleanup of the area has been estimated at $250 million. Ironically, twelve years after the neighborhood was abandoned, the state of New York approved plans to allow families to move back to the area, and homes were allowed to be sold.

Love Canal is not the only hazardous waste site in the country that has become a threat to humans--only the best known. Indeed, the United States Environmental Protection Agency has estimated that up to 2,000 hazardous waste disposal sites in the United States may pose “significant risks to human health or the environment,” and has called the toxic waste problem “one of the most serious problems the nation has ever faced.” See also Contaminated soil; Hazardous waste site remediation; Hazardous waste siting; Leaching; Storage and transport of hazardous material; Stringfellow Acid Pits; Toxic substance

[Lewis G. Regenstein]

RESOURCES

BOOKS

Gibbs, Lois. Love Canal: My Story. Albany: State University of New York Press, 1982.

Regenstein, L. G. How to Survive in America the Poisoned. Washington, DC: Acropolis Books, 1982.

PERIODICALS

Brown, M. H. “Love Canal Revisited.” Amicus Journal 10 (Summer 1988): 37–44.

Kadlecek, M. “Love Canal—10 Years Later.” Conservationist 43 (November–December 1988): 40–43.

———. “A Toxic Ghost Town: Ten Years Later, Scientists Are Still Assessing the Damage From Love Canal.” The Atlantic 263 (July 1989): 23–26.

Sir James Ephraim Lovelock

(1919 – )

English chemist Sir James Lovelock is the framer of the Gaia hypothesis and the developer of, among many other devices, the electron capture detector for gas chromatography. The high selectivity and great sensitivity of this detector made possible not only the identification of chlorofluorocarbons in the atmosphere but also the measurement of many pesticides, thus providing the raw data that underlie Rachel Carson’s Silent Spring.

Lovelock was born in Letchworth Garden City, earned his degree in chemistry from Manchester University, and took a Ph.D. in medicine from the London School of Hygiene and Tropical Medicine. His early studies in medical topics included work at Harvard University Medical School and Yale University. He spent three years as a professor of chemistry at Baylor University College of Medicine in Houston, Texas. It was from that position that his work with the Jet Propulsion Laboratory for NASA began.

The Gaia hypothesis, Lovelock’s most significant contribution to date, grew out of his work with colleagues at the lab. While attempting to design experiments for life detection on Mars, Lovelock, Dian Hitchcock, and later Lynn Margulis posed the question, “If I were a Martian, how would I go about detecting life on Earth?”

Sir James E. Lovelock at his home in Cornwall with the device he invented to measure chlorofluorocarbons in the atmosphere. (Photograph by Anthony Howarth. Photo Researchers Inc. Reproduced by permission.)

Looking at the problem this way, the team soon realized that our atmosphere is a clear sign of life, since it is impossible to explain as a product of strictly chemical equilibria. One consequence of viewing life on this or another world as a single homeostatic organism is that energy will be found concentrated in certain locations rather than spread evenly, frequently (or even predominantly) in the form of chemical energy. Thus, against all probability, the earth has an atmosphere containing about 21% free oxygen and has had about this much for millions of years. Lovelock bestowed on this superorganism comprising the whole of the biosphere the name “Gaia,” one spelling of the name of the Greek earth-goddess, at the suggestion of a neighbor, William Golding, author of Lord of the Flies.

Lovelock’s hypothesis was initially attacked as requiring the whole of life on earth to have purpose, and hence, in some sense, common intelligence. In response, Lovelock developed a computer model called “Daisyworld,” in which the presence of black and white daisies alone controlled the global temperature of the planet to a nearly constant value despite a major increase in the heat output of its sun. The concept that the biosphere keeps the environment constant has also been attacked as sanctioning environmental degradation, and accusers took a cynical view of Lovelock’s service to the British petrochemical industry. However, the hypothesis has served the environmental community well in suggesting many ideas for further studies, virtually all of which have given results predicted by the hypothesis.

Since 1964, Lovelock has operated a private consulting practice, first out of his home in Bowerchalke, near Salisbury, England, and later from a home near Launceston, Cornwall. He has authored over 200 scientific papers, covering research that ranged from techniques for freezing and successfully reviving hamsters to global systems science, which he has proposed to call geophysiology. Lovelock has been honored by his peers worldwide with numerous awards and honorary degrees, including election as a Fellow of the Royal Society. He was also named a Commander of the Order of the British Empire by Queen Elizabeth II in 1990.
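The Daisyworld model described above can be captured in a short simulation. The sketch below follows the spirit of the Watson and Lovelock formulation; the parameter values and the simplified linear local-temperature rule are illustrative assumptions, not the published model's exact form.

```python
# A minimal Daisyworld sketch: daisy albedo feedback regulates planetary
# temperature as solar luminosity varies. Parameters are illustrative.

SIGMA = 5.67e-8     # Stefan-Boltzmann constant (W m^-2 K^-4)
S = 917.0           # nominal solar flux at the planet (W m^-2)
A_SOIL, A_WHITE, A_BLACK = 0.5, 0.75, 0.25   # surface albedos
Q = 20.0            # local heating per unit albedo contrast (K)
DEATH = 0.3         # daisy death rate
OPT_T = 295.5       # optimal growth temperature (K)

def growth(t):
    """Parabolic growth response, zero outside roughly 278-313 K."""
    return max(1.0 - 0.003265 * (OPT_T - t) ** 2, 0.0)

def daisyworld(luminosity, steps=2000, dt=0.1):
    """Integrate daisy cover to equilibrium for a relative luminosity.
    Returns (planetary_temp_K, white_cover, black_cover)."""
    white, black = 0.01, 0.01                 # seed cover fractions
    temp = 0.0
    for _ in range(steps):
        bare = max(1.0 - white - black, 0.0)
        albedo = bare * A_SOIL + white * A_WHITE + black * A_BLACK
        temp = (S * luminosity * (1.0 - albedo) / SIGMA) ** 0.25
        # local daisy temperatures deviate from the planetary mean
        t_white = Q * (albedo - A_WHITE) + temp   # white patches run cooler
        t_black = Q * (albedo - A_BLACK) + temp   # black patches run warmer
        white += white * (bare * growth(t_white) - DEATH) * dt
        black += black * (bare * growth(t_black) - DEATH) * dt
        white, black = max(white, 0.01), max(black, 0.01)  # seeds persist
    return temp, white, black

# As luminosity rises, shifting daisy populations keep the planet near
# the growth optimum instead of tracking the sun's output directly.
for lum in (0.8, 1.0, 1.2):
    t, w, b = daisyworld(lum)
    print(f"L={lum:.1f}  T={t:.1f} K  white={w:.2f}  black={b:.2f}")
```

Running the sketch shows the regulated temperatures varying far less across luminosities than a bare, lifeless planet's would, which is the point Lovelock's model was built to make: regulation emerges from ordinary competition, with no purpose or intelligence required.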

[James P. Lodge Jr.]

RESOURCES

BOOKS

Joseph, L. E. Gaia: The Growth of an Idea. New York: St. Martin’s Press, 1990.

Lovelock, J. The Ages of Gaia: A Biography of Our Living Earth. New York: Norton, 1988.

———. Gaia: A New Look at Life on Earth. Oxford: Oxford University Press, 1979.

———, and M. Allaby. The Greening of Mars. New York: St. Martin’s Press, 1984.

Amory Bloch Lovins

(1947 – )

American physicist and energy conservationist Amory Lovins specializes in environmentally safe and sustainable energy sources. Born in 1947 in Washington, D.C., Lovins attended Harvard and Oxford universities. He has had a distinguished career as an educator and scientist. After resigning his academic post at Oxford in 1971, Lovins became the British representative of Friends of the Earth. He has been Regents’ Lecturer at the University of California at Berkeley and has served as a consultant to the United Nations and other international and environmental organizations.

Lovins is a leading critic of hard energy paths and an outspoken proponent of soft alternatives. According to Lovins, an energy path is “hard” if the route from source to use is complex and circuitous; requires extensive, expensive, and highly complex technological means and centralized power to produce, transmit, and store the energy; produces toxic wastes or other unwanted side effects; has hazardous social uses or implications; and tends over time to harden even more, as other options are foreclosed or precluded and as the populace becomes ever more dependent on energy from a particular source.

A hard energy path can be seen in the case of fossil fuel energy. Once readily abundant and cheap, petroleum fueled the internal combustion engines and other machines on which humans have come to depend; but as that energy source becomes scarcer, oil companies must go farther afield to find it, potentially causing more environmental damage. As oil supplies run low and become more expensive, the temptation is to sustain the level of energy use by turning to another, and even harder, energy path—nuclear energy. With its complex technology, its hazards, its long-lived and highly toxic wastes, its myriad military uses, and the possibility of its falling into the hands of dictators or terrorists, nuclear power is perhaps the hardest energy path. No less important are the social and political implications of this hard path: radioactive wastes will have to be stored somewhere; nuclear power plants and plutonium transport routes must be guarded; we must make trade-offs between the ease, convenience, and affluence of people presently living and the health and well-being of future generations; and so on. A hard energy path is also one that, once taken, forecloses other options because, among other considerations, the initial investment and costs of entry are so high as to render the decision, once made, nearly irreversible. The longer-term economic and social costs of taking the hard path, Lovins argues, are astronomically high and incalculable.

Soft energy paths, by contrast, are shorter, more direct, less complex, and cheaper (at least over the long run); they are inexhaustible and renewable, have few if any unwanted side effects, have minimal military uses, and are compatible with decentralized local forms of community control and decision-making.
The old windmill on the family farm offers an early example of such a soft energy source; newer versions of the windmill, adapted to the generation of electricity, supply a more modern example. Other soft technologies include solar energy, biomass furnaces burning peat, dung, or wood chips, and methane from the rotting of vegetable matter, manure, and other cheap, plentiful, and readily available organic material. Much of Lovins’s work has dealt with the technical and economic aspects, as well as the very different social impacts and implications, of these two competing energy paths. In 1982, Lovins and his wife, Hunter, founded the Rocky Mountain Institute, a resource consulting agency. In 1989, the two won the Onassis Foundation’s first Delphi Prize for their “essential contribution towards finding alternative solutions to energy problems.” [Terence Ball]

Environmental Encyclopedia 3

Low-head hydropower

standard. See also Air quality; Best Available Control Technology (BAT); Emission standards

Low-head hydropower

Amory Lovins. (Reproduced by permission of the Rocky Mountain Institute.)

RESOURCES BOOKS Nash, H., ed. The Energy Controversy: Amory B. Lovins and His Critics. San Francisco: Friends of the Earth, 1979. Lovins, A. B. Soft Energy Paths. San Francisco: Friends of the Earth, 1977. ———, and L. H. Lovins. Energy Unbound: Your Invitation to Energy Abundance. San Francisco: Sierra Club Books, 1986.

PERIODICALS Louma, J. R. “Generate ’Nega-Watts’ Says Fossil Fuel Foe.” New York Times (April 2, 1993): B5, B8.

Lowest Achievable Emission Rate Governments have explored a number of mechanisms for reducing the amount of pollutants released to the air by factories, power plants, and other stationary sources. One mechanism is to require that a new or modified installation releases no more pollutants than determined by some law or regulation determining the lowest level of pollutants that can be maintained by existing technological means. These limits are known as the Lowest Achievable Emission Rate (LAER). The Clean Air Act of 1970 required, for example, that any new source in an area where minimum air pollution standards were not being met had to conform to the LAER

The term hydropower often suggests giant dams capable of transmitting tens of thousands of cubic feet of water per minute. Such dams are responsible for only about six percent of all the electricity produced in the United States today. Hydropower facilities do not have to be massive, however. Electrical power was once commonly generated in the United States, and still is in many places around the world, at low-head facilities: dams where the vertical drop through which water passes is a relatively short distance and/or where water flow is relatively modest. Indeed, the first commercial hydroelectric facility in the world consisted of a waterwheel on the Fox River in Appleton, Wisconsin. The facility, opened in 1882, generated enough electricity to operate lighting systems at two paper mills and one private residence. Electrical demand grew rapidly in the United States during the early twentieth century, and hydropower supplied much of that demand. By the 1930s, nearly 40 percent of the electricity used in this country was produced by hydroelectric facilities. In some Northeastern states, hydropower accounted for 55–85 percent of the electricity produced. A number of social, economic, political, and technical changes soon began to alter that pattern. Perhaps most important was the vastly increased efficiency of power plants operated by fossil fuels. The fraction of electrical power from such plants rose to more than 80 percent by the 1970s. In addition, the United States began to move from a decentralized energy system, in which many local energy companies met the needs of local communities, to large, centralized utilities that served many counties or states. In the 1920s, more than 6,500 electric power companies existed in the nation. As governments came to recognize power companies as monopolies, that number began to drop rapidly.
Companies that owned a handful of low-head dams on one or more rivers could no longer compete with their giant cousins that operated huge plants powered by oil, natural gas, or coal. As a result, hundreds of small hydroelectric plants around the nation were closed down. According to one study, over 770 low-head hydroelectric plants were abandoned between 1940 and 1980. In some states, the loss of low-head generating capacity was especially striking. Between 1950 and 1973, Consumers Power Company, one of Michigan’s two electric utilities, sold off 44 hydroelectric plants. Some experts believe that low-head hydropower should receive more attention today. Social and technical factors still prevent low-head power from seriously competing with other forms of energy on a national scale. But it may meet the needs of local communities in special circumstances. For example, a project has been undertaken to rehabilitate four low-head dams on the Boardman River in northwestern Michigan. The new facility is expected to increase the electrical energy available to nearby Traverse City and adjoining areas by about 20 percent. Low-head hydropower appears to have a more promising future in less-developed parts of the world. For example, China has more than 76,000 low-head dams that generate a total of 9,500 megawatts of power. An estimated 50 percent of rural townships depend on such plants to meet their electrical needs. Low-head hydropower is also of increasing importance in nations with fossil-fueled plants and growing electricity needs. Among the fastest growing of these are Peru, India, the Philippines, Costa Rica, Thailand, and Guatemala. See also Alternative energy sources; Electric utilities; Wave power [David E. Newton]

RESOURCES

BOOKS
Lapedes, D. N., ed. McGraw-Hill Encyclopedia of Energy. New York: McGraw-Hill, 1976.

PERIODICALS
Kakela, P., G. Chilson, and W. Patric. “Low-Head Hydropower for Local Use.” Environment (January–February 1984): 31–38.

Low-input agriculture see Sustainable agriculture

Low-level radioactive waste

Low-level radioactive waste consists of materials used in a variety of medical, industrial, commercial, and research applications. They tend to release a low level of radiation that dissipates in a relatively short period of time. Although care must be taken in handling such materials, they pose little health or environmental risk. Among the most common low-level radioactive materials are rags, papers, protective clothing, and filters. Such materials are often stored temporarily in sealed containers at their use site. They are then disposed of by burial at one of three federal sites: Barnwell, South Carolina; Beatty, Nevada; or Hanford, Washington. See also Hanford Nuclear Reservation; High-level radioactive waste; Radioactive waste; Radioactivity

LUST see Leaking underground storage tank

Sir Charles Lyell (1797 – 1875)

Scottish geologist

Lyell was born in Kinnordy, Scotland, the son of well-to-do parents. When Lyell was less than a year old, his father moved the family to the south of England, where he leased a house near the New Forest in Hampshire. Lyell spent his boyhood there, surrounded by his father’s collection of rare plants. At the age of seven, Lyell became ill with pleurisy and while recovering began to collect and study insects. As a young man he entered Oxford to study law, but he also became interested in mineralogy after attending lectures by the noted geologist William Buckland. Buckland advocated the theories of Abraham Gottlob Werner, a neptunist who postulated that a vast ocean once covered the earth and that the various rocks resulted from chemical and mechanical deposition underwater, over a long period of time. This outlook was more in keeping with the Biblical story of Genesis than that of the vulcanists or plutonists, who subscribed to the idea that volcanism, along with erosion and deposition, was the major force sculpting the Earth. While on holidays with his family, Lyell made the first of many observations in hopes of confirming the views of Buckland and Werner. However, he continued to study law and was eventually called to the bar in 1822. Lyell practiced law until 1827 while still devoting time to geology. Lyell traveled to France and Italy, where he collected extensive data that caused him to reject the neptunist philosophy. He instead drew the conclusion that volcanic activity and erosion by wind and weather were primarily responsible for the different strata, rather than the deposition of sediments from a “world ocean.” He also rejected the catastrophism of Georges Cuvier, who believed that global catastrophes, such as the biblical Great Flood, periodically destroyed life on Earth, thus accounting for the different fossils found in each rock layer.
Lyell believed change was a gradual process that occurred over a long period of time at a constant rate. This theory, known as uniformitarianism, had been postulated 50 years earlier by Scottish geologist James Hutton. It was Lyell, though, who popularized uniformitarianism in his work The Principles of Geology, which is now considered a classic text in this field. By 1850 his views and those of Hutton had become the standard among geologists. However, unlike many of his colleagues, Lyell adhered so strongly to uniformitarianism that he rejected the possibility of even limited catastrophe. Today most scientists accept that catastrophes such as meteor impacts played an important, albeit supplemental, role in the earth’s evolution. In addition to his championing of uniformitarianism, Lyell named several divisions of geologic time such as the Eocene, Miocene, and Pliocene Epochs. He also estimated the age of some of the oldest fossil-bearing rocks known at

that time, assigning them the then-startling figure of 240 million years. Even though Lyell came closer than his contemporaries to guessing the correct age, his estimate is still less than half the figure accepted by geologists today. While working on The Principles of Geology, Lyell formed a close friendship with Charles Darwin, who had outlined his evolutionary theory in The Origin of Species. Each scientist quickly accepted the work of the other (Lyell was one of two scientists who presented Darwin’s work to the influential Linnaean Society). Lyell even extended evolutionary theory to include humans at a time when Darwin was unwilling to do so. In his The Antiquity of Man (1863), Lyell argued that humans were much more ancient than creationists (those who interpreted the Book of Genesis literally) and catastrophists believed, basing his ideas on archaeological artifacts such as ancient ax heads. Lyell was knighted for his work in 1848 and created a baronet in 1864. He also served as president of the Geological Society and set up the Lyell Medal and the Lyell Fund. He died in 1875 while working on the twelfth edition of his Principles of Geology.

RESOURCES

BOOKS
Adams, Alexander B. “Reading the Earth’s Story: Charles Lyell—1797–1875.” In Eternal Quest: The Story of the Great Naturalists. New York: G. P. Putnam’s Sons, 1969.
Wilson, Leonard G. Charles Lyell—The Years to 1841: The Revolution in Geology. New Haven, CT: Yale University Press, 1972.

PERIODICALS
Camardi, Giovanni. “Charles Lyell and the Uniformity Principle.” Biology and Philosophy 14, no. 4 (October 1999): 537–560.
Kennedy, Barbara A. “Charles Lyell and ’Modern Changes of the Earth’: the Milledgeville Gully.” Geomorphology 40 (2001): 91–98.

Lysimeter

A device for 1) measuring percolation and leaching losses from a column of soil under controlled conditions, or 2) measuring gains (precipitation, irrigation, and condensation) and losses (evapotranspiration) by a column of soil. Many kinds of lysimeters exist: weighing lysimeters record the weight changes of a block of soil; non-weighing lysimeters enclose a block of soil so that losses or gains in the soil must occur through the surface; suction lysimeters are devices for removing water and dissolved chemicals from locations within the soil.

Lythrum salicaria see Purple loosestrife
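The second kind of lysimeter measurement described above amounts to a simple water balance: whatever enters the soil column and is neither stored nor drained must have left as evapotranspiration. A minimal sketch follows; the function name and the example figures are illustrative, not taken from the entry, and condensation gains are omitted for brevity.

```python
def evapotranspiration_mm(precip, irrigation, drainage, storage_change):
    """Water balance for a lysimeter-monitored soil column:
        ET = P + I - D - dS
    All terms are in millimeters of water depth. A weighing lysimeter
    measures dS directly as the change in mass of the soil block
    (1 kg of water over 1 m^2 equals 1 mm of depth); a percolation
    lysimeter collects the drainage term D at the base of the column."""
    return precip + irrigation - drainage - storage_change

# Example day: 12 mm rain, 5 mm irrigation, 3 mm drained,
# and the soil column 2 mm of water heavier than the day before.
et = evapotranspiration_mm(12.0, 5.0, 3.0, 2.0)  # 12.0 mm lost as ET
```

Negative storage change (a lighter column) simply adds to the computed evapotranspiration, which is why weighing lysimeters can resolve daily ET without measuring it directly.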



M

Robert Helmer MacArthur (1930 – 1972)

Canadian biologist and ecologist

Few scientists have combined the skills of mathematics and biology to open new fields of knowledge the way Robert H. MacArthur did in his pioneering work in evolutionary ecology. Guided by a wide-ranging curiosity for all things natural, MacArthur had a special interest in birds, and much of his work dealt primarily with bird populations. His conclusions, however, were not specific to ornithology but transformed both population biology and biogeography in general. Robert Helmer MacArthur was born in Toronto, Ontario, Canada, on April 7, 1930, the youngest son of John Wood and Olive (Turner) MacArthur. While Robert spent his first seventeen years attending public schools in Toronto, his father shuttled between the University of Toronto and Marlboro College in Marlboro, Vermont, as a professor of genetics. Robert MacArthur graduated from high school in 1947 and immediately immigrated to the United States to attend Marlboro College. He received his undergraduate degree from Marlboro in 1951 and a master’s degree in mathematics from Brown University in 1953. Upon receiving his doctorate in 1957 from Yale University under the direction of G. Evelyn Hutchinson, MacArthur headed for England to spend the following year studying ornithology with David Lack at Oxford University. When he returned to the United States in 1958, he was appointed Assistant Professor of Biology at the University of Pennsylvania. As a doctoral student at Yale, MacArthur had already proposed an ecological theory that encompassed both his background as a mathematician and his growing knowledge as a naturalist. While at Pennsylvania, MacArthur developed a new approach to the frequency distribution of species. One of the problems confronting ecologists is measuring the numbers of a specific species within a geographic area: one cannot just assume that three crows in a 10-acre corn field means that in a 1,000-acre field there will be 300 crows.

Much depends on the number of species occupying a habitat, species competition within the habitat, food supply, and other factors. MacArthur developed several ideas relating to the measurement of species within a known habitat, showing how large masses of empirical data relating to numbers of species could be processed in a single model by employing the principles of information theory. By taking the sum of the products of the frequencies of occurrence of each species and the logarithms of those frequencies, complex data could be addressed more easily. The best-known theory of frequency distribution MacArthur proposed in the late 1950s is the so-called broken stick model, one of three competing models of frequency distribution he suggested. He proposed that competing species divide up available habitat in a random fashion and without overlap, like the segments of a broken stick. In the 1960s, MacArthur noted that the theory was obsolete. The procedure of using competing explanations and theories simultaneously and comparing results, rather than relying on a single hypothesis, was also characteristic of MacArthur’s later work. In 1958, MacArthur initiated a detailed study of warblers in which he analyzed their niche division, or the way in which different species evolve to be best suited for a narrow ecological role in their common habitat. His work in this field earned him the Mercer Award of the Ecological Society of America. In the 1960s, he studied the so-called “species-packing problem.” Different kinds of habitat support widely different numbers of species. A tropical rain forest habitat, for instance, supports a great many species, while arctic tundra supports relatively few. MacArthur proposed that the number of species crowding a given habitat correlates to niche breadth. The book The Theory of Island Biogeography, written with biodiversity expert Edward O.
Wilson and published in 1967, applied these and other ideas to isolated habitats such as islands. The authors explained the species-packing problem in an evolutionary light, as an equilibrium between the rates at which new species arrive or develop and the extinction rates of species already present.


These rates vary with the size of the habitat and its distance from other habitats. In 1965 MacArthur left the University of Pennsylvania to accept a position at Princeton University. Three years later, he was named Henry Fairfield Osborn Professor of Biology, a chair he held until his death. In 1971, MacArthur discovered that he suffered from a fatal disease and had only a few years to live. He decided to concentrate his efforts on encapsulating his many ideas in a single work. The result, Geographic Ecology: Patterns in the Distribution of Species, was published shortly before his death the following year. Besides a summation of work already done, Geographic Ecology was a prospectus of work still to be carried out in the field. MacArthur was a Fellow of the American Academy of Arts and Sciences. He was also an Associate of the Smithsonian Tropical Research Institute, and a member of both the Ecological Society of America and the National Academy of Sciences. He married Elizabeth Bayles Whittemore in 1952; they had four children: Duncan, Alan, Donald, and Elizabeth. Robert MacArthur died of renal cancer in Princeton, New Jersey, on November 1, 1972, at the age of 42.

RESOURCES

BOOKS
Carey, C. W. “MacArthur, Robert Helmer.” In American National Biography, Vol. 14, edited by J. A. Garraty and M. C. Carnes. New York: Oxford University Press, 1999.
Gillispie, Charles Coulson, ed. Dictionary of Scientific Biography. Vols. 17–18. New York: Scribner, 1990.
MacArthur, Robert. Geographic Ecology: Patterns in the Distribution of Species. New York: Harper, 1972.
———. The Biology of Populations. New York: Wiley, 1966.
———, and E. O. Wilson. The Theory of Island Biogeography. Princeton, NJ: Princeton University Press, 1967.
Notable Scientists: From 1900 to the Present. Farmington Hills, MI: Gale Group, 2002.
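The two quantitative ideas in this entry have simple closed forms. The information-theoretic measure (the sum of the products of species frequencies and the logarithms of those frequencies) is the Shannon diversity index, and the broken stick model gives an expected relative abundance for each species rank. The entry states neither formula explicitly, so the sketch below, including the function names and sample counts, is our own illustration.

```python
import math

def shannon_diversity(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i), where p_i is the
    relative frequency of species i in the sample of counts."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

def broken_stick(s):
    """Expected relative abundance of the i-th most abundant of s
    species when they partition the habitat at random and without
    overlap, like a stick broken at s - 1 uniform points:
        E[p_i] = (1/s) * sum_{k=i}^{s} 1/k
    Returns the list [E[p_1], ..., E[p_s]], which sums to 1."""
    return [sum(1.0 / k for k in range(i, s + 1)) / s
            for i in range(1, s + 1)]

# Even abundances maximize diversity for a fixed species count:
h_even = shannon_diversity([25, 25, 25, 25])  # equals ln 4
h_skew = shannon_diversity([85, 5, 5, 5])     # lower: one species dominates

# Two species sharing a randomly broken stick: expected shares 0.75 and 0.25.
shares = broken_stick(2)
```

The broken stick case s = 2 can be checked by hand: a stick broken at one uniform point has an expected longer piece of 3/4 and shorter piece of 1/4, matching the formula.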

Mad cow disease

Mad cow disease, a relatively newly discovered malady, was first identified in Britain in 1986, when farmers noticed that their cows’ behavior had changed. The cows began to shake and fall, became unable to walk or even stand, and eventually died or had to be killed. It was later determined that a variation of this fatal neurological disease, formally known as Bovine Spongiform Encephalopathy (BSE), could be passed on to humans. It is still not known to what extent the population of Britain and perhaps other countries is at risk from consumption of contaminated meat and animal by-products. The significance of the BSE problem lies in its as yet unquantifiable potential to not only damage Britain’s $7.5 billion beef

Environmental Encyclopedia 3 industry, but also to endanger millions of people with the threat of a fatal brain disease. A factor that stood out in the autopsies of infected animals was the presence of holes and lesions in the brains, which were described as resembling a sponge or Swiss cheese. This was the first clue that BSE was a subtype of untreatable, fatal brain diseases called transmissible spongiform encephalopathies (TSEs). These include a very rare human malady known as Creutzfeldt-Jakob Disease (CJD), which normally strikes just one person in a million, usually elderly or middleaged. In contrast to previous cases of CJD, the new bovinerelated CJD in humans is reported to affect younger people, and manifests with unusual psychiatric and sensory abnormalities that differentiate it from the endemic CJD. The BSE-related CJD has a delayed onset that includes shaky limb movements, sudden muscle spasms, and dementia. As the epidemic of BSE progressed, the number of British cows diagnosed began doubling almost yearly, growing from some 7,000 cases in 1989, to 14,000 in 1990, to over 25,000 in 1991. The incidence of CJD in Britain was simultaneously increasing, almost doubling between 1990 and 1994 and reaching 55 cases by 1994. In response to the problem and growing public concern, the government’s main strategy was to issue reassurances. However, it did undertake two significant measures to try to safeguard public health. In July 1988, it ostensibly banned meat and bone meal from cow feed, but failed to strictly enforce the action. In November 1989, a law that intended to remove those bovine body parts considered to be the most highly infective (brain, spinal cord, spleen, tonsils, intestines, and thymus) from the public food supply was passed. A 1995 government report revealed that half of the time, the law was not being adhered to by slaughterhouses. Thus, livestock—and the public—continued to be potentially exposed to BSE. 
As the disease continued to spread, so did public fears that it might be transmissible to humans and could represent a serious threat to human health. But the British government, particularly the Ministry of Agriculture, Fisheries, and Food (MAFF), anxious to protect the multibillion-dollar cattle industry, insisted that there was no danger to humans. However, on March 20, 1996, in an embarrassing reversal, the government officially admitted that there could be a link between BSE and the unusual incidence of CJD among young people. At the time, 15 people had been newly diagnosed with CJD. Shocking the nation and making headlines around the world, the Minister of Health, Stephen Dorrell, announced to the House of Commons that consumption of contaminated beef was “the most likely explanation” for the outbreak of a new variant CJD in 10 people under the age of 42, including several teenagers. Four dairy farmers, including some with infected herds, had also contracted CJD, as had a Frenchman who died in January 1996.

Environmental Encyclopedia 3 British authorities estimated that some 163,000 British cows had contracted BSE. But other researchers, using the same database, put the figure at over 900,000, with 729,000 of them having been consumed by humans. In addition, an unknown number had been exported to Europe, traditionally a large market for British cattle and beef. Many non-meat products may also have been contaminated. Gelatin, made from ligaments, bones, skin, and hooves, is found in ice cream, lipstick, candy, and mayonnaise; keratin, made from hooves, horns, nails, and hair, is contained in shampoo; fat and tallow are used in candles, cosmetics, deodorants, soap, margarine, detergent, lubricants, and pesticides; and protein meal is made into medical and pharmaceutical products, fertilizer, and food additives. Bone meal from dead cows is used as fertilizer on roses and other plants, and is handled and often inhaled by gardeners. In reaction to the government announcement, sales of beef dropped by 70%, cattle markets were deserted, and even hamburger chains stopped serving British beef. Prime Minister Major called the temporary reaction “hysteria” and blamed the press and opposition politicians for fanning it. On March 25, 1996, the European Union banned the import of British beef, which had since 1990 been excluded from the United States and 14 other countries. Shortly afterwards, in an attempt to have the European ban lifted, Britain announced that it would slaughter all of its 1.2 million cows over the age of 30 months (an age before which cows do not show symptoms of BSE), and began the arduous task of killing and incinerating 22,000 cows a week. The government later agreed to slaughter an additional l00,000 cows considered most at risk from BSE. A prime suspect in causing BSE is a by-product derived from the rendering process, in which the unusable parts of slaughtered animals are boiled down or “cooked” at high temperatures to make animal feed and other products. 
One such product, called meat and bone meal (MBM), is made from the ground-up, cooked remains of slaughtered livestock—cows, sheep, chickens, and hogs—and formed into nuggets of animal feed. Some of the cows and sheep used in this process were infected with fatal brain disease. (Although MBM was ostensibly banned as cattle feed in 1988, spinal cords continued to be used.) It is theorized that sheep could have played a major role in initially infecting cows with BSE. For over 200 years, British sheep have been contracting scrapie, another TSE that results in progressive degeneration of the brain. Scrapie causes the sheep to tremble and itch, and to “scrape” or rub up against fences, walls, and trees to relieve the sensation. The disease, first diagnosed in British sheep in 1732, may have recently jumped the species barrier when cows ate animal feed that contained brain and spinal cord tissue from diseased sheep. In 1999 the World Health Organization


(WHO) implored high-risk countries to assess outbreaks of BSE-like manifestations in sheep and goat stocks. In August 2002, sheep farms in the United Kingdom demonstrated to the WHO that no increase in illnesses potentially linked to BSE had occurred in non-cattle livestock. However, that same year, the European Union Scientific Steering Committee (SSC) on the risk of BSE identified the United Kingdom and Portugal as hotspots for BSE infection of domestic cattle relative to other European nations. Scrapie, and perhaps these other spongiform brain diseases, are believed to be caused not by a virus (as originally thought) but rather by infectious protein-like particles called prions, which are extremely tenacious, surviving long periods of high-intensity cooking and heating. They are, in effect, a new form of contagion. The first real insights into the origins of these diseases were gathered in the 1950s by Dr. D. Carleton Gajdusek, who was awarded the 1976 Nobel Prize in Medicine for his work. His research on the fatal degenerative disease “kuru” among the cannibals of Papua New Guinea, which resulted in the now-familiar brain lesions and cavities, revealed that the malady was caused by consuming or handling the brains of relatives who had just died. In the United States, Department of Agriculture officials say that the risk of BSE and other related diseases is believed to be small, but cannot be ruled out. No BSE has been detected in the United States, and no cattle or processed beef is known to have been imported from Britain since 1989. However, several hundred mink in Idaho and Wisconsin have died from an ailment similar to BSE, and many of them ate meat from diseased “downer” cows, those that fall and cannot get up. Some experts believe that BSE can occur spontaneously, without apparent exposure to the disease, in one or two cows out of a million every year.
This would amount to an estimated 150–250 cases annually among the United States cow population of some 150 million. Moreover, American feed processors render the carcasses of some 100,000 downer cows every year, thus utilizing for animal feed cows that are possibly seriously neurologically diseased. In June 1997, the Food and Drug Administration (FDA) announced a partial ban on using in cattle feed the remains of dead sheep, cattle, and other animals that chew their cud. But the ruling exempts some animal protein from the ban, as well as feed for poultry, pigs, and pets. In March of that year, a coalition of consumer groups, veterinarians, and federal meat inspectors had urged the FDA to include pork in the animal feed ban, citing evidence that pigs can develop a form of TSE, and that some may already have done so. The coalition had recommended that the United States adopt a ban similar to Britain’s, where protein from all mammals is excluded from animal feed, and some criticized the FDA’s action as “totally inadequate in protecting consumers and public health.”

The United States Centers for Disease Control (CDC) has reclassified the form of CJD that is associated with interspecies transmission of the BSE disease-causing agent. The current categorization is termed new variant CJD (nvCJD) to distinguish it from the extremely rare form of CJD that is not associated with BSE contagion. According to the CDC, there have been 79 nvCJD deaths reported worldwide. By April 2002, the global incidence of nvCJD had increased to 125 documented reports. Of these, most (117) were from the United Kingdom. Other countries reporting nvCJD included France, Ireland, and Italy. The CDC stresses that nvCJD should not be confused with the endemic form of CJD. In the United States, CJD seldom occurs in adults under 30 years old, with a median age at death of 68 years. In contrast, nvCJD, associated with BSE, tends to affect a much younger segment of society. In the United Kingdom, the median age at death from nvCJD is 28 years. As of April 2002, no cases of nvCJD have been reported in the United States, and all known worldwide cases of nvCJD have been associated with countries where BSE is known to exist. The first possible infection of a U.S. resident was documented and reported by the CDC in 2002. A 22-year-old citizen of the United Kingdom living in Florida was preliminarily diagnosed with nvCJD during a visit abroad. Unfortunately, the only way to verify a diagnosis of nvCJD is via brain biopsy or autopsy. If confirmed, the CDC and Florida Department of Health claim that this case would be the first reported in the United States.

The outlook for BSE is uncertain. Since tens of millions of people in Britain may have been exposed to the infectious agent that causes BSE, plus an unknown number in other countries, some observers fear that a latent epidemic of serious proportions could be in the offing. (There is also concern that some of the four million Americans now diagnosed with Alzheimer’s disease may actually be suffering from CJD.) There are others who feel that the general removal of most infected cows and animal brain tissue from the food supply has prevented a human health disaster. But since the incubation period for CJD is thought to be 7–40 years, it will be some time before it is known how many people are already infected and the extent of the problem becomes apparent.

French farmers protest against an allowance they must pay to bring dead animals to the knackery—a service that was free of charge prior to mad cow disease.

[Lewis G. Regenstein]

RESOURCES

BOOKS
Rhodes, R. Deadly Feasts. New York: Simon & Schuster, 1997.

PERIODICALS
Blakeslee, S. “Fear of Disease Prompts New Look at Rendering.” The New York Times, March 11, 1997.
Lanchester, J. “A New Kind of Contagion.” The New Yorker, December 2, 1996.

Madagascar

Described as a crown jewel among Earth’s ecosystems, this 1,000-mi-long (1,610-km) island-continent is a microcosm of Third World ecological problems. It abounds with unique species that are threatened by an exploding human population. Many scientists consider Madagascar the world’s foremost conservation priority. Since 1984, united efforts have sought to slow the island’s deterioration, in hopes of providing a model for treating other problem areas. Madagascar is the world’s fourth largest island, with a rain forest climate in the east, deciduous forest in the west, and thorn scrub in the south. Its Malagasy people are descended from African and Indonesian seafarers who arrived about 1,500 years ago. Most farm the land using ecologically devastating slash-and-burn agriculture, which has turned Madagascar into the most severely eroded land on earth. It has been described as an island with the shape, color, and fertility of a brick; second-growth forest does not do well. Having been separated from Africa for 160 million years, this unique land was sufficiently isolated during the last 40 million years to become a laboratory of evolution. There are 160,000 unique species, mostly in the rapidly disappearing eastern rain forests. These include 65 percent of its plants, half of its birds, and all of its reptiles and mammals. Sixty percent of the earth’s chameleons live here. Lemurs, displaced elsewhere by monkeys, have evolved into 26 species. Whereas Africa has only one species of baobab tree, Madagascar has six, and one is termite-resistant. The thorn scrub abounds with potentially useful poisons evolved for plant defense. One species of periwinkle provides a substance effective in the treatment of childhood (lymphocytic) leukemia. Humans have been responsible for the loss of 93 percent of tropical forest and two-thirds of rain forest.
Four-fifths of the land is now barren as the result of habitat destruction set in motion by the exploding human population (3.2 percent growth per year). Although nature reserves date from 1927, few Malagasy have ever experienced their island’s biological wonders; urbanites disdain the bush, and peasants are driven by hunger. If they can see Madagascar’s rich ecosystems firsthand, it may engender respect which, in turn, may encourage understanding and protection. The people are awakening to their loss and the impact this may have on all Madagascar’s inhabitants. Pride in their island’s unique biodiversity is growing. The World Bank has provided $90 million to develop and implement a 15-year Environmental Action Plan. One private preserve in


the south is doing well, and many other possibilities exist for the development of ecotourism. If population growth can be controlled and high-yield farming replaces slash-and-burn agriculture, there is yet hope for preserving the diversity and uniqueness of Madagascar. See also Deforestation; Erosion; Tropical rain forest [Nathan H. Meleen]

RESOURCES

BOOKS
Attenborough, D. Bridge to the Past: Animals and People of Madagascar. New York: Harper, 1962.
Harcourt, C., and J. Thornback. Lemurs of Madagascar and the Comoros: The IUCN Red Data Book. Gland, Switzerland: IUCN, 1990.
Jenkins, M. D. Madagascar: An Environmental Profile. Gland, Switzerland: IUCN, 1987.

PERIODICALS Jolly, A. “Madagascar: A World Apart.” National Geographic 171 (February 1987): 148–83.

Magnetic separation An ongoing problem of environmental significance is solid waste disposal. As the land needed simply to dispose of solid wastes becomes less available, recycling becomes a greater priority in waste management programs. One step in recycling is the magnetic separation of ferrous (iron-containing) materials. In a typical recycling process, wastes are first shredded into small pieces and then separated into organic and inorganic fractions. The inorganic fraction is then passed through a magnetic separator, where ferrous materials are extracted. These materials can then be purified and reused as scrap iron. See also Iron minerals; Resource recovery

Malaria Malaria is a disease that affects hundreds of millions of people worldwide. In the developing world malaria contributes to a high infant mortality rate and a heavy loss of work time. Malaria is caused by the single-celled protozoan parasite Plasmodium. The disease follows two main courses: tertian (three-day) malaria and quartan (four-day) malaria. Plasmodium vivax causes benign tertian malaria with a low mortality (5%), while Plasmodium falciparum causes malignant tertian malaria with a high mortality (25%) due to interference with the blood supply to the brain (cerebral malaria). Quartan malaria is rarely fatal.

Plasmodium is transmitted from one human host to another by female mosquitoes of the genus Anopheles. Thousands of parasites in the salivary glands of the mosquito are injected into the human host when the mosquito takes blood. The parasites (in the sporozoite stage) are carried to the host's liver, where they undergo massive multiplication into the next stage (cryptozoites). The parasites are then released into the bloodstream, where they invade red blood cells and undergo additional division. This division ruptures the red blood cells and releases the next stage (the merozoites), which invade and destroy other red blood cells. This red blood cell destruction phase is intense but short-lived. The merozoites finally develop into the next stage (gametocytes), which are ingested by the biting mosquito. The pattern of chills and fever characteristic of malaria is caused by the massive destruction of the red blood cells by the merozoites and the accompanying release of parasitic waste products. The attacks subside as the immune response of the human host slows the further development of the parasites in the blood. People who are repeatedly infected gradually develop a limited immunity. Relapses of malaria long after the original infection can occur from parasites that have remained in the liver, since treatment with drugs kills only the parasites in the blood cells and not those in the liver.

Malaria can be prevented or cured by a wide variety of drugs (quinine, chloroquine, paludrine, proguanil, or pyrimethamine). However, resistant strains of the common species of Plasmodium mean that some prophylactic drugs (chloroquine and pyrimethamine) are no longer totally effective. Malaria is controlled either by preventing contact between humans and mosquitoes or by eliminating the mosquito vector. Outdoors, individuals may protect themselves from mosquito bites by wearing protective clothing, applying mosquito repellents to the skin, or by burning mosquito coils that produce smoke containing insecticidal pyrethrins.
Inside houses, mosquito-proof screens and nets keep the vectors out, while insecticides (DDT) applied inside the house kill those that enter. The aquatic stages of the mosquito can be destroyed by eliminating temporary breeding pools, by spraying ponds with synthetic insecticides, or by applying a layer of oil to the surface waters. Biological control includes introducing fish (Gambusia) that feed on mosquito larvae into small ponds. Organized campaigns to eradicate malaria are usually successful, but the disease is sure to return unless the measures are vigilantly maintained. See also Epidemiology; Pesticide [Neil Cumberlidge Ph.D.]

RESOURCES
BOOKS
Bullock, W. L. People, Parasites, and Pestilence: An Introduction to the Natural History of Infectious Disease. Minneapolis: Burgess Publishing Company, 1982.
Knell, A. J., ed. Malaria: A Publication of the Tropical Programme of the Wellcome Trust. New York: Oxford University Press, 1991.
Markell, E. K., M. Voge, and D. T. John. Medical Parasitology. 7th ed. Philadelphia: Saunders, 1992.
Phillips, R. S. Malaria. Institute of Biology's Studies in Biology, No. 152. London: E. Arnold, 1983.

Male contraceptives Current research into male contraceptives will potentially increase the equitability of family planning between males and females. This shift will also have the potential to address issues of population growth and its related detrimental effects on the environment. While prophylactic condoms provide good barrier protection from unwanted pregnancies, they are not as effective as oral contraceptives for women. Likewise, vasectomies are very effective, but few men are willing to undergo the surgery. There are three general categories of male contraceptives being explored. The first category functionally mimics a vasectomy by physically blocking the vas deferens, the channel that carries sperm from the seminiferous tubules to the ejaculatory duct. The second uses heat to induce temporary sterility. The third involves medications to halt sperm production; in essence, this category concerns the development of "The Pill" for men. Despite its near 100% effectiveness, there are two major disadvantages to vasectomy that make it unattractive to many men as an option for contraception. The first is the psychological component relating to surgery: although vasectomies are relatively non-invasive, when compared to taking a pill the procedure seems drastic. Second, although vasectomies are reversible, the rate of return to normal fertility is only about 40%. Therefore, newer "vas occlusive" methods offer alternatives to vasectomy with completely reversible effects. Vas occlusive devices block the flow of sperm through the vas deferens or render the sperm dysfunctional. The most recent form of vas occlusive male contraception, called Reversible Inhibition of Sperm Under Guidance (RISUG), involves the use of a styrene combined with the chemical DMSO (dimethyl sulfoxide). The complex is injected into the vas deferens, where it partially occludes the passage of sperm and also disrupts sperm cell membranes.
As sperm cells contact the RISUG complex, they rupture. It is believed that a single injection of RISUG may provide contraception for up to 10 years. Large safety and efficacy trials examining RISUG are being conducted in India. Two additional vas occlusive methods of male contraception involve injecting a liquid polymer, either microcellular polyurethane (MPU) or medical-grade silicone rubber (MSR), into the vas deferens, where it

hardens within 20 minutes. The resulting plug provides a barrier to sperm. The technique was developed in China, and since 1983 some 300,000 men have reportedly undergone this method of contraception. Reversal of MPU and MSR plugs requires surgical removal of the polymers. Another method involving silicone plugs (called the Shug for short) offers an alternative to injectable plugs. This double-plug design offers a back-up plug should sperm make their way past the first. Human sperm is optimally produced at a temperature a few degrees below body temperature, and infertility is induced if the temperature of the testes is elevated. For this reason, men trying to conceive are often encouraged to avoid wearing snugly fitting undergarments. The thermal suspensory method of male contraception utilizes specially designed suspensory briefs that use natural body heat or externally applied heat to suppress spermatogenesis. Such briefs hold the testes close to the body during the day, ideally near the inguinal canal where local body heat is greatest. This method is sometimes also called artificial cryptorchidism, since it simulates the infertility seen in men with undescended testicles. When worn all day, suspensory briefs lead to a gradual decline in sperm production. The safety of briefs that contain heating elements to warm the testes is being evaluated; externally applied heat in such briefs would provide results in a fraction of the time required using body heat alone. Other forms of thermal suppression of sperm production utilize simple hot water heated to about 116°F (46.7°C). Immersion of the testicles in the warm water for 45 minutes daily for three weeks is said to result in six months of sterility followed by a return to normal fertility. A newer, but essentially identical, method of thermal male contraception uses ultrasound.
This simple, painless, and convenient method uses ultrasonic waves to heat water and produces six months of reversible sterility after a treatment of only 10 minutes. Drug therapy is also being evaluated as a potential form of male contraception, and many drugs have been investigated. An intriguing possibility is the observation that a particular class of blood pressure medications, called calcium channel blockers, induces reversible sterility in many men. One such drug, nifedipine, is thought to induce sterility by blocking calcium channels in sperm cell membranes. This reportedly results in cholesterol deposition and membrane instability in the sperm, rendering them incapable of fertilization. Herbal preparations have also been used as male contraceptives. Gossypol, a constituent of cottonseed oil, was found to be an effective and reliable male contraceptive in very large-scale experiments conducted in China. Unfortunately, an unacceptable number of men experienced persistent sterility when gossypol therapy was discontinued. Additionally, up to 10% of men treated with gossypol in the Chinese studies experienced kidney problems. Because of the potential toxicity of gossypol, the World Health Organization concluded that research on this form of male contraception should be abandoned.

Most recently, a form of sugar that sperm interact with during fertilization has been isolated from the outer coating of human eggs. An enzyme in sperm, called N-acetyl-beta-D-hexosaminidase (HEX-B), cuts through the protective outer sugar layer of the egg during fertilization. A decoy sugar molecule that mimics the natural egg coating is being investigated. The synthetic sugar would bind specifically to the sperm's HEX-B enzyme, curtailing the sperm's ability to penetrate the egg's outer coating. Related experiments in male rats have shown effective and reversible contraceptive properties.

Perhaps the most researched method of male contraception using drugs involves hormones. Like female contraceptive pills, Male Hormone Contraceptives (MHCs) seek to stop the production of sperm by stopping the production of the hormones that direct sperm development. Many hormones in the human body work by feedback mechanisms: when levels of one hormone are low, another hormone is released that results in an increase in the first. The goal of MHCs is to artificially raise hormone levels so as to suppress the hormone release required for sperm production. The best MHCs produced so far provide only about 90% sperm suppression, which is not enough to reliably prevent conception. Also, for poorly understood reasons, some men do not respond to the MHC preparations under investigation. Despite initial promise, more research is needed to make MHCs competitive with female contraception; response failure rates for current MHC drugs range from 5–20%. [Terry Watkins]

RESOURCES
ORGANIZATIONS
Contraceptive Research and Development Program (CONRAD), Eastern Virginia Medical School, 1611 North Kent Street, Suite 806, Arlington, VA USA 22209, (703) 524-4744, Fax: (703) 524-4770, Email: [email protected]

Malignant tumors see Cancer

Man and the Biosphere Program The Man and the Biosphere (MAB) program is a global system of biosphere reserves begun in 1986 and organized by the United Nations Educational, Scientific and Cultural Organization (UNESCO). MAB reserves are designed to conserve natural ecosystems and biodiversity and to incorporate the sustainable use of natural ecosystems by humans in their operation. The intention is that local human needs will be met in ways compatible with resource conservation. Furthermore, if local people benefit from tourism and the harvesting of surplus wildlife, they will be more supportive of programs to preserve wilderness and protect wildlife.

MAB reserves differ from traditional reserves in a number of ways. Instead of a single boundary separating nature inside from people outside, MAB reserves are zoned into concentric rings consisting of a core area, a buffer zone, and a transition zone. The core area is strictly managed for wildlife, and all human activities are prohibited except for restricted scientific activity such as ecosystem monitoring. Surrounding the core area is the buffer zone, where nondestructive forms of research, education, and tourism are permitted, as well as some human settlements. Sustainable light resource extraction, such as rubber tapping, collection of nuts, or selective logging, is permitted in this area. Preexisting settlements of indigenous peoples are also allowed. The transition zone is the outermost area; here increased human settlements, traditional land use by native peoples, experimental research involving ecosystem manipulations, major restoration efforts, and tourism are allowed.

The MAB reserves have been chosen to represent the world's major types of regional ecosystems. Ecologists have identified some 14 types of biomes and 193 types of ecosystems around the world, and about two-thirds of these ecosystem types are represented so far in the 276 biosphere reserves now established in 72 countries. MAB reserves are not necessarily pristine wilderness. Many include ecosystems that have been modified or exploited by humans, such as rangelands, subsistence farmlands, or areas used for hunting and fishing.
The concept of biosphere reserves has also been extended to include coastal and marine ecosystems, although in this case the use of core, buffer, and transition areas is inappropriate. The establishment of a global network of biosphere reserves still faces a number of problems. Many of the MAB reserves are located in debt-burdened developing nations, because many of these countries lie in the biologically rich tropical regions. Such countries often cannot afford to set aside large tracts of land, and they desperately need the short-term cash promised by the immediate exploitation of their lands. One response to this problem is the debt-for-nature swap, in which a conservation organization buys the debt of a nation at a discount rate from banks in exchange for that nation's commitment to establish and protect a nature reserve. Many reserves are effectively small, isolated islands of natural ecosystems surrounded entirely by developed land. The protected organisms in such islands are liable to suffer genetic erosion, and many have argued that a single large reserve would suffer less genetic erosion than several smaller reserves which cumulatively protect the same amount of land. It has also been suggested that reserves sited as close to each other as possible, connected by corridors that allow movement between them, would increase the habitat and gene pool available to most species. [Neil Cumberlidge Ph.D.]

RESOURCES
BOOKS
Gregg, W. P., and S. L. Krugman, eds. Proceedings of the Symposium on Biosphere Reserves. Atlanta, GA: U.S. National Park Service, 1989.
Office of Technology Assessment. Technologies to Maintain Biological Diversity. Philadelphia: Lippincott, 1988.

PERIODICALS
Batisse, M. "Developing and Focusing the Biosphere Reserve Concept." Nature and Resources 22 (1986): 1–10.

Manatees A relative of the elephant, manatees are totally aquatic, herbivorous mammals of the family Trichechidae. This group arose 15–20 million years ago during the Miocene epoch, a time which also favored the development of a tremendous diversity of aquatic plants along the coast of South America. Manatees are adapted to both marine and freshwater habitats and are divided into three distinct species: the Amazonian manatee (Trichechus inunguis), restricted to the freshwaters of the Amazon River; the West African manatee (Trichechus senegalensis), found in the coastal waters from Senegal to Angola; and the West Indian manatee (Trichechus manatus), ranging from the northern South American coast through the Caribbean to the southeastern coastal waters of the United States. Two other species, the dugong (Dugong dugon) and Steller's sea cow (Hydrodamalis gigas), along with the manatees, make up the order Sirenia. Steller's sea cow is now extinct, having been exterminated by man in the mid-1700s for food. Manatees can weigh 1,000–1,500 lb (454–680 kg) and grow to be more than 12 ft (3.7 m) long. Manatees are unique among aquatic mammals because of their herbivorous diet. They are non-ruminants; unlike cows and sheep, they do not have a chambered stomach. They do have, however, extremely long intestines (up to 150 ft/46 m) that contain a paired blind sac where bacterial digestion of cellulose takes place. Other unique traits of the manatee include horizontal replacement of molar teeth and the presence of only six cervical, or neck, vertebrae, instead of seven as in nearly all other mammals. The intestinal sac and tooth replacement are adaptations that counteract the defenses evolved by the plants that the manatees eat. Several plant


Manatee with a researcher, Homosassa Springs, Florida. (Photograph by Faulkner. Photo Researchers Inc. Reproduced by permission.)

species contain tannins, oxalates, and nitrates, which are toxic but which may be detoxified in the manatee's intestine. Other plant species contain silica spicules, which, due to their abrasiveness, wear down the manatee's teeth, necessitating tooth replacement. The life span of manatees is long, greater than 30 years, but their reproductive rate is low, with gestation lasting 13 months and females giving birth to one calf every two years. Because of this, the potential for increasing the population is low, leaving it vulnerable to environmental problems. Competition for food is not a problem. In contrast to terrestrial herbivores, which have a complex division of food resources and competition for high-energy land plants, manatees have limited competition, chiefly from sea turtles. This is minimized by the different feeding strategies employed by the two groups: sea turtles eat blades of seagrasses at greater depths than manatees feed, and manatees tend to eat not only the blades but also the rhizomes of these plants, which contain more energy for the warm-blooded mammals. Because manatees are docile creatures and a source of food, they have been exploited by man to the brink of extinction; there are currently between 1,500 and 3,000 in the U.S. Because manatees are also slow moving, a more recent threat is taking its toll on these shallow-swimming animals. Power boat propellers have struck hundreds of manatees in recent years, causing 90% of the man-related manatee deaths and leaving others with permanent injury or scarring. Conservation efforts, such as the Marine Mammal Protection Act of 1972 and the Endangered Species Act of 1973, have helped reduce some of these problems, but much more will have to be done to prevent the extirpation of the manatees. [Eugene C. Beckham]

RESOURCES
BOOKS
Ridgway, S. H., and R. Harrison, eds. Handbook of Marine Mammals. Vol. 3, The Sirenians and Baleen Whales. London: Academic Press, 1985.

OTHER
Manatees of Florida. [cited May 2002].
Save the Manatees Club. [cited May 2002].


Mangrove swamp Mangrove swamps or forests are the tropical equivalent of temperate salt marshes. They grow in protected coastal embayments in tropical and subtropical areas around the world, and some scientists estimate that 60–75 percent of all tropical shores are populated by mangroves. The term "mangrove" refers to individual trees or shrubs that are angiosperms (flowering plants) and belong to more than 80 species within 12 genera and five families. Though unrelated taxonomically, they share some common characteristics. Mangroves grow only in areas with minimal wave action, high salinity, and low soil oxygen. All of the trees have shallow roots, form pure stands, and have adapted to the harsh environment in which they grow. The mangrove swamp or forest community as a whole is called a mangal. Mangroves typically grow in a sequence of zones from seaward to landward. This zonation is most highly pronounced in the Indo-Pacific regions, where 30–40 species of mangroves grow. Starting from the shoreline and moving inland, the sequence of genera there is Avicennia followed by Rhizophora, Bruguiera, and finally Ceriops. In the Caribbean, including Florida, only three species of trees normally grow: red mangroves (Rhizophora mangle) represent the pioneer species growing at the water's edge, black mangroves (Avicennia germinans) are next, and white mangroves (Laguncularia racemosa) grow mostly inland. In addition, buttonwood (Conocarpus erectus) often grows between the white mangroves and the terrestrial vegetation. Mangrove trees have made special adaptations to live in this environment. Red mangroves form stilt-like prop roots that allow them to grow at the shoreline in water up to several feet deep. Like cacti, they have thick succulent leaves which store water and help prevent loss of moisture.
They also produce seeds which germinate directly on the tree, then drop into the water, growing into a long, thin seedling known as a "sea pencil." These seedlings are denser at one end and thus float with the heavier, hydrophilic (water-loving) end down. When the seedlings reach shore, they take root and grow. One acre of red mangroves can produce three tons of seeds per year, and the seeds can survive floating on the ocean for more than 12 months. Black mangroves produce straw-like roots called pneumatophores which protrude out of the sediment, enabling the trees to take oxygen from the air instead of from the anaerobic sediments. Both white and black mangroves have salt glands at the base of their leaves which help in the regulation of osmotic pressure. Mangrove swamps are important to humans for several reasons. They provide water-resistant wood used in construction, charcoal, medicines, and dyes. The mass of prop roots at the shoreline also provides an important habitat for a rich assortment of organisms, such as snails, barnacles,

Mangrove creek in the Everglades National Park. (Photograph by Max & Bea Hunn. Visuals Unlimited. Reproduced by permission.)

oysters, crabs, periwinkles, jellyfish, tunicates, and many species of fish. Among these fish are the mud skippers (Periophthalmus), which have large bulging eyes, seem to skip over the mud, and crawl up on the prop roots to catch insects and crabs. Birds such as egrets and herons feed in these productive waters and nest in the tree branches. Prop roots tend to trap sediment and can thus form new land, which is colonized by young mangroves; scientists reported a growth rate of 656 feet (200 m) per year in one area near Java. These coastal forests can also be helpful buffer zones against strong storms. Despite their importance, mangrove swamps are fragile ecosystems whose ecological importance is commonly unrecognized. They are being adversely affected worldwide by increased pollution, use of herbicides, filling, dredging, channelizing, and logging. See also Marine pollution; Wetlands [John Korstad]

RESOURCES
BOOKS
Castro, P., and M. E. Huber. Marine Biology. St. Louis: Mosby, 1992.
Nybakken, J. W. Marine Biology: An Ecological Approach. 2d ed. New York: Harper & Row, 1988.
Smith, R. E. Ecology and Field Biology. 4th ed. New York: Harper & Row, 1990.
Tomlinson, P. B. The Botany of Mangroves. Cambridge: Cambridge University Press, 1986.

PERIODICALS
Lugo, A. E., and S. C. Snedaker. "The Ecology of Mangroves." Annual Review of Ecology and Systematics 5 (1974): 39–64.
Rützler, K., and C. Feller. "Mangrove Swamp Communities." Oceanus 30 (1988): 16–24.

Manure see Animal waste

Manville Corporation see Asbestos

Marasmus A severe deficiency of all nutrients, categorized along with other protein-energy malnutrition disorders. Marasmus, which means "to waste," can occur at any age but is most common in infants (children under one year old). The starvation seen in marasmus results from protein and carbohydrate deficiencies. In developing countries and impoverished populations, early weaning from breast feeding and overdilution of commercial formulas place infants at high risk of developing marasmus. Because of the deficiency in intake of all dietary nutrients, metabolic processes, especially liver functions, are preserved, while growth is severely retarded. Caloric intake is too low to support metabolic activity such as protein synthesis or storage of fat. If the condition is prolonged, muscle tissue wasting will result. Fat wasting and anemia are common and severe. Severe vitamin A deficiency commonly results in blindness, although if caught early this process can be reversed. Death occurs in about 40% of children left untreated.

Mariculture Mariculture is the cultivation and harvest of marine flora and fauna in a controlled saltwater environment. Sometimes called marine fish farming, marine aquaculture, or aquatic farming, mariculture involves some degree of human intervention to enhance the quality and/or quantity of a marine harvest. This may be achieved by feeding practices, protection from predators, breeding programs, or other means. Fish, crustaceans, salt-water plants, and shellfish may be farm-raised for bait, fishmeal and fish oil production, scientific research, biotechnology development, and repopulating threatened or endangered species. Ornamental
fish are also sometimes raised by fish farms for commercial sales. The most widespread use of aquaculture, however, is the production of marine life for human food consumption. With seafood consumption steadily rising and overfishing of the seas a growing global problem, mariculture has been hailed as a low-cost, high-yield source of animal-derived protein. According to the Fisheries Department of the United Nations’ Food and Agriculture Organization (FAO), over 33 million metric tons of fish and shellfish encompassing 220 different species are cultured (or farmed) worldwide, representing an estimated $49 billion in 1999. Pound for pound, China leads the world in aquaculture production with 32.5% of world output. In comparison, the United States is only responsible for 4.4% of global aquaculture output by weight. Just 7% of the total U.S. aquatic production (both farmed and captured resources) is attributable to aquaculture (compared to 62% of China’s total aquatic output). In the United States, Atlantic salmon and channel catfish represent the largest segments of aquaculture production (34% and 40%, respectively, in 1997). Though most farmed seafood is consumed domestically, the United States imports over half of its total edible seafood annually, representing a $7 billion annual trade deficit in 2001. The Department of Commerce launched an aquaculture expansion program in 1999 with the goal of increasing domestic seafood supply derived from aquaculture production to $5 billion annually by the year 2025. According to the U.S. Joint Subcommittee on Aquaculture, U.S. aquaculture interests harvested 842 million pounds of product at an estimated value of $987 million in 1999. To encourage further growth of the U.S. aquaculture industry, the National Aquaculture Act was passed in 1980 (with amendments in 1985 and 1996). 
The Act established funding and mandated the development of a national aquaculture plan that would encourage "aquaculture activities and programs in both the public and private sectors of the economy; that will result in increased aquacultural production, the coordination of domestic aquaculture efforts, the conservation and enhancement of aquatic resources, the creation of new industries and job opportunities, and other national benefits." In the United States, aquaculture is regulated by the U.S. Department of Agriculture (USDA) and by the Department of Commerce through the National Marine Fisheries Service (NMFS) of the National Oceanic and Atmospheric Administration (NOAA). State and local authorities may also have some input into the location and practices of mariculture facilities if they are located within an area governed by a Coastal Zone Management Plan (CZMP). Coastal Zone Management Plans, as authorized by the Coastal Zone Management Act (CZMA) of 1972, allow individual states to determine the appropriate use and development of their respective coastal zones. Subsequent amendments to the CZMA have also made states eligible for federal funding for the creation of state plans, procedures, and regulations for mariculture activities in the coastal zone.

Tilapia, catfish, striped bass, yellow perch, walleye, salmon, and trout are just a few of the fresh and salt-water finned fish species farmed in the United States. Crawfish, shrimp, and shellfish are also cultured in the U.S. Some shellfish, such as oysters, mussels, and clams, are "planted" as juveniles and farmed to maturity, when they are harvested and sold. Shellfish farmers buy the juvenile shellfish (known as "spat") from shellfish hatcheries and nurseries. Oysters and mussels are attached to lines or nets and put in a controlled ocean environment, while clams are buried in the beach or in sandy substrate below low tide. All three of these shellfish species feed on plankton from salt water.

But just as overfarming takes a toll on terrestrial natural resources, aquaculture without appropriate environmental management can damage native ecosystems. Farmed fish are raised in open-flow pens and nets. Because large populations of farmed fish are often raised in confined areas, disease spreads easily and rapidly among them, and farmed fish often transmit sea lice and other parasites and diseases to wild fish, wiping out or crippling native stock. Organic pollution from effluent, the waste products of farmed fish, can build up and suffocate marine life on the sea floor below farming pens. This waste includes fish feces, which contribute to nutrient loading, and the chemicals and drugs used to keep the fish disease-free and promote growth. It also contains excess fish food, which often contains dyes intended to make the flesh of farmed fish more closely resemble that of their wild counterparts.
Farmed fish that escape from their pens interbreed with wild fish and weaken the genetic line of the native stock. If escaped fish are diseased, they can trigger outbreaks among indigenous marine life. Infectious Salmon Anemia, a viral disease that has plagued fish farms in New Brunswick and Scotland in the 1990s, was eventually found in native salmon. In 2001, the disease first appeared at U.S. Atlantic salmon farms off the coast of Maine. The use of drugs in farmed fish and shellfish intended for human consumption is regulated by the U.S. Food and Drug Administration (FDA). In recent years, antibiotic resistance has been a growing issue in aquaculture, as fish have been treated with a wide array of human and veterinary antibiotic drugs to prevent disease. The commercial development of genetically-engineered, or transgenic, farmed fish is also regulated by FDA. As of May 2002, no transgenic fish had been cleared by FDA for human consumption. The impact transgenic fish
may have on the survival and reproduction of native species will have to be closely followed if and when commercial farming begins.

As mandated by the 1938 Mitchell Act, the NMFS funds 25 salmon hatcheries in the Columbia River Basin of the Pacific Northwest, the largest federal marine fishery program in the United States. These aquaculture facilities were originally introduced to help repopulate salmon stocks that had been lost to or severely reduced by hydroelectric dam projects. However, some environmentalists charge that the salmon hatcheries may actually be endangering wild salmon further by competing for local habitat and weakening the genetic line of native species.

Without careful resource management, aquaculture may eventually take an irreversible toll on other, non-farmed marine species. Small pelagic fish, such as herring, anchovy, and chub, are captured and processed into fish food compounds for high-density carnivorous fish farms. According to the FAO, at its current rate fish farming consumes twice as many wild fish in feeding its domestic counterparts as aquaculture produces in fish harvests: an average of 1.9 kg of wild fish is required for every kilogram of fish farmed.

However economically sound mariculture may be, it has also led to serious habitat modification and destruction, especially in mangrove forests. In the past twenty years the area of mangrove forests has dwindled by 35% worldwide. Though some of that loss is due to active herbicide control of mangroves, their conversion to salt flats, and the industrial harvesting of forest products (wood chips and lumber), mariculture is responsible for 52% of the world’s mangrove losses. Mangrove forests are important to the environment because these ecosystems are buffer zones between saltwater areas and freshwater/land areas.
Mangroves act as filters for agricultural nutrients and pollutants, trapping these contaminants before they reach the deeper waters of the ocean. They also prevent coastal erosion, provide spawning and nursery areas for fish and shellfish, host a variety of migratory wildlife (birds, fish, and mammals), and support habitats for a number of endangered species.

Shrimp farming in particular has played a major role in mangrove forest reduction. Increasing from 3% to 30% in less than 15 years, commercial shrimp farming has profoundly affected coastal mangroves, as mangrove trees are cut down to create shrimp and prawn ponds. In the Philippines, 50% of mangrove environments were converted to ponds, and between 50% and 80% of those in Southeast Asia were lost to pond culture as well.

Shrimp mariculture places high demands on resources. It requires large supplies of juvenile shrimp, which can seriously deplete natural shrimp stocks, and large quantities of
shrimp meal to feed them. There is also considerable waste derived from shrimp production, which can pump organic matter and nutrients into the ponds, causing eutrophication; this in turn leads to algal blooms and oxygen depletion in the ponds themselves or even downstream. Many shrimp farmers must pump pesticides, fungicides, parasiticides, and algicides into the ponds between harvests to sterilize them and mitigate the effects of nutrient loading. Shrimp ponds also have extremely short life spans, usually about 5–10 years, forcing their abandonment and the cutting of more mangrove forest to create new ponds.

Mariculture also limits other marine activities along coastal waters. Some aquaculture facilities occupy large expanses of ocean along beaches, which then become commercial and recreational no-fish zones. These nursery areas are also sensitive to disturbance by recreational activities such as boating or swimming and to the introduction of pathogens by domestic or farm animals. [Paula Anne Ford-Martin]

RESOURCES
BOOKS
Food and Agriculture Organization of the United Nations. The State of World Fisheries and Aquaculture. Rome, Italy: FAO, 2000. [cited June 5, 2002].
Jahncke, Michael L., et al. Public, Animal, and Environmental Aquaculture Health Issues. New York: John Wiley & Sons, 2002.
Olin, Paul. “Current Status of Aquaculture in North America.” In Aquaculture in the Third Millennium: Technical Proceedings of the Conference on Aquaculture in the Third Millennium, Bangkok, Thailand, 20–25 February 2000. Rome, Italy: FAO, 2000.

PERIODICALS
Barcott, Bruce, and Natalie Fobes. “Aquaculture’s Troubled Harvest.” Mother Jones 26, no. 6 (November–December 2001): 38.
Naylor, Rosamond, et al. “Effect of Aquaculture on World Fish Supplies.” Nature (June 29, 2000).

OTHER
“National Aquaculture Policy, Planning, and Development.” 16 USC 2801. [cited June 4, 2002].
National Marine Fisheries Service, National Oceanic and Atmospheric Administration. Aquaculture. [cited July 2002].

ORGANIZATIONS
The Northeast Regional Aquaculture Center, University of Massachusetts Dartmouth, Violette Building, Room 201, 285 Old Westport Road, Dartmouth, MA USA 02747-2300; (508) 999-8157; Fax: (508) 999-8590; Toll Free: (866) 472-NRAC (6722); Email: [email protected]; http://www.umassd.edu/specialprograms/NRAC/

Marine ecology and biodiversity

Understanding the nature of ocean life and the patterns of its diversity represents a difficult challenge. Not only are there technical difficulties involved in studying life under water (high pressure, the need for oxygen tanks, lack of light), there is an urgency to develop a greater understanding of marine life as links between ocean processes and the larger patterns of terrestrial life become better known. Our current understanding of oceanic life is based on three principal concepts: size, complexity, and spatial distribution.

Our knowledge about the size of the ocean’s domain is grounded in three great discoveries of the past few centuries. When Magellan’s expedition first circumnavigated the earth, it inadvertently showed that the oceans are a continuous water body rather than a series of discrete bodies of water. Some time later, the encompassing nature of the world ocean was further clarified by the discovery that the oceans are a chemically uniform aqueous system: all of the principal ions (sodium, chloride, and sulfate) exist in the same relative proportions. The third discovery, still underway, is that the ocean is composed of comparatively immense ecological systems. Thus in most ways the oceans are a unified system, which is the first defining characteristic of the marine environment.

There is, however, a dichotomy between the integral nature of the ocean and the external forces played upon it. Mechanical, thermodynamic, chemical, and biological forces create variation through such things as differential heating, the Coriolis force, wind, dissolved gases, salinity differences, and evaporation. These actions in turn set in motion controls that move the system toward physical equilibrium through feedback mechanisms. Those physical changes then interact with biological systems in nonlinear ways, that is, out of synchronization with the external stimuli, and become quite difficult to predict.
Thus we have the second broad characteristic of the oceans: complexity. The third major aspect of ocean life is that life itself is sparse relative to the overall volume of the oceans, but locally productive systems can create immense populations and/or sites with exceptionally high species diversity. Life is arranged in active layers dictated by nutrients and light in the horizontal plane, and by vertical currents (downwelling and upwelling) in the vertical plane. Life decreases through the depth zones from the epipelagic zone in the initial 328 ft (100 m) of the water column to the bottom layers of water, and then jumps again at the benthic layer at the water-substrate interface. Life also decreases from the littoral zones along the world’s shorelines to the open ocean, interrupted by certain areas with special life-supporting systems, like floating sargassum weed beds.

In the past twenty years the focus of conservation has shifted to include not only individual species or habitats
but also a phenomenon called biological diversity, or biodiversity for short. Biological diversity encompasses three to four levels. Genetic diversity is the level of genotypic difference among all the individuals that constitute a population of organisms; species diversity refers to the number of species in an area; and community diversity to the number of different community types in a landscape. The highest level, landscape diversity, has not frequently been applied in aqueous environments and will not be discussed here. Commonly, species diversity is interpreted as biological diversity, and since few marine groups, except marine mammals, have had very much genetic work done, and community functions are well known from only a few systems, it is the taxonomic interpretation of diversity that is most commonly discussed (e.g., species or higher taxonomic levels such as families and classes, orders and phyla).

Of all the species that we know, roughly 16% are from the seas. General diversity patterns in the sea are similar to those on land: there are more small species than large ones, and more tropical species than temperate or polar species. There are centers of diversity for specific taxa, and the structure of communities and ecosystems is based on particular patterns of energy availability. For example, estuary systems are productive due to the importation of nitrogen from the land; coral reefs are also productive, but use scarce nutrients efficiently through specially adapted filter-feeding mechanisms. Abyssal communities, on the other hand, derive their entire energy supply from detritus falling from upper levels of the ocean. Perhaps the most specially adapted of all life forms are the hydrothermal vent communities, which employ chemosynthesis rather than photosynthesis for primary production. Water temperature, salinity, and pressure create differing ecosystems in ways that are distinctly different from terrestrial systems.
In addition, the boundaries between systems may be dynamic, and they are certainly more difficult to detect than on land. Most marine biodiversity occurs at higher taxonomic levels, while the land holds more species, most of them arthropods. Most of the phyla that we now know (32 of 33) are either marine, or both marine and non-marine, while only one is exclusively non-marine. Thus most of the major body plans exist in the sea.

We are now learning that the number of species in the ocean has probably been underestimated, as we discover more cryptic species (very similar organisms that are actually distinct), many of which have been found on the deep ocean floor. This is one of the important diversity issues in the marine environment. Originally, the depths were cast as biological deserts; however, that view may have been promoted by a lack of samples, the small size of many benthic invertebrates, and the low density of benthic populations in the deep sea beds.
Improved sampling since the 1960s changed that view to one of the ocean floor as a highly species-diverse environment. The deep sea is characterized by a few individuals in each of many species; rarity dominates. In shallow-water benthic environments, by contrast, there are large, dense populations dominated by a few species.

At least three theoretical explanations for this pattern have been offered. The stability-time hypothesis suggests that ocean bottoms have been very stable environments over long periods of time. This condition produces very finely tuned adaptations to narrow niches, and results in many closely related species. The disturbance or “cropper” hypothesis suggests that intense predation of limited food sources prevents populations from reaching high levels; because the food source is dominated by detrital rain, generalist feeders abound and compete for the same food, which results in only small differences between species. A third hypothesis is that the area of the deep sea bed is so large that it supports many species, following from generalizations made by the species-area relationship concept used in island biogeography theory. The picture of species numbers and relative rarity is still not clearly understood.

In general, some aspects of marine biology are well studied. Rocky intertidal life has been the subject of major ecological research and has yielded important theoretical advances. Similarly, coral reefs are the subject of many studies of life-history adaptation, evolutionary biology, and ecology. Physiology and morphology research has used many marine animals as examples of organisms’ functions under extreme conditions.

This newfound knowledge is timely. Until now we have treated the oceans as an inexhaustible source of food and a sink for our wastes, yet we now realize they are neither.
Relative to the land, the sea is in good ecological condition, but to prevent major ecological problems in the marine environment we need to increase human knowledge rapidly and manage our behavior toward the oceans very conservatively, a difficult task as long as the ocean is treated as a common resource. [David Duffus]

Marine Mammals Protection Act (1972)

The Marine Mammals Protection Act (MMPA) was initially passed by Congress in 1972 and is the most comprehensive federal law aimed at the protection of marine mammals. The MMPA prohibits the taking (i.e., harassing, hunting, capturing, or killing, or attempting to harass, hunt, capture, or kill) on the high seas of any marine mammal by persons or vessels subject to the jurisdiction of the United States. It also prohibits the taking of marine mammals in
An example of a marine ecosystem. (Illustration by Hans & Cassidy.)

waters or on land subject to United States jurisdiction and the importation into the United States of marine mammals, parts thereof, or products made from such animals. The MMPA provides that civil and criminal penalties apply to illegal takings. The MMPA specifically charges the National Marine Fisheries Service (NMFS) with responsibility for the protection and conservation of marine mammals. The NMFS is given statutory authority to grant or deny permits to take whales, dolphins, and other mammals from the oceans. The original legislation established “a moratorium on the taking of marine mammals and marine mammal products...during which time no permit may be issued for the taking of any marine mammal and no marine mammal may be imported into the United States.” Four types of exceptions allowed for limited numbers of marine mammals to be taken: (1) animals taken for scientific review and public display, after a specified review process; (2) marine mammals taken incidentally to commercial fishing operations prior to October 21, 1974; (3) animals taken by Native Americans and Inuit Eskimos for subsistence or for the production of traditional crafts or tools; and (4) animals taken under a temporary
exemption granted to persons who could demonstrate economic hardship as a result of the MMPA (this exemption was to last for no more than a year and was to be eliminated in 1974). The MMPA also sought specifically to reduce the number of marine mammals killed in purse-seine or drift net operations by the commercial tuna industry. The language used in the legislation is particularly notable in that it makes clear that the MMPA is intended to protect marine mammals and their supporting ecosystem, rather than to maintain or increase commercial harvests: “[T]he primary objective in management should be to maintain the health and stability of the marine ecosystem. Whenever consistent with this primary objective, it should be the goal to obtain an optimum sustainable population keeping in mind the optimum carrying capacity of the habitat.” All regulations governing the taking of marine mammals must take these considerations into account.

Permits require a full public hearing process with the opportunity for judicial review for both the applicant and any person opposed to the permit. No permits may be issued for the taking or importation of a pregnant or nursing female, for
taking in an inhumane manner, or for taking animals on the endangered species list.

Subsidiary legislation and several court decisions have modified, upheld, and extended the original MMPA. Globe Fur Dyeing Corporation v. United States upheld the constitutionality of the statutory prohibition of the killing of marine mammals less than eight months of age or still nursing. In Committee for Humane Legislation v. Richardson, the District of Columbia Court of Appeals ruled that the NMFS had violated the MMPA by permitting tuna fishermen to use the purse-seine or drift net method for catching yellowfin tuna, which resulted in the drowning of hundreds of thousands of porpoises. Under the influence of the Reagan Administration, the MMPA was amended in 1981 specifically to allow this type of fishing, provided that the fishermen employed “the best marine mammal safety techniques and equipment that are economically and technologically practicable.” The Secretaries of Commerce and the Interior were empowered to authorize the taking of small numbers of marine mammals, provided that the species or population stocks involved were not already depleted and that either Secretary found that the total of such taking would have a negligible impact.

The 1984 reauthorization of the MMPA continued the tuna industry’s general permit to kill incidentally up to 20,500 porpoises per year, but provided special protection for two threatened species. The new legislation also required that yellowfin tuna could be imported only from countries whose rules are at least as protective of porpoises as those of the United States. In Jones v. Gordon (1985), a federal district court in Alaska ruled in effect that the National Environmental Policy Act provided regulations applicable to the MMPA permitting procedure. Significantly, this decision made an environmental impact statement mandatory prior to the granting of a permit.
Presumably owing to the educational, organizing, and lobbying efforts of environmental groups and the resulting public outcry, the MMPA was amended in 1988 to provide a three-year suspension of the “incidental take” permits, so that more ecologically responsible standards could be developed. Subsequently, Congress decided to prohibit the drift-netting method as of the 1990 season.

In 1994 the MMPA was amended to include more concrete definitions of harassment, grouped as level A harassment (potential to hurt a wild marine mammal) and level B harassment (potential to disrupt its environment or biology). The 1994 amendments also placed new restrictions on photography permits and stated that, under harassment level B, scientific research must limit its influence on the marine mammals being studied. [Lawrence J. Biskowski]

RESOURCES
BOOKS
Dolgin, E. L., and T. G. P. Guilbert, eds. Federal Environmental Law. St. Paul, MN: West Publishing Co., 1974.
Freedman, W. Federal Statutes on Environmental Protection. New York: Quorum Books, 1987.

PERIODICALS
Hofman, J. “The Marine Mammals Protection Act: A First of Its Kind Anywhere.” Oceanus 32 (Spring 1989): 7–16.

OTHER
National Marine Fisheries Service. [cited June 2002]. <http://www.nmfs.noaa.gov>.