Concepts of Chemical Dependency, Sixth Edition






is the World Wide Web site for Wadsworth and is your direct source to dozens of online resources. At you can find out about supplements, demonstration software, and student resources. You can also send e-mail to many of our authors and preview new publications and exciting new technologies. Changing the way the world learns®



Concepts of Chemical Dependency

Harold E. Doweiko

Australia • Brazil • Canada • Mexico • Singapore • Spain • United Kingdom • United States

Concepts of Chemical Dependency, Sixth Edition Harold E. Doweiko

Publisher/Executive Editor: Lisa Gebo
Senior Acquisitions Editor: Marquita Flemming
Assistant Editor: Monica Arvin
Editorial Assistant: Christine Northup
Technology Project Manager: Barry Connolly
Marketing Manager: Caroline Concilla
Marketing Assistant: Rebecca Weisman
Marketing Communications Manager: Tami Strang
Project Manager, Editorial Production: Megan E. Hansen
Art Director: Vernon Boes
Print Buyer: Doreen Suruki

Permissions Editor: Joohee Lee
Production Service: Anne Draus, Scratchgravel Publishing Services
Copy Editors: Patterson Lamb and Linda Ruth Dane
Illustrator: Greg Draus, Scratchgravel Publishing Services
Cover Designer: Larry Didona
Cover Images: Marijuana and Paraphernalia, © Comstock Images/Getty Images; Cigarette Butts, © Enrique Algarra/Pixtal/Agefotostock; Drug Works and Needle, © AbleStock/Index Stock; Woman under a pill-covered glass table, © Amy Illardo/Photonica
Compositor: Integra
Printer: Malloy Incorporated

© 2006 Thomson Brooks/Cole, a part of The Thomson Corporation. Thomson, the Star logo, and Brooks/Cole are trademarks used herein under license.

Thomson Higher Education
10 Davis Drive
Belmont, CA 94002-3098
USA

ALL RIGHTS RESERVED. No part of this work covered by the copyright hereon may be reproduced or used in any form or by any means—graphic, electronic, or mechanical, including photocopying, recording, taping, Web distribution, information storage and retrieval systems, or in any other manner—without the written permission of the publisher.

Printed in the United States of America
1 2 3 4 5 6 7  09 08 07 06 05

For more information about our products, contact us at:
Thomson Learning Academic Resource Center
1-800-423-0563

For permission to use material from this text or product, submit a request online at Any additional questions about permissions can be submitted by email to [email protected]

Library of Congress Control Number: 2005923112

ISBN 0-534-63284-X

Asia (including India)
Thomson Learning
5 Shenton Way #01-01
UIC Building
Singapore 068808

Australia/New Zealand
Thomson Learning Australia
102 Dodds Street
Southbank, Victoria 3006
Australia

Canada
Thomson Nelson
1120 Birchmount Road
Toronto, Ontario M1K 5G4
Canada

UK/Europe/Middle East/Africa
Thomson Learning
High Holborn House
50/51 Bedford Row
London WC1R 4LR
United Kingdom

Latin America
Thomson Learning
Seneca, 53
Colonia Polanco
11560 Mexico D.F.
Mexico

In loving memory of my wife, Jan



Preface xix


Why Worry About Recreational Chemical Abuse?


Introduction 1
Who Treats Those Who Abuse or Are Addicted to Chemicals? 2
The Scope of the Problem of Chemical Abuse/Addiction 3
The Cost of Chemical Abuse/Addiction in the United States 5
Why Is It So Difficult to Understand the Drug Abuse Problem in the United States? 7
Summary 7


What Do We Mean When We Say Substance Abuse and Addiction?
Introduction 9
The Continuum of Chemical Use 9
Definitions of Terms Used in This Book 11
What Do We Really Know About the Addictive Disorders? 12
The State of the Art: Unanswered Questions, Uncertain Answers
Summary 15


The Medical Model of Chemical Addiction



Introduction 16
Why Do People Abuse Chemicals? 16
What Do We Mean When We Say That Someone Is “Addicted” to Chemicals? 20
Summary 28


Are People Predestined to Become Addicted to Chemicals?
Introduction 29
Multiple Models 29
Reaction Against the Disease Model of Addiction 29
The Final Common Pathway Theory of Addiction 41
Summary 42






Addiction as a Disease of the Human Spirit


Introduction 44
The Rise of Western Civilization, or How the Spirit Was Lost 44
Diseases of the Mind—Diseases of the Spirit: The Mind-Body Question
The Growth of Addiction: The Circle Narrows 46
The Circle of Addiction: Addicted Priorities 47
Some Games of Addiction 48
Honesty as a Part of the Recovery Process 49
False Pride: The Disease of the Spirit 50
Denial, Rationalization, Projection, and Minimization: The Four Horsemen of Addiction 52
Summary 54


An Introduction to Pharmacology


Introduction 55
The Prime Effect and Side Effects of Chemicals 55
Drug Forms and How Drugs Are Administered 56
Bioavailability 58
The Drug Half-Life 62
The Effective Dose 64
The Lethal Dose Index 64
The Therapeutic Index 64
Peak Effects 65
The Site of Action 65
The Blood-Brain Barrier 69
Summary 69


Alcohol: Humans’ Oldest Recreational Chemical


Introduction 70
A Brief History of Alcohol 70
How Alcohol Is Produced 71
Alcohol Today 72
Scope of the Problem of Alcohol Use 72
Pharmacology of Alcohol 73
The Blood Alcohol Level 75
Subjective Effects of Alcohol on the Individual: At Normal Doses in the Average Drinker 76
Effects of Alcohol at Intoxicating Doses: For the Average Drinker 77
Medical Complications of Alcohol Use in the Average Drinker 78
Alcohol Use and Accidental Injury or Death 80
Summary 81





Chronic Alcohol Abuse and Addiction


Introduction 82
Scope of the Problem 82
Is There a “Typical” Alcohol-Dependent Person? 83
Alcohol Tolerance, Dependence, and “Craving”: Signposts of Alcoholism
Complications of Chronic Alcohol Use 85
Summary 99



Abuse and Addiction to the Barbiturates and Barbiturate-like Drugs
Introduction 100
Early Pharmacological Therapy of Anxiety Disorders and Insomnia 100
History and Current Medical Uses of the Barbiturates 101
Pharmacology of the Barbiturates 102
Subjective Effects of the Barbiturates at Normal Dosage Levels 104
Complications of the Barbiturates at Normal Dosage Levels 105
Effects of the Barbiturates at Above-Normal Levels 107
Neuroadaptation, Tolerance to, and Dependence on the Barbiturates 108
Barbiturate-like Drugs 109
Summary 110


Abuse of and Addiction to Benzodiazepines and Similar Agents


Introduction 112
Medical Uses of the Benzodiazepines 112
Pharmacology of the Benzodiazepines 114
Side Effects of the Benzodiazepines When Used at Normal Dosage Levels 115
Neuroadaptation to Benzodiazepines and Abuse/Addiction to These Agents 116
Complications Caused by Benzodiazepine Use at Normal Dosage Levels 118
Subjective Experience of Benzodiazepine Use 120
Long-Term Consequences of Chronic Benzodiazepine Use 121
Buspirone 122
Zolpidem 124
Zaleplon 125
Rohypnol 126
Summary 127


Abuse of and Addiction to Amphetamines and CNS Stimulants
Introduction 128
I. THE CNS STIMULANTS AS USED IN MEDICAL PRACTICE
The Amphetamine-like Drugs 128
The Amphetamines 132






II. CNS STIMULANT ABUSE 136
Scope of the Problem of CNS Stimulant Abuse and Addiction
Effects of the CNS Stimulants When Abused 136
Summary 144





Introduction 146
A Brief Overview of Cocaine 146
Cocaine in Recent U.S. History 147
Cocaine Today 148
Pharmacology of Cocaine 149
How Illicit Cocaine Is Produced 151
How Cocaine Is Abused 152
Subjective Effects of Cocaine When It Is Abused 154
Complications of Cocaine Abuse/Addiction 155
Summary 159


Marijuana Abuse and Addiction


Introduction 161
History of Marijuana Use in the United States 161
A Question of Potency 163
A Technical Point 163
Scope of the Problem of Marijuana Abuse 164
Pharmacology of Marijuana 165
Methods of Administration 167
Subjective Effects of Marijuana 168
Adverse Effects of Occasional Marijuana Use 168
Consequences of Chronic Marijuana Abuse 169
The Addiction Potential of Marijuana 172
Summary 172


Opiate Abuse and Addiction


Introduction 174
I. THE MEDICAL USES OF NARCOTIC ANALGESICS
A Short History of the Narcotic Analgesics 174
The Classification of Narcotic Analgesics 176
Where Opium Is Produced 177
Current Medical Uses of the Narcotic Analgesics 177
Pharmacology of the Narcotic Analgesics 177
Neuroadaptation to Narcotic Analgesics 180




Subjective Effects of Narcotic Analgesics When Used in Medical Practice 182
Complications Caused by Narcotic Analgesics When Used in Medical Practice 182
Fentanyl 184
Buprenorphine 185
II. OPIATES AS DRUGS OF ABUSE 186
The Mystique of Heroin 186
Other Narcotic Analgesics That Might Be Abused 188
Methods of Opiate Abuse 189
Scope of the Problem of Opiate Abuse and Addiction 191
Complications Caused by Chronic Opiate Abuse 192
Overdose of Illicit Opiates 195
Summary 196


Hallucinogen Abuse and Addiction


Introduction 197
History of Hallucinogens in the United States 197
Scope of the Problem 199
Pharmacology of the Hallucinogens 199
Subjective Effects of Hallucinogens 201
Phencyclidine (PCP) 203
Ecstasy: Evolution of a New Drug of Abuse 207
Summary 212


Abuse of and Addiction to the Inhalants and Aerosols
Introduction 213
History of Inhalant Abuse 213
Pharmacology of the Inhalants 214
Scope of the Problem 215
Method of Administration 216
Subjective Effects of Inhalants 216
Complications From Inhalant Abuse
Anesthetic Misuse 218
Abuse of Nitrites 219
Summary 220




The Unrecognized Problem of Steroid Abuse and Addiction
Introduction 221
An Introduction to the Anabolic Steroids
Medical Uses of Anabolic Steroids 222





The Legal Status of Anabolic Steroids 222
Scope of the Problem of Steroid Abuse 222
Sources and Methods of Steroid Abuse 222
Problems Associated With Anabolic Steroid Abuse
Complications of Steroid Abuse 225
Are Anabolic Steroids Addictive? 227
Summary 228


The Over-the-Counter Analgesics: Unexpected Agents of Abuse 229
Introduction 229
A Short History of the OTC Analgesics 229
Medical Uses of the OTC Analgesics 230
Pharmacology of the OTC Analgesics 232
Normal Dosage Levels of OTC Analgesics 235
Complications Caused by Use of OTC Analgesics
Overdose of OTC Analgesics 241
Summary 243



Tobacco Products and Nicotine Addiction



Introduction 245
History of Tobacco Use in the United States 245
Scope of the Problem 246
Pharmacology of Cigarette Smoking 247
The Effects of Nicotine Use 250
Nicotine Addiction 251
Complications of the Chronic Use of Tobacco 252
Summary 259


Chemicals and the Neonate: The Consequences of Drug Abuse During Pregnancy 260
Introduction 260
Scope of the Problem 260
The Fetal Alcohol Spectrum Disorder 261
Cocaine Use During Pregnancy 263
Amphetamine Use During Pregnancy 266
Opiate Abuse During Pregnancy 266
Marijuana Use During Pregnancy 268
Benzodiazepine Use During Pregnancy 269
Hallucinogen Use During Pregnancy 270
Buspirone Use During Pregnancy 270



Bupropion Use During Pregnancy 270
Disulfiram Use During Pregnancy 271
Cigarette Use During Pregnancy 271
Over-the-Counter Analgesic Use During Pregnancy
Inhalant Abuse During Pregnancy 273
Summary 273


Hidden Faces of Chemical Dependency



Introduction 275
Women and Addiction: An Often Unrecognized Problem
Addiction and the Homeless 280
Substance Use Problems and the Elderly 280
The Homosexual and Substance Abuse 284
Substance Abuse and the Disabled 286
Substance Abuse and Ethnic Minorities 287
Summary 289



The Dual-Diagnosis Client: Chemical Addiction and Mental Illness 290
Introduction 290
Definitions 290
Dual-Diagnosis Clients: A Diagnostic Challenge 291
Why Worry About the Dual-Diagnosis Client? 291
The Scope of the Problem 292
Characteristics of Dual-Diagnosis Clients 293
Psychopathology and Drug of Choice 293
Problems in Working With Dual-Diagnosis Clients 299
Treatment Approaches 300
Summary 303


Chemical Abuse by Children and Adolescents


Introduction 304
The Importance of Childhood and Adolescence in the Evolution of Substance-Use Problems 304
Scope of the Problem 305
Why Do Adolescents Use Chemicals? 310
The Adolescent Abuse/Addiction Dilemma: How Much Is Too Much? 313
Possible Diagnostic Criteria for Adolescent Drug/Alcohol Problems 317
The Special Needs of the Adolescent in a Substance-Abuse Rehabilitation Program 318
Summary 319




Codependency and Enabling


Introduction 320
Enabling 320
Codependency 321
Reactions to the Concept of Codependency
Summary 329


Addiction and the Family



Introduction 330
Scope of the Problem 330
Addiction and Marriage 330
Addiction and the Family 332
The Adult Children of Alcoholics (ACOA) Movement
Summary 339


The Evaluation of Substance-Use Problems


Introduction 341
The Theory Behind Alcohol- and Drug-Use Evaluations
Screening 342
Assessment 344
Diagnosis 345
The Assessor and Data Privacy 346
Diagnostic Rules 347
The Assessment Format 348
Other Sources of Information 352
The Outcome of the Evaluation Process 355
Summary 355


The Process of Intervention




Introduction 356
A Definition of Intervention 356
Characteristics of the Intervention Process 357
The Mechanics of Intervention 357
Family Intervention 358
Intervention With Other Forms of Chemical Addiction
The Ethics of Intervention 361
Intervention via the Court System 362
Other Forms of Intervention 364
Summary 364





The Treatment of Chemical Dependency


Introduction 365
Characteristics of the Substance-Abuse Rehabilitation Professional 365
The Minnesota Model of Chemical-Dependency Treatment 367
The Treatment Plan 368
Other Treatment Formats for Chemical Dependency 369
Aftercare Programs 373
Summary 373


Treatment Formats for Chemical-Dependency Rehabilitation
Introduction 374
Outpatient Treatment Programs 374
Inpatient Treatment Programs 377
Inpatient or Outpatient Treatment? 381
Partial Hospitalization Options 383
Summary 385


The Process of Recovery


Introduction 386
The Decision to Seek Treatment 386
The Stages of Recovery 386
Specific Points to Address in the Treatment of Addiction to Common Drugs of Abuse 390
Summary 396


Problems Frequently Encountered in the Treatment of Chemical Dependency 397
Introduction 397
Limit Testing by Clients in Treatment 397
Treatment Noncompliance 397
Relapse and Relapse Prevention 399
“Cravings” and “Urges” 402
The “Using” Dream 403
Controlled Drinking 404
The Uncooperative Client 404
Toxicology Testing 406
The Addicted Patient With Chronic Pain Issues 412
Insurance Reimbursement Policies 412
D.A.R.E. and Psychoeducational Intervention Programs
Summary 415






Pharmacological Intervention Tactics and Substance Abuse 416
Introduction 416
Pharmacological Treatment of Alcohol Abuse and Dependence 416
Pharmacological Treatment of Opiate Addiction 421
Pharmacological Treatment of Cocaine Addiction 427
Pharmacological Treatment of Amphetamine Abuse/Dependence 428
Pharmacological Treatment of Nicotine Dependence 428
Summary 431


Substance Abuse/Addiction and Infectious Disease
Introduction 432
Why Is Infectious Disease Such a Common Complication of Alcohol/Drug Abuse? 432
The Pneumonias 433
Acquired Immune Deficiency Syndrome (AIDS) 434
Tuberculosis (TB) 440
Viral Hepatitis 441
Summary 445


Self-Help Groups


The Twelve Steps of Alcoholics Anonymous 446
Introduction 446
The History of AA 446
Elements of AA 447
AA and Religion 449
One “A” Is for Anonymous 450
AA and Outside Organizations 450
The Primary Purpose of AA 450
Of AA and Recovery 452
Sponsorship 452
AA and Psychological Theory 453
How Does AA Work? 453
Outcome Studies: The Effectiveness of AA 454
Narcotics Anonymous 456
Al-Anon and Alateen 456
Support Groups Other Than AA 457
Criticism of the AA/12-Step Movement 459
Summary 460





The Debate Around Legalization


Introduction 462
The Debate Over Medicalization 462
The “War on Drugs”: The Making of a National Disaster
Summary 473


Crime and Drug Use



Introduction 474
Criminal Activity and Drug Use: Partners in a Dance? 474
Urine Toxicology Testing in the Workplace 476
Unseen Victims of Street Drug Chemistry 477
Drug Analogs: The “Designer” Drugs 478
Adulterants 484
Drug Use and Violence: The Unseen Connection 485
Summary 486
Appendix One: Alcohol Abuse Situation Sample Assessment


Appendix Two: Chemical Dependency Situation Sample Assessment
Appendix Three: The “Jellinek” Chart for Alcoholism
Appendix Four: Drug Classification Schedule
Glossary 496
References
Index








In the years since the terrorist attacks on the World Trade Center on September 11, 2001, national priorities have shifted away from the drug-abuse problem to the war against terrorism. This shift in focus does not mean that the abuse of chemicals has disappeared. Indeed, although the number of adolescents who admit to abusing cannabis has leveled off, it is at a level more than three times as high as that seen in the 1980s. Heroin remains plentiful and cheap. Evidence suggests that the amount of coca under cultivation has remained stable and might even be on the increase, in spite of efforts to persuade local farmers to switch to other crops. These signs indicate that the drug-abuse problem has not disappeared.

The field of addiction treatment is constantly changing. New discoveries in the fields of neurology, neuropsychology, and neuropsychopharmacology have provided new insights into the effects of recreational chemicals on the user’s brain and how the drugs of abuse disrupt the normal function of the user’s neurons. Compounds that were viewed as emerging drugs of abuse just five or six years ago have faded into obscurity, whereas newer chemicals hold the potential to become the latest trend. Pharmaceuticals that once held great promise in the fight against drug abuse, such as LAAM (see Chapter 32), have been found to pose significant health risks for the user and have been removed from the arsenal of medications used to treat alcohol and drug addiction. Access to inpatient rehabilitation centers has been further curtailed since the fifth edition of this text appeared, and methamphetamine use continues to spread from the west to the east coast.

These conditions made a new edition of this text imperative. In order to keep pace with the world of addictions, more than 450 changes have been made to this text. New data have been added in all chapters, many of which have been extensively rewritten, and older, obsolete material has been deleted. A new section on the emerging “Drug Court” movement, a new form of legal intervention, has been added to Chapter 27, for example. Chapter 36 includes new information on tryptamines and phenethylamines, families of chemicals that include many potential or emerging drugs of abuse. The section on gamma hydroxybutyrate (GHB) has also been revised as more information about this and other “date rape” drugs has been uncovered. The material on “ecstasy” has been updated as scientists explore the possibility that this popular drug of abuse might actually be a selective neurotoxin in primates and possibly humans. Two new chapters have been added: Chapter 35 addresses the growing debate on the question of legalization, and Chapter 36 explores the debate on the relationship between substance abuse and criminal behavior. Issues such as the difference between medicalization and full legalization are investigated, and questions are raised about how the Constitution has been reinterpreted in light of the “war on drugs.”

The fast pace of research and the evolving social response to the problem of substance abuse and addiction are two reasons why the field of addictive medicine is so exciting: It is constantly changing. There are few generally accepted answers, a multitude of unanswered questions, and, compared to the other branches of science, few interdisciplinary boundaries to limit exploration of the field. This text has tried to capture the excitement of this process while providing an overview of the field of substance abuse and rehabilitation.

Disclaimer

This text was written in an attempt to share the knowledge and experience of the author with others interested in the field of substance abuse. Every effort has been made to ensure that the information reviewed in this text is accurate, but this book is not designed for, nor should it be used as, a guide to patient care. Furthermore, this text provides a great deal of information about the current drugs of abuse, their dosage levels, and their effects. This information is provided not to advocate or encourage the use or abuse of chemicals; rather, it is reviewed to inform the reader of current trends in the field of drug abuse/addiction. The text is not intended as a guide to self-medication, and neither the author nor the publisher assumes any responsibility for individuals who attempt to use this text as a guide for the administration of drugs to themselves or others or as a guide to treatment.

Acknowledgments

It would not be possible to mention every person who has helped to make this book a reality. However, I must mention the library staff at Lutheran Hospital, La Crosse, for their continued assistance in tracking down obscure references, many of which have been used in this edition of Concepts of Chemical Dependency. I also thank the following reviewers who offered comments:

Riley Venable, Texas Southern University; Maria Saxionis, Bridgewater State College; Fred T. Ponder, Texas A&M University; John B. McIntosh, Penn State University, Altoona; John Jung, California State University, Long Beach; Suzanne Lenhart, Tri-State University; James F. Scorzelli, Northeastern University, Boston; and Yolanda V. Edwards, University of South Carolina School of Medicine.

Finally, I would like to point out that without the support of my late wife, Jan, the earliest editions of this text would never have been published. Until her untimely death, she happily read each chapter of each edition. She corrected my spelling (many, many times over) and encouraged me when I was up against the brick wall of writer’s block. The first time she offered her feedback, it was received with the same openness with which any author greets “constructive criticism.” But in spite of that she persisted with her feedback on each edition, and more often than not she was right. She was indeed my best friend and my “editor in chief.” Although I have attempted to complete the revisions to this sixth edition in a manner that remains true to what I think she would have liked, I do wonder what she would have had to say about this edition of Concepts of Chemical Dependency, and I miss her input.

Concepts of Chemical Dependency



Chapter One

Why Worry About Recreational Chemical Abuse?

• Approximately 25% of patients seen by primary care physicians have an alcohol or drug problem (Jones, Knutson, & Haines, 2003).
• Between 20% and 50% of all hospital admissions are related to the effects of alcohol abuse/addiction (Greenfield & Hennessy, 2004; McKay, Koranda, & Axen, 2004).
• The abuse of illicit drugs is a major cause of ischemic stroke in adults, increasing the individual’s risk of such an event by 1,100% (Martin, Enevoldson, & Humphrey, 1997).

Introduction

Collectively, the substance-use disorders are the most prevalent mental health problem in the United States today (Vuchinich, 2002). But in spite of a “war” on drug abuse that has spanned the last three decades, people still insist on abusing chemicals that change their conscious perception of the world (Schuckit, 2001). In spite of an expenditure of hundreds of billions of dollars in an effort to eliminate recreational chemical abuse, the substance-use disorders continue to be a major problem in this country. Although the frequency of abuse of such substances as cannabis has leveled off, substance use is at levels far above those seen in the 1980s and 1990s. Further, while the abuse of some compounds such as PCP has become rare, the misuse of other chemicals, such as MDMA and heroin, is on the increase.

Proponents of the “war on drugs” point to these trends as evidence that the current approach, a legal approach that seeks to incarcerate those who abuse illicit chemicals, is working. Detractors of this policy point to these same trends as evidence that the war on drugs is a dismal failure and that other approaches to the problem of alcohol/drug abuse must be tried. They defend this position by observing that after more than a century’s effort, virtually every drug that was ever discovered is both easily available and commonly abused by illicit drug abusers in the United States (Hopkins, 1998).

In reality, recreational substance abuse is deeply ingrained in the social life of the United States. For example, the challenge of providing affordable, effective health care to the citizens of the United States has been compounded by the abuse of alcohol and drugs in a number of ways:

Recreational drug use is not simply a drain on the general medical resources of the United States but is also a significant contributing factor to the psychiatric problems that people experience. For example:

• The most common cause of psychosis in young adults is alcohol/drug abuse (Cohen, 1995).
• Suicide is 30 times as common among alcohol-dependent people as it is in the general population (Mosier, 1999). Between 20% and 35% of completed suicides are carried out by alcohol-dependent individuals (Lester, 2000; Preuss et al., 2003).
• Suicide is the cause of death in 35% of all intravenous drug abusers (Neeleman & Farrell, 1997) and 5% of all alcohol-dependent people (Preuss et al., 2003).

The problem of interpersonal violence has contributed to untold suffering in the United States for generations. Fully 56% of all assaults are alcohol-related (Dyehouse & Sommers, 1998). Further, research has found that adults with an alcohol or drug-use disorder were 2.7 times as likely to report having engaged in the physical abuse of a child and 4.2 times as likely to report child neglect as nonusing control subjects (Ireland, 2001). There is also a known relationship between substance abuse and homicide: Rivara et al. (1997) found that illicit drug use in the home increased a woman’s chances of being murdered by a significant other by a factor of 28, even if she herself was not using drugs. Alcohol alone is implicated in half of all homicides committed in the United States (National Foundation for Brain Research, 1992).

The role of alcohol/drugs in the process of victimization has been underscored by study after study:

• The team of Liebschutz, Mulvey, and Samet (1997) found that 42% of a sample of 2,322 women who were seeking treatment for substance-use problems had a history of having been physically or sexually abused at some point in their lives. A quarter of these women said that they were in danger of being revictimized in the near future.
• Of a sample of 802 inpatients being treated for alcoholism, 49% of the women and 12% of the men reported that they had been the victim of some form of sexual abuse (Windle, Windle, Scheidt, & Miller, 1995).

The impact of alcohol/drug abuse on the health care crisis facing the United States in the early years of the 21st century is not limited to the problem of interpersonal violence. For example, between 40% (Liu et al., 1997) and 60% (Hingson, 1996) of the population of the United States will be involved in an alcohol-related motor vehicle accident at some point in their lives. The list goes on and on. Indeed, as one examines the full scope of recreational chemical use/abuse in this country, it becomes increasingly clear that recreational substance abuse exacts a terrible toll from each individual living here. It is a problem that, directly or indirectly, touches every individual in the nation.
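A note on reading the risk statistics cited in this chapter: some figures are expressed as multipliers (“2.7 times as likely,” “a factor of 28”) and others as percent increases over baseline (“increasing the individual’s risk 1,100%”). The two notations are interconvertible, and confusing them understates or overstates risk considerably. The short sketch below is purely illustrative (the function names are ours, not drawn from any cited study):

```python
def percent_increase_to_multiplier(percent_increase: float) -> float:
    """Convert a percent increase over baseline into a risk multiplier.

    A 100% increase doubles the baseline risk; the 1,100% stroke-risk
    increase cited above corresponds to 12 times the baseline risk.
    """
    return 1.0 + percent_increase / 100.0


def multiplier_to_percent_increase(multiplier: float) -> float:
    """Convert a risk multiplier into the equivalent percent increase."""
    return (multiplier - 1.0) * 100.0


print(percent_increase_to_multiplier(1100))             # 12.0 times baseline
print(round(multiplier_to_percent_increase(4.2), 1))    # "4.2 times as likely" = a 320% increase
```

In other words, the 1,100% figure reported by Martin, Enevoldson, and Humphrey (1997) describes roughly a twelvefold elevation in stroke risk, not an eleven-percentage-point change.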

Who Treats Those Who Abuse or Are Addicted to Chemicals?

In spite of the damage done by alcohol/drug abuse or addiction, only 4 cents of every dollar spent by the 50 states was devoted to the prevention and treatment of substance-use problems (Grinfeld, 2001). Nor are the various state governments alone in not addressing the issue of substance abuse. Nationally, less than one-fifth of the physicians surveyed considered themselves prepared to deal with alcohol-dependent patients, whereas less than 17% thought they had the skills necessary to deal with prescription drug abusers (National Center on Addiction and Substance Abuse at Columbia University, 2000). These findings are understandable considering that few “medical schools or residency programs have an adequate required course in addiction.” Further, “most physicians fail to screen for alcohol or drug dependence during routine examinations. Many health professionals view such screening efforts as a waste of time” (McLellan, Lewis, O’Brien, & Kleber, 2000, p. 1689). As a result of this professional pessimism, physicians tend to “resist being involved in negotiating a referral and brokering a consultative recommendation when alcoholism is the diagnosis” (Westermeyer, 2001, p. 458).

Bernstein, Tracey, Bernstein, and Williams (1996) investigated the outcome of this neglect. The authors examined the ability of emergency department physicians to detect alcohol-related problems in over 210 patients. The patients completed an evaluation process that included three different tests: the Ever A Problem (EAP) quiz, the CAGE (discussed in Chapter 26), and the QED Saliva Alcohol Test (SAT). Forty percent of the patients were found to have an alcohol-use problem on at least one of the three measures utilized, yet less than a quarter of these patients were referred for further evaluation or treatment. The authors concluded that professional beliefs about the hopelessness of attempting to intervene when a patient had an alcohol-use problem were still a major reason that physicians did not refer such patients to treatment. In spite of the known relationship between substance abuse and traumatic injury, alcoholism remains undetected or undiagnosed by physicians (Greenfield & Hennessy, 2004).
This suggests that although the benefits of professional treatment for alcohol abuse/addiction have been demonstrated time and again, many physicians continue to consider alcohol and illicit drug-use problems to be virtually untreatable (National Center on Addiction and Substance Abuse at Columbia University, 2000).


However, this diagnostic blindness is not limited to physicians. The typical training program for registered nurses includes fewer than 2 to 4 hours of classwork on addictive diseases, and many programs have no formal training at all on this disorder (Coombs, 1997). Further, even though alcohol use/abuse is a known risk factor for violence within the family, marital/family therapists only rarely ask the proper questions to identify alcohol/drug abuse/dependence. When a substance-use problem within a marriage or family is not uncovered, therapy proceeds in a haphazard fashion: Vital clues to a very real illness within the family are missed, and the attempt at family or marital therapy is ineffective unless the addictive disorder is identified and addressed.

In spite of the obvious relationship between substance abuse and the various forms of psychopathology, 74% of the psychologists surveyed admitted that they had had no formal education in the area of the addictions (Aanavi, Taube, Ja, & Duran, 2000). Most psychologists in practice rate their graduate school training in the area of drug addiction as inadequate (Cellucci & Vik, 2001). In a very real sense, whether or not substance abuse/addiction is a true “disease,” the health care and mental health professions have responded to this disorder by not training practitioners to recognize its signs or to treat it.

These findings are important because they show the marked lack of attention and professional training the mental health and health care professions have given to the problem. But perhaps this is because drug use/abuse in the United States is such a minor problem that dealing with it does not require the training of large numbers of professionals. In the next section, the scope of substance abuse/addiction will be examined so that you can decide whether it really is as serious as it appears.

The Scope of the Problem of Chemical Abuse/Addiction

Globally, 3% of the population, or 185 million people, are estimated to use an illicit substance at least once each year (United Nations, 2004). In the United States, 35% of men and 18% of women are predicted to develop a substance-use disorder at some point during their lives (Rhee et al., 2003). At first glance, these estimates seem to suggest that substance-use problems are
more common in the United States than elsewhere in the world, but keep in mind that these two research studies utilize two different measures: annual drug use versus estimated lifetime prevalence of substance-use problems. The underlying assumptions on which the two studies are based are often vastly different.

One dramatic and frightening estimate of the scope of substance-use problems in the United States was offered by Wilens (2004a, b), who suggested that between 10% and 30% of adults have a substance-use disorder. These figures are indeed quite alarming and were consistent with the findings of other research studies (Kessler et al., 1994; Kessler et al., 1997). The data from each of these studies were based on the responses of a sample of 8,098 individuals who took part in the National Comorbidity Survey. The sample was selected to approximate the characteristics of the population of the United States as a whole (in terms of age, sex, and so on), providing an overview of the proportion of the population that would meet criteria for a diagnosis of one of 14 separate psychiatric conditions both in the preceding 12 months and during the respondent's lifetime.

People in the United States are curious about illicit drugs: An estimated 70 million people in this country have used an illicit substance at least once (Leshner, 1997b). In contrast to this number, only about 19.5 million people in the United States above the age of 12 were thought to have abused an illicit chemical at some point, and only 5.3 million of this number were addicted to a drug(s) (Office of National Drug Control Policy, 2004). These figures were similar to those suggested a decade earlier in the Harvard Mental Health Letter ("Strong medicine," 1995): that 5% to 10% of the adults in the United States had a "serious alcohol problem" (p. 1) and that another 1% to 2% had "a serious illicit drug problem" (p. 1).

The intravenous drug addict is often seen as a stereotype of the addicted person.
Yet only 1.5 million people in the entire United States are estimated to be intravenous drug users (Work Group on HIV/AIDS, 2000). This estimate includes both drug abusers and addicts, yet this total is less than 1% of the estimated population of the country. However, the wide differences between the various estimates of those who are substance abusers or are addicted to drugs in this
country underscore one serious shortcoming in the field of substance-abuse rehabilitation: the lack of clear data. Depending on the research study cited, substance abuse is or is not a serious problem, is or is not getting worse (or better), will or will not be resolved in the next decade, and is something that parents should or should not worry about. The truth is that large numbers of people use one or more recreational chemicals, but only a small percentage of these people will ultimately become addicted to the chemical(s) being abused (Peele, Brodsky, & Arnold, 1991). In the next section, we look at an overview of substance abuse in this country.

Estimates of the problem of alcohol use, abuse, and addiction. Surprisingly, the use of alcohol in the United States has been declining since around 1980 and has dropped about 15% since then (Musto, 1996). But alcohol remains a popular recreational chemical in the United States, used by an estimated 119 million people (Office of National Drug Control Policy, 2004). Of this number, 16.27 million are thought to be physically dependent on it (Office of National Drug Control Policy, 2004).

There is a discrepancy in the amount of alcohol consumed by casual drinkers compared to problem drinkers: Just 34% of the population in this country consumes 62% of all of the alcohol produced (Kotz & Covington, 1995). Approximately 10% of those who drink alcohol on a regular basis will become alcohol dependent (Kotz & Covington, 1995). However, researchers disagree on the exact scope of alcohol addiction in the United States: Estimates range from 9 million (Ordorica & Nace, 1998) to 12 million (Siegel, 1989) to perhaps as many as 16.27 million people (Office of National Drug Control Policy, 2004).

The majority of those who abuse or are addicted to alcohol in the United States are male. But this does not mean that alcohol abuse/addiction is exclusively a male problem.
The ratio of male to female alcohol abusers/addicts is thought to fall between 2:1 and 3:1 (Blume, 1994; Cyr & Moulton, 1993; Hill, 1995). These figures suggest that significant numbers of women are also abusing or addicted to alcohol.

Because alcohol can be purchased legally by adults over the age of 21, many people tend to forget that it is also a drug. However, the grim reality is that this "legal" chemical makes
up the greatest part of the drug abuse/addiction problem in this country. Franklin (1987) stated, for example, that alcoholism alone accounts for 85% of drug addiction in the United States. This is not surprising, as alcohol is the most commonly abused chemical in the world (Lieber, 1995).

Estimates of the problem of narcotics abuse and addiction. When many people hear the term drugs of abuse, narcotics, especially heroin, are the drugs they think of. Although narcotic analgesics have the reputation of being quite addictive, only about half of those who abuse these drugs become addicted to them (Jenike, 1991). Globally, around 10 million people are estimated to abuse or be addicted to heroin (Milne, 2003). In the United States, 810,000 people are estimated to be dependent on opiates, and the problem of opiate addiction probably costs society about $21 billion annually (Fiellin, Rosenheck, & Kosten, 2001). This is a far different estimate from the one offered by Herbert Kleber (quoted in Grinfeld, 2001), who suggested there are approximately 1 million heroin-dependent people in the United States. About half of the heroin-addicted individuals in the United States are thought to live in New York City (Kaplan, Sadock, & Grebb, 1994; Witkin & Griffin, 1994). Approximately 20% of those addicted to opiates are women (Krambeer, von McKnelly, Gabrielli, & Penick, 2001). Given a median estimate of 800,000 heroin-dependent people in the United States, this would mean that approximately 160,000 women in this country are addicted to opiates.

There is a hidden population of opiate abusers in the United States, however: individuals who have regular jobs, and thus private health care insurance, but who abuse or are addicted to opiates. Fully 76% of illicit drug abusers in the United States are employed, as are 81% of the binge drinkers and 81% of the heavy drinkers (Lowe, 2004).
It is unlikely that these individuals will appear in estimates of drug addiction, and very little is known about this particular population. There are other aspects of opiate abuse/addiction that also have never been studied. For example, some pharmaceutical narcotic analgesics are known to be diverted to the illicit drug market. However, virtually no information is available on this problem, and we don’t know whether the person who abuses
pharmaceuticals is similar to, or markedly different from, the person who abuses illicit narcotics. Thus, the estimate of 500,000–1,000,000 intravenous heroin addicts must be accepted as only a minimal estimate of the narcotics-abuse/addiction problem in the United States.

Estimates of the problem of cocaine abuse and addiction. Cocaine abuse in the United States peaked in the mid-1980s, but cocaine remains a popular drug of abuse. Globally, an estimated 15 million people abuse or are addicted to cocaine, the vast majority of whom are thought to live in North America (Milne, 2003). In contrast, Grinfeld (2001) estimated that there were 2.5 million cocaine addicts in the United States. Surprisingly, in spite of cocaine's reputation as an addictive substance, only a fraction of those who use it ever actually become addicted to it. Researchers now believe that only 3% to 20% of users go on to become addicted to this substance (Musto, 1991). Other researchers have suggested that only one cocaine user in six (Peele, Brodsky, & Arnold, 1991) to one in twelve (Peluso & Peluso, 1988) was actually addicted to the drug.

Estimates of the problem of marijuana abuse/addiction. Marijuana is the most commonly abused illegal drug in the United States (Kaufman & McNaul, 1992) as well as Canada (Russell, Newman, & Bland, 1994). Some estimate that approximately 25% of the entire population of the United States, or more than 70 million people, have used marijuana at least once, and that there are 9 million "regular" users of marijuana in this country (Angell & Kassirer, 1994, p. 537). Of this number, approximately 3 million are thought to be addicted to the drug (Grinfeld, 2001).

Estimates of the problem of hallucinogenic abuse. As with marijuana, there are questions about whether hallucinogenics may be addictive.
For this reason, this text speaks of the “problem of hallucinogenic abuse.” Perhaps 10% of the entire population of the United States have used a hallucinogen at least once (Sadock & Sadock, 2003). However, hallucinogenic use is actually quite rare, and of those young adults who have used hallucinogenic drugs, only 1% or 2% will have done so in the past 30 days, according to the authors. These data suggest that the problem of addiction to hallucinogenics is exceedingly rare.

Estimates of the problem of tobacco addiction. Tobacco is a special product. Like alcohol, it is legally sold to adults. Unfortunately, tobacco products are also readily obtained by adolescents, who make up a significant proportion of those who use tobacco. Researchers estimate that approximately 46 million Americans smoke cigarettes (Brownlee et al., 1994). Of this number, an estimated 24 million smokers are male, and 22.3 million are female.

The Cost of Chemical Abuse/Addiction in the United States

Although the total number of people in this country who abuse or are addicted to recreational chemicals is limited, recreational substance use still extracts a terrible toll from society. Alcohol and drug abuse by some estimates cost $81 billion in lost productivity each year in the United States: $37 billion because of premature death and $44 billion because of illness (Lowe, 2004). Each year in this country an estimated 420,000 smokers die from tobacco-related illness, and an additional 35,000 to 56,000 nonsmokers die as a result of their exposure to secondhand cigarette smoke (Mokdad, Marks, Stroup, & Gerberding, 2004; Benson & Sacco, 2000). Each year, an estimated 100,000 (Fleming, Mihic, & Harris, 2001; Naimi et al., 2003; Small, 2002) to 200,000 (Hyman & Cassem, 1995; Kaplan, Sadock, & Grebb, 1994) people die from alcohol-related illness or accidents.

The annual drug-related death toll as a result of drug-related infant deaths, overdose-related deaths, suicides, homicides, motor vehicle accident deaths, and the various diseases associated with drug abuse in the United States is estimated at 16,000 (Craig, 2004) to 17,000 (Mokdad et al., 2004) people a year. However, even this number is only a small fraction, roughly one twenty-fifth, of the number of people thought to die as a result of just tobacco use each year in this country, yet tobacco remains a legal product for adults.

Collectively, all forms of recreational chemical abuse account for one-fourth to one-third of all deaths in the United States each year (Hurt et al., 1996). The majority of these substance-related deaths are caused by alcohol/tobacco abuse. As these figures suggest, chemical use, or abuse, is a significant factor in premature
death, illness, loss of productivity, and medical expenses. However, because chemical abuse/addiction has so many hidden faces, behavioral scientists believe that these are only rough estimates of the annual impact of alcohol/drug use problems in the United States. Consider, for example, the hidden role of substance abuse as a background cause of traumatic injuries: 71% of patients admitted to a major trauma center had evidence of alcohol/illicit drugs in their bodies at the time (Cornwell et al., 1998).

The cost of alcohol abuse in the United States. A number of factors must be considered in attempting to calculate the annual financial cost of alcohol abuse and addiction in this country. Included in this list are direct and indirect costs, such as the cost of alcohol-related criminal activity, motor vehicle accidents, destruction of property, the cost of social welfare programs, private and public hospitalization costs for alcohol-related illness, and the cost of public and private treatment programs. Alcohol abuse/addiction is thought to cost society $185 billion a year in the United States alone, of which $26 billion is for direct health care costs (Petrakis, Gonzalez, Rosenheck, & Krystal, 2002; Smothers, Yahr, & Ruhl, 2004). The cost of alcohol-related lost productivity in this country alone is estimated at $67.7 billion (Craig, 2004) to $138 billion per year (Brink, 2004).

In recent years, politicians have spoken at length about the need to control the rising cost of health care in the United States. Alcohol-use disorders are significant factors in the growing health care financial crisis: Although only 5% to 10% of the general population has an alcohol-use problem, 10% to 20% of ambulatory patients and 25% to 40% of hospitalized patients suffer from some complication of alcohol use/abuse (Mersey, 2003; Weaver, Jarvis, & Schnoll, 1999).
Further, 15% to 30% of the nursing home beds in this country are occupied by individuals whose alcohol use has contributed in part to their need for placement in a nursing home (Schuckit, 2000). Many of these nursing home beds are supported partly by public funds, making chronic alcohol abuse a major factor in the growing cost of nursing home care for the elderly. Alcohol-related costs of vehicle and property destruction amount to $24.7 billion a year in the United States,
according to Craig (2004), with alcohol being a factor in approximately 40% of all fatal motor vehicle accidents. Alcohol abuse is thought to be a factor in 25% to 60% of all accidents resulting in traumatic injuries (Dyehouse & Sommers, 1998). Each year, an estimated 85,000 to 140,000 people in this country lose their lives because of alcohol use/abuse (Mokdad et al., 2004). Individuals who have been injured as a result of alcohol use/abuse require medical treatment, and ultimately this medical treatment is paid for by the public in the form of higher insurance costs and higher taxes. Indeed, alcohol-use disorders are thought to account for 15% of the money spent for health care in the United States each year (Schuckit, 2000). Yet in spite of the pain and suffering that alcohol causes, only 5% (Prater, Miller, & Zylstra, 1999) to 10% (Wing, 1995) of alcohol-dependent individuals are ever identified and referred to a treatment program.

The cost of tobacco use. Although it is legally produced and may be consumed by adults without legal consequences, tobacco extracts a terrible cost. Estimates of the economic cost of cigarette smoking range from $53 to $73 billion in just direct medical costs in the United States, plus an additional $47 to $82 billion a year in lost productivity (Anczak & Nogler, 2003; Patkar, Vergare, Batka, Weinstein, & Leone, 2003; "Cigarette Smoking Attributable Morbidity . . . ," 2004). Globally, more than 3 million people die each year as a result of smoking-related illness; 435,000 of these live in the United States (Mokdad et al., 2004; Patkar et al., 2003). It is believed that one in every five deaths in the United States can be traced to smoking-related disease (Miller, 1999).

The cost of illicit substance abuse.
A number of factors must be included in any estimate of the cost of recreational drug use in the United States, including the estimated financial impact of premature death or illness caused by substance abuse, lost wages from those who lose their jobs as a result of substance abuse, the financial losses incurred by victims of drug-related crimes, and the expected costs of drug-related law-enforcement activities. With this in mind, researchers have suggested that the annual economic cost of recreational chemical use in the United States is approximately $383 per person (Swan, 1998). The total annual economic impact of
illicit chemical use/abuse is estimated at between $110 billion (Connors, Donovan, & DiClemente, 2001) and $276 billion per year (Stein, Orlando, & Sturm, 2000). No matter which of these estimates you accept as most accurate, drug abuse is clearly an expensive luxury.

Drug use as an American way of life. Notice that in the last paragraph drug abuse was identified as a "luxury." To see how we as a nation have come to value recreational chemical use, consider that money spent on illicit recreational chemicals is not used to buy medical care, food, shelter, or clothing, but is spent simply for personal pleasure. In the last years of the 20th century, the annual expenditure for illicit recreational chemicals in the United States was a sum greater than the total combined income of the 80 poorest Third World countries (Corwin, 1994).

In conclusion, there is no way to fully estimate the personal, economic, or social impact that these various forms of chemical addiction have had on society. When one considers the possible economic impact of medical costs incurred, lost productivity, or other indirect costs from such "hidden" drug abuse and addiction, one can begin to appreciate the toll that chemical abuse and addiction have inflicted.

Why Is It So Difficult to Understand the Drug Abuse Problem in the United States?

For the past two generations, politicians have spoken about society's war on drug use/abuse. One of the basic strategies of this ongoing war has been the exaggeration of the dangers associated with chemical use (Musto, 1991; Peele, 1994). This technique is known as disinformation, and it seems to have been almost an unofficial policy of the government's antidrug efforts to distort and exaggerate the scope of the problem and the dangers associated with recreational drug use.

An excellent example of this "disinformation policy" is the statement made by U.S. Representative Vic Fazio, who, in calling for legislation to control access to certain chemicals that might be used to manufacture illicit methamphetamine, spoke of "a generation of meth-addicted crank babies . . .
rapidly filling our nation's hospitals" ("Politicians discover," 1996, p. 70). This statement came as a surprise to health care professionals: There was no epidemic of methamphetamine-addicted babies. But this did not prevent the false statement from being offered as a "fact" in the United States House of Representatives.

For more than two generations, the media have presented drugs in such a negative light that "anyone reading or hearing of them would not be tempted to experiment with the substances" (Musto, 1991, p. 46). Unfortunately, such scare tactics have not worked. For example, in the mid-1980s the media presented report after report on the dangers of chemical addiction yet consistently failed to point out that only 5.5 million Americans (or about 2% of the then-current population of approximately 260 million) were addicted to illegal drugs (Holloway, 1991).

It is not the goal of this text to advocate substance use, but there are wide discrepancies between the scope of recreational drug use as reported in the mass media and that reported in the scientific research. For example, Wilens (2004a, b) suggested that between 10% and 30% of the adults in the United States have a substance-use disorder of some kind. In contrast, other researchers have suggested that only a small percentage of the U.S. population was using illicit chemicals. Given these wide discrepancies, the most plausible conclusion is that much of what has been said about the drug-abuse "crisis" in the United States has been tainted by misinformation, or disinformation. To understand the problem of recreational chemical use/abuse, it is necessary to look beyond the "sound bites" or the "factoids" of the mass media and the politicians.

Summary

Researchers estimate that at any point in time, 2% to 10% of American adults either abuse or are addicted to illegal drugs. Although this percentage would suggest that large numbers of people are using illicit chemicals in this society, it also suggests that the drugs of abuse are not universally addictive. The various forms of chemical abuse/addiction discussed here reflect different manifestations of a unitary
disorder: chemical abuse/addiction. Finally, although drug addiction is classified as a disease, most physicians are ill-prepared to treat substance-abusing patients. In this chapter, we have examined the problem of recreational drug use and its impact on society. In later sections of this book, we will explore in
detail the various drugs of abuse, their effects on the user, the consequences of their use, and the rehabilitation process available for those who are abusing or addicted to chemicals. This information should help you to better understand the problem of recreational substance use in this country.


What Do We Mean When We Say Substance Abuse and Addiction?

Introduction

The last chapter examined substance abuse/addiction as an under-recognized social problem. Like many problem areas, the world of substance abuse and drug rehabilitation has its own language. This chapter presents some of the more common concepts and terms used in this field.

The Continuum of Chemical Use

People frequently confuse chemical use with abuse and addiction. Indeed, these terms are often mistakenly used as if they were synonymous, even in clinical research studies (Minkoff, 1997). In reality, recreational alcohol/drug use, like most forms of human behavior, falls on a continuum (Kaminer, 1999). Complete abstinence is at one end of the continuum; physical addiction to a chemical is at the opposite end (McCrady & Epstein, 1995). Between these two extremes are various patterns of chemical use that differ in the intensity with which people engage in substance use and the consequences of this behavior.

In their discussion of illegal substance use, Cattarello, Clayton, and Leukefeld (1995) suggested that "people differ in their illicit drug use. Some people never experiment; some experiment and never use again. Others use drugs irregularly or become regular users, whereas others develop pathological and addictive patterns of use" (p. 152). In this statement, the authors identified five different patterns of recreational chemical use: (a) total abstinence, (b) a brief period of experimentation, followed by a return to abstinence, (c) irregular, or occasional, use of illicit chemicals, (d) regular use of chemicals, and (e) the pathological or addictive pattern of use that is the hallmark of physical dependence on chemicals.

Even the stage of addiction to alcohol/drugs is not uniform. Rather, "drug use is considered a normal learned behavior that falls along a continuum ranging from patterns of little use and few problems to excessive use and dependence" (Budney, Sigmon, & Higgins, 2003, p. 249). Unfortunately, there are no firm boundaries between the points on a substance-use continuum (Sellers et al., 1993). Only the end points, total abstinence and active physical addiction to chemicals, remain relatively fixed.

The main advantage of a drug-use continuum is that it allows us to classify chemical use of various intensities and patterns. Drug use/abuse/addiction thus becomes a behavior with a number of possible intermediate steps between the two extreme points, not a "condition" that either is or is not present. For the purpose of this text, we will view the phenomenon of recreational alcohol/drug use along the continuum shown in Figure 2.1.

FIGURE 2.1 The continuum of chemical use: Level 0, total abstinence from drug use; Level 1, rare/social use of drugs; Level 2, heavy social use/early problem use of drugs; Level 3, heavy problem use/early addiction to drugs; Level 4, clear addiction to drugs.

The first point in the continuum presented in Figure 2.1 is Level 0: Total abstinence. Individuals whose substance use falls in this category abstain from all alcohol/drug use, and they present no immediate risk for substance-use problems (Isaacson & Schorling, 1999).

The second category is Level 1: Rare/social use. This level includes experimental use and presents a low risk for a substance-use disorder on the continuum suggested by Isaacson and Schorling (1999). Individuals in this category would only rarely use alcohol or chemicals for recreational purposes. They would not experience any of the social, financial, interpersonal, medical, or legal problems that are the hallmark of the pathological use of chemicals. Further, people whose substance use is at this level would not demonstrate the loss of control over their chemical use that is found at higher levels of the continuum. Their chemical use would not pose any threat to their lives.

Level 2: Heavy social use/early problem drug use. A person whose chemical use falls at this point in the continuum would (a) use alcohol/drugs in a manner that is clearly above the norm for society and/or (b) begin to experience various combinations of legal, social, financial, occupational, and personal problems associated with chemical use. Individuals whose substance use falls in this range could be classified as being at risk for a substance-use disorder (Isaacson & Schorling, 1999), as substance abusers, or as problem drinkers. Individuals in this category are more numerous than those who are clearly addicted to chemicals: Sobell and Sobell (1993) found, for example, that problem drinkers were four times as numerous as alcohol-dependent individuals.

At this level of chemical use, individuals begin to manifest symptoms of a behavioral disorder in which they make poor choices about their use of a recreational chemical but are potentially still able to control their use (Minkoff, 1997). They might try to hide or deny the problems that arise from their chemical use. Fortunately, many of those who reach this point in the drug-use continuum will learn from their experience and alter their behavior to avoid future problems. Thus, at this level of use, the individual is not addicted to chemicals.

Level 3: Heavy problem use/early addiction. At level 3, the alcohol or chemical use has clearly become a problem. Indeed, this person may have become addicted to chemicals, although he or she may argue the point.

Someone whose chemical abuse is at this level has started to experience medical complications as well as classic withdrawal symptoms when he or she is unable to continue the use of drugs/alcohol. Isaacson and Schorling (1999) classified individuals at this level as engaging in "problem use." They are often preoccupied with substance use and have lost control over their chemical use (Brown, 1995; Gordis, 1995). Shute and Tangley (1997) estimated that 40 million people in the United States abuse alcohol but are not dependent on it. These individuals would fall into categories 3 and 4 on the continuum.

Level 4: Clearly addicted to drugs. At this point the person demonstrates all of the symptoms of the classic addiction syndrome, in combination with the multiple social, medical, legal, financial, occupational, and personal problems that are the hallmark of an alcohol/drug dependency. A person whose chemical use falls at this point in the continuum would clearly have the physical disorder of alcohol/drug dependency (Minkoff, 1997); in the assessor's mind, this individual is clearly addicted. Even at this level of substance use, the addicted individual might try to rationalize away or deny problems associated with his or her alcohol or drug use. More than one elderly alcoholic, for example, has tried to explain away abnormal liver function as the aftermath of a childhood illness. However, to an impartial observer, the person at this level is clearly addicted to alcohol or drugs.

Admittedly, this classification system, like all others, is imperfect. The criteria used to determine an individual’s level of use are arbitrary and subject to discussion. It is often “the variety of alcohol related problems, not any unique criterion, that captures what clinicians really mean when they label a person alcoholic” (Vaillant, 1983, p. 42). However, even in the case of the opiates, some individuals will use these drugs, perhaps even on a regular basis, and not become addicted. Physical addiction is just one point on the continuum of drug-use styles.

Definitions of Terms Used in This Book

To understand each other when they communicate about the phenomenon of substance abuse, the people who study this problem need a common language. This section presents definitions of some of the most common terms in this field.

Social use. "Social use" of a substance is defined by traditional social standards. Currently, alcohol is the chemical most frequently found within a social context, often being used in religious or family functions. In some circles, marijuana is also used in a social context, although because it is a controlled substance, it is less acceptable than alcohol.

Substance abuse. Substance abuse occurs when an individual uses a drug without a legitimate medical need to do so. In the case of alcohol, the person is drinking in excess of accepted social standards (Schuckit, 1995b). Thus the definition of substance abuse is based on current social standards. The individual who abuses a chemical might be said to have made poor choices regarding use of that substance, but he or she is not addicted to it (Minkoff, 1997).

Drug-of-choice. Clinicians once spoke about the individual's drug of choice as an important component of the addictive process. They assumed that the drug a person would use given the choice was an important clue to the nature of his or her addiction. However, little emphasis is currently put on the individual's drug of choice (Walters, 1994). One reason for this change is that the nature of addiction itself is changing: In this era of polypharmacology, it is rare for a person to be addicted to just one chemical. For example, many stimulant users will drink alcohol or use benzodiazepines to control the side effects of cocaine or amphetamines.

Addiction/dependence. Technically, addiction is a term that is poorly defined, and most scientists prefer the more precise term dependence (Shaffer, 2001). In this text, these terms will be used interchangeably. Physical dependence on alcohol or drugs might be classified as

a primary, chronic disease with genetic, psychosocial and environmental factors influencing its development and manifestations. The disease is often progressive and fatal. It is characterized by impaired control over drinking, preoccupation with the drug alcohol, use of alcohol despite adverse consequences, and distortions in thinking. (Morse & Flavin, 1992, p. 1013)

This definition contains all of the core concepts used to define drug addiction. Each form of drug addiction is viewed as a (a) primary disease (b) with multiple manifestations in the person's social, psychological, spiritual, and economic life; the disease (c) is often progressive, (d) potentially fatal, and (e) marked by the individual's inability to control the use of the drug; the person has (f) a preoccupation with chemical use and, in spite of its many consequences, (g) develops a distorted way of looking at the world that supports continued use of that chemical. In addition, dependence on a chemical is marked by (a) the development of tolerance to its effects and (b) a characteristic withdrawal syndrome when the drug is discontinued (Schuckit, 2000). Each of these symptoms of addiction will be discussed later in more detail.

Tolerance develops over time, as the individual's body struggles to maintain normal function in the presence of one or more foreign chemicals. Technically, there are several different types of tolerance. In this text, we will limit our discussion to just two: (a) metabolic tolerance and (b) pharmacodynamic tolerance. Metabolic tolerance develops when the body becomes effective in biotransforming a chemical into a form that can be






easily eliminated from the body. (The process of biotransformation will be discussed in more detail in Chapter 6.) The liver is the main organ involved in biotransformation. In some cases, constant exposure to a chemical causes the liver to become more efficient at breaking it down, making a given dose less effective over time.

Pharmacodynamic tolerance describes the increasing insensitivity of the central nervous system to the drug’s effects. When the cells of the central nervous system are continuously exposed to a chemical, they will often try to maintain normal function by making minute changes in their structure to compensate for the drug’s effects. These cells then become less sensitive to the effects of that chemical, and the person must use more of the drug to achieve the initial effect.

If used for a long enough period of time, the major recreational chemicals will bring about a characteristic withdrawal syndrome. The exact nature of withdrawal will vary depending on the class of drugs being used, the length of time the drug is used, and other factors such as the individual’s state of health. But each group of drugs will produce certain physical symptoms when the person stops taking them. A rule of thumb is that the withdrawal syndrome will include symptoms that are opposite to those induced by the drug. In clinical practice, the existence of a withdrawal syndrome is evidence that pharmacodynamic tolerance has developed.

The withdrawal syndrome is caused by the absence of the chemical to which the central nervous system had previously adapted. When the drug is discontinued, the central nervous system will go through a period of readaptation, as it learns to function normally without the drug being present. During this period of time, the individual will experience the physical signs of withdrawal. This process is clearly seen during alcohol withdrawal. Alcohol functions on the cells of the central nervous system much like the brakes on your car.
If you attempt to drive while the brakes are engaged, you might eventually force the car to go fast enough to meet the posted speed limits; but if you were to release the pressure on the brakes suddenly, the car would leap ahead because the brakes would no longer be impeding its forward motion. You would have to ease up on the gas pedal so the engine would slow enough to keep you within the posted speed limit.

Chapter Two

During that period of readjustment, the car would, in a sense, be going through withdrawal. Much the same thing happens in the body when the individual stops using drugs. The body must adjust to the absence of a chemical that, previously, it had learned would always be there. This withdrawal syndrome, like tolerance to the drug’s effects, provides strong evidence that the individual is addicted to one or more chemicals.

The Growth of New “Addictions”

Not only does the popular press exaggerate the dangers of chemical abuse, but society also tends to speak of “addictions” to a wide range of behaviors/substances, including food, sex, gambling, men, women, play, television, shopping, credit cards, making money, carbohydrates, shoplifting, unhappy relationships, french fries, lip balm, and a multitude of other “non-drug” behaviors or substances (Shaffer, 2001). This expanded use of the term addiction does not appear to have an end in sight, although it may have reached its zenith with the formation of “Lip Balm Anonymous” (Shaffer, 2001). Fortunately, there is little evidence that non-drug-centered behaviors can result in physical addiction. In this text, the term addiction will be limited to physical dependence on alcohol and the chemical agents commonly known as “drugs of abuse.”

What Do We Really Know About the Addictive Disorders?

If you were to watch television talk shows or read a small sample of the self-help books on the market, you would think that researchers fully understand the causes and treatment of drug abuse. Nothing could be further from the truth! Much of what is “known” about addiction is based on mistaken assumptions, clinical theory, or, at best, incomplete data. An excellent example of how incomplete data might influence treatment theory is that much of the research on substance abuse is based on a distorted sample of people: those who are in treatment for substance-abuse problems (Gazzaniga, 1988). Virtually nothing is known about people who use chemicals on a social basis but never become addicted, or those


What Do We Mean When We Say Substance Abuse and Addiction?

who are addicted to chemicals but recover from their chemical-use problems without formal intervention or treatment. A serious question that must be asked is whether individuals in treatment are representative of all drug/alcohol addicted persons. For example, individuals who seek treatment for a substance-use disorder are quite different from those who do not (Carroll & Rounsaville, 1992). As a group, alcohol/drug addicted people who do not seek treatment seem better able to control their substance use and to have shorter drug-use histories than people who seek treatment. This may be why the majority of those who abuse chemicals either stop or significantly reduce their chemical use without professional intervention (Carroll & Rounsaville, 1992; Humphreys, Moos, & Finney, 1995; Mayo Foundation for Medical Education and Research, 1989; Peele, 1985, 1989; Tucker & Sobell, 1992). It appears that only a minority of those who begin to use recreational chemicals lose control over their substance use and require professional intervention. Yet it is on this minority that much of the research on recognition and treatment of substance-abuse problems is based.

Consider, for a moment, the people known as “chippers.” They make up a subpopulation of drug users about whom virtually nothing is known. They seem to be able to use a chemical, even one supposedly quite addictive, only when they want to, and then to discontinue its use when they wish to do so. Researchers are not able to make even an educated guess as to their number. Chippers are thought to use chemicals in response to social pressure, and then to stop using when the social need has passed. But this is only a theory, and it might not be supported by research.

Another reason that much of the research in substance abuse rehabilitation is flawed is that a significant proportion is carried out either in Veterans Administration (VA) hospitals or in public facilities such as state hospitals.
However, individuals in these facilities are not automatically representative of the “typical” alcohol/drug dependent person. For example, to be admitted to a VA hospital, the individual must have successfully completed a tour of duty in the military. This means that the person is quite different from those who either never enlisted in the military or who enlisted but were unable to complete a tour of duty. The alcohol/drug addict who is

employed and able to afford treatment in a private treatment center might be far different from the indigent alcohol/drug dependent person who must be treated in a publicly funded treatment program.

Further, only a small proportion of the available literature on the subject of drug addiction addresses forms of addiction other than alcoholism. An even smaller proportion addresses the impact of recreational chemical use on women (Cohen, 2000). Much of the research conducted to date has assumed that alcohol/drug use is the same for men and women, overlooking possible differences in how men and women come to use chemicals and the differing ways addiction affects them. Further, although children and adolescents have long been known to abuse chemicals, there is still virtually no research on drug abuse/addiction in this group. Yet, as will be discussed in Chapter 21, drug abuse in this population is a serious problem. Children and adolescents who abuse chemicals are not simply small adults, and research done on adults cannot be accurately generalized to them.

Thus, much of what we think we know about addiction is based on research that is quite limited, and many important questions remain to be answered. Yet this is the foundation on which an entire industry of treatment has evolved. It is not our purpose to deny that large numbers of people abuse drugs or that such drug abuse carries with it a terrible cost in personal suffering. It is also not our purpose to deny that many people are harmed by drug abuse. We know that people become addicted to chemicals. The purpose of this section is to make the reader aware of the shortcomings of the current body of research on substance abuse.

The State of the Art: Unanswered Questions, Uncertain Answers

As you have discovered by now, there is much confusion in the professional community over the problems of substance abuse/addiction. Even in the case of alcoholism, the most common of the drug addictions, there is an element of confusion or uncertainty over what the essential features of alcoholism might be. For example, 30% to 45% of all adults will have at least one transient alcohol-related problem (blackout, legal problem, etc.)


at some point in their lives (Sadock & Sadock, 2003). Yet this does not mean that 30% to 45% of the adult population is alcohol dependent! Rather, this fact underscores the need for researchers to more clearly delineate the features that might identify the potential alcoholic.

What constitutes a valid diagnosis of chemical dependency? Ultimately, the definitions of substance abuse or addiction are quite arbitrary (O’Brien, 2001). A generation ago, George Vaillant (1983) suggested that “it is not who is drinking but who is watching” (p. 22, italics added for emphasis) that defines whether a given person is alcohol dependent. The same is true for other drugs of abuse. In the end, a diagnosis of drug addiction is a value judgment. This professional opinion might be made easier by suggested criteria such as those for mental illnesses in the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (4th edition, text revision, 2000; DSM-IV-TR); but even in rather advanced cases of drug dependency, the issue of whether the individual is addicted is not always clear-cut.

Let us, for the moment, focus on the problem of alcoholism, or drug addiction, and its diagnosis. There are three elements necessary to the diagnosis of alcoholism or drug addiction (Shaffer, 2001):

1. Craving/compulsion: the individual’s thoughts become fixated on obtaining and using the chemical(s) he or she has become dependent on.
2. Loss of control: the person will use more of the chemical than he or she intended, is unable to cut back on the amount used, or is unable to stop using it.
3. Consequences: the individual will use the drug regardless of the results of this use. Such consequences might include impairment of social, vocational, or physical well-being as well as possible legal or financial problems.
Although these criteria provide some degree of consistency between diagnoses, ultimately, the diagnosis of chemical dependency is one person’s opinion about another person’s chemical use. The issue of assessing another individual’s substance-use pattern will be discussed in a later chapter. The point here is


that we still have much to learn and many questions to answer about how to best assess a person’s chemical-use pattern and provide an accurate diagnosis.

What is the true relationship between alcohol/drug use and violence within the family? In the last chapter, we noted a relationship between alcohol/drug use and violence in the family. It is wrong to assume automatically, however, that the drug use caused the violence. Indeed, there is evidence to suggest that at least in some families the violence might have taken place regardless of whether drugs or alcohol were involved (Steinglass, Bennett, Wolin, & Reiss, 1987). In such families, alcohol use and violence reflect the presence of another form of familial dysfunction that has yet to be identified. The point to keep in mind is that we cannot see a relationship between alcohol/drug use and violence within the family and assume that the drug use caused the violence. Behavioral science has a great deal more to learn about the true relationship between violence and alcohol/drug abuse.

What is the role of news media in the development of new chemical use trends? One of the most serious of the unanswered questions facing mental health and substance abuse professionals is whether the media have been a positive or a negative influence on people who have not yet started to experiment with alcohol or drugs. There is a prohibition against chemical use, coupled with legal sanctions against the importation or use of many drugs. Because of this prohibition, the sale or use of drugs, or of alcohol by those who are under the legal drinking age, is “newsworthy.” Some have charged that media reports, rather than making drug use unattractive, have actually enhanced its appeal to many who might otherwise not have been motivated to experiment.
Media coverage of drug arrests, the “dangers” associated with the use of various chemicals, not to mention the profits associated with the sale of controlled substances, all contribute to a certain “aura” that surrounds drug abuse. The experience of the Netherlands in dealing with the drug problem (discussed in Chapter 35) suggests that when the legal sanctions against drug use are removed, drugs actually become less attractive to the average individual, and casual drug use declines. In the Netherlands, substance abuse was originally seen as a public health issue rather than a legal problem.



Only after large numbers of chemical-using foreigners moved to the Netherlands to take advantage of this permissiveness, which had been widely reported in the media, did Dutch authorities begin to utilize law enforcement as a means of controlling substance use. The point is that much evidence suggests the media reports have actually contributed to the problem of substance abuse by adding to the aura of mystery and “charm” that surrounds the street drug world. Thus, the question must be asked: Whose side are the media on?

Summary

In this chapter, a continuum of drug use was introduced and terms common to the study of substance abuse were presented. The problem of inadequate research in chemical dependency was explored, as were the role of drug use in family violence and the part played by the media in inadvertently encouraging drug experimentation through wide-scale reporting about the drug scene.


The Medical Model of Chemical Addiction

Later in this text, the various major drugs of abuse will be discussed. However, knowledge of what each drug might do to the user does not answer a simple yet very difficult set of questions: (a) Why do people begin to use these chemicals, (b) why do they continue to use recreational chemicals, and (c) why do some become addicted to them? In this chapter, the answers to these questions will be examined from the perspective of the “medical,” “biomedical,” or “disease” model of addiction.

Why Do People Abuse Chemicals?1

At first, to ask why people abuse chemicals might seem rather simplistic. People use drugs because the drugs make them feel good. Because they feel good after using the drug, some people wish to repeat the experience. As a result of this continual search for drug-induced pleasure, the drugs of abuse have become part of our environment. The prevailing atmosphere of chemical use/abuse then forces each one of us to make a decision to use or not use recreational chemicals every day. For most of us, the choice is relatively simple. Usually the decision not to use chemicals does not even require conscious thought. But regardless of whether we acknowledge the need to make a decision, each of us is faced with the opportunity to use recreational chemicals each day, and we must decide whether or not to do so.

Some people might challenge the issue of personal choice, but stop for an instant, and think: Where is the nearest liquor store? If you wanted it, where could you buy some marijuana? If you are above the age of about 15, the odds are very good that you could answer either of these questions. But why didn’t you buy any of these chemicals on your way to work or to school this morning? Why did you, or didn’t you, buy a recreational drug or two on your way home last night? The answer is that you made a choice. So, in one sense, people use the drugs of abuse because they choose to do so. But a number of factors influence the individual’s decision to use or not use recreational chemicals, and these will be discussed in the next section of this chapter.

Factors That Influence Recreational Drug Use
The physical reward potential. The reasons a person might use alcohol or another drug of abuse are complex. The novice chemical user may make the decision to try one or more drugs in response to peer pressure or because that individual expects the drug to have pleasurable effects. Researchers call this the “pharmacological potential,” or the “reward potential,” of the chemical (Budney, Sigmon, & Higgins, 2003; Kalivas, 2003; Monti, Kadden, Rohsenow, Cooney, & Abrams, 2002; Meyer, 1989). As virtually all the drugs of abuse have a high reinforcement potential (Crowley, 1988), it is easy to understand how the principles of operant conditioning might apply to the phenomenon of drug abuse/addiction (Budney et al., 2003).

According to the basic laws of behavioral psychology, if something (a) increases the individual’s sense of pleasure or (b) decreases his or her discomfort, then he or she is likely to repeat that behavior. This is called the reward process. In contrast, if a certain behavior (c) increases the individual’s sense of discomfort or (d) reduces the person’s sense of pleasure, he or she would be unlikely to repeat that behavior. This is called the punishment



1This question is a reference not to those people who are addicted to chemicals, but to those who use chemicals for recreational purposes.



potential of the behavior in question. Finally, an immediate consequence (either reward or punishment) has a stronger impact on behavior than a delayed consequence. When these rules of behavior are applied to a problem of substance abuse such as cigarette smoking, one discovers that the immediate consequences of chemical use (that is, the immediate pleasure) have a stronger impact on behavior than the delayed consequences (i.e., possible development of disease at a later date). Therefore, it should not be surprising that because many people find the effects of the drugs of abuse2 to be pleasurable, they will be tempted to use the drugs again and again. But the reward potential of a chemical substance, while a powerful incentive for its repeated use, is not sufficient in itself to cause addiction (Kalivas, 2003).

The social learning component of drug use. Individuals do not start life expecting to abuse chemicals. Alcohol/drug abusers must be taught (a) that substance use is acceptable, (b) to recognize the effects of the chemical, and (c) to interpret them as pleasurable. All of these tasks are accomplished through social learning. For example, in addition to the influence of peer groups on the individual’s chemical-use history (discussed later in this chapter), how substance use is portrayed in the movies or other forms of mass media impacts how the individual perceives the abuse of that chemical (Cape, 2003).

Marijuana abuse provides a good illustration of points “b” and “c” (above). First-time marijuana users must be taught by their drug-using peers (1) how to smoke it, (2) how to recognize the effects of the drug, and (3) why marijuana intoxication is so pleasurable (Kandel & Raveis, 1989). The same learning process takes place with the other drugs of abuse, including alcohol (Monti et al., 2002). It is not uncommon for a novice drinker to become so ill after a night’s drinking that he or she will swear never to drink again.
However, more experienced drinkers will help the novice learn such things as how to drink, what effects to look for, and why these alcohol-induced physical sensations are so pleasurable. This feedback is often informal and comes

2Obviously, the over-the-counter analgesics are exceptions to this rule, since they do not cause the user to experience “pleasure.” However, they are included in this text because of their significant potential to cause harm.


through a variety of sources such as a “drinking buddy,” newspaper articles, advertisements, television programs, conversations with friends and coworkers, and casual observations of others who are drinking. The outcome of this social learning process is that the novice drinker is taught how to drink, and how to enjoy the alcohol he or she consumes.

Individual expectations as a component of drug use. The individual’s expectations for a drug are a strong influence on how that person interprets the effects of the chemical. These expectations evolve in childhood or early adolescence as a result of multiple factors, such as peer group influences, the child’s exposure to advertising, parental substance use behaviors, and mass media (Cape, 2003; Monti et al., 2002). To understand this process, consider the individual’s expectations for alcohol. Research has shown that these are most strongly influenced by the context in which the individual uses alcohol and by his or her cultural traditions, rather than by the pharmacological effects of the alcohol consumed (Lindman, Sjoholm, & Lang, 2000).

These drug use expectations play a powerful role in shaping the individual’s drug- or alcohol-use behavior. For example, by the end of their junior year of college, people who became “high-risk drinkers” (Werner, Walker, & Greene, 1995, p. 737) held significantly stronger expectations that alcohol use would be a positive experience than did nondrinkers or those the authors classified as “low risk” drinkers (p. 737). In the case of LSD abuse, the individual’s negative expectations are a significant factor in the development of a “bad trip.” Novice LSD users are more likely to anticipate negative consequences from the drug than are more experienced users.
This anxiety seems to help set the stage for the negative drug experience known as the “bad trip.”

Although people’s expectations about the effects of alcohol or drugs play a powerful role in shaping their subsequent alcohol/drug use behavior, they are not fixed. In some cases, the expectations about the use of a specific drug are so extremely negative that people will not even contemplate the use of that compound. This is frequently the case for children who grow up with a violent, abusive alcoholic parent; often these children vow never to use alcohol themselves. This is an extreme adaptation to the problem of personal alcohol use, but it is not uncommon.


More often, the individual’s expectations about alcohol/drugs can be modified by both personal experience and social feedback. For example, if an adolescent with initial misgivings about drinking finds alcohol’s effects to be pleasurable, he would be more likely to continue to use alcohol during adolescence (Smith, 1994). After his first use of a recreational chemical, his preconceptions, combined with feedback from others, will help shape his interpretation of the chemical’s effects. Based on his subjective interpretation of the alcohol’s effects, he becomes more willing to use that compound in the future.

Cultural/social influences on chemical use patterns. People’s decision to use or not use a recreational chemical is made within the context of their community and the social groups to which they belong (Monti et al., 2002; Rosenbloom, 2000). A person’s cultural heritage can impact his or her chemical use at five levels (Pihl, 1999): (a) the general cultural environment, (b) the specific community in which the individual lives, (c) subcultures within the specific community, (d) family/peer influences, and (e) the context within which alcohol/drugs are used. At each of these levels, factors such as the availability of recreational substances and prevailing attitudes and feelings combine to govern the individual’s use of mood-altering chemicals (Kadushin, Reber, Saxe, & Livert, 1998; Westermeyer, 1995). Thus, in “cultures where use of a substance is comfortable, familiar, and socially regulated both as to style of use and appropriate time and place for such use, addiction is less likely and may be practically unknown” (Peele, 1985, p. 106). Unfortunately, in contrast to the rapid rate at which new drug use trends develop, cultural guidelines might require generations or centuries to develop (Westermeyer, 1995).

An interesting transition is emerging from the Jewish subculture, especially in the ultraorthodox sects.
Only certain forms of alcohol are blessed by the local rabbi as having been prepared in accordance with Jewish tradition and thus are considered “kosher.” Recreational drugs, on the other hand, are not considered kosher, and are forbidden (Roane, 2000). Yet as the younger generation explores new behaviors, many are turning toward experimental use of the “unclean” chemicals that they hear about through non-Jewish friends and the mass media. Significant numbers of these individuals are becoming addicted to recreational chemicals in spite of

Chapter Three

the religious sanction against their use, in large part because their education failed to warn them of the addictive powers of these compounds (Roane, 2000).

In the Italian American subculture, drinking is limited mainly to religious or family celebrations, and excessive drinking is strongly discouraged. The “proper” (i.e., socially acceptable) drinking behavior is modeled by adults during religious or family activities, and there are strong familial and social sanctions against those who do not follow these rules. As a result of this process of social instruction, the Italian American subculture has a relatively low rate of alcoholism.

More than a generation ago, Kunitz and Levy (1974) explored the different drinking patterns of the Navaho and Hopi Indian tribes. This study is significant because the two cultures coexist in the same part of the country and share similar genetic histories. However, Navaho tribal customs see public group drinking as acceptable and solitary drinking as a mark of deviance. For the Hopi, in contrast, drinking is more likely to be a solitary experience, for alcohol use is not tolerated within the tribe, and those who drink are shunned. These two groups, who live in close geographic proximity to each other, clearly demonstrate how different social groups develop different guidelines for alcohol use for their members.

For the most part, the discussion in this section has been limited to the use of alcohol. This is because alcohol is the most common recreational drug used in the United States. However, this is not true for all cultural groups. The American Indians of the Southwest will frequently ingest mushrooms with hallucinogenic potential as part of their religious ceremonies. In many cultures in the Middle East, alcohol is prohibited but the use of hashish is quite acceptable.
In both cultures, strict social rules dictate the occasions when these substances might be used, the conditions under which they might be used, and the penalties for unacceptable use. The point to remember is that cultural rules provide the individual with a degree of guidance about acceptable and unacceptable substance use. But within each culture there are various social groups, which may adopt the standards of the parent culture only to a limited degree. The relationship between different social groups and the parent culture is illustrated in Figure 3.1.




FIGURE 3.1 Relationship between different social groups and the parent culture.

Social feedback mechanisms and drug use. There is a subtle, often overlooked feedback mechanism that exists between the individual and the social group to which she belongs. Whereas the individual’s behavior is shaped, at least in part, by her social group, she will also help to shape the behavioral expectations of the group by choosing which groups to associate with. In other words, individuals who abuse certain chemicals tend to associate with others who abuse those same compounds and to avoid those whose substance abuse pattern is different. An example of this is the pattern of cocaine abuse that has evolved in the United States: “Crack” cocaine is found mainly in the inner cities, whereas powdered cocaine is found more often in the suburbs.

Although most people do not think in terms of cultural expectations, their behavior does parallel these themes. Consider the “closet” alcohol abuser, who might go to a different liquor store each day to hide the extent of his drinking from sales staff (Knapp, 1996), or who might sneak around the neighborhood at night hiding empty alcohol bottles in the neighbors’ trash cans. In each case, the individual is attempting to project an image of his alcohol use that is closer to social expectations than to reality.

A fact often overlooked in substance abuse research is that chemical use patterns are not fixed. People often change their alcohol/drug use pattern over time. For example, if you were to question large numbers


of people who used marijuana and hallucinogens during the “hippie” era (late 1960s to the mid-1970s), most would say the drug use was simply a “phase I was going through.” Unfortunately, some people find the chemical’s effects desirable enough to encourage further abuse in spite of social sanctions against it. In such cases it is not uncommon for the individual to drift toward a social group that encourages and supports use of that drug, in what amounts to either a conscious or an unconscious attempt to restructure his or her social environment so that it supports the chemical use.

Individual life goals as helping to shape chemical use. Another factor that influences the individual’s decision to either begin or continue the use of chemicals is whether use of a specific drug is consistent with the person’s long-term goals or values. This is rarely a problem with socially approved drugs, such as alcohol—and, to a smaller degree, tobacco. But consider the junior executive who has just won a much-hoped-for promotion, only to find that the new position is with a division of the company with a strong “no smoking” policy. In this hypothetical example, the executive might find that giving up smoking is not as serious a problem as he had once thought, if this is part of the price he must pay to take the promotion. In such a case, the individual has weighed whether further use of that drug (tobacco) is consistent with his life goal of a major administrative position with a large company. However, there are also cases in which the individual will search for a new position rather than accept the restriction on his cigarette use.

A flow chart of the decision-making process to use or not use alcohol or drugs is shown in Figure 3.2. Note, however, that we are discussing the individual’s decision to use alcohol or drugs on a recreational basis. People do not plan to become addicted to alcohol or drugs.
Thus, the factors that initiate chemical use are not the same as factors that maintain chemical use (Zucker & Gomberg, 1986). A person might begin to abuse narcotic analgesics because these chemicals help her deal with painful memories. However, after she has become physically addicted to the narcotics, her fear of withdrawal may be one reason that she continues to use the drugs.


FIGURE 3.2 The chemical use decision-making process. Does the person choose to use drugs at this time? If no, the person abstains from the drug in question (and must make the decision to use or not to use again each day). If yes, three further questions follow: Was the chemical use rewarding? Is there social reinforcement for further drug use? Is drug use consistent with life goals? A “yes” at each step leads to continued drug use; a “no” at any step leads the person to decide not to use the drug again in the near future.

What Do We Mean When We Say That Someone Is "Addicted" to Chemicals?

Surprisingly, in light of the ease with which people speak of the "medical model" of alcohol/drug addiction, there is no single definition of addiction to alcohol/drugs. Rather, there are a number of competing definitions. Although many of these appear to have some validity in certain situations, a universally accepted comprehensive theory of addiction has yet to be developed. In this text, addiction will be defined by the criteria outlined in the American Psychiatric Association's (2000) Diagnostic and Statistical Manual of Mental Disorders (4th edition, text revision), or DSM-IV-TR. According to the DSM-IV-TR, the following are some of the signs of alcohol/drug addiction:

1. Preoccupation with use of the chemical between periods of use.

2. Using more of the chemical than had been anticipated.
3. The development of tolerance to the chemical in question.
4. A characteristic withdrawal syndrome from the chemical.
5. Use of the chemical to avoid or control withdrawal symptoms.
6. Repeated efforts to cut back or stop the drug use.
7. Intoxication at inappropriate times (such as at work), or withdrawal that interferes with daily functioning (a hangover that makes the person too sick to go to work, for example).
8. A reduction in social, occupational, or recreational activities in favor of further substance use.
9. Continued chemical use in spite of having suffered social, emotional, or physical problems related to drug use.

The Medical Model of Chemical Addiction

Any combination of four or more of these signs is used to identify the individual who is said to suffer from the "disease" of addiction. In the disease model of substance abuse, or the medical model as it is also known, (a) addiction is a medical disorder, as much as cardiovascular disease or a hernia might be; (b) there is a biological predisposition toward addiction; and (c) the disease of addiction is progressive. An unspoken assumption on which the disease model of drug addiction rests is that some people have a biological vulnerability to the effects of chemicals that is expressed in the form of a loss of control over the use of that substance (Foulks & Pena, 1995).

The Medical Model of Drug Addiction

The medical model accepts as one of its basic tenets the belief that much of behavior is based on the individual's biological predisposition. Thus, if the individual behaves in a way that society views as inappropriate, the medical model assumes that there is a biological dysfunction that causes this "pathology." But the reader must remember that there is no single, universally accepted disease model that explains alcohol/drug use problems. Rather, there is a group of loosely related theories stating that alcohol/drug abuse/addiction is the outcome of an unproven biomedical or psychobiological process and thus can be called a "disease" state.

The disease model of chemical dependency has not met with universal acceptance. Indeed, for decades the treatment of those who suffered from a chemical dependency rested not with physicians but with substance abuse counselors and mental health professionals (Stein & Friedmann, 2001). Only now are physicians starting to claim that patients with addictive disorders suffer from a chronic, relapsing disorder that falls within their purview (Stein & Friedmann, 2001).
In this section, the disease model of addiction is discussed, along with some of the research that, according to proponents of this model, supports their belief that the compulsive use of chemicals is a true disease.

Jellinek's work. The work of E. M. Jellinek (1952, 1960) has had a profound impact on how alcoholism (see Appendix 3) is viewed by physicians in the United States. Prior to the American Medical Association's decision to classify alcoholism as a formal disease in 1956, the condition was viewed as a moral disorder: Alcoholics were considered immoral individuals both by society in general and by the majority of physicians. In contrast to this, Jellinek (1952, 1960) and a small number of other physicians argued that alcoholism was a disease, like cancer or pneumonia. Certain characteristics of the disease, according to Jellinek, included (a) the individual's loss of control over his or her drinking, (b) a specific progression of symptoms, and (c) death if the alcoholism was left untreated.

In an early work on alcoholism, Jellinek (1952) suggested that the addiction to alcohol progressed through four different stages. The first, which he called the Prealcoholic phase, was marked by the individual's use of alcohol for relief from the social tensions encountered during the day. In the prealcoholic stage, one sees the roots of the individual's loss of control over her drinking, as she is no longer drinking on a social basis but has started to drink for relief from stress and anxiety. As she continues to engage in "relief drinking" for an extended period of time, she enters the second phase of alcoholism: the Prodromal stage (Jellinek, 1952). This stage is marked by memory blackouts, secret drinking (also known as hidden drinking), a preoccupation with alcohol use, and feelings of guilt over her behavior while intoxicated.

With continued use, the individual would eventually become physically dependent on alcohol, a hallmark of what Jellinek (1952) called the Crucial phase. Other symptoms of this third stage are a loss of self-esteem, a loss of control over one's drinking, social withdrawal in favor of alcohol use, self-pity, and a neglect of proper nutrition while drinking. During this phase, the individual would attempt to reassert her control over alcohol by entering into periods of abstinence, only to return to its use after short periods of time.
Finally, with continued alcohol use, Jellinek (1952) thought that the alcoholic would enter the Chronic phase. The symptoms of this phase include a deterioration of the person’s morals, drinking with social inferiors, the development of motor tremors, an obsession with drinking, and for some, the use of “substitutes” when alcohol is not available (e.g., drinking rubbing alcohol). A graphic representation of these four stages of alcoholism is shown in Figure 3.3.


[Figure 3.3 summarizes the four stages: Prealcoholic phase (alcohol used for relief from social tension); Prodromal phase (first blackouts, preoccupation with the use of alcohol, development of guilt feelings); Crucial phase (loss of control over alcohol, withdrawal symptoms, preoccupation with drinking); Chronic phase (loss of tolerance for alcohol, obsessive drinking, alcoholic tremors).]
FIGURE 3.3 Jellinek’s four stages of alcoholism.

In 1960, Jellinek presented a theoretical model of alcoholism that was both an extension and a revision of his earlier work. According to Jellinek (1960), the alcoholic was unable to consistently predict in advance how much he or she would drink at any given time. Alcoholism, like other diseases, was viewed by Jellinek as having specific symptoms, which included the physical, social, vocational, and emotional complications often experienced by the compulsive drinker. Further, Jellinek continued to view alcoholism as having a progressive course that, if not arrested, would ultimately result in the individual's death.

In his 1960 book, Jellinek went further by attempting to classify different patterns of addictive drinking. Like Dr. William Carpenter in 1850, Jellinek came to view alcoholism as a disease that might be expressed in a number of different forms, or styles, of drinking (Lender, 1981). Unlike Dr. Carpenter, who thought there were three types of alcoholics, Jellinek identified five subforms of alcoholism, using the first five letters of the Greek alphabet to identify the most common forms of alcoholism found in the United States. Table 3.1 provides a brief overview of his theoretical system.

Advanced in an era when most physicians viewed alcohol dependence as being caused by a moral weakness, Jellinek's (1960) model of alcoholism offered a new paradigm. First, it provided a diagnostic framework within which physicians could classify different
patterns of drinking, as opposed to the restrictive dichotomous view—in which the patient was either alcoholic or not—that had previously prevailed. Second, Jellinek's (1960) model of alcoholism as a physical disease made it worthy of study, and the person with this disorder was worthy of "unprejudiced access" (Vaillant, 1990, p. 5) to medical treatment. Finally, the Jellinek model attributed the individual's use of alcohol not to a failure of personal willpower, but to the drinker's suffering from a medical disorder (Brown, 1995).

Since the time that the Jellinek (1960) model was introduced, researchers have struggled to determine whether it is valid. Sobell and Sobell (1993) found that there was a clear-cut progression in the severity of the individual's drinking in only 30% of the cases. In the same year, Schuckit, Smith, Anthenelli, and Irwin (1993) found clear evidence of a progression in the severity of problems experienced by the alcohol-dependent men in their research sample; but the authors concluded that there was remarkable variation in the specific problems encountered by their subjects, suggesting that alcohol-dependent individuals do not follow a single progressive pattern. Thus, the research data supporting the Jellinek model continue to be mixed.

The genetic inheritance theories. In the last 20 years of the 20th century, researchers began to identify genetic patterns that seemed to predispose some individuals to develop problematic alcohol-use patterns. Early evidence suggested that a gene called slo-1, which controls the activity of a certain protein known as the BK channel, seemed to mediate the individual's sensitivity to alcohol's effects (Lehrman, 2004).
The BK channel protein normally controls the flow of ions out of the neuron during the normal cycle of neural “firing.” When alcohol binds at this protein complex, it holds the ion channel open for far longer than is normal, slowing the rate at which that neuron can prepare for the next firing cycle and thus slowing the level of activity for that neuron (Lehrman, 2004). The team of Tsuang et al. (1998) examined the issue of genetic predisposition toward substance abuse and concluded that both genetic and environmental factors predisposed their subjects toward the abuse of classes of chemicals. The authors also found variations in how either the environment or genetic inheritance influenced the use of a specific compound. Each class of drug had a unique genetic predisposition according to


TABLE 3.1 Comparison of Jellinek's Drinking Styles

[Summary of Table 3.1: the alpha, beta, gamma, delta, and epsilon* styles of alcoholism are compared on six dimensions: whether psychological dependence on alcohol develops; whether physical complications develop; whether tolerance to the effects of alcohol develops; whether the individual can abstain from alcohol use; whether the pattern of drinking is stable; and, if progressive, to what pattern the style of drinking will progress. Among the entries: some styles involve minimal to no physical complications, while others involve multiple and serious physical problems from drinking; in some styles the person will "crave" alcohol if forced to abstain from use; in some the person can abstain for short periods of time if necessary, in others the person has lost control over his or her alcohol use, and in the epsilon (binge) style the person is able to abstain during the periods between binges, so that tolerance and physical complications are possible but rare; and some styles carry a strong, though not automatic, chance of progression to the gamma style, while at least one is an end-point style of drinking.]

*According to Jellinek (1960), the epsilon style of drinking was the least common in the United States, and only limited information about this style of drinking was available to him.

the authors, possibly explaining why different individuals seem "drawn" to very specific drugs of abuse. Researchers have also found cultural factors to help determine whether the genetic predisposition for cigarette smoking is activated (Kendler, Thornton, & Pedersen, 2000). In Sweden, as the social restrictions against the use of tobacco products by women slowly relax, more and more women, including those with the suspected genetic predisposition who may become dependent, are able to indulge in the use of tobacco products. Nor are environmental influences limited to the use of tobacco products. The team of Gruber and Pope (2002) suggested that unspecified "genetic factors" (p. 392) accounted for 44% of the risk
for marijuana abuse, whereas “family environmental factors” (p. 392) accounted for 21% of the risk. A quarter of a century ago, Cloninger, Bohman, and Sigvardsson (1981) uncovered inheritance patterns in families that continued to express themselves even in cases where the child was adopted shortly after birth and was not brought up with the biological family. Drawing on the records of 3,000 individuals who were adopted, the authors discovered that the children of alcoholic parents were likely to grow up to become dependent on alcohol themselves, even when the children were reared by nonalcoholic adoptive parents almost from birth. The authors also found that the children who grew up to be alcoholic essentially fell into two groups. In the first
subgroup, three-quarters of the children had biological parents who were alcoholic, and these children went on to develop alcohol-use disorders. During young adulthood, these individuals would drink only in moderation. Only later in life did their drinking progress to the point that they could be classified as alcohol dependent. Even so, Cloninger et al. (1981) found that these individuals tended to function within society and were only rarely involved in antisocial behaviors. The authors classified these individuals as "Type I" (or "Type A" or "late onset") alcoholics (Gastfriend & McLellan, 1997; Goodwin & Warnock, 1991).

Cloninger et al. (1981) found that there was a strong environmental impact on the possibility that the adopted child whose biological parents were alcoholic would also be alcoholic. For example, the authors found that children of alcoholic parents, if adopted by a middle-class family in infancy, actually had only a 50–50 chance of being alcoholic in adulthood. Although this is still markedly higher than what one would expect based on the knowledge that only 3% of the general population is alcohol dependent, it is still lower than the outcome found for children of alcoholic parents who were adopted and raised by parents of lower socioeconomic status. In this case, the chances were greater that the child would grow up to be an alcoholic. The authors interpreted these findings as evidence of a strong environmental influence on the evolution of alcohol use, in spite of the individual's genetic inheritance.

The second, smaller group of alcoholics found by the research team of Cloninger et al. (1981) were male, more violent alcoholics who tended to be involved in criminal activity. These individuals were classified as having "Type II" (or "male limited," "Type B," or "early onset") alcoholism (Gastfriend & McLellan, 1997; Goodwin & Warnock, 1991).
A male child born to a “violent” alcoholic ran almost a 20% chance of himself becoming alcoholic, no matter what social status the child’s adoptive parents had. Because a male child whose father was a violent alcoholic stood a significantly greater chance of himself becoming dependent on alcohol than what one would expect on the basis of chance alone, the authors concluded that there was a strong genetic influence for this subgroup of alcoholics.


The team of Sigvardsson, Bohman, and Cloninger (1996) successfully replicated this earlier study on the heritability of alcoholism. The authors examined the adoption records of 557 men and 600 women who were born in Gothenburg, Sweden, and who were adopted at an early age by nonrelatives. A significant percentage of the adopted children had alcoholic biological fathers, allowing for a good-sized research sample. The authors confirmed their earlier identification of two distinct subtypes of alcoholism for men. Further, the authors found that the "Type I" and "Type II" subtypes appear to be independent, but possibly related, forms of alcoholism. Where one would expect 2% to 3% of their sample to have alcohol-use problems on the basis of population statistics, the authors found that 11.4% of their male sample fit the criteria for Type I alcoholism, and 10.3% of the men in their study fit the criteria for Type II alcoholism. But in contrast to the original studies, which suggested that Type II alcoholism was limited to males, there is now evidence that a small percentage of alcohol-dependent women might also be classified as Type II alcoholics (Cloninger et al., 1996; Del Boca & Hesselbrock, 1996).

The distinction between Type I and Type II alcoholics has lent itself to a series of research studies designed to identify possible personality traits unique to each group of alcohol dependents. Researchers have found that, as a group, Type I alcoholics tend to engage in harm-avoidance activities, whereas Type II alcoholics tend to be high in the novelty-seeking trait (Cloninger et al., 1996). Other researchers have found differences in brainwave activity, using the electroencephalograph (EEG), between the Type I and Type II alcoholics. Further, as a group, Type I alcoholics tend to have higher levels of the enzyme monoamine oxidase (MAO) than Type II alcoholics do.
The researchers hypothesized that this lower MAO level in Type II alcoholics might account for their tendency to be more violent than Type I alcoholics (Cloninger et al., 1996). Thus, the Type I–Type II typology seems to have some validity as a way of classifying different patterns of alcohol use/abuse. Using a different methodology and a research sample of 231 substance abusers, 61 control subjects, and 1,267 adult first-degree relatives of these individuals, the team of Merikangas et al. (1998) found evidence of “an 8-fold
increased risk of drug [use] disorders among relatives of probands (see Glossary) with drug disorders" (p. 977). According to the authors, there was evidence of familial predisposition toward the abuse of specific substances, although they did admit that the observed familial "clustering of drug abuse could be attributable to either common genetic or environmental factors" (p. 977). Such environmental factors might include impaired parenting skills, marital discord, stress within the family unit, and/or physical/emotional/sexual abuse, as well as exposure to parental chemical abuse at an early age. These findings were supported by an independent study conducted by Bierut et al. (1998), who suggested that there was "a general addictive tendency" (p. 987) that was transmitted within the family unit. However, they could not be more specific about the nature of this genetic predisposition toward alcohol/substance abuse. Other researchers have concluded that 48% to 58% of the risk for alcoholism is based on the individual's genetic inheritance, at least for males (Prescott & Kendler, 1999). Further, researchers have found evidence that within each family, forces are at work that seem to help shape the individual's choice of recreational chemical(s) to abuse (Bierut et al., 1998; Merikangas et al., 1998).

The biological differences theories. Over the past 50 years, a number of researchers have suggested that people who are alcohol dependent are somehow different biologically from those who are not. The range of this research is far too extensive to discuss in this chapter, but the general theme is that alcohol-dependent individuals seem to metabolize alcohol differently from nondependent drinkers, that the site or mechanism of alcohol biotransformation is different for the alcohol-dependent person compared to the nonalcoholic, or that the alcohol-dependent person seems to have a reaction to the effects of that chemical that is different from the reaction of those who are not dependent on it. The general thrust of these research articles is that there is a biological difference between the alcoholic and the nonalcoholic. This assumption has resulted in studies by various researchers who have attempted to identify the exact difference that might exist between alcoholic and nonalcoholic individuals.

One example is the study conducted by the team of Ciraulo et al. (1996). They selected a sample of 12 women who were adult daughters of alcoholic parents and 11 women whose parents were not alcohol dependent, and then administered either a 1-mg dose of the benzodiazepine alprazolam or a placebo to their subjects. The authors found that the women of alcoholic parents who received the alprazolam found it to be more enjoyable than did those women whose parents were not alcohol dependent. This finding, along with an earlier study using male subjects conducted by the same team, was consistent with the findings of Tsuang et al. (1998), who suggested on the basis of their research that people develop vulnerabilities to classes of drugs rather than to a specific substance.

An interesting approach is that of Goldstein and Volkow (2002), who used neuroimaging technology to explore which areas of the brain become active during the experience of "craving" and intoxication. The authors noted that some of the regions of the brain activated during these drug-use experiences, such as the orbitofrontal cortex and the anterior cingulate gyrus, are connected both with the limbic system and with cognitive-behavioral integration activities such as motivation and goal-directed behavior. The authors suggest that through repeated exposure to the chemical, the individual comes to expect certain effects from that chemical. Finally, as a result of repeated drug-induced episodes of pleasure, the individual becomes less sensitive to normal reward experiences and, through both a cognitive and a neurobehavioral process, comes to overvalue the reinforcing effects of alcohol/drugs. This theory, although still in its formative stages, would seem to account for many of the facets of alcohol/drug use disorders.
Starting in the early 1990s, several different teams of researchers began to explore the possibility that one of the five dopamine receptor subtypes might play a critical role in the development of alcohol/drug use problems. Much of this research centered around the dopamine D2 receptor gene and its role in alcohol dependence. Research has found that those individuals who have lower levels of dopamine D2 receptor sites are more likely to respond with pleasure to an intravenous injection of a stimulant such as methylphenidate than are those individuals with high levels of dopamine D2 receptor sites (Volkow, 2004).


The team of Cheng, Gau, Chen, Chang, and Chang (2004) followed a sample of 499 individuals in Taiwan for a period of 4 years and found strong evidence supporting the genetic inheritance theory of alcoholism for the men in their study.

In the early 1990s, the team of Blum et al. (1990) published the results of their research into the prevalence of the dopamine D2 receptor gene in samples of brain tissue from 70 cadavers, half of which were known to be alcohol dependent in life. The authors found that 77% of the brains from alcohol-dependent people, but only 28% of the nonalcoholic brains, possessed the dopamine D2 receptor gene. In an extension of the original research, Noble, Blum, Ritchie, Montgomery, and Sheridan (1991) utilized tissue samples from the brains of 33 known alcohol-dependent people and a matched group of 33 nonalcoholic controls. The authors concluded, on the basis of their "blind" study of the genetic makeup of the tissue samples, that there was strong evidence of a genetic foundation for severe alcoholism involving the D2 dopamine receptor. As will be discussed in the next chapter, however, these studies have been challenged by other researchers.

On the basis of such research, Blum, Cull, Braverman, and Comings (1996) argued that such behavioral disorders as alcoholism, drug abuse/addiction, cigarette smoking, pathological gambling, Tourette's Syndrome, and obesity were all reflections of a "reward deficiency syndrome" in which the brain's reward system failed to function appropriately. The authors hypothesized that a defect in the A1 subtype of the dopamine D2 receptor gene was pivotal for the development of the so-called reward deficiency syndrome, which expressed itself behaviorally as an inability of the individual to derive pleasure from everyday activities.

Marc Schuckit (1994) utilized a different approach to try to identify biological predictors of alcoholism. In the early 1980s, the author tested 227 men who were the sons of alcoholics along with a control group. He found that 40% of the sons of alcoholics, but only 10% of the men who did not have an alcoholic parent (from the control group), were "low responders" to a standard dose of alcohol. The author found that the "low responders" did not seem to have been as strongly affected by the alcohol that they had received as were the individuals in the control group. Ten years later, the author again was able to contact 223 of the original sample of men who
were raised by alcoholic parents. He found that of the men who had an abnormally low response to the alcohol challenge test, 56% had become alcoholic. Of the men raised by alcoholic parents who did not demonstrate an abnormally low physiological response to a standard dose of alcohol when originally tested, only 14% had become alcoholic in the decade between the original examination and the follow-up studies (Schuckit, 1994; Schuckit & Smith, 1996). These data were interpreted to suggest that low responders were insensitive to the effects of alcohol. In turn, this insensitivity was hypothesized to contribute to a tendency for the individual to drink more often and to consume more alcohol per session than individuals who were not low responders (Schuckit, 1994).

Thus, there is strong evidence for a genetic predisposition toward addictive disorders. But researchers also believe that environmental factors must interact with the individual's genetic heritage to allow that disorder to develop (Monti et al., 2002). To date, no unequivocal biochemical or biophysical difference between those who are or are not addicted to one or more chemicals has been identified by researchers.

The Personality Predisposition Theories of Substance Abuse

Many researchers believe that substance abuse might be traced back to the individual's personality structure. This perspective is known as the characterological model of addiction (Miller & Hester, 1995). An example of this perspective on substance-use disorders is the theory that individuals might turn to alcohol/drugs as a result of a self-regulation disorder (Khantzian, 2004). According to this theory, individuals engage in harmful or self-destructive behaviors because they lack the ability to meet their emotional needs in appropriate ways.
Their drug of choice is viewed as having specific properties that allow them to more effectively cope with unpleasant emotional states that threaten to overwhelm them (Khantzian, 2003a, 2004; Murphy & Khantzian, 1995; Shaffer & Robbins, 1995). An early proponent of this model was Karen Horney (1964), who spoke of alcohol as being a way to “narcotize” (p. 45) anxiety. The specific compounds being abused are viewed as providing at least short-term relief over these painful internal affect states either through the pharmacological effects of the chemicals or the
attendant rituals, practices, and drug-centered pseudoculture (Khantzian, Mack, & Schatzberg, 1999). The addiction to the chemical is viewed as an unintended side effect of the individual's use of that compound in the struggle to deal with these painful emotional states (Khantzian, Mack, & Schatzberg, 1999; Murphy & Khantzian, 1995). Table 3.2 provides a summary of the psychoanalytic perspective of addiction.

In support of the psychoanalytic model of addictions, an impressive body of evidence suggests that certain personality traits do seem to predispose the individual to specific forms of drug abuse. When forces conspired to limit the amount and quality of heroin available in the Australian Capital Territory, heroin addicts did not appear to switch their drug of choice in large numbers, suggesting that the particular drug played a specific role in their lives that could not be fulfilled by other chemicals (Smithson, McFadden, Mwesigye, & Casey, 2004). The team of LeBon et al. (2004) found that heroin-dependent people demonstrated higher scores for the traits of Novelty-seeking and Self-directedness on the Cloninger TCI personality test than did alcohol-dependent people, suggesting that those who abuse or become addicted to heroin might differ from alcohol-dependent people in fundamental ways.

Further, there is evidence suggesting a relationship between psychological trauma and later substance-use problems. Individuals who experienced physical and/or sexual abuse in childhood or adolescence, for example, seem prone to substance-use problems in later life (Khantzian, 2004; Miller & Downs, 1995).


TABLE 3.2 Ego State and Drug of Choice

Class of chemical being abused, and the affective state that the chemical of abuse is thought to control:

Alcohol and CNS depressants (barbiturates, benzodiazepines, etc.): loneliness, emptiness, isolation
Opiates (heroin, morphine, etc.): rage and aggression
CNS stimulants (cocaine, amphetamines, etc.): depression, a sense of depletion, anergia (a sense of no energy), low self-esteem

Source: Based on Murphy & Khantzian (1995).


A small number of theorists have come to view the painful affective state known as shame as a central factor for at least a subgroup of substance abusers (Bradshaw, 1988a; Sanford, 2004). Although the chemical offers the illusion of helping people escape from this painful affective state, at least for a brief time, the compulsion to rely on this single method of coping with shame plants the seed of addiction, for it is when people compulsively use one and only one method of escaping from emotional distress that they are in danger of becoming addicted to that system of control.

A number of studies have found that abnormal risk taking seems to identify children who have the potential for later substance-use problems. The team of Dobkin, Tremblay, and Sacchitelle (1997) found that their data, drawn from a pool of 13-year-old boys, some of whom had alcoholic fathers and some of whom did not, failed to suggest which of these boys were at risk for later alcohol-use disorders. But on the basis of their research the authors concluded that the mother's parenting style and whether the boy engaged in disruptive behaviors were more indicative of increased risk for alcohol-use disorders than was the father's drinking status. Partial support for this study was provided by Masse and Tremblay (1997), who examined the personality characteristics of students at ages 6 and 10 and again in adolescence to learn whether certain personality features predicted which individuals would engage in recreational drug use later in life. The authors found that the students highest in the characteristic of novelty seeking and lowest in harm avoidance were the most likely as adolescents to engage in cigarette and recreational drug use. Low harm avoidance is a trait that might express itself through disruptive behaviors, providing partial support for the study completed by Dobkin et al. (1997).

Section summary.
A number of researchers have suggested that certain personality characteristics predispose the individual toward alcoholism or other forms of chemical abuse. There do appear to be certain personality traits that are associated with substance use disorders, but it is difficult to determine whether these personality traits precede the development of the drug dependency or if they are a result of the frequent use of illicit chemicals. To date, no clearly identified causal factor has been found, and research into possible personality factors that might predispose one toward alcohol or substance abuse continues.


Chapter Three

Summary This chapter has explored some of the leading theories that attempt to explain why people use recreational chemicals and why they might become addicted to these drugs. Several factors that help to modify the individual's substance-use pattern were explored, including the physical reinforcement value of the drugs being abused, the social reinforcement value, cultural rules that govern recreational chemical use, and the individual's life goals. The medical or disease model of addiction has come to play an important role in the treatment of substance abuse in this country. Based on the work of E. M. Jellinek, the disease model of alcoholism has come to be applied to virtually every other form of substance abuse in addition to alcohol. Jellinek viewed alcoholism as a progressive disorder that moved through specific stages. At a time when the alcoholic was considered a social failure who had resorted to the bottle, Jellinek suggested that the individual suffered from a disease that would, if not treated, result in death. In time, the field of medicine came to accept this new viewpoint, and alcoholism was seen as a medical disorder. Since the early work of Jellinek, other researchers have attempted to identify the specific biophysical dysfunction that forms the basis for the addictive disorders. Most recently, drawing on medicine's growing understanding of human genetics, scientists have attempted to identify the genetic basis for alcoholism and the other forms of drug addiction. To date, however, the exact biochemical or genetic factors that predispose one to become addicted have not been identified.


Chapter Four

Are People Predestined to Become Addicted to Chemicals?

Introduction The disease model of substance addiction, which was discussed in the last chapter, has not met with universal acceptance. Indeed, many health care professionals and scientists maintain that there are no biological or personality traits that automatically predispose the individual to abuse chemicals. Some researchers question the possibility that alcohol/drug addiction is a true "disease"; others concede that there is evidence of biological or psychosocial predispositions toward substance abuse but argue that certain environmental forces are needed to activate this predisposition toward addiction. In this chapter, some of these reactions against the "disease" model of substance abuse will be examined.

Multiple Models Although the medical model predominates in the field of substance-abuse rehabilitation in the United States, several theoretical systems address the problem of alcohol/drug abuse. Some of the more important theoretical models are reviewed in Table 4.1. Although each of these theoretical models has achieved some degree of acceptance in the field of substance-abuse rehabilitation, no single model has come to dominate the field as has the disease model.

TABLE 4.1 Theoretical Models of Alcohol/Drug Abuse

Moral model: The individual is viewed as choosing to use alcohol in a problematic manner.
Educational model: Alcohol problems are caused by a lack of adequate knowledge about the harmful effects of this chemical.
Temperance model: This model advocates the use of alcohol only in a moderate manner.
Spiritual model: Drunkenness is a sign that the individual has slipped from his or her intended path in life.
Dispositional disease model: The person who becomes addicted to alcohol is somehow different from the nonalcoholic; the alcoholic might be said to be allergic to alcohol.
Characterological model: Problems with alcohol use are based on abnormalities in the personality structure of the individual.
General systems model: People's behavior must be viewed within the context of the social system in which they live.
Medical model: The individual's use of alcohol is based on biological predispositions, such as his or her genetic heritage, brain physiology, and so on.

Source: Chart based on material presented by Miller & Hester (1995).

Reaction Against the Disease Model of Addiction It is tempting to speak of the "disease model" of alcohol/drug abuse as if there were a single, universally accepted definition of substance-use problems. But in reality, there are often subtle, and sometimes not so subtle, philosophical differences between the ways physicians view the same disease. For example, treatment protocols for a condition such as a myocardial infarction might vary from one hospital to another because of the differing treatment philosophies for that disorder at different health care facilities (or between individual physicians who follow different treatment philosophies). Advocates for the disease model of alcoholism point out that alcohol dependence (and, by extension, the other forms of drug dependence) has strong similarities to other chronic, relapsing disorders such as asthma, hypertension, or diabetes, and that the addictions are medical disorders (Marlowe & DeMatteo, 2003). In contrast, others argue that the substance-use disorders are forms of reckless misconduct, such as speeding, and that individuals who engage in these behaviors should be treated as criminals by the court system (Marlowe & DeMatteo, 2003). Critics of the disease model often center their attack on how disease is defined. In the United States, disease is seen as reflecting a biophysical dysfunction of some kind that interferes with the normal function of the body. In an infectious process, a bacterium, virus, or fungus invading the host organism would be classified as a disease by this criterion. A second class of diseases comprises those caused by a genetic disorder that leads to abnormal growth or functioning of the individual's body. A third class consists of those in which the optimum function of the body is disrupted by acquired trauma. As noted in previous chapters, behavioral scientists agree that there is a genetic "loading" for alcoholism that increases the individual's risk for developing this disorder (O'Brien, 2001). They argue that if such a genetic predisposition exists for alcoholism, then one must exist for all forms of substance addiction, as alcohol is just one of a variety of recreational chemicals. If there is a genetic predisposition for addictive behaviors, then chemical dependency is very much like the other physical disorders for which there is a genetic predisposition. In this sense, substance abuse might be said to be a disease, and this is what E. M. Jellinek proposed in 1960.

Reaction to the Jellinek model.1 Recall that Jellinek (1960) proposed a theoretical model for alcohol dependence only. In spite of this, his model has been applied to virtually every other pattern of drug abuse/addiction, even though doing so has exposed some serious flaws. First, there were problems in the way Jellinek (1960) carried out his research. He based his work on surveys mailed to 1,600 members of Alcoholics Anonymous (AA); of the 1,600 surveys sent, only 98 were returned (a return rate of just 6%). Such a low return rate is rarely accepted as the foundation for a research study. Further, researchers must assume that those individuals who chose to participate in the study were different from those who decided not to do so, if only because they participated in the study. Also, members of a self-help group such as AA should be viewed as being different from nonmembers, because they made the decision to join the self-help group whereas nonmembers did not.

Jellinek (1960) thus violated a number of basic rules of statistical research when he designed his model of alcoholism. He assumed that (a) AA members were the same as nonmembers and (b) those people who returned the survey were the same as those who did not return it. Further, Jellinek utilized a cross-sectional research design. While this does not violate any rule of statistical research, cross-sectional research may not yield the same results as a lifespan (longitudinal) research design. The Jellinek (1960) model tends to break down when it is used to examine the alcohol-use patterns of alcohol-dependent persons over the course of their lifetime (Vaillant, 1995). In challenging the Jellinek (1960) model, Skog and Duckert (1993) pointed out that alcoholism is not automatically progressive. A number of researchers have observed that the progression in the severity of alcoholism suggested by Jellinek develops in only a minority (25%–30%) of cases (Sobell & Sobell, 1993; Toneatto, Sobell, Sobell, & Leo, 1991). In reality, positions "taken on the progressive nature of alcoholism often depend more on the treatment orientation of the observers than upon the adequacy of their data" (Vaillant, 1995, p. 5).

There is little research data supporting the theory that alcoholism is a progressive disorder, and many studies have shown that alcohol-dependent individuals alternate between periods of abusive and nonabusive drinking or even total abstinence. Illicit drug use also tends to follow a variable course (Toneatto, Sobell, Sobell, & Rubel, 1999). These findings challenge Jellinek's conclusion that alcohol dependence was automatically a progressive disorder. Further, the concept of loss of control over alcohol use, a central feature of Jellinek's theory, has been challenged (Schaler, 2000). Research suggests that chronic alcohol abusers drink to achieve and maintain a desired level of intoxication (Schaler, 2000), suggesting that the alcohol abuser has significant control over his or her alcohol intake. At this time, most professionals in the field accept that alcohol-dependent individuals have inconsistent control over their alcohol intake rather than a total loss of control (Toneatto et al., 1991; Vaillant, 1990, 1995).

1See Appendix 3.

The genetic inheritance theories. In the last half of the 20th century and the first years of the 21st century, a significant body of evidence has suggested that the addictive disorders have a genetic basis. However, research has failed to identify a single "alcohol gene," and it has become necessary to hypothesize that alcoholism, and by extension the other addictive disorders, is "polygenetic" rather than monogenetic in nature (Krishnan-Sarin, 2000). For more than a generation, proponents of genetic inheritance theories have pointed to the work of Cloninger, Bohman, and Sigvardsson (1981) to support their contention that there is a biological predisposition for alcoholism. Yet it has been suggested that the methodology utilized by Cloninger et al. was flawed (Hall & Sannibale, 1996).
Whereas Cloninger and colleagues claimed that the alcohol-dependent males in their study fell into two distinct subgroups (the Type I/Type II typology discussed in the last chapter), Hall and Sannibale (1996) found that fully 90% of the alcohol-dependent individuals admitted to treatment actually have characteristics of both Type I and Type II alcoholics, a finding that raises questions about the validity of this distinction.


Further, Cloninger et al. (1981) uncovered strong evidence suggesting that environmental forces also help to shape alcohol-use disorders. Subsequent research has found that while genetic inheritance seems to account for 40%–60% of the variance for alcohol dependence, environmental forces can do much to mitigate the impact of the individual's biological heritage (Jacob et al., 2003). Indeed, after examining the histories of over 1,200 pairs of monozygotic and dizygotic twins born in the United States between 1939 and 1957 and conducting structured psychiatric interviews with these individuals, the authors concluded that the individual's "genetic risk [for alcoholism] in many cases becomes actualized only if there is some significant environmental sequela to the genetic vulnerability" (Jacob et al., 2003, p. 1270, italics added for emphasis). Currently, the evidence suggests a strong environmental influence on the evolution of alcoholism, in spite of the individual's genetic inheritance, for those individuals whom Cloninger et al. (1981) classified as having "Type I" alcoholism. Because of this environmental influence, Type I alcoholism is also called milieu-limited alcoholism by some researchers. In contrast to the Type I alcoholics identified by Cloninger and colleagues were the Type II, or male-limited, alcoholics. These individuals tend to be both alcoholic and involved in criminal behaviors. The male offspring of a "violent" alcoholic, adopted in infancy, ran almost a 20% chance of himself becoming alcohol dependent, regardless of the social status of the child's adoptive parents. However, here again the statistics are misleading: Although almost 20% of the male children born to a "violent alcoholic" eventually became alcoholic themselves, more than 80% of the male children born to these fathers did not follow this pattern. This would suggest that additional factors, such as environmental forces, may play a role in the evolution of alcoholism for Type II alcoholics.
Perhaps the strongest evidence of an environmental impact on the development of alcoholism is the significant variation in the male:female ratio of those who are alcohol dependent in different cultures around the world. In the United States, the male:female ratio for alcohol-use disorders is about 5.4:1. In Israel, this same ratio is approximately 14:1, whereas in Puerto Rico it is 9.8:1, and in Taiwan 29:1. In South Korea the male:female ratio for alcohol-use disorders is 20:1, and it is 115:1 in the Yanbian region of China (Hill, 1995). If alcoholism were simply a matter of genetic inheritance, one would not expect such significant variation in the male:female ratio. As a comparison, approximately 1% of the population has schizophrenia in every culture studied, and the male:female ratio for schizophrenia is approximately the same around the globe. Thus, on the basis of research to date, it is clear that both a biological predisposition toward alcohol addiction and strong environmental influences help to shape the individual's alcohol-use pattern. But there is still a great deal to be discovered about the evolution of substance-use disorders; as evidence of this, for reasons that are not understood, up to 60% of known alcoholics come from families in which there is no prior evidence of alcohol dependence (Cattarello, Clayton, & Leukefeld, 1995).

Do genetics rule? A great deal of research has been conducted with the goal of isolating the "alcohol gene." Such a simplistic view of alcoholism as being caused by a single genetic disorder serves only to confuse the average person. In reality, nonfamilial alcoholism accounts for 51% of all alcohol-dependent persons, a finding that would be unlikely if the genetic predisposition for alcoholism were transmitted within families (Renner, 2004). This finding reinforces the truism that "genes confer vulnerability to but not the certainty of developing a mental disorder" (Hyman & Nestler, 2000, p. 96). Another popular misconception is that a genetic predisposition is unalterable. This is clearly seen in the field of substance-abuse rehabilitation, where counselors speak knowingly of a patient's "genetic loading" for an addictive disorder because relatives are themselves addicted to alcohol/drugs.
In reality, the popular belief that "one gene = one unchangeable behavior" is in error (Alper & Natowicz, 1992; Sapolsky, 1997). Admittedly, there does appear to be a genetic predisposition, or a "loading," for alcohol/drug dependence. But the genetic "loading" for a certain condition does not guarantee that the condition will develop (Holden, 1998; Sapolsky, 1997, 1998), and to predict who will or will not develop a substance-use disorder on the basis of genetic predisposition is not possible at this time (Madras, 2002). The individual's genetic predisposition should be viewed only as a rough measure of his or her degree of risk, not as the individual's predestination (Cattarello et al., 1995; Gordis, 1996). Social, environmental, historical, and cultural forces all play a role in determining whether the genetic potential toward alcohol/drug addiction will or will not be activated. Arguing against the biological determinist theories of alcoholism, Greene and Gordon (1998) stated: "No 'alcoholism gene' has been discovered, just a bundle of biological risk factors that make alcoholism more or less likely" (p. 35). Popular opinion to the contrary, "to say that some behavior is 'genetic' rarely means it is inevitable" (Tavris, 1988, p. 42). This was seen in the results of an experiment in which genetically identical rats were sent to a number of different laboratories and then administered standard doses of alcohol under rigidly controlled conditions. Rather than respond to the alcohol in a uniform manner, the rats in the various laboratories had different responses to their exposure to alcohol (Tabakoff & Hoffman, 2004). This outcome would hardly have been observed if the rats' reaction to alcohol were determined by their genetic heritage alone, since the animals were genetically identical. Thus, while there is evidence to suggest a genetic component to alcoholism, there is also strong evidence suggesting that cultural, social, and environmental forces play an equally strong role in the evolution of substance-use disorders. The identification of a genetic pattern that predisposes the individual toward addictive disorders would simply indicate the presence of a risk factor for the development of a substance-use disorder, not predestination.

The dopamine D2 receptor site connection.
A number of research studies have found that individuals who are alcohol or cocaine dependent seem to have fewer dopamine D2 receptor sites than do individuals who are not dependent on these chemicals. The clinical importance of these studies remains unclear, as it is possible that the observed findings reflect not a preexisting condition but the brain's protective down-regulation of receptor sites in response to the substance-induced release of dopamine (O'Brien, 2004). The dopamine D2 hypothesis, which holds that the observed deficit of dopamine D2 receptor sites predates the development of alcohol or cocaine addiction, has received only limited support (Krishnan-Sarin, 2000). Even if the dopamine D2 connection does play a major role in the biochemistry of addiction, "finding genetic differences in susceptibility to drug abuse or addiction does not imply that there is an 'addiction gene' which dooms unfortunate individuals to become hopeless drug addicts" (George, 1999, p. 99).

In the last chapter, a study by Marc Schuckit (1994) was presented as evidence of a biological predisposition toward alcohol abuse/dependence in certain men. The author based the study on 223 men who, when tested a decade earlier, had demonstrated an abnormally low physical response to a standard dose of an alcoholic beverage. At the time of his earlier study, Schuckit had found that fully 40% of the men who had been raised by alcoholic parents, but only 10% of the control group, demonstrated this unusual response. A decade later, in 1993, the author found that 56% of the men who had shown the abnormally low physiological response to alcohol had progressed to alcohol dependence. The author interpreted this finding as evidence that the abnormally low physical response to a standard dose of an alcoholic beverage might identify a biological "marker" for alcoholism. But only a minority of the men who had been raised by an alcoholic parent demonstrated this abnormally low physiological response to the alcohol challenge test (91 of 227). Further, a full decade later, only 56% of these 91 men (just 62 men of the original sample) appeared to have become dependent on alcohol. Although this study is suggestive of possible biochemical mechanisms that might predispose the individual toward alcoholism, it also illustrates quite clearly that biological predisposition does not predestine the individual to develop an alcohol-use disorder.
Other challenges to the disease model of addiction. No matter how one looks at it, addiction remains a most curious "disease." Even Vaillant (1983), who has long been a champion of the disease model of alcoholism, had to concede that to make alcoholism fit into the disease model, it had to be "shoehorned" (p. 4). Further, even if alcoholism is a disease, "both its etiology and its treatment are largely social" (Vaillant, 1983, p. 4). This trait would suggest that alcohol dependence is an unusual disorder. For example, after following a group of alcoholic males for 50 years, Vaillant (1995) concluded that in at least some cases genetics determine whether an individual is going to become dependent on alcohol, while the social environment determines when this transition might occur.

The possibility has been suggested that what we call "addictions" are actually a misapplication of existing neurobiological reward systems (Rodgers, 1994). The "reward system" provides reinforcement when the individual engages in a life-sustaining behavior. Unfortunately, the drugs of abuse overwhelm the brain's normal reward mechanism, tricking it into believing that chemical use is the most important priority for the organism (Leshner, 2001b). From this perspective, we might all be said to have the potential to become addicted, because we are all biologically "wired" with a "reward system." However, at this time it is not known why some people are more easily trapped than others by the ability of chemicals to activate the reward system.

In the United States, an estimated $1 billion per year is spent by the manufacturers of alcohol-containing beverages to promote their products. If alcohol abuse/dependence is indeed a disease, why is the use of the offending agent, alcohol, promoted through commercial means? The answer to this question raises some interesting points about the role of alcohol within this society and the classification of excessive alcohol use as a disease.

The medical model and individual responsibility. For some unknown reason, "we exempt addiction from our beliefs about change. In both popular and scientific models, addiction is seen as locking you into an inescapable pattern of behavior" (Peele, 2004a, p. 36).
One of the reasons for this misperception is that modern medicine "always gives the credit to the disease rather than the person" (B. Siegel, 1989, p. 12). Given this initial assumption, it is only natural for clinicians to believe that

in the gradation between determinism and free will, the initiation of substance use may occur toward the free-will end of the spectrum, whereas continued abuse may fall more toward the deterministic end, after certain neurochemical changes have taken place in the brain. Once the addictive process begins, neurobiological mechanisms make it increasingly difficult for the individual to abstain from the drug. (Committee on Addictions of the Group for the Advancement of Psychiatry, 2002, p. 706)

The medical model thus proposes that people freely choose to initiate the substance use but, once entangled, increasingly become helpless victims of their biology. From this perspective, the individual essentially ceases to exist except as a genetically preprogrammed disease process! Consider the following case summary of an adolescent who developed a chemical use problem: One parent is identified as a pharmacist; the other is a physician. The parents, identified as the "Lowells," were "well-versed in the clinical aspects of substance abuse, [but were] . . . outmaneuvered by the cunning that so often accompanies addiction" (Comerci, Fuller, & Morrison, 1997, p. 64). In this clinical summary, the child is totally absolved of any responsibility for attempting to manipulate his parents! It is the addiction that caused the adolescent to outmaneuver and manipulate the parents, not the adolescent!

In a very real sense, the same process might be seen in the concept behind methadone maintenance. Proponents of the methadone maintenance concept suggest that even a single dose of narcotics would forever change the brain structure of the opiate-dependent individual, making that person crave more opiates (Dole, 1988; Dole & Nyswander, 1965). Now, if narcotics are so incredibly powerful, how does one account for the thousands of patients who receive narcotics for the control of pain, for extended periods of time, without developing a "craving" for opioids after their treatment is ended? Even patients who receive massive doses of narcotic analgesics for the control of pain only rarely report a sense of euphoria or feel the urge to continue their use of opioids (Rodgers, 1994). In addition, Dole and Nyswander's (1965) theory has no explanation for the many individuals who "chip" (occasionally use) narcotics for years without becoming addicted to these drugs. The whole concept on which methadone maintenance rests is the belief that narcotics are so powerful that just a single dose takes away all of the individual's power of self-determination. Finally, one must explain how the majority of people with a chemical dependency problem come to terms with it on their own, without any form of professional or paraprofessional assistance (Peele, 2004a).

Many view the addictive disorders as being "a brain disease. The behavioral state of compulsive, uncontrollable drug craving, seeking, and use comes about as a result of fundamental and long-lasting changes in brain structure and function" (Leshner, 1997a, p. 691). Yet when one speaks with alcohol-dependent people, they readily agree that they can resist the craving for alcohol if the reward for doing so is high enough. Many alcohol-dependent people successfully resist the desire to drink for weeks, months, years, or decades, casting doubt on the concept of an "irresistible" craving for alcohol or the other drugs of abuse. If one can resist the impulse to use when the reward is high enough, can the substance really be said to rob the individual of all willpower?

One central feature of the medical model of illness is that once a person has been diagnosed as having a certain "disease," he or she is expected to take certain steps toward recovery. According to the medical model, the "proper way to do this is through following the advice of experts (e.g., doctors) in solving the problem" (Maisto & Connors, 1988, p. 425). Unfortunately, as was discussed in Chapter 1, physicians are not required to be trained in either the identification or the treatment of the addictions. The medical model of addiction thus lacks internal consistency: While medicine claims that addiction is a disease, it does not routinely train its practitioners in how to treat this ailment.

What Exactly Are the Addictive Disorders? Proponents of the disease model often note that Dr. Benjamin Rush was the first to suggest, more than 200 years ago, that alcoholism was a disease. What is overlooked is that the very definition of "disease" has changed since the time of Dr. Rush. In his day, a disease was anything classified as being able to cause an imbalance in the nervous system (Meyer, 1996). Most certainly, alcohol appears capable of causing such an imbalance or disruption in the normal function of the central nervous system (CNS). Thus, by the standards of Benjamin Rush in the 1700s, alcoholism was indeed a disease.

However, in the first decade of the 21st century, the issue is hardly as clear. The branch of medicine charged with the treatment of the addictions, psychiatry, is still in the process of defining what is, and is not, a manifestation of mental illness (Bloch & Pargiter, 2002). This ongoing process is clearly seen in the debate over whether substance abuse/addiction is or is not an actual form of mental illness (Kaiser, 1996; Schaler, 2000; Szasz, 1972, 1988). At what point does a "bad habit" become a disease? If a bad habit such as alcoholism were to be classified as a disease, then where do we draw the line between other unacceptable behaviors and disease?

This issue has become so muddled that today any socially unacceptable behavior is likely to be diagnosed as an "addiction." So we have shopping addiction, videogame addiction, sex addiction, Dungeons and Dragons addiction, running addiction, chocolate addiction, Internet addiction, addiction to abusive relationships, and so forth. . . . All of these new "addictions" are now claimed to be medical illnesses, characterized by self-destructiveness, compulsion, loss of control, and some mysterious, as-yet-unidentified physiological component. (Schaler, 2000, p. 18, italics added for emphasis)

Through this process of blurring the distinction between unacceptable behavior and actual disease states, a number of "pseudo ailments" (Leo, 1990, p. 16) have evolved. These new "diseases" show that we have "become a nation of blamers, whiners, and victims, all too happy, when we get the chance, to pass the buck to someone else for our troubles" (Gilliam, 1998, p. 154). Consider that the 12-Step model pioneered by AA has now been applied to more than 100 different conditions that at least some people believe are a form of addiction (Addiction—Part II, 1992b).

One point often misunderstood, both outside and within the medical field, is that the concept of a "disease" and its treatment are fluid and change in response to new information. Stomach ulcers, once thought to be the consequence of stress-induced overproduction of gastric acids, are now viewed as the site of a bacterial infection in the stomach wall and are treated with antibiotics, not tranquilizers. The very nature of the concept of disease makes it vulnerable to misinterpretation, and a small but vocal minority within and outside the field of psychiatry question whether the medical model should be applied to behavioral disorders at all.

Another point that is often overlooked in the debate over whether the addictions are actual diseases is the financial incentive for "discovering" a new disease, especially for those who have developed a treatment for it. This was clearly seen in the first decade of the 21st century. Following the aggressive marketing of compounds such as methylphenidate for the treatment of childhood attention deficit hyperactivity disorder (ADHD) in the last decade of the 20th century, some pharmaceutical companies began a media campaign suggesting that adults who had difficulty concentrating and who were easily distracted should discuss with their physician whether they needed methylphenidate. This media campaign did not mention (a) that the diagnosis of ADHD is quite difficult, (b) that questions have been raised about whether ADHD is even a real disorder, and (c) that some have challenged the appropriateness of using compounds such as methylphenidate to treat ADHD, if it exists.

Another complicating issue is that neither alcohol nor drugs are the enemy. By itself, a chemical compound has no inherent value (Shenk, 1999; Szasz, 1997, 1996, 1988). Drug molecules are neither "good" nor "evil." It is the manner in which they are used by the individual that determines whether they are helpful or harmful.
To further complicate matters, society has made an arbitrary decision to classify some drugs as dangerous and others as being acceptable for social use. The antidepressant medication Prozac (fluoxetine) and the hallucinogen MDMA both cause select neurons in the brain to release the neurotransmitter serotonin and then block its reabsorption. Surprisingly, although fluoxetine is an antidepressant, a small but significant percentage


of those patients taking this drug do so because they desire its mood-enhancing effects rather than its antidepressant properties (“Better than well,” 1996). This raises a dilemma: If a pharmaceutical is being used by people only because they enjoy its effects, where is the line between the legitimate need for that medication and its abuse? This distinction is often based not on scientific studies but on “religious or political (ritual, social) considerations” (Szasz, 1988, p. 316). As it is more than apparent that people desire recreational drugs for their effects, it would seem that the current “war” on drugs is really a “war on human desire” (Szasz, 1988, p. 322). The dilemma is not so much that people use chemicals, according to Szasz, but that people desire to use them for personal pleasure. Indeed, it “is hard, in fact, to think of a single social ritual that does not revolve around some consciousness-altering substance” (Shenk, 1999, p. 43). As further evidence supporting Szasz’s position, 1 out of every 131 outpatient deaths in the United States is caused by “drug mistakes” (Friend, 1998). An estimated 300 deaths per day or 125,000 deaths per year occur in this country alone as a result of adverse reactions to prescribed medications (Graedon & Graedon, 1996; Lazarou, Pomeranz, & Corey, 1998; Pagliaro & Pagliaro, 1998). Another 2.21 million people are injured each year in the United States as a result of mistakes made in the prescription of legitimate pharmaceuticals by health care professionals (Lazarou et al., 1998). The annual death toll caused by such drug mistakes in this country is five times the number of deaths caused each year by recreational drug use. Yet there is hardly a whisper from Washington about the impact of drug mistakes, whereas thousands of speeches have been made about the problem of drug misuse. 
If the priority is to save lives, why is there so little attention to drug-prescribing mistakes as a source of premature death? The unique nature of addictive disorders. In spite of all that has been written about the problem of alcohol/drug use/abuse over the years, researchers continue to overlook a very important fact. Unlike other diseases, the substance use disorders require the active participation of the “victim” in order to exist. The capacity for addiction rests with the

Chapter Four

individual, not (as so many would have us believe) with the drug itself (Savage, 1993). The addictive disorders do not force themselves on the individual in the same sense that an infection might. Alcohol or drugs do not magically appear in the individual’s body. Rather, the “victim” of this disorder must go through several steps to introduce the chemical into his or her body. Consider heroin addiction: The addict must obtain the money to buy the drug. Then, he or she must find somebody who is selling heroin and actually buy some for use. Next, the “victim” must prepare the heroin for injection, mixing the powder with water, heating the mixture, and pouring it into a syringe; find a vein to inject the drug into; and then insert the needle into the vein. Finally, after all of these steps, the individual must actively inject the heroin into his or her own body. This is a rather complicated chain of events, each of which involves the active participation of the individual, who is then said to be a “victim” of a disease process. If it took as much time and energy to catch a cold, pneumonia, or cancer, it is doubtful that any of us would ever be sick a day in our lives! The team of O’Brien and McLellan (1996) offered a modified challenge to the disease model of the addictions as it now stands. The authors accepted that drug/alcohol addiction is a form of chronic disease, but they argued that although the addictive disorders are chronic diseases like adult-onset diabetes or hypertension, behavioral factors also help to shape the evolution of these disorders. Thus, according to the authors, “although a diabetic, hypertensive or asthmatic patient may have been genetically predisposed and may have been raised in a high-risk environment, it is also true that behavioral choices . . . also play a part in the onset and severity of their disorder” (p. 237). It is people’s behavior, the decisions they make, that will help to shape the evolution of the addictive disorders. 
Ultimately, people retain responsibility for their behavior, even if they have a “disease” such as addiction (Vaillant, 1983, 1990). In the past 60 years, proponents of the medical model of alcoholism have attempted to identify the biological foundation for abusive drinking. Over the years, a large number of research studies have been published, many


Are People Predestined to Become Addicted to Chemicals?

of which have suggested that alcoholics (a) seem to metabolize alcohol differently than nonalcoholics, or (b) seem to be relatively insensitive (or, depending on the research study, more sensitive) to the effects of alcohol, compared to nonalcoholics. Proponents of the medical model of addiction often point to these studies as evidence of a biological predisposition toward alcoholism. However, in spite of a significant amount of research, no consistent difference has been found in the rate of metabolism, the route by which addicted and nonaddicted individuals biotransform chemicals, or the susceptibility of addicted/nonaddicted individuals to the effects of recreational chemicals. Although substance-abuse rehabilitation professionals talk about the “genetic predisposition” toward alcohol/drug use disorders as if this were a proven fact, the truth is that scientists still have virtually no idea how individual genes, or groups of genes, affect the individual’s behavior (Siebert, 1996). As David Kaiser (1996) observed,

Modern psychiatry has yet to convincingly prove the genetic/biologic cause of any single mental illness. However, this does not stop psychiatry from making essentially unproven claims that . . . alcoholism . . . [is] in fact primarily biologic and probably genetic in origin, and that it is only a matter of time until . . . this is proven. (p. 41)

Thus, at this time, it does not appear that the disease model of addiction as it now stands provides the ultimate answer to the question of why people become addicted to chemicals. The disease model as theory. Since it was first introduced, the disease model of chemical dependency has experienced a remarkable metamorphosis: Although it was first introduced as a theoretical model of alcoholism, it has evolved into the standard model for the treatment of virtually all forms of drug addiction. Further, although the medical model of addiction is but one of several competing theoretical models, proponents do not speak of it as a theoretical model, but as an established fact. In part, this reflects the impact of the medical diagnosis process on the evolution of the disease model, for within this context a disease is

viewed as a clinical entity with a predictable course (Rosenberg, 2002). The diagnostic process provides an avenue of communication between the clinician and the bureaucrat, and it legitimatizes specific illnesses as being worthy of social approval or acceptance (Rosenberg, 2002). When viewed in this light, the disease model of the addictions might be seen as having value, providing “a useful metaphor or reframe for many clients” (Treadway, 1990, p. 42). But it is only an analogy, which has contributed little in the way of new, effective treatment methods for those who are addicted (Marlowe & DeMatteo, 2003). In reality, the disease model of alcohol/drug addiction reflects the unproven tenet of modern medicine: All forms of suffering are caused by a physical disorder of some kind (Breggin, 1998). In spite of rather vocal claims as to the scientific nature of the medical model,

psychiatrists as medical doctors have always claimed that everything they happen to be treating is biological and genetic. [These] claims, in other words, are nothing new. . . . They are inherent in the medical viewpoint. In reality, not a single psychiatric diagnosis, including schizophrenia and manic-depressive disorder have been proven to have a genetic or biochemical origin. (Breggin, 1998, p. 173, italics added for emphasis)

Thus, the theory of the biogenetic foundation of alcoholism, and by extension the other forms of drug addiction, has become dogma. Unfortunately, dogmatists rarely, if ever, question their basic assumptions (Kaiser, 1996). For example, proponents of the disease model seem determined to defend it from all criticism. This process is not uncommon. History has demonstrated time and time again that once a certain theoretical viewpoint has become established, proponents of that position work to protect it from both internal and external criticism (Astrachan & Tischler, 1984). This process may clearly be seen in the disease model of addiction. The current atmosphere is one in which legitimate debate over strengths and weaknesses of the different models of addiction is discouraged. There is only one “true” path to enlightenment, according


to proponents of the disease model, and you should not question its wisdom. In this country the disease model has become “big politics and big business,” a situation that encourages its proponents to turn a deaf ear to other viewpoints (Fingarette, 1988, p. 64). The disease model has formed the basis of a massive “treatment” industry, into which many billions of dollars and thousands of man-years have been invested. In a very real sense, the biogenetic model has taken on a life of its own (Vaillant, 1983). In reality, what is surprising is not that the disease model exists, but that it has become so politically successful in this country. Consider that the treatment methods currently in use are those advocated by the proponents of the disease model, and they have not changed significantly in 40 years (Rodgers, 1994). Further, practitioners commonly have had little training in the application of scientific theories to treatment settings and frequently have only their own history of alcohol/drug addiction as a guide to how to proceed with the treatment process (Marinelli-Casey, Domier, & Rawson, 2002). Many current treatment methods are based not on clinical research but on somebody’s belief that those methods should work (Gordis, 1996). Not surprisingly, there is evidence that current treatment methods for the addictions may be less effective than doing nothing for the individual (Larimer & Kilmer, 2000). Proponents of the medical model are hardly likely to go to insurance companies or the public after 60-odd years of claiming that the addictions are diseases and admit that treatment does not work. Rather, as Peele (1989) pointed out, when the “treatment” of an addictive disorder is unsuccessful, the blame is usually put on the patient through such claims as “She did not want to quit,” or on the existence of unproven “overwhelming and uncontrollable impulses” (Shaffer, 2001, p. 2). 
But the blame is never placed on the disease model, in spite of an extensive body of evidence that suggests it has not been successful in the treatment of the addictive disorders. Summary of reaction to the disease model of addiction. A welcome breath of fresh air was offered by Miller (1998) who observed,


In the end, even in more biologically oriented treatment programs, clients in effect are left to use their rational capacities of deciding, accepting, choosing, and controlling themselves. And so they do, by the millions, with or more often without treatment. . . . Motivation [to quit using chemicals] does not seem to be a matter of insurmountable biology. (p. 122)

People in the United States seem to be fascinated with biological explanations for addictive disorders. Although the available data do seem to point to a biological factor in substance abuse, researchers have not been able to identify the specific biological mechanism or genetic pattern that seems to predispose the individual to the addictive use of chemicals. Indeed, researchers in the field of behavioral genetics are viewing alcohol dependence as “polygenic,” a behavior that reflects the input of a number of different genes (Gordis, 1996). Each of these genes then adds to or subtracts from the individual’s total risk for developing an addiction to alcohol. But genetic predisposition does not mean predestination (Schuckit, 2001). Rather, it is wise to remember that “in no mental illness is there expected to be a one-to-one relationship between the genes and disease. Instead, genes are thought of as ‘risk factors’ that increase the probability that mental illness will occur but that do not determine it” (McMahon, 2003, pp. 63–64). For example, although Schuckit’s (1994) study was suggestive of possible genetic factors that might predispose the individual to alcohol dependence, the combination of low response to the test dose of alcohol and having a family history of alcoholism still only accounted for approximately 22% of the individual’s later risk for an alcohol-use disorder (Lehrman, 2004). Environmental forces such as an adverse childhood environment must also be present for the individual to develop an alcohol-use disorder (Schuckit, 2001; Small, 2002). Currently, the evidence suggests that the individual’s genetic heritage accounts for about 60% of the ultimate risk for alcoholism, whereas the environment contributes the remainder (Schuckit, 2001).


The Personality Predisposition Theories of Substance Abuse

Personality factors have long been suspected of playing a role in the development of the substance-use disorders, but research has failed to isolate a prealcoholic personality (Renner, 2004). In spite of this, certain constellations of personality patterns seem to be associated with some subtypes of alcoholism. Type II alcoholic males, for example, were found by Cloninger, Sigvardsson, and Bohman (1996) to be three times more likely to be depressed and four times more likely to have attempted suicide than Type I alcoholic males. There are a number of variations on this “predisposing personality” theme, but as a group they all are strongly deterministic in the sense that people are viewed as being powerless to avoid the development of an addictive disorder because of their personality predisposition if they are exposed to certain conditions. This is clearly seen in the “very word addict [which] confers an identity that admits no other possibilities” (Peele, 2004a, p. 43, italics in original). For example, a number of researchers have suggested that the personality traits of impulsiveness, thrill seeking, rebelliousness, aggression, and nonconformity were “robust predictors of alcoholism” (Slutske et al., 2002, p. 124). Other researchers, however, found little evidence to suggest personality factors represent familial or heritable risk factors (Swendsen, Conway, Rounsaville, & Merikangas, 2002). Thus, the role of personality as a possible predisposing factor for substance use disorders remains elusive at best. Some researchers investigated whether the personality traits of nonconformity, risk taking, and rebelliousness might reflect disturbances in the dopamine utilization system in the brains of individuals who were alcohol abusers/addicts. 
To test this hypothesis, the team of Heinz and colleagues (1996) examined the clinical progress of 64 alcohol-dependent individuals and attempted to assess their sensitivity to dopamine through various biochemical tests. Although the researchers expected to find an association between depression, anxiety, disturbances in dopamine utilization, and alcohol-use problems, there was little evidence to support the popular beliefs


that alcoholism is associated with depression, high novelty seeking, or anxiety. The work of Cloninger et al. (1996) seemed to point to the personality characteristics of Harm Avoidance (HA) and Reward Dependency (RD) as predisposing the individual to substance use disorders. But when the team of Howard, Kivlahan, and Walker (1997) examined a series of research studies that attempted to relate Cloninger’s theory of personality to the development of alcohol abuse/addiction, the authors found that even when a test specifically designed to assess Cloninger’s theory of personality was used, the results did not clearly support the theory that individuals high in the traits of Harm Avoidance and Reward Dependency were significantly more likely to have an alcohol-use disorder. Thus, to date the personality predisposition theoretical models do not allow for more than a general statement that some personality characteristics might increase the long-term risk that a person will become addicted to chemicals. However, which personality characteristics might predispose the individual to become addicted to alcohol and/or drugs is still not clear. At this time, the “alcoholic personality” is viewed as nothing more than a clinical myth that has developed within the field of substance-abuse rehabilitation (Stetter, 2000). Even though there is limited evidence to support these beliefs, clinicians continue to operate on the assumption (a) that alcoholics are developmentally immature, (b) that the experience of growing up in a disturbed family helps to shape the personality growth of the future alcoholic, and (c) that alcohol-dependent individuals tend to overuse ego defense mechanisms such as denial. Unfortunately, much of what is called “treatment” in the United States rests on such assumptions about the nature of the personality of addicted people, which have not been supported in the clinical research. 
Traits identified in one research study as being central to the personality of addicted people are found to be of peripheral importance in subsequent studies. In the face of this evidence, then, one must ask how the myth of the “alcoholic personality” evolved. One possibility is that researchers became confused by the high comorbidity levels between alcohol/drug-use


disorders and antisocial personality disorder (ASPD), especially as 84% (Ziedonis & Brady, 1997) to 90% of individuals with ASPD will have an alcohol/drug use problem at some point in their lives (Preuss & Wong, 2000). This is not to suggest that the antisocial personality disorder caused the substance use. Rather, ASPD and the addiction to chemicals are postulated to be two separate disorders, which may co-exist in the same individual (Schuckit, Klein, Twitchell, & Smith, 1994; Stetter, 2000). An alternate theory about how people began to believe that there was an “addictive personality” might be traced to the impact of psychoanalytic thought in the first half of the 20th century. There is no standard definition or form of psychoanalysis, but as a group the psychoanalytic schools postulated that substance abuse is a symptom of an underlying disorder that motivates the individual to abuse chemicals in an attempt to calm these inner fires (Leeds & Morgenstern, 2003). Various psychoanalytic theorists offered competing theories as to the role of substance misuse in the personality of the addicted person, but essentially all major psychoanalytic theories suggest that there is an “addictive personality” that suffers from an internal conflict that paves the way for addictive behavior. While this is theoretically appealing, psychoanalytic inquiry has failed to agree on the nature of this conflict or how it might be addressed (Leeds & Morgenstern, 2003). In spite of these failings, psychoanalytic theories have continued to influence the way addictive behaviors are viewed. Another theory suggesting that the “addictive personality” might be a research artifact was advanced by Pihl (1999). The author, drawing on earlier research, pointed out that 93% of the early research studies that attempted to isolate the so-called addictive personality were based on samples drawn from treatment centers. 
Unfortunately, there are major differences between those people who are and are not in treatment for a substance-use problem, beyond the obvious fact that one group has entered a treatment program and the other has not. The early studies cited by Pihl (1999) might have isolated a “treatment personality” more than an “addictive” personality, with those people who enter formal rehabilitation programs having


common personality traits as compared with those who did not enter treatment. Ultimately, however, the study of the whole area of personality growth and development, not to mention the study of those forces that initially shape and later maintain addiction, is still so poorly defined that it is quite premature to answer the question of whether there are personality patterns that may precede the development of substance-use disorders. The abuses of the medical model. Unfortunately, since the time of its introduction, the disease model of alcoholism has been misused—or perhaps “misapplied” might be a better term—to the point that “judges, legislators, and bureaucrats . . . can now with clear consciences get the intractable social problems caused by heavy drinkers off their agenda by compelling or persuading these unmanageable people to go elsewhere—that is, to get ‘treatment’” (Fingarette, 1988, p. 66). This is because the substance-use disorders exist at the boundary between biological facts and social values (Rosenberg, 2002; Wakefield, 1992). Indeed, it has been argued that the term “mental disorder” is merely an evaluation label that justifies the use of medical power (in the broad sense, in which all the professions concerned with pathology, including psychiatry, clinical psychology, and clinical social work, are considered to be medical) to intervene in socially disapproved behavior. (Wakefield, 1992, p. 374)

This statement remains no less true in today’s world and is supported by writers such as Bracken (2002), who observed that “psychiatric classification systems do not hold some universal truth about madness and distress” (p. 4). Rather, such systems are arbitrary classification systems designed to provide a way to understand the individual’s experiences and problems. Such classification systems are useful but hold significant dangers as well. There is the ever-present danger that they might be used as weapons to silence those who disagree with the authorities (Bracken, 2002). For example, the “patient” suffers from a profound schizophrenia, which by definition means that she is unable to correctly interpret reality, and so her observations about her treatment are



inherently incorrect. Also, the emphasis of a psychiatric classification system on psychopathology “can be profoundly disempowering and stigmatizing” (Bracken, 2002, p. 4). Thus, although this was not the original intent, the medical (psychiatric) diagnostic system has become a way to control social deviance (Bracken, 2002; Rosenberg, 2002). Armed with a diagnosis of abnormality, the guardians of social order, the courts and the lawyers, have assumed the power to define how this deviant behavior is to be treated. Within this context, one could argue that the “war on drugs” is nothing more than a politically inspired program to control individuals who were defined by conservative Republicans as social deviants (Humphreys & Rappaport, 1993). According to the authors, the war on drugs essentially served the Reagan administration as a “way to redefine American social control policies in order to further political aims” (p. 896). By shifting the emphasis of social control away from the community mental health center movement to the war on drugs, the authors suggested, justification was also found for a rapid and possibly radical expansion of the government’s police powers, and the “de facto repeal of the Bill of Rights” (Duke, 1996, p. 47). Indeed, the charge has been made that the community mental health movement itself has been subverted by government rules and regulations until it has become little more than “an arm of government enforcement” (Cornell, 1996, p. 12). As part of this process, many forms of nonconformist behaviors, including substance misuse, are now mistakenly classified not as political inconvenience but as psychiatric problems (Wilson & Trott, 2004). But the medical model of the addictive disorders might be viewed as having evolved into an excuse to extend the government’s police powers by making nonconformist behavior a medical disorder. 
Most certainly, drivers who operate motor vehicles while intoxicated present a very real problem of social deviance. However, one must question the wisdom of sending the chronic offender to “treatment” time and time again, when his or her acts warrant incarceration, and it has been argued that incarceration may help bring about a greater behavior change in these people than

would repeated exposure to short-term treatment programs (Peele, 1989). In an ideal world, one question that would be considered is this: At what point should treatment be offered as an alternative to incarceration, and when should incarceration be imposed on the chronic offender? Unfortunately, all too often, the courts fail to consider this issue before sending the offender to “treatment” once more.

The Final Common Pathway Theory of Addiction

As should be evident by now, most practitioners in the field view the addictions as a multimodal process, resting on a foundation of genetic predisposition and a process of social learning (Monti et al., 2002). But to date both the biological and the psychosocial theories of addiction have failed to explain all of the phenomena found in substance abuse/addiction, and a grand unifying theory of addiction has yet to evolve. But there is another viewpoint to consider, one called the final common pathway (FCP) theory of chemical dependency. In a very real sense, FCP is a non-theory: It is not supported by any single group or profession. However, the final common pathway perspective holds that substance use/abuse is not the starting point but a common endpoint of a unique pattern of growth. According to the FCP theory, there is no single “cause” of drug dependency but a multitude of different factors that may contribute to or detract from an individual’s chance of becoming addicted to chemicals. These might include social forces, psychological conditioning, how the person copes with internal pain, a spiritual shortcoming, or some combination of other factors. The proponents of this position acknowledge a possible genetic predisposition toward substance abuse. But the FCP theory also suggests that it is possible for a person who lacks this genetic predisposition for drug dependency to also become addicted to chemicals, if he or she has the proper life experiences. Strong support for the final common pathway model of addiction might be found in the latest neurobiological research findings. Over time, evolution has equipped humans (and many other species) with


a “reward system” that is activated when the individual engages in some activity that enhances survival (Nesse & Berridge, 1997; Nestler, Hyman, & Malenka, 2001; Selim, 2001; Stahl, 2000). The drugs of abuse seem to activate this so-called pleasure center or the reward system of the brain (Gardner, 1997; Reynolds & Bada, 2003). This is often called the pharmacological reward potential of the drugs being used. In effect, the final common pathway theory of addiction holds that the various drugs of abuse “create a signal in the brain that indicates, falsely, the arrival of a huge fitness benefit” (Nesse & Berridge, 1998, p. 64; Reynolds & Bada, 2003). This signal involves, at least in part, a spike in the dopamine levels of the ventral tegmentum (Saal, Dong, Bonci, & Malenka, 2003) and nucleus accumbens (Leshner, 2001b) regions of the brain. These regions of the brain are interconnected and form part of the brain’s reward system (Salloway, 1998; O’Brien, 1997; Fleming, Potter, & Kettyle, 1996; Hyman, 1996; Blum, Cull, Braverman, & Comings, 1996; Restak, 1994), which is part of what is known as the mesolimbic dopamine system of the brain (Leshner, 1998; Stahl, 2000). The “mesolimbic reward system . . . extends from the ventral tegmentum to the nucleus accumbens, with projections to areas such as the limbic system and the orbitofrontal cortex” (Leshner, 1998, p. 46). The mesolimbic dopamine system seems to function as a focal point for the brain’s reward system, projecting electrochemical messages to the limbic system (where emotions are thought to be generated), and the frontal cortex (a region of the brain involved with consciousness and planning). 
Research has demonstrated that the drugs of abuse cause a five- to tenfold increase in the dopamine levels in these regions of the brain, at least at first, and it is theorized that when the dopamine levels fall after the individual stops using drugs for a period of time, the subjective experience is that of a sense of “craving” (Anthony, Arria, & Johnson, 1995; Nutt, 1996; O’Brien, 1997). There still is much to learn about how the drugs of abuse alter brain function. For example, we know that the current drugs of abuse alter the function of the locus ceruleus. This appears to be the region of the brain that coordinates the body’s response to both novel external stimuli and to internal stimuli that might signal a danger to the individual (Gourlay &


Benowitz, 1995). Thus, the locus ceruleus will respond to such internal stimuli as blood loss, hypoxia, and pain. The locus ceruleus is also involved in the “fight-or-flight” response of fear and anxiety. This makes clinical sense, as in ages past novel stimuli might prove dangerous to the individual (such as the first time the observer sees a mountain lion running at him). It also is not surprising that this region of the brain is involved in the body’s response to the various drugs of abuse. Last, the final common pathway model of addiction views substance dependence as a common endpoint. Earlier editions of this text had suggested that the different drugs of abuse might activate different nerve pathways but that the final step was the activation of the brain’s “reward” or “pleasure” center, and that this was where the phenomenon of addiction was centered. The team of Saal et al. (2003) arrived at this same conclusion on the basis of their clinical research on brain function. The various drugs of abuse might follow different nerve pathways, but they all eventually activate the same regions of the brain’s pleasure center. This, then, is the core element of addiction according to the final common pathway theory of addiction: Addiction is the common endpoint for each individual who suffers from the compulsion to use chemicals. To treat the addiction, the chemical dependency counselor must identify the forces that brought about and support this individual’s addiction to chemicals. With this understanding, the counselor might establish a treatment program that will help the individual abstain from further chemical abuse.

Summary

Although the medical model of drug dependency has dominated the treatment industry in the United States, this model is not without its critics. For each study that purports to identify a biophysical basis for alcoholism or other forms of addiction, other studies fail to document such a difference. For each study that claims to have isolated personality characteristics that seem to predispose one toward addiction, other studies fail to find that predictive value in these characteristics, or find that the personality

Are People Predestined to Become Addicted to Chemicals?

characteristic in question is brought about by the addiction, not one that predates it. Some researchers see the medical model of addiction as a metaphor through which people might better understand their problem behavior. However, the medical model of addiction is a theoretical model,


one that has not been proven and one that does not easily fit into the concept of disease as medicine in this country understands the term. Indeed, it was suggested that drugs were themselves valueless, and that it was the use to which people put the chemicals that was the problem, not the drugs themselves.


Addiction as a Disease of the Human Spirit

Introduction

To some, addiction is best understood as a disease of the “spirit,” a disconnection syndrome in which the person’s relationship with “self” and a higher power is replaced with the false promises of the chemical (Alter, 2001). The concept of alcoholism as a spiritual disorder forms the basis of the Alcoholics Anonymous program (Miller & Hester, 1995; Miller & Kurtz, 1994). From this perspective, to understand the reality of addiction is, ultimately, to understand something of human nature. In this chapter, the spiritual foundation for the addictive disorders will be explored.

The Rise of Western Civilization, or How the Spirit Was Lost

Throughout the 20th century and the first decade of the 21st century, science and spirituality have been moving farther and farther apart. Although spirituality is recognized as one of the factors that helps to define, give structure to, and provide a framework within which to interpret human existence (Mueller, Plevak, & Rummans, 2001), in today’s world of medicine many “physicians question the appropriateness of addressing religious or spiritual issues within a medical setting” (Koenig, 2001, p. 1189). The physician’s discomfort reflects the attitude of “enlightened” society, which turns away, as if embarrassed by the need to discuss “spiritual” matters. To such a person, the “spirit” is viewed as a remnant of man’s primitive past, just like spears or clothing made of animal skins. In this way, the “enlightened” person turns away from his or her roots.

The word spirit is derived from the Latin word spiritus, which on one level simply means “breath” (Mueller et al., 2001). On a deeper level, however, spiritus refers to the divine, living force within each of us. Yet human beings hold a unique position in the circle of life on Earth. In humankind, life, spiritus, has become aware of itself as being apart from nature, and we are all aware of our isolation from one another (Fromm, 1956). This awareness is known as “self-awareness.” But with the awareness of “self” comes the painful understanding that each of us is forever isolated from his fellows. Fromm termed this awareness of one’s basic isolation an “unbearable prison” (p. 7), in which are found the roots of anxiety and shame. “The awareness of human separation,” wrote Fromm, “without reunion by love is the source of shame. It is at the same time the source of guilt and anxiety” (p. 8). A flower, bird, or tree cannot help being what its nature ordains: a flower, bird, or tree. A bird does not think about being a bird or what kind of a bird it might become. The tree does not think about “being” a tree. Each behaves according to its gifts to become a specific kind of bird or tree. Arguably, each must live the life that it was predestined to live. But man possesses the twin gifts of self-awareness and self-determination. These gifts, however, carry a price. Fromm (1956, 1968) viewed humans’ awareness of their fundamental aloneness as being the price they had to pay for the power of self-determination. Humans, by self-awareness, have come to know that they are different from the animal world. With the awareness of “self” comes the power of self-determination. But self-awareness also brought a sense of isolation from the rest of the universe. People became aware of “self,” and in so doing came to know loneliness. It is only through the giving of “self” to another through love that Fromm (1956, 1968) envisioned man as transcending his isolation to become part of a greater whole.



Merton (1978) took a similar view on the nature of human existence. Yet Merton clearly understood that one could not seek happiness through the compulsive use of chemicals. He discovered that “there can never be happiness in compulsion” (p. 3). Rather, happiness may be achieved through the love that is shared openly and honestly with others. Martin Buber (1970) took an even more extreme view, holding that only through our relationships does our life have definition. Everyone stands “in relation” to another. The degree of relation, the relationship, is defined by how much of the “self” one offers to another, and what flows back in return. The reader might question what relevance this material has to a text on chemical dependency. The answer is found in the observation that the early members of Alcoholics Anonymous came to view alcoholism (and by extension, the other forms of addiction) as a “disease.” The disease of addiction to alcohol (and the other drugs of abuse) was viewed as being unique. In their wisdom, these early members saw alcoholism as a disease not only of the body but also of the spirit. In so doing, they transformed themselves from helpless victims of alcoholism into active participants in the healing process of recovery. Out of this struggle, the early members of Alcoholics Anonymous came to share an intimate knowledge of the nature of addiction. They viewed addiction not as a phenomenon to be dispassionately studied but as an elusive enemy that had a firm hold on each member’s life. Rather than focusing on the smallest common element that might “cause” addiction, they sought to understand and share in the healing process of sobriety. In so doing, these early pioneers of AA learned that recovery was a spiritual process through which the individual recaptured the spiritual unity that he or she could not find through chemicals.
Self-help groups such as Alcoholics Anonymous and Narcotics Anonymous¹ do not postulate any specific theory of how chemical addiction comes about (Herman, 1988). They assume that any person whose chemical use interferes with his or her life has an addiction problem. The need to attend AA was, to its founders, self-evident to the individual in that either you were addicted to alcohol or you were not. Addiction itself was viewed as resting on a spiritual flaw within the individual. Those who were addicted were viewed as being on a spiritual search.

They really are looking for something akin to the great hereafter, and they flirt with death to find it. Misguided, romantic, foolish, needful, they think they can escape from the world by artificial means. And they shoot, snort, drink, pop or smoke those means as they have to leave their pain and find their refuge. At first, it works. But, then it doesn’t. (Baber, 1998, p. 29)

¹Although there are many similarities between AA and NA, these are separate programs. On occasion, they might cooperate on certain matters, but each is independent of the other.

In a very real sense, the drugs do not bring about addiction; rather, the individual comes to abuse or be addicted to drugs because of what he or she believes to be important (Peele, 1989). Such spiritual flaws are not uncommon, and they usually pass unnoticed in the average person. But for the alcohol/drug-addicted person, his or her spiritual foundation is such that chemical use is deemed acceptable, appropriate, and desirable as a means to reach a goal that is ill-defined at best. One expression of this spiritual flaw is the individual’s hesitation to take responsibility for the “self” (Peele, 1989). Personal suffering is, in a sense, a way of owning responsibility for one’s life. Most certainly, suffering is an inescapable fact of life. We are thus granted endless opportunities to take personal responsibility for our lives. Unfortunately, modern society looks down on the process of individual growth and the pain inherent in growth. With its emphasis on individual happiness, society views any pain as unnecessary, if not dysfunctional. Further, modern society advocates that pain automatically be eradicated through the use of medications, as long as the pills are prescribed by a physician (Wiseman, 1997). A reflection of this modern neurosis is that many people are

willing to go to quite extraordinary lengths to avoid our problems and the suffering they cause, proceeding far afield from all that is clearly good and sensible in order to find an easy way out, building the most elaborate fantasies in which to live, sometimes to the total exclusion of reality. (Peck, 1978, p. 17)

In this, the addicted person is not unique. Many people find it difficult to accept the suffering that life offers to us. We all must come to terms with personal responsibility and with the pain of our existence. But the addicted person chooses a different path from that of the average person. Addiction might be viewed as an outcome of a process through which the individual utilizes chemicals to avoid acknowledging and accepting life’s problems. The chemicals lead the individual away from what he or she believes is good and acceptable in return for the promise of comfort and relief.

Diseases of the Mind—Diseases of the Spirit: The Mind-Body Question

As B. S. Siegel (1986) and many others have observed, modern medicine enforces an artificial dichotomy between the individual’s “mind” and “body.” As a result, modern medicine has become rather mechanical, with the physician treating “symptoms,” or “diseases,” rather than the “patient” as a whole (Cousins, 1989; B. S. Siegel, 1989). In a sense, the modern physician has become a very highly skilled technician who often fails to appreciate the unique person now in the role of a patient. Diseases of the body are viewed as falling in the realm of physical medicine, whereas diseases of the mind fall into the orbit of the psychological sciences. Diseases of the human spirit, according to this view, are the specialty of clergy (Reiser, 1984). The problem with this perspective is that the patient in reality is not a “spiritual being” or a “psychosocial being” or a “physical being” but a unified whole. Thus, when a person abuses chemicals, the drug use will affect that person “physically, emotionally, socially, and spiritually” (Adams, 1988, p. 20). Unfortunately, society has difficulty accepting that a disease of the spirit—such as addiction—is just as real as a disease of the physical body. But we are indeed spiritual beings, and self-help programs such as Alcoholics Anonymous and Narcotics Anonymous view addiction to chemicals as spiritual illnesses. Their success in helping people to achieve and maintain abstinence suggests that there is some validity to this claim. However, society continues to adhere to the artificial mind-body dichotomy and, in the process, struggles to come to terms with the disease of addiction, which is neither totally a physical illness nor exclusively one of the mind.

The Growth of Addiction: The Circle Narrows

As the disease of alcoholism progresses, the individual comes to center his or her life around the use of alcohol. Indeed, one might view alcohol as being the “axis” (Brown, 1985, p. 79) around which the alcoholic’s life revolves. Alcohol comes to assume a role of “central importance” (p. 78) both for the alcoholic and the family. It is difficult for those who have never been addicted to chemicals to understand the importance that the addict attaches to the drug of choice. Those who are addicted will demonstrate a preoccupation with their chemical use and will protect their source of chemicals. To illustrate, it is not uncommon for cocaine addicts to admit that if they had to make a choice, they would choose cocaine over friends, lovers, or even family. In many cases, the drug-dependent person has already made this choice—in favor of the chemicals. The grim truth is that the active addict is, in a sense, insane. One reflection of this moral insanity is that the drug has taken on a role of central importance in the addict’s life. Other people, other commitments, become secondary. Addicted people might be said “never . . . to outgrow the self-centeredness of the child” (Narcotics Anonymous World Service Office, Inc., 1983, p. 1). In exploring this point, the book Narcotics Anonymous (Narcotics Anonymous World Service Office, Inc., 1982) noted:

Before coming to the fellowship of NA, we could not manage our own lives. We could not live and enjoy life as other people do. We had to have something different and we thought we found it in drugs. We placed their use ahead of the welfare of our families, our wives, husbands, and our children. We had to have drugs at all costs. (p. 11, italics in original deleted)


As experienced mental health professionals can affirm, there are many people whose all-consuming interest is themselves. They care for nothing outside that little portion of the universe known as “self.” In this sense, chemical addiction is a form of self-love, or perhaps more accurately, a perversion of self-love. It is through the use of chemicals that such people seek to cheat themselves of the experience of reality, replacing it with the distorted desires of the “self.” To say that those who are addicted demonstrate an ongoing preoccupation with chemical use is something of an understatement. They generally demonstrate an exaggerated concern about maintaining their supply of the drug, and they may avoid those who might prevent further drug use. For example, consider an alcoholic who, with six or seven cases of beer in storage in the basement, goes out to buy six more “just in case.” This behavior demonstrates the individual’s preoccupation with maintaining an “adequate” supply. Other people, when their existence is recognized at all, are viewed by the addict either as being assets in the continued use of chemicals or impediments to drug use. But nothing is allowed to come between the individual and his or her drug, if at all possible. It is for this reason that recovering addicts speak of their still-addicted counterparts as being morally insane.

The Circle of Addiction: Addicted Priorities

The authors of Narcotics Anonymous concluded that addiction was a disease composed of three elements: (a) a compulsive use of chemicals, (b) an obsession with further chemical use, and (c) a spiritual disease that is expressed through a total self-centeredness. It is this complete self-absorption, the spiritual illness, that causes the person to demand “what I want when I want it!” and makes the individual vulnerable to addiction. But for the person who holds this philosophy to admit to it would mean that he or she would have to face the need for change. So those who are addicted to chemicals will use the defense mechanisms of denial, rationalization, projection, and/or minimization to justify their increasingly narrow range of interests both to themselves and to significant others.


To support their addiction, people must renounce more and more of the “self” in favor of new beliefs and behaviors that make it possible to continue to use chemicals. This is the spiritual illness of addiction, for the individual comes to believe that “nothing should come between me and my drug use!” No price is too high nor is any behavior unthinkable if it allows for further drug use. People will be forced to lie, cheat, and steal to support their addiction, and yet they will seldom count the cost—as long as they can obtain the alcohol/drugs that they crave. Although many addicts have examined the cost demanded by their drug use and have turned away from chemicals with or without formal treatment, there are others who accept this cost willingly. These individuals will go to great lengths to hide the evidence of their drug addiction so that they are not forced to look at the grim reality that they are addicted. Those who are alcohol/drug addicts are active participants in this process, but they are also blinded to its existence. If you were to ask them why they use alcohol, you would be unlikely to learn the real reason. As one individual said at the age of 73, “You have to understand that the reason I drink now is because I had pneumonia when I was 3 years old.” For her to say otherwise would be to admit that she had a problem with alcohol, an admission that she had struggled very hard to avoid for most of her adult life. As the addiction comes to control more and more of their lives, those who are addicted must expend greater and greater effort to maintain the illusion that they are living normally. Gallagher (1986) told of one physician, addicted to a synthetic narcotic known as fentanyl, who ultimately would buy drugs from the street because he could no longer divert enough drugs from hospital sources to maintain his drug habit.
When the tell-tale scars from repeated injections of street drugs began to form, this same physician intentionally burned himself on the arm with a spoon to hide the scars. Addicted people also find that as the drug comes to control more and more of their existence, they must invest significant effort in maintaining the addiction itself. More than one cocaine or heroin addict has had to engage in prostitution (homosexual or heterosexual) to earn enough money to buy more chemicals.



Everything is sacrificed to obtain and maintain what the addict perceives as an “adequate” supply of the chemicals.

Some Games of Addiction

One major problem in working with those who are addicted to chemicals is that these individuals will often seek out sources of legitimate pharmaceuticals either to supplement their drug supply or to serve as their primary source. There are many reasons for this. First, as Goldman (1991) observed, they may purchase pharmaceuticals legally if there is a legitimate medical need for the medication. The drug user does not need to fear arrest with a legitimate prescription for a medication signed by a physician. Second, for the drug-addicted person who is able to obtain pharmaceuticals, the medication is a known product, at a known potency level. The drug user does not have to worry about low-potency “street” drugs, impurities that may be part of the drugs purchased on the street (as when PCP is mixed with low-potency marijuana), or misrepresentation (as when PCP is sold as “LSD”). Also, the pharmaceuticals are usually much less expensive than street drugs. For example, the pharmaceutical analgesic hydromorphone costs about $1 per tablet at a pharmacy. On the street, each tablet might sell for as much as $45 to $100 (Goldman, 1991). To manipulate physicians into prescribing desired medications, addicts are likely to “use ploys such as outrage, tears, accusations of abandonment, abject pleading, promises of cooperation, and seduction” (Jenike, 1991, p. 7). The physician who works with addicted individuals must remember that they care little for the physician’s feelings. For them, the goal is to obtain more drugs at virtually any cost. One favorite manipulative ploy is for the addict (or an accomplice) to visit the hospital emergency room (Klass, 1989) or the physician’s office in an attempt to obtain medication. The addict will then either simulate an illness or use a real physical illness, if one is present, as an excuse to obtain desired medications.
Sometimes the presenting complaint is “kidney stones,” or a story about how other doctors or emergency room personnel have not been able to help the patient, or a story about how the individual “lost” the medication, or how the “dog ate it,” and so on.

Patients who have been asked to submit a urine sample for testing have sometimes secretly pricked their fingers with needles to squeeze some blood into the urine to support their claim that they were passing a kidney stone. Others have inserted foreign objects into the urethra to irritate the tissues lining it so they could provide a “bloody” urine sample. The object of these games is to obtain a prescription for narcotics from a sympathetic doctor who wants to treat the patient’s obvious “kidney stone.” Addicted individuals have been known to go to an emergency room with a broken bone, have the bone set, and go home with a prescription for a narcotic analgesic (provided to help the patient deal with the pain of a broken bone). Once at home, the patient (or an accomplice) removes the cast and the patient goes to another hospital emergency room to have yet another cast applied to the injured limb, in the process receiving another prescription for a narcotic analgesic. In a large city, this process might be repeated 10 times or more (Goldman, 1991). It is also not unusual for addicted persons to study medical textbooks to learn what symptoms to fake and how to provide a convincing presentation of these symptoms to health care professionals. In many cases, the addicted person knows more about the simulated disorder than does the physician who is treating it!

A Thought on Playing the Games of Addiction

A friend who worked in a maximum security penitentiary for men was warned by older, more experienced corrections workers not to try to “out con a con”—that is, don’t try to out-manipulate the individual whose entire life centers on manipulating others.
“Remember that while you are home watching the evening news, or going out to see a movie, these people have been working on perfecting their ‘game.’ It is their game, their rules, and in a sense their whole life.” This lesson applies when working with addicted individuals, for addiction is a lifestyle, one that involves to a large degree the manipulation of others into supporting the addiction. Of course, the addict can, if necessary, “change his spots,” at least for a short time. This is especially true early in the addiction process or during the early stages of treatment. Often, addicts will go “on the wagon” for a few days, or perhaps even a few weeks, to prove both to themselves and to others that they can “still control it.” Unfortunately, they fail to realize that by attempting to “prove” their control, they are actually demonstrating their lack of control over the chemicals. However, as the addiction progresses, more and more effort is required to motivate these people to give up their drug, even for a short time. Eventually, even “a short time” becomes too long. There is no limit to the manipulations that addicted individuals will use to support their addiction. Vernon Johnson (1980) spoke at length of how they will even use compliance as a defense against treatment. Overt compliance is often utilized as a defense against acceptance of their own spiritual, emotional, and physical deficits (Johnson, 1980).

Honesty as a Part of the Recovery Process

One of the core features of the physical addiction to a chemical is “a fundamental inability to be honest . . . with the self” (Knapp, 1996, p. 83, italics in original). Honesty is the way to break through this deception, to bring the person face to face with the reality of the addiction. The authors of Narcotics Anonymous (1982) warned that progression toward the understanding that one was addicted was not easy. Indeed, self-deception was part of the price the addict paid for addiction; “only in desperation did we ask ourselves, ‘Could it be the drugs?’ ” (pp. 1–2). Addicted people will often say with pride that they have been more or less “drug free” for various periods of time. The list of reasons the individual is drug free is virtually endless. This person is drug free because her husband threatened divorce if she continued her use of chemicals. (But she secretly longs to return to chemical use and will do so if she can find a way.) Another person is drug free because his probation officer has a reputation for sending people to prison if their urine sample (drawn under strict supervision) is positive for chemicals. (But he is counting the days until he is no longer on probation and possibly will even sneak an occasional drink or episode of drug use if he thinks he can get away with it.) In each instance, the person is drug free only because of an external threat. In virtually every case, as soon as the external threat is removed, the individual will drift back to chemicals. It is simply impossible for one person to provide the motivation for another person to remain drug free forever.

Many addicted people have admitted, often only after repeated and strong confrontation, that they had simply switched addictions to give the appearance of being drug free. It is not uncommon for an opiate addict in a methadone maintenance program to use alcohol, marijuana, or cocaine. The methadone does not block the euphoric effects of these drugs as it does the euphoria of narcotics. Thus, the addicted person can maintain the appearance of complete cooperation, appearing each day to take the methadone without protest, while still using cocaine, marijuana, or alcohol at will. In a very real sense, the addicted person has lost touch with reality. Over time, those who are addicted to chemicals come to share many common personality traits. There is some question whether this personality type, the so-called addicted personality, predates addiction or evolves as a result of the addiction (Bean-Bayog, 1988; Nathan, 1988). However, this chicken-or-egg question does not alter the reality that for the addict, the addiction always comes first. Many addicted people have admitted going without food for days, but very few would willingly go without using chemicals for even a short period of time. A cocaine addict will admit to avoiding sexual relations with a spouse or significant other in order to continue using cocaine. Just as the alcoholic will often sleep with an “eye opener” (an alcoholic drink) already mixed by the side of the bed, addicts have spoken about how they had a “rig” (a hypodermic needle) loaded and ready for use so that they could inject the drug even before they got out of bed for the day. Many physicians have boasted that the patients they worked with had no reason to lie to them.
One physician declared that he knew a certain patient did not have prescriptions from other doctors because the patient “told me so!” The chemical dependency professional needs to remember at all times the twin realities that (a) for the person who is addicted, the chemical comes first, and (b) the addicted person centers his or her life around the chemical. For the physician to lose sight of this reality is to run the danger of being trapped in the addict’s web of lies, half-truths, manipulations, or outright fabrications.



Recovering addicts will admit how manipulative they were, often saying they were their own worst enemy. For as they move along the road to recovery, addicted people come to realize that they also deceived themselves as part of the addiction process. One inmate said, “Before I can run a game on somebody else, I have to believe it myself.” As the addiction progresses, addicts do not question their perceptions but come to believe what they need to believe in order to maintain the addiction.

False Pride: The Disease of the Spirit

Every addiction is, in the final analysis, a disease of the spirit. Edmeades (1987) relates that in 1931, Carl Jung was treating an American, Rowland H., for alcoholism. Immediately after treatment, Rowland H. relapsed but was not accepted back into analysis by Jung. His only hope of recovery, according to Jung, lay in a spiritual awakening, which he later found through a religious group in America. Carl Jung identified alcoholism (and by implication all forms of addiction) as diseases of the spirit (Peluso & Peluso, 1988). The Twelve Steps and Twelve Traditions of Alcoholics Anonymous (1981) speaks of addiction as being a sickness of the soul. In support of this perspective, Kandel and Raveis (1989) found that a “lack of religiosity” (p. 113) was a significant predictor of continued use of cocaine or marijuana for young adults. For each addicted individual, a spiritual awakening appears to be an essential element of recovery. In speaking with addicted people, one is impressed by how often they have suffered in their lives. It is almost as if a path can be traced from the emotional trauma to the addiction. Yet the addict’s spirit is not crushed at birth, nor does the trauma that precedes addiction come about overnight. The individual’s spirit comes to be diseased over time, as the addict-to-be comes to lose his or her way in life. Fromm (1968) observed that “we all start out with hope, faith and fortitude” (p. 20). However, the assorted insults of life often join forces to bring about disappointment and a loss of faith. The individual comes to feel an empty void within. It is at this point that if something is not found to fill the addict’s “empty heart, he will fill his stomach with artificial stimulants and sedatives” (Graham, 1988, p. 14). An excellent example of this process might be seen in the Poland of a decade ago. Many of that country’s young adults, shaped by years of economic hardship and the martial law of the 1980s, saw no productive future for themselves. Often they turned to heroin to ease their pain (Ross, 1991).

Few of us escape moments of extreme disappointment or awareness (Fromm, 1968). It is at these times that people are faced with a choice. They may “reduce their demands to what they can get and do not dream of that which seems to be out of their reach” (Fromm, 1968, p. 21). The Narcotics Anonymous pamphlet The Triangle of Self-Obsession (Narcotics Anonymous World Service Office, Inc., 1983) observed that this process is, for most, a natural part of growing up. But the person who is in danger of addiction refuses to reduce those expectations. Rather, the addicted person comes to demand “What I want when I want it!” The Triangle of Self-Obsession (Narcotics Anonymous World Service Office, Inc., 1983) noted that addicted people tend to “refuse to accept that we will not be given everything. We become self-obsessed; our wants and needs become demands. We reach a point where contentment and fulfillment are impossible” (p. 1). Despair exists when people consider themselves powerless. Existentialists speak of the realization of ultimate powerlessness as awareness of one’s nonexistence. In this sense, the individual is confronted with the utter futility of existence. Faced with the ultimate experience of powerlessness, people have a choice. They may either accept their true place in the universe, or they may continue to distort their perceptions and thoughts to maintain the illusion of self-importance. Only when they accept their true place in the universe, along with the pain and suffering that life might offer, are they capable of any degree of spiritual growth (Peck, 1978). Their choice is to accept reality or to turn away from it.
Many choose to turn away, for reality does not offer them what they think they are entitled to. In so doing, these people exhibit the characteristic false pride so frequently encountered in addiction. People cannot accomplish the illusion of being more than they are without an increasingly large investment of time, energy, and emotional resources. This lack of humility, the denial of what one is in order to give an illusion of being better than this, plants the seeds of despair (Merton, 1961). Humility implies an honest, realistic view of self-worth. Despair rests upon a distorted view of one’s place in the universe. This despair grows with each passing day, as reality threatens time and again to force upon the individual an awareness of the ultimate measure of his or her existence. In time, external supports are necessary to maintain this false pride. Brown (1985) identified one characteristic of alcohol as being its ability to offer people an illusion of control over their feelings. This is a common characteristic of every drug of abuse. If life does not provide the pleasure drug users feel entitled to, at least they might find this comfort and pleasure in a drug, or combination of drugs, which frees them from life’s pain and misery—at least for a while. What they do not realize, often not until after the seeds of addiction have been planted, is that the chemical offers an illusion only. There is no substance to the self-selected feelings brought about by the chemical, only a mockery of peace. The deeper feelings made possible through the acceptance of one’s lot in life (which is humility) seem to be a mystery to those who are addicted. “How can you be happy?” they ask; “you are nothing like me! You don’t use!!!” Humility is the honest acceptance of one’s place in the universe (Merton, 1961). Included in this is the open acknowledgment of one’s strengths and weaknesses. When people become aware of the reality of their existence, they may accept their lot in life or they might choose to struggle against existence itself. This struggle against acceptance ultimately leads to despair, the knowledge that one is lost (Fromm, 1968). This despair is often so all-inclusive that the “self” seems unable to withstand its attack. Addicts have described this despair as an empty, black void within. Then, as Graham (1988) noted, they have attempted to fill this void with the chemicals they find around them.
The Twelve Steps and Twelve Traditions (1981) viewed false pride as a sickness of the soul. In this light, chemical use might be viewed as a reaction against the ultimate despair of encountering one's lot in life—the false sense of being that says "not as it is, but as I want it!" in response to one's discovery of personal powerlessness. Surprisingly, in light of this self-centered approach to life, various authors have come to view the substance-abusing person as essentially seeking to join with a higher power. But in place of the spiritual struggle necessary to achieve inner peace, the addicted person


seems to take a shortcut through the use of chemicals (Chopra, 1997; Gilliam, 1998; Peck 1978, 1993, 1997b). Thus, May (1988) was able to view alcohol/ drug addiction as side-tracking “our deepest, truest desire for love and goodness” (p. 14). In taking the shortcut through chemical abuse, people find that their lives are dominated by the drugs. They center their existence more and more around further chemical use, until at last they believe that they cannot live without it. Further spiritual growth is impossible when people see chemical use as their first priority. In side-tracking their drive for truth and spiritual growth, addicts develop a sense of false pride, expressed almost as a form of narcissism. The clinical phenomenon of narcissism is a reaction against perceived worthlessness, loss of control, and an emotional pain so intense that it seems almost physical (Millon, 1981). In speaking of the Narcissistic Personality, Millon (1981) observed that such people view their own self-worth in such a way that “they rarely question whether it is valid” (p. 167); they “place few restraints on either their fantasies or rationalizations [and] their imagination is left to run free.” Drug-dependent people are not usually narcissistic personalities in the pure sense of the word, but significant narcissistic traits are present in addiction. One finds that false pride, which is based on the lack of humility, causes people to distort not only their perceptions of “self,” but also of “other,” in the service of their pride and their chemical use (Merton, 1961). People who are self-centered in this way “imagine that they can only find themselves by asserting their own desires and ambitions and appetites in a struggle with the rest of the world” (Merton, 1961, p. 47). In Merton’s words are found hints of the seeds of addiction, for the individual’s chemical of choice allows the individual to impose his or her own desires and ambitions on the rest of the world. 
Brown (1985) speaks at length of the illusion of control over one’s feelings that alcohol gives to the individual. May (1988) also speaks of how chemical addiction reflects a misguided attempt to achieve complete control over one’s life. The drugs of abuse also give an illusion of control to users, a dangerous illusion that allows them to believe that they are imposing their own appetites onto the external world, whereas in reality they are losing their own wills to the chemical.


Chapter Five

Addicted people sometimes talk with pride about their use, not realizing that other people see these descriptions as horrors the users have endured in the service of their addiction. This is known as "euphoric recall," a process in which addicts selectively recall the pleasant aspects of their drug use while forgetting the pain and suffering they have experienced as a consequence (Gorski, 1993). In listening to the alcohol/drug-addicted person, one is almost left with the impression that the speaker is describing the joys of a valued friendship rather than a drug of abuse (Byington, 1997). Addicted people, for example, have spoken at length of the quasi-sexual thrill that they achieved through cocaine or heroin, dismissing the fact that their abuse of these same drugs cost them spouses, families, and perhaps several tens of thousands of dollars. There is a name for this distorted view of one's self and of one's world that comes about with chronic chemical use: It is called the insanity of addiction.

Denial, Rationalization, Projection, and Minimization: The Four Horsemen of Addiction

The traditional view of addiction is that all human behavior, including the addictive use of chemicals, rests on a foundation of characteristic psychological defenses. In the case of chemical dependency, the defense mechanisms that are thought to be involved are denial, rationalization, projection, and minimization. These, like all psychological defenses, are thought to operate unconsciously, in both the intrapersonal and interpersonal spheres. They exist in order to protect the individual from the conscious awareness of anxiety. Often without knowing it, addicted individuals will utilize these defense mechanisms to avoid recognizing the reality of their addiction. For once the reality of the addiction is acknowledged, there is an implicit social expectation that the users will deal with their addiction. Thus, to understand addiction, one must also understand each of these characteristic defense mechanisms.

Denial. Clinical lore among substance-abuse rehabilitation professionals holds that the individual's substance use problem hides behind a wall of denial.

The characteristic denial of users' growing dependence on chemicals, and of the impact the drugs are having on their lives, is thought to be the most common reason that individuals fail to seek help for alcoholism (Wing, 1995). Simply, denial is "a disregard for a disturbing reality" (Kaplan & Sadock, 1996, p. 20). It is a form of self-deception, carried out unconsciously to help people avoid anxiety and emotional distress (Shader, 1994). They accomplish this through a process of selective perception of the past and present so that painful and frightening elements of reality are not recognized or accepted. This has been called "tunnel vision" by the Alcoholics Anonymous program (to be discussed in a later section). Denial is classified as a primitive form of unconscious defense, usually found in the person who is experiencing significant internal and interpersonal distress (Perry & Cooper, 1989).

Projection. This mechanism is an unconscious one, through which material that is emotionally unacceptable in oneself is rejected and attributed to others (Kaplan & Sadock, 1996). Johnson (1980) defined projection differently, noting that the act of projection is the act of "unloading self-hatred onto others" (p. 31, italics in original deleted). At times, the defense mechanism of projection will express itself through the misinterpretation of the motives or intentions of others (Kaplan & Sadock, 1996). Young children who have misbehaved will often cry out "See what you made me do?!" in order to project responsibility for their action onto others. Individuals with substance-use problems will often do this as well, blaming their addiction or unacceptable aspects of their behavior on others.

Rationalization. The third common defense mechanism allows addicted individuals to justify feelings, motives, or behavior that they would otherwise find unreasonable, illogical, or intolerable (Kaplan & Sadock, 1996).
Kaplan and Sadock later noted that rationalization may express itself through users’ “invention of a convincing fallacy” (p. 184) through which their behavior might seemingly be justified. Some examples of rationalization used by addicts include blaming their spouse or family (“if you were married to _____ , you would drink, too!”), or medical problems (a 72-year-old alcoholic might blame his drinking on the fact that he had pneumonia when he was 12, for example).


Minimization. This mechanism operates somewhat differently from the three reviewed earlier; in a sense, minimization operates like rationalization, but in a more specific way. By a variety of mechanisms, addicted individuals who use minimization as a defense will understate the amount of chemicals that they admit to using, or deny the impact that their chemical use has had on their lives. Alcohol-dependent individuals, for example, might pour their drinks into an oversized container, perhaps the size of three or four regular glasses, and then claim that they have "only three drinks a night!" (overlooking the fact that each drink is equal to three regular-sized drinks). Individuals with a substance-use problem might minimize their chemical use by claiming to "only drink four nights a week," and hope that the interviewer does not think to ask whether a "week" means a 5-day workweek or the full 7-day week. In such cases, it is not uncommon to find that such clients drink four nights out of five during the workweek and are intoxicated from Friday evening until they go to bed on Sunday night—with the final result being that they drink six nights out of each full week. Another expression of rationalization occurs when individuals claim time when they were in treatment, in jail, or hospitalized as "straight time" (i.e., time when they were not using chemicals), overlooking the fact that they were unable to get alcohol/drugs because they were incarcerated.2 Another common rationalization is that an individual might only become addicted to artificial chemicals, such as alcohol, amphetamines, or heroin. Obviously, as marijuana is an herb that grows naturally (it is rationalized), the individual could not possibly become addicted to it. Another popular rationalization is that it is "better to be an alcoholic than a needle freak. After all, alcohol is legal!"

Reactions to the Spiritual Disorder Theory of Addiction.
Although the traditional view of substance abuse in the United States has been that the defense mechanisms of denial, rationalization, projection, and minimization are characteristically found in cases of chemical dependency,

2. This is often classified as "situational" abstinence by rehabilitation professionals, especially if the clients admit that they would have used chemicals during these "dry" periods if they could have done so without being caught.


this view is not universally accepted. There is a small but increasingly vocal minority that has offered alternative frameworks within which substance-abuse professionals might view the defense mechanisms that they encounter in their work with addicted individuals. In the 1980s and 1990s, Stanton Peele proved to be a very vocal critic of the medical model of chemical dependency. In his 1989 work on the subject, he spoke at length of how treatment centers often utilize the individual's refusal to admit to his or her addiction as confirmation that the individual is addicted. The individual is automatically assumed to be "in denial" of his or her chemical abuse problem. However, a second possibility, all too often overlooked by treatment center staff, according to Peele, is that the individual might not be addicted to chemicals in the first place. The automatic assumption that the client is "in denial" might blind treatment center staff to the possibility that the individual's refusal to admit to being addicted to chemicals is a reflection of reality and not an expression of denial. This possibility underscores the need for an accurate assessment of the client's substance-use patterns to determine whether there is a need for active intervention or treatment. Miller and Rollnick (2002) offered a theory that radically departs from the belief that addicts typically utilize denial as a major defense against the admission of being "sick." The authors suggest that alcoholics, as a group, do not utilize denial more frequently than any other group. Rather, the authors suggest that a combination of two factors has made it appear that addicts frequently utilize defense mechanisms such as denial, rationalization, and projection in the service of their dependency.
First, the authors suggest that the process of selective perception on the part of treatment center staff makes it appear that substance-dependent people frequently use the defense mechanisms discussed earlier. The authors point to the phenomenon known as the "illusion of correlation" to support this theory. According to the illusion of correlation, human beings tend to remember information that confirms their preconceptions and to forget or overlook information that fails to fit their conceptual model. Substance-abuse professionals would be more likely to remember clients who did use the defense mechanisms of denial, rationalization,



projection, or minimization, according to the authors, because that is what they were trained to expect. Second, Miller and Rollnick (2002) suggested that when substance-abuse rehabilitation professionals utilize the wrong treatment approach for the client’s unique stage of growth, the resulting conflict is interpreted as evidence of denial, rationalization, projection, or minimization. On the basis of their work with addicted individuals, Berg and Miller (1992) also suggested that “denial” is found when the therapist utilizes the wrong treatment approach for the client that he or she is working with. Thus, both teams of clinicians have concluded that defense mechanisms such as “denial” are not a reflection of a pathological condition on the part of the client but the result of the wrong intervention being utilized by the professional who is working with the individual. These theories offer a challenging alternative to the traditional model that shows the addicted person using characteristic defense mechanisms such as those discussed in this chapter.

Summary Many human service professionals who have had limited contact with addiction tend to have a distorted view of the nature of drug addiction. Having heard the

term disease applied to chemical dependency, the inexperienced human service worker may think in terms of more traditional illnesses and may be rudely surprised at the deception that is inherent in drug addiction. Although chemical dependency is a disease, it is a disease like no other. It is, as noted in an earlier chapter, a disease that requires the active participation of the "victim." Further, self-help groups such as Alcoholics Anonymous or Narcotics Anonymous view addiction as a disease of the spirit and offer a spiritual program to help their members achieve and maintain recovery. Addiction is, in a sense, a form of insanity. The insanity of addiction rests upon a foundation of psychological defense mechanisms such as denial, rationalization, projection, and minimization. These mechanisms, plus self-deception, keep the person from becoming aware of the reality of his or her addiction until the disease process has progressed quite far. To combat self-deception, Alcoholics Anonymous places emphasis on honesty, openness, and a willingness to try to live without alcohol. Honesty, both with self and with others, is the central feature of the AA program, which is designed to foster spiritual growth to help the individual overcome his or her spiritual weaknesses.


An Introduction to Pharmacology1

The Prime Effect and Side Effects of Chemicals

Introduction It is virtually impossible to discuss the effects of the various drugs of abuse without touching on a number of essential pharmacological concepts. In this chapter, some of the basic principles of pharmacology will be reviewed, and this should help you to better understand the impact that the different drugs of abuse may have on the user’s body.2 There are many misconceptions about recreational chemicals. For example, some people believe that recreational chemicals are somehow unique. This is not true: They work the same way that other pharmaceuticals do. Alcohol and the drugs of abuse act by changing (strengthening/weakening) a potential that already exists within the cells of the body (Ciancio & Bourgault, 1989; Williams & Baer, 1994). In the case of the drugs of abuse, all of which exert their desired effects in the brain, they modify the normal function of the neurons of the brain. The second misconception about the drugs of abuse is that they are somehow different from legitimate pharmaceuticals. This is also incorrect. Many of the drugs of abuse are—or were—once legitimate pharmaceuticals used by physicians to treat disease. Thus, the drugs of abuse obey the same laws of pharmacology that apply to the other medications in use today.

One rule of pharmacology is that whenever a chemical is introduced into the body, there is an element of risk (Laurence & Bennett, 1992). Every chemical agent presents the potential to cause harm to the individual, although the degree of risk varies as a result of a number of factors such as the specific chemical being used, the individual's state of health, and so on. The treatment of a localized infection caused by a fungus on the skin presents us with a localized site of action, that is, on the surface of the body. This makes it easy to limit the impact that a medication used to treat the "athlete's foot" infection might have on the organism as a whole. The patient is unlikely to need more than a topical medication that can be applied directly to the infected region. But consider for a moment the drugs of abuse. As mentioned in the last section, the site of action for each of the recreational chemicals lies deep within the central nervous system (CNS). There is increasing evidence that each of the various drugs of abuse ultimately will impact the limbic system of the brain. However, the drugs of abuse are very much like a blast of shotgun pellets: They will have an impact not only on the brain but also on many other organ systems in the body. For example, as we will discuss in the chapter on cocaine, this drug causes the user to experience a sense of well-being, or euphoria. These sensations that might result from cocaine abuse are called the primary effects of cocaine abuse. But the chemical has a number of side effects; one of these is that it causes the coronary arteries of the user's heart to constrict. Coronary artery constriction is hardly a desired effect, and, as will be discussed in Chapter 12, might appear to be the cause of heart

1. This chapter is designed to provide the reader with a brief overview of some of the more important principles of pharmacology. It is not intended to serve as, nor should it be used for, a guide to patient care.
2. Individuals interested in reading more on pharmacology might find several good selections in any medical or nursing school bookstore.



Chapter Six

attacks in cocaine users.3 Such unwanted effects are often called secondary effects, or side effects. The side effects of a chemical might range from simply making the patient feel uncomfortable to being life threatening. A second example is aspirin, which inhibits the production of chemicals known as prostaglandins at the site of an injury. This helps to reduce the individual’s pain from an injury. But the body also produces prostaglandins in the kidneys and stomach, where these chemicals help control the function of these organs. Because aspirin tends to nonselectively block prostaglandin production throughout the body, including the stomach and kidneys, this unwanted effect of aspirin may put the user’s life at risk as the aspirin interferes with the normal function of these organs. A third example of the therapeutic effect/side effect phenomenon might be seen when a person with a bacterial infection of the middle ear (a condition known as otitis media) takes an antibiotic such as penicillin. The desired outcome is for the antibiotic to destroy the bacteria causing the infection in the middle ear. However, a side effect might be a case of drug-induced diarrhea, as the antibiotic interferes with normal bacteria growth patterns in the intestinal tract. Thus, one needs to keep in mind that all pharmaceuticals, and the drugs of abuse, have both desired effects and numerous, possibly undesirable, side effects.

Drug Forms and How Drugs Are Administered

A drug is essentially a foreign chemical that is introduced into the individual's body to bring about a specific, desired response. Antihypertensive medications are used to control excessively high blood pressure, whereas antibiotics are used to eliminate unwanted bacterial infections. The recreational drugs are introduced into the body, as a general rule, to bring about feelings of euphoria, relaxation, and relief from stress.

3. Shannon, Wilson, and Stang (1995) refer to a chemical's primary effects as the drug's therapeutic effects (p. 21). However, their text is devoted to medications and their uses, not to the drugs of abuse. In order to maintain the differentiation between the use of a medication in the treatment of disease and the abuse of chemicals for recreational purposes, this text will use the term primary effects.

The specific

form in which a drug is administered will have a major effect on (a) the speed with which that chemical is able to work, and (b) the way the chemical is distributed throughout the body. In general, the drugs of abuse are administered by either the enteral or the parenteral route.

Enteral Forms of Drug Administration

Medications that are administered by the enteral route are given orally, sublingually, or rectally (Ciancio & Bourgault, 1989; Williams & Baer, 1994). The most common means by which a medication is administered orally is the tablet. Essentially, a tablet is "a compounded form in which the drug is mixed with a binding agent to hold the tablet together before administration. . . . Most tablets are designed to be swallowed whole" (Shannon, Wilson, & Stang, 1995, p. 8). A number of the drugs of abuse are often administered in tablet form, including aspirin, the hallucinogens LSD and MDMA, and, on occasion, illicit forms of amphetamine. Amphetamine tablets are frequently made in illicit laboratories and are known on the street by a variety of names (e.g., "white cross" or "cartwheels"). A second common form that oral medication might take, according to the authors, is the capsule. Essentially, capsules are modified tablets, with the medication surrounded by a gelatin shell. The capsule is designed to be swallowed whole; once it reaches the stomach, the gelatin breaks down, allowing the medication to be released into the gastrointestinal tract (Shannon, Wilson, & Stang, 1995). Medications can take many other forms. For example, some medications are administered in liquid form, for oral use. Antibiotics and some over-the-counter analgesics often are administered in liquid forms, especially when the patient is a very young child. Liquid forms of a drug make it possible to tailor each dose to the patient's weight and are ideal for patients who have trouble taking pills or capsules by mouth.
Of the drugs of abuse, alcohol is perhaps the best example of a chemical that is administered in liquid form. Some medications, and a small number of the drugs of abuse, might be absorbed through the blood-rich tissues under the tongue. A chemical that enters the body by this method is said to be administered sublingually. The sublingual method of drug administration is considered a variation of the oral form of drug administration. Certain drugs, like nitroglycerin and fentanyl,



are well absorbed by the sublingual method of administration. However, for the most part, the drugs of abuse are not administered this way.

Parenteral Forms of Drug Administration

The parenteral method of drug administration essentially involves injecting the medication directly into the body. There are several forms of parenteral administration, which are commonly used in both the world of medicine and the world of drug abuse. First, there is the subcutaneous method. In this process, a chemical is injected just under the skin. This allows the drug to avoid the hazards of passing through the stomach and the rest of the gastrointestinal tract. However, drugs that are administered in a subcutaneous injection are absorbed more slowly than are chemicals injected into either muscle tissue or a vein. As we will see in the chapter on narcotics addiction, heroin addicts will often use subcutaneous injections, a process that they call "skin popping." A second method of parenteral administration involves the intramuscular injection of a medication. Muscle tissues have a good supply of blood, and medications injected into muscle tissue will be absorbed into the general circulation more rapidly than when injected just under the skin. As we will discuss in the chapter on anabolic steroid abuse, it is quite common for individuals abusing anabolic steroids to inject them into muscle tissue. The third method of parenteral administration is the intravenous (IV) injection. Here the chemical is injected directly into a vein, going straight to the general circulation (Schwertz, 1991). Of the drugs of abuse, heroin, cocaine, and some forms of amphetamine are examples of chemicals administered by intravenous injection. Because of the speed with which the chemical reaches the general circulation when administered by intravenous injection, there is a very real potential for undesirable reactions.
The very nature of intravenously administered drugs provides the body very little time to adapt to the arrival of the foreign chemical (Ciancio & Bourgault, 1989). This is one reason that users of intravenously administered chemicals, such as heroin, frequently experience a wide range of adverse effects in addition to the desired euphoria caused by the chemical being abused. The use of a parenteral method of administration does not mean, however, that the chemical will have an instantaneous effect. The speed at which parenterally administered drugs begin to work is influenced by a number of factors, which will be discussed in the section on drug distribution later in this chapter.

Other Forms of Drug Administration

A number of additional methods of drug administration at least need to be briefly identified. Some chemicals might be absorbed through the skin, a process known as the transdermal method of drug administration. Eventually, chemicals absorbed transdermally reach the general circulation and are then distributed throughout the body. Physicians will often use transdermal drug administration to provide the patient with a low, steady blood level of a chemical. A drawback of transdermal drug administration is that it is a very slow way to introduce a drug into the body. For certain agents, however, it is useful. An example is the skin patch used to administer nicotine to patients who are attempting to quit smoking. Some antihistamines are administered transdermally, especially when used for motion sickness. There also is a transdermal patch available for the narcotic analgesic fentanyl, although its success as a means of providing analgesia has been quite limited. Occasionally, chemicals are administered intranasally. The intranasal administration of a chemical involves "snorting" the material in question so that it is deposited on the blood-rich tissues of the sinuses. From that point, it is possible for many chemicals to be absorbed into the general circulation. For example, both cocaine and heroin powders might be—and frequently are—"snorted." The process of "snorting" is similar to the process of inhalation, which is used by both physicians and illicit drug users. Inhalation of a compound takes advantage of the fact that the blood is separated from exposure to the air by a layer of tissue that is less than 1/100,000ths of an inch (or 0.64 microns) thick (Garrett, 1994).
Many chemical molecules are small enough to pass through the lungs into the general circulation, as is the case with surgical anesthetics. Some of the drugs of abuse, such as heroin and cocaine, might also be abused by inhalation when they are smoked. In another form of inhalation, the particles being inhaled are suspended in the smoke. These particles are small enough to reach



the deep tissues of the lungs, where they are then deposited. In a brief period of time, the particles are broken down into smaller units until they are small enough to pass through the walls of the lungs and reach the general circulation. This is the process that takes place when tobacco products are smoked. Each subform of inhalation takes advantage of the blood-rich, extremely large surface area of the lungs, through which chemical agents might be absorbed (Benet, Kroetz, & Sheiner, 1995). Further, depending on how fast the chemical being inhaled can cross over into the general circulation, chemicals can be introduced into the body relatively quickly. However, research has shown that the actual amount of a chemical absorbed through inhalation tends to be quite variable for a number of reasons. First, the individual must inhale at just the right time to allow the chemical to reach the desired region of the lungs. Second, some chemicals pass through the tissues of the lung very poorly and thus are not well absorbed by inhalation. As we will see in the chapter on marijuana, the individual who smokes marijuana must use a different technique from that used in smoking tobacco in order to get the maximum effect from the chemical that is inhaled. The variability in the amount of chemical absorbed through the lungs limits the utility of inhalation as a means of administering medications. However, for some of the drugs of abuse, inhalation is the preferred method. Pharmaceuticals can be introduced into the body in other ways. For example, the chemical might be prepared in such a way that it might be administered rectally, or through enteral tubes. However, because the drugs of abuse are generally introduced into the body by injection, orally, intranasally, or through smoking, we will not discuss the more obscure methods of drug administration.

Bioavailability

In order to work, a drug being abused must enter the body in sufficient strength to achieve the desired effect. Pharmacists refer to this as the bioavailability of the chemical. Bioavailability is the concentration of the unchanged chemical at the site of action (Loebl, Spratto, & Woods, 1994; Sands, Knapp, & Ciraulo, 1993). The bioavailability of a chemical in the body is influenced, in turn, by several factors (Benet, Kroetz, &

Ciraulo, 1993): (a) absorption, (b) distribution, (c) biotransformation, and (d) elimination. To understand the process of bioavailability, we will consider in more detail each of the factors that might influence the bioavailability of a chemical.

Absorption

Except for topical agents, which are deposited directly on the site of action, chemicals must be absorbed into the body. Ultimately, the concentration of a chemical in the serum and at the site of action is influenced by the process of absorption (Loebl, Spratto, & Woods, 1994). This process involves the movement of drug molecules from the site of entry, through various cell boundaries, to the site of action. The human body is composed of layers of specialized cells that are organized into specific patterns in order to carry out certain functions. For example, the cells of the bladder are organized in such a way as to form a muscular reservoir in which waste products are stored and from which excretion takes place. The cells of the circulatory system are organized to form tubes (blood vessels) that contain the cells and fluids of the circulatory system. As a general rule, each layer of cells the drug must pass through to reach the general circulation will slow the absorption down that much more. For example, just one layer of cells separates the air in our lungs from the general circulation. Drugs that are able to pass across this boundary may reach the circulation in just a few seconds. In contrast, a drug that is ingested orally must pass through several layers of cells before reaching the general circulation from the gastrointestinal tract. Thus, the oral method of drug administration is generally recognized as one of the slowest methods by which a drug might be admitted into the body. Figure 6.1 demonstrates the process of drug absorption. Drug molecules can take advantage of a number of specialized cellular transport mechanisms to pass through the walls of the cells at the point of entry.
These cellular transport mechanisms are quite complex and function at the molecular level. Some drug molecules simply diffuse through the cell membrane, a process known as passive diffusion or passive transport across the cell boundary. This is the most common method of drug transport into the body’s cells and operates on the principle that chemicals tend to diffuse from areas of


An Introduction to Pharmacology

[Figure 6.1 labels: drug molecules at the site of entry being absorbed; drug molecules being transferred from cell bodies to blood vessels; cells lining the wall of the gastrointestinal tract; a blood vessel collecting waste products and returning to the liver.]
FIGURE 6.1 The process of drug absorption.

high concentration to areas of lower concentration. Other drug molecules take advantage of one of several molecular transport mechanisms that move various essential molecules into (and out of) cells. Collectively, these different molecular transport mechanisms provide a system of active transport across cell boundaries and into the interior of the body. Several specialized absorption-modification variables influence the speed at which drugs might be absorbed from the site of entry. For example, there is the rate of blood flow at the site of entry and the molecular characteristics of the drug molecule being admitted to the body. However, for this text, simply remember that the process of absorption refers to the movement of drug molecules from the site of entry to the site of action. In the next section, we will discuss the second factor that influences how a chemical acts in the body—its distribution. Distribution The process of distribution refers to how the chemical molecules are moved about in the body. This includes both drug transport and the pattern of drug accumulation within the body at normal dosage levels. As a general rule, very little is known about drug distribution patterns in the overdose victim (Jenkins & Cone, 1998). As an example of drug distribution data, the hallucinogen PCP has been found to accumulate in the brain and in

adipose (fat) tissue. Drug distribution is highly variable between individuals and is affected by such factors as the individual’s sex, muscle/adipose tissue ratio, blood flow patterns to various body organs, the amount of water in different parts of the body, the individual’s genetic heritage, and his or her age (Jenkins & Cone, 1998). Drug transport. Once a chemical has reached the general circulation, that substance can then be transported to the site of action. But the main purpose of the circulatory system is not to provide a distribution system for drugs! In reality, a drug molecule is a foreign substance in the circulatory system that takes advantage of the body’s own chemical distribution system to move from the point of entry to the site of action. A chemical can use the circulatory system to reach the site of action in several different ways. Some chemicals are able to mix freely with the blood plasma. These are classified as water-soluble drugs. Because water is such a large part of the human body, the drug molecules from water-soluble chemicals are rapidly and easily distributed throughout the fluid in the body. Alcohol is an excellent example of a water-soluble chemical. Shortly after gaining admission into the body, alcohol is rapidly distributed throughout the body to all blood-rich organs, including the brain. A different approach is utilized by other drugs. Their chemical structure allows them to “bind” to fat molecules


known as lipids that are found floating in the general circulation. Chemicals that bind to these fat molecules are often called lipid soluble. Because fat molecules are used to build cell walls within the body, lipids have the ability to move rapidly out of the circulatory system into the body tissues. Indeed, one characteristic of blood lipids is that they are constantly passing out of the circulatory system and into the body tissues. Thus, chemicals that are lipid soluble will be distributed throughout the body, especially to organs with a high concentration of lipids. In comparison to the other organ systems in the body, which are made up of 6% to 20% lipid molecules, fully 50% of the weight of the brain is made up of lipids (Cooper, Bloom, & Roth, 1986). Thus, chemicals that are highly lipid soluble will tend to concentrate rapidly within the tissues of the brain. The ultrashort and short-acting barbiturates are good examples of drugs that are lipid soluble. Although all the barbiturates are lipid soluble, there is a great deal of variability in the speed with which various barbiturates can bind to lipids. The speed at which a given barbiturate will begin to have an effect will depend, in part, on its ability to form bonds with lipid molecules. For the ultrashort-acting barbiturates, which are extremely lipid soluble, the effects might be felt within seconds of the time they are injected into a vein. This is one reason the ultrashort-duration barbiturates are so useful as surgical anesthetics. Remember that drug molecules are foreign substances in the body. Their presence might be tolerated, but only until the body’s natural defenses against chemical intruders are able to eliminate the foreign substance. The body will thus be working to detoxify (biotransform) and/or eliminate the foreign chemical molecules in the body almost from the moment they arrive. 
One way that drugs are able to avoid the danger of biotransformation and/or elimination before they have an effect is to join with protein molecules in the blood. These protein molecules are normally present in human blood, for reasons that need not be discussed further here. It is sufficient to understand that some protein molecules are normally present in the blood. But by coincidence, the chemical structures of many drugs allow the individual molecules to bind with protein molecules in the general circulation. This most often involves a protein known as albumin. For

Chapter Six

this reason such chemicals are said to become “protein bound” (or if they bind to albumin, they might be said to be “albumin bound”).4 The advantage of protein binding is that while a drug molecule is protein bound, it is difficult for the body to either biotransform or excrete it. The strength of the chemical bond that forms between the chemical and the protein molecules will vary. Some drugs form stronger chemical bonds with protein molecules than do others. The strength of this chemical bond then determines how long the drug will remain in the body before elimination. The dilemma is that while they are protein bound, drug molecules are also unable to have any biological effect. Thus, to have an effect, the molecule must be free of chemical bonds (unbound). Fortunately, although a chemical might be strongly protein bound, a certain percentage of the drug molecules will always be “unbound.” For example, if 75% of a given drug’s molecules are protein bound, then 25% of that drug’s molecules are said to be unbound, or free. It is this unbound fraction of drug molecules that is able to have an effect on the bodily function, to be biologically active. The protein-bound molecules are unable to have any effect at the site of action and are biologically inactive while bound (Rasymas, 1992; Shannon, Wilson, & Stang, 1995). Thus, for chemicals that are largely protein bound, the unbound drug molecules must be extremely potent. For example, the antidepressant amitriptyline is 95% protein bound. This means that only 5% of a given dose of this drug is actually biologically active at any time (Ciraulo, Shader, Greenblatt, & Barnhill, 1995). Another drug that is strongly protein bound is diazepam. Over 99% of the diazepam molecules that reach the general circulation will become protein bound. 
Thus, the sedative effects of diazepam (see Chapter 10) are actually caused by the small fraction (approximately 1%) of the diazepam molecules that remain unbound after the drug reaches the circulation. As noted earlier, unbound drug molecules may easily be biotransformed and/or excreted (the processes of drug biotransformation and excretion will be discussed later in this chapter). Thus, one advantage of protein binding is that the protein-bound

4 In general, acidic drugs tend to bind to albumin, whereas basic drugs tend to bind to alpha1-acid glycoprotein (Ciancio & Bourgault, 1989).


drug molecules form a “reservoir” of drug molecules that have not yet been biotransformed. These drug molecules are gradually released back into the general circulation as the chemical bond between the drug and the protein molecules weakens, or as other molecules compete with the drug for the binding site. The drug molecules that are gradually released back into the general circulation then replace those molecules that have been biotransformed and/or excreted, so the proportion of unbound to bound molecules remains approximately the same. Thus, if 75% of the drug was protein bound and 25% was unbound when the drug was at its greatest concentration in the blood, then after some of that drug had been eliminated from the body the proportion of bound to unbound drug would continue to be approximately 75:25. Although at first glance the last sentence might seem to be in error, remember that as some drug molecules are being removed from the general circulation, some of the protein-bound molecules are also breaking the chemical bonds that held them to the protein molecules to once again become unbound. Thus, while the amount of chemical in the general circulation will gradually diminish as the body biotransforms or eliminates the unbound drug molecules, the proportion of bound to unbound drug molecules will remain essentially unchanged. The characteristic of protein binding is related to another trait of a drug: the biological half-life of that chemical. This topic will be discussed in more detail later in this chapter. However, protein binding allows the drug in question to have a longer duration of effect. As the protein-bound molecules are gradually released back into the general circulation over an extended period of time, the total period of time in which that drug is present in sufficient quantities to remain biologically active is extended.
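The bound-to-unbound arithmetic described above can be put in a few lines of code. This is an illustrative sketch only; the 75% bound fraction and the milligram amounts are hypothetical, chosen to echo the example in the text:

```python
# Illustrative sketch: the proportion of protein-bound to free (unbound)
# drug stays roughly constant even as the total amount in circulation falls.
# The 75% bound fraction and the milligram amounts are hypothetical.

def free_and_bound(total_mg, bound_fraction):
    """Split a circulating drug amount into protein-bound and free portions."""
    bound = total_mg * bound_fraction
    free = total_mg - bound
    return bound, free

BOUND_FRACTION = 0.75  # a drug that is 75% protein bound

# At peak concentration, with 100 mg in circulation:
print(free_and_bound(100.0, BOUND_FRACTION))  # (75.0, 25.0)

# After half the dose has been eliminated, the 75:25 proportion is unchanged:
print(free_and_bound(50.0, BOUND_FRACTION))   # (37.5, 12.5)
```

Only the free portion (25 mg, then 12.5 mg in this sketch) is biologically active at any given moment.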
Biotransformation Because a drug is a foreign substance, the natural defenses of the body try to eliminate the drug almost immediately. In some cases, the body is able to eliminate the drug without the need to modify its chemical structure. Penicillin is an example of a drug that is excreted unchanged from the body. Many inhalants and surgical anesthetics are also eliminated from the body without being metabolized to any significant

degree. But as a general rule, the chemical structure of most chemicals must be modified before they can be eliminated from the body. This elimination is accomplished through what was once referred to as detoxification. However, as researchers have come to understand how the body prepares a drug molecule for elimination, the term detoxification has been replaced with the term biotransformation.5 Drug biotransformation usually is carried out in the liver, although on occasion this process occurs in other tissues of the body. The microsomal endoplasmic reticulum of the liver produces a number of enzymes6 that transform toxic molecules into a form that can be more easily eliminated from the body. Technically, the new compound that emerges from each step of the process of drug biotransformation is known as a metabolite of the chemical that was introduced into the body. The original chemical is occasionally called the parent compound of the metabolite that emerges from the process of biotransformation. In general, metabolites are less biologically active than the parent compound. However, there are exceptions to this rule. Depending on the substance being biotransformed, the metabolite might have a psychoactive effect of its own. On rare occasions, a drug might even have a metabolite that is more biologically active than the parent compound.7 For this reason pharmacologists have come to use the term biotransformation rather than the older terms detoxification or metabolism. Although it is easier to speak of drug biotransformation as if it were a single process, in reality there are four different subforms of this procedure, known as (a) oxidation, (b) reduction, (c) hydrolysis, and (d) conjugation (Ciraulo, Shader, Greenblatt, & Barnhill, 1995). The specifics of each form of drug biotransformation are quite complex and are best reserved for pharmacology

5 This process is inaccurately referred to as “metabolism” of a drug. Technically, the term drug metabolism refers to the total ordeal of a drug molecule in the body, including its absorption, distribution, biotransformation, and excretion.
6 The most common of which is the P-450 metabolic pathway, or the microsomal P-450 pathway.
7 For example, after gamma-hydroxybutyrate (GHB) was banned by the Food and Drug Administration, illicit users switched to the compound gamma-butyrolactone—a compound with reported health benefits such as improved sleep patterns—which is biotransformed into the banned substance GHB in the user’s body.


texts. It is enough for the reader to remember that there are four different processes collectively called drug metabolism, or biotransformation. Many chemicals must go through more than one step in the biotransformation process before being ready for the next step: elimination. One major goal of the process of metabolism is to transform the foreign chemical into a form that can be rapidly eliminated from the body (Clark, Bratler, & Johnson, 1991). But this process does not take place instantly. Rather, biotransformation is accomplished through chemical reactions facilitated by enzymes produced in the body. The process is carried out over a period of time, and depending on the drug involved, a number of intermediate steps often occur before that chemical is ready for elimination from the body. Simply stated, the goal of the drug biotransformation process is to change the chemical structure of the foreign substance so that it becomes less lipid soluble and thus more easily eliminated from the body. There are two major forms of drug biotransformation. In the first subtype, a constant fraction of the drug is biotransformed in a given period of time, such as a single hour. This is called a first order biotransformation process. Certain antibiotics are metabolized in this manner, with a set percentage of the medication in the body being biotransformed each hour. Other chemicals are eliminated from the body by what is known as a zero order biotransformation process: they are metabolized at a set rate, no matter how high the concentration of the chemical in the blood. Alcohol is a good example of a chemical that is biotransformed through a zero order process. As we will discuss in Chapter 7, alcohol is biotransformed at a fixed rate, roughly the amount of alcohol in one regular mixed drink or one can of beer per hour. It does not matter whether the person ingests just one can of beer or one regular mixed drink, or 20 cans of beer or mixed drinks in an hour; the body will still biotransform only the equivalent of one can of beer or mixed drink per hour. As a general rule, chemicals that are administered orally must pass through the stomach to the small intestine


before they can be absorbed. However, the human circulatory system is designed in such a way that chemicals absorbed through the gastrointestinal system are carried first to the liver. This makes sense, as the liver is given the task of protecting the body from toxins. By taking chemicals absorbed from the gastrointestinal tract to the liver, the body is able to begin to break down any toxins in the substance that was introduced into the body before those toxins might damage other organ systems. Unfortunately, one effect of this process is that the liver is often able to biotransform many medications that are administered orally before they have had a chance to reach the site of action. This is called first pass metabolism. First pass metabolism is one reason it is so hard to control pain through the use of orally administered narcotic analgesic medications. When taken by mouth, a significant part of the dose of an orally administered narcotic analgesic such as morphine will be metabolized by the liver into inactive forms, before reaching the site of action. Elimination In the human body, biotransformation and elimination are closely intertwined. Indeed, some authorities on pharmacology consider these to be a single process, as one goal of drug biotransformation is to change the foreign chemical into a water-soluble metabolite that can then be easily removed from the circulation (Clark, Bratler, & Johnson, 1991). The most common method of drug elimination is by the kidneys (Benet, Kroetz, & Sheiner, 1995). However, the biliary tract, lungs, and sweat glands may also play a role (Shannon, Wilson, & Stang, 1995). For example, a small percentage of the alcohol that a person has ingested will be excreted when that person exhales. A small percentage of the alcohol in the system is also eliminated through the sweat glands. These characteristics of alcohol contribute to the characteristic smell of the intoxicated individual.
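The first order and zero order patterns described in the biotransformation discussion above can be sketched numerically. This is a toy illustration with invented rates, not data for any real drug:

```python
# Toy comparison of the two elimination patterns (rates are invented):
# a first order process removes a constant FRACTION per hour, while a
# zero order process removes a constant AMOUNT per hour, as the text
# describes for alcohol.

def first_order_step(amount, fraction_per_hour):
    """One hour of first order biotransformation."""
    return amount * (1.0 - fraction_per_hour)

def zero_order_step(amount, units_per_hour):
    """One hour of zero order biotransformation (cannot go below zero)."""
    return max(0.0, amount - units_per_hour)

first = zero = 100.0
for hour in range(1, 5):
    first = first_order_step(first, 0.25)  # 25% of whatever remains
    zero = zero_order_step(zero, 10.0)     # a fixed 10 units, regardless of level
    print(f"hour {hour}: first order {first:.1f}, zero order {zero:.1f}")
```

The first order amount falls by a shrinking absolute amount each hour (25.0 units in the first hour, then about 18.8, and so on), while the zero order amount falls by the same 10 units every hour no matter how much remains.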

The Drug Half-Life There are several different measures of drug half-life, all of which provide a rough estimate of the period of time that a drug remains active in the human body. The distribution half-life is the time that it takes for a drug to work its way from the general circulation into body tissues


[Figure 6.2 plots the percentage of drug remaining in body tissues (vertical axis, 0 to 100%) against elapsed half-life periods (horizontal axis).]

FIGURE 6.2 Drug elimination in half-life stages.

such as muscle and fat (Reiman, 1997). This is important information in overdose situations, for example, when the physician treating the patient has to estimate the amount of a compound in the patient’s circulation. Another measure of drug activity in the body is the therapeutic half-life, or the period of time that it takes for the body to inactivate 50% of a single dose of a compound. The therapeutic half-life is intertwined with the concept of the elimination half-life, which is the time that it takes for 50% of a single dose to be eliminated from the body. As an example, a chemical might rapidly migrate from the general circulation into adipose or muscle tissue, with the result that the compound would have a short distribution half-life. THC, the active agent in marijuana, is such a compound. However, for heavy users, a reservoir of unmetabolized THC forms in the adipose tissue, and this is gradually released back into the user’s circulation when the person stops using marijuana. As a result, THC has a long elimination half-life in the chronic user, although the therapeutic half-life of a single dose is quite short. For this text, all of these different measures of half-life will be lumped together under the term biological half-life (or half-life) of that chemical. Sometimes, the half-life is abbreviated by the symbol t1/2. The half-life of a chemical will be viewed as the period of time needed for the individual’s body to reduce the amount

of active drug in the circulation by one-half (Benet, Kroetz, & Sheiner, 1995). The concept of t1/2 is based on the assumption that the individual ingested only one dose of the drug, and the reader should keep in mind that the dynamics of a drug following a single dose are often far different from those for the same drug when it is used on a continuing basis. Thus, although the t1/2 concept is often a source of confusion even among health professionals, it does allow health care workers to roughly estimate how long a drug’s effects will last when that chemical is used at normal dosage levels. One popular misconception is that it takes only two half-lives for the body to totally eliminate a drug. In reality, 25% of the original dose remains at the end of the second half-life period, and 12.5% of the original dose is still in the body at the end of three half-life periods. As a general rule, five half-life periods are required before the body is able to eliminate virtually all of a single dose of a chemical (Williams & Baer, 1994). Figure 6.2 shows drug elimination in half-life stages. Generally, drugs with long half-life periods tend to remain biologically active for longer periods of time. The reverse is also true: Chemicals with a short biological half-life tend to be active for shorter periods of time. Here is where the process of protein binding comes into play: Drugs with longer half-lives tend to become protein bound. As stated earlier, the process of



protein binding allows a reservoir of an unmetabolized drug to be released gradually back into the general circulation as the drug molecules become unbound. This allows a chemical to remain in the circulation at a sufficient concentration to have an effect for an extended period of time.
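The half-life arithmetic described above is easy to verify. A minimal sketch (the function name is mine, not the text’s):

```python
# Fraction of a single dose still in the body after n half-life periods:
# each period halves whatever remains, so the fraction is (1/2) ** n.

def fraction_remaining(n_half_lives):
    return 0.5 ** n_half_lives

for n in range(6):
    print(f"after {n} half-life periods: {fraction_remaining(n):.1%} remains")

# 25% remains after two half-lives, 12.5% after three, and roughly 3%
# after five, which is why about five half-life periods are needed before
# virtually all of a single dose has been eliminated.
```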

The Effective Dose The concept of the effective dose (ED) is based on dose-response calculations, in which pharmacologists calculate the percentage of a population that will respond to a given dose of a chemical. Scientists usually estimate the percentage of the population that is expected to experience an effect from a chemical at different dosage levels. For example, the ED10 is the dosage level at which 10% of the population will achieve the desired effects from the chemical being ingested. The ED50 is the dosage level at which 50% of the population would be expected to respond to the drug’s effects. Obviously, for medications, the goal is to find a dosage level at which the largest percentage of the population will respond to the medication. However, the dose of a medication cannot be increased indefinitely; sooner or later the dosage level becomes toxic, and people may die from the effects of the chemical.

The Lethal Dose Index Drugs are, by their very nature, foreign to the body. When they are introduced into the body, drugs will disrupt the function of the body in one way or another. One common characteristic of both legitimate pharmaceuticals and the drugs of abuse is that the person who administers the chemical hopes to alter the body’s function to bring about a desired effect. But chemicals that are introduced into the body hold the potential to disrupt the function of one or more organ systems to the point that they can no longer function normally. At the extreme, chemicals may disrupt the body’s activities sufficiently to put the life of the individual in danger. Scientists express this continuum as a form of modified dose-response curve. In the typical dose-response curve, scientists calculate the percentage of the population that would be expected to benefit from a certain

exposure to a chemical; the calculation for a fatal exposure level, however, is slightly different. In such a dose-response curve, scientists calculate the percentage of the general population that would, in theory, die as a result of being exposed to a certain dose of a chemical or toxin. This figure is then expressed in terms of a “lethal dose” (or LD) ratio. The percentage of the population that would die as a result of exposure to that chemical/toxin source is identified as a subscript to the LD heading. Thus, if a certain level of exposure to a chemical or toxin resulted in a 25% death rate, this would be abbreviated as the LD25 for that chemical or toxin. A level of exposure to a toxin or chemical that resulted in a 50% death rate would be abbreviated as the LD50 for that substance. For example, as we will discuss in the next chapter, a person with a blood alcohol level of .350 mg/mL would stand a 1% chance of death without medical intervention. Thus, a blood alcohol level of .350 mg/mL is the LD01 for alcohol. It is possible to calculate the potential lethal exposure level for virtually every chemical. These figures provide scientists with a way to estimate the relative safety of different levels of exposure to chemicals or radiation, and a way to determine when medical intervention is necessary.

The Therapeutic Index In addition to their potential to benefit the user, all drugs hold the potential for harm. Because they are foreign substances being introduced into the body, there is a danger that drugs used in amounts that are too large might actually harm the individual rather than help him or her. Scientists have devised what is known as the therapeutic index (TI) as a way to measure the relative safety of a chemical. Essentially, the TI is the ratio of the LD50 to the ED50. In other words, the TI is a ratio between the potential for harm inherent in using a chemical and the effectiveness of that chemical. A smaller TI means that there is only a small margin between the dosage level needed to achieve the therapeutic effects and the dosage level at which the drug becomes toxic to the individual. A large TI suggests that there is a great deal of latitude between the normal therapeutic dosage range and the dosage level at which that chemical might become toxic to the user.
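The relationship can be put in simple numeric terms. One common convention defines the TI as the LD50 divided by the ED50; the dosage figures below are hypothetical, chosen only to echo the narrow and wide safety margins discussed in this chapter:

```python
# Therapeutic index under the common convention TI = LD50 / ED50.
# All dosage figures here are hypothetical.

def therapeutic_index(ld50, ed50):
    """Larger values indicate a wider margin between effect and toxicity."""
    return ld50 / ed50

# A drug effective at 10 mg with an LD50 of 30 mg has a narrow margin,
# comparable to the roughly 1:3 ratio the text later attributes to the
# barbiturates.
print(therapeutic_index(ld50=30.0, ed50=10.0))    # 3.0

# A drug effective at 10 mg with an LD50 of 2,000 mg has a wide margin,
# comparable to the roughly 1:200 ratio attributed to the benzodiazepines.
print(therapeutic_index(ld50=2000.0, ed50=10.0))  # 200.0
```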


[Figure 6.3 labels: peak effect; minimum effective dose; therapeutic threshold.]
FIGURE 6.3 Hypothetical dose-response curve.

Unfortunately, as we will see in the next few chapters, many of the drugs of abuse have a small TI. These chemicals are potentially quite toxic to the user. For example, as we will discuss in the chapter on barbiturate abuse, the ratio between the normal dosage range and the toxic dosage range for the barbiturates is only about 1:3. In contrast to this, the ratio between the normal dosage range and the toxic dosage level for the benzodiazepines is estimated to be about 1:200. Thus, relatively speaking, the benzodiazepines are said to be much safer than the barbiturates.

Peak Effects The effects of a chemical within the body develop over time until the drug reaches what is known as the therapeutic threshold. This is the point at which the concentration of a specific chemical in the body allows it to begin to have the desired effect on the user. The chemical’s effects continue to become stronger and stronger until finally the strongest possible effects from a dose of that drug are reached. This is the period of peak effects. Then, gradually, the impact of the drug becomes less and less pronounced as the chemical is eliminated/biotransformed over a period of time. Eventually, the concentration of the chemical in the body falls below the therapeutic level. Scientists have learned to calculate dose-response curves in order to

estimate the potential for a chemical to have an effect at any given point in time after it was administered. Figure 6.3 shows a hypothetical dose-response curve. The period of peak effects following a single dose of a drug varies from one chemical to another. For example, the peak effects of an ultrashort-acting barbiturate might be achieved in a matter of seconds following a single dose, while the long-acting barbiturate phenobarbital might take hours to achieve its strongest effects. Thus, clinicians must remember that the time to peak effects following a single dose will vary from one chemical to another.
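The idea that time to peak depends on how quickly a drug is absorbed can be illustrated with a standard one-compartment model from basic pharmacokinetics. This model and its rate constants are my illustration, not something given in the text: with a first order absorption rate ka and elimination rate ke, concentration after a single oral dose peaks at t_max = ln(ka/ke) / (ka - ke).

```python
import math

# Illustrative one-compartment model (hypothetical rate constants, per hour):
# with first order absorption (ka) and elimination (ke), the concentration
# after a single oral dose peaks at t_max = ln(ka / ke) / (ka - ke).

def t_max(ka, ke):
    """Time of peak concentration, in hours, for ka != ke."""
    return math.log(ka / ke) / (ka - ke)

# A rapidly absorbed drug reaches its peak quickly...
print(f"fast absorber: peak at {t_max(ka=3.0, ke=0.1):.2f} h")
# ...while a slowly absorbed drug (phenobarbital-like) peaks hours later.
print(f"slow absorber: peak at {t_max(ka=0.3, ke=0.1):.2f} h")
```

With these invented constants, the fast absorber peaks in roughly an hour and the slow absorber in roughly five and a half, mirroring the seconds-versus-hours contrast drawn in the text.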

The Site of Action To illustrate the concept of the site of action, consider the case of a person with an “athlete’s foot” infection. This condition is caused by a fungus that attacks the skin. Obviously, the individual who has such an infection will want to have it cured, and there are several excellent over-the-counter antifungal compounds available. In most cases, the individual need only select one, and then apply it to the proper area on his or her body to be cured of the infection. At about this point, somebody is asking what antifungal compounds have to do with drug abuse. Admittedly, it is not the purpose of this chapter to sell antifungal compounds. But the example of the athlete’s foot infection



helps to illustrate the concept of the site of action. To put it simply, the site of action is where the drug being used will have its prime effect. In the case of the medication being used for the athlete’s foot infection, the site of action is the infected skin on the person’s foot. For the drugs of abuse, the central nervous system (or CNS) will be the primary site of action. The Central Nervous System (CNS) The CNS is, without question, the most complex organ system in the human body. At its most fundamental level, it comprises perhaps 100 billion neurons. These cells are designed to both send and receive messages from other neurons in a process known as information processing. To accomplish this task, each neuron may communicate with tens, hundreds, or thousands of its fellows through a system of perhaps 100 trillion synaptic junctions (Stahl, 2000).8 To put this number into perspective, it has been estimated that the average human brain has more synaptic junctions than there are individual grains of sand on all of the beaches of the planet Earth. Although most of the CNS is squeezed into the confines of the skull, the individual neurons do not actually touch. Rather, they are separated by microscopic spaces called synapses. To communicate across the synaptic void, one neuron will release a cloud of chemical molecules that function as neurotransmitters. When a sufficient number of these molecules contact a corresponding receptor site in the cell wall of the next neuron, a profound change is triggered in the postsynaptic neuron. Such changes may include the postsynaptic neuron “making, strengthening, or destroying synapses; urging axons to sprout; and synthesizing various proteins, enzymes, and receptors that regulate neurotransmission in the target cell” (Stahl, 2000, p. 21).
Another change may be to force the postsynaptic neuron to release a cloud of neurotransmitter molecules in turn, passing the message that it just received on to the next neuron in that neural pathway.

8 Although the CNS is, by itself, worthy of a lifetime of study, for the purpose of this text the beauty and complexities of the CNS must be compressed into just a few short paragraphs. The reader who wishes to learn more about the CNS should consult a good textbook on neuropsychology or neuroanatomy.

The Receptor Site The receptor site is the exact spot either on the cell wall or within the cell itself where the chemical molecule carries out its main effects (Olson, 1992). To understand how receptor sites work, consider the analogy of a key slipping into the slot of a lock. The structure of the transmitter molecule fits into the receptor site in much the same way as a key fits into a lock, although on a greatly reduced scale. The receptor site is usually a pattern of molecules that allows a single molecule to attach itself to the target portion of the cell at that point. Under normal circumstances, receptor sites allow the molecules of naturally occurring compounds to attach to the cell walls in order to carry out normal biological functions. By coincidence, however, many chemicals may be introduced into the body that also have the potential to bind to these receptor sites and possibly alter the normal biological function of the cell in a desirable way. Those bacteria susceptible to the antibiotic penicillin, for example, have a characteristic “receptor site,” in this case, the enzyme transpeptidase. This enzyme carries out an essential role in bacterial reproduction. By blocking the action of transpeptidase, penicillin prevents the bacterial cells from reproducing. As the bacteria continue to grow, the pressure within the cell increases until the cell wall is no longer able to contain it, and the cell ruptures. Neurotransmitter receptor sites are a specialized form of receptor site found in the walls of neurons at the synaptic junction. Their function is to receive the chemical messages from the presynaptic neuron in the form of neurotransmitter molecules at specific receptor sites. To prevent premature firing, a number of receptor sites must be occupied at the same instant before the electrical potential of the receiving (postsynaptic) neuron is changed, allowing it to pass the message on to the next cell in the nerve pathway.
Essentially, all of the known chemicals that function as neurotransmitters within the CNS fall into two groups: those that stimulate the receiving neuron to release a chemical "message" to the next cell, and those that inhibit the release of neurotransmitters. By altering the flow of these two classes of neurotransmitters, the drugs of abuse alter the way the CNS functions.

Co-transmission. When neurotransmitters were first identified, scientists thought that each neuron utilized just one form of neurotransmitter molecule. In recent


An Introduction to Pharmacology

FIGURE 6.4 Neurotransmitter diagram. [The figure shows the axon of the presynaptic neuron, with synaptic vesicles releasing neurotransmitter molecules in the direction of the nerve impulse, and molecule-sized receptor sites in the cell wall of the postsynaptic neuron receiving the neurotransmitter molecules passed from the first neuron to the second.]

years, they have discovered that in addition to one "main" neurotransmitter, neurons often both receive and release "secondary" neurotransmitter molecules that are quite different from the main one (Stahl, 2000). The process of releasing secondary neurotransmitters is known as co-transmission, with opiate peptides most commonly being utilized as the secondary neurotransmitters (Stahl, 2000). Co-transmission may explain why many drugs that affect the CNS have such wide-reaching secondary, or side, effects.

Neurotransmitter reuptake/destruction. In many cases, neurotransmitter molecules are recycled. This does not always happen, however; in some cases, once a neurotransmitter is released, it is destroyed by an enzyme designed to carry out this function. But

sometimes a neuron will activate a molecular "pump" that reabsorbs as many molecules of a specific neurotransmitter from the synaptic junction as possible for reuse. This process is known as reuptake. In both cases, the neuron will also work to manufacture more of that neurotransmitter for future use, storing both the reabsorbed and the newly manufactured neurotransmitter molecules in special sacs within the nerve cell until they are needed. Figure 6.4 diagrams this process.

Upregulation and downregulation. The individual neurons of the CNS are not passive participants in the process of information transfer. Rather, each neuron constantly adapts its sensitivity by either increasing or decreasing the number of neurotransmitter receptor sites on its cell wall. If a neuron is subjected to low levels of a given neurotransmitter, that nerve cell will


respond by increasing (upregulating) the number of receptor sites in the cell wall, giving the neurotransmitter molecules a greater number of potential binding points. An analogy might be using a directional microphone to enhance faint sounds. But if a neuron is exposed to a large number of neurotransmitter molecules, it will decrease the total number of receptor sites by absorbing or inactivating some of the receptor sites in the cell wall. This is downregulation, the process by which a neuron decreases the total number of receptor sites where the neurotransmitter (or drug) molecule can bind to that neuron. Again, an analogy would be turning down the volume of a sound amplification system so that it becomes less sensitive to distant sound sources.

Tolerance and cross-tolerance. The concept of drug "tolerance" was introduced in the last chapter. In brief, tolerance is a reflection of the body's ongoing struggle to maintain normal function. Because a drug is a foreign substance, the body will attempt to continue its normal function in spite of the presence of the chemical. Part of this process of adaptation in the CNS is the upregulation/downregulation of receptor sites as the neurons attempt to maintain a normal level of firing. As the body adapts to the effects of the chemical, the individual will find that he or she no longer achieves the same effect from the original dose and must use larger and larger doses to maintain the original effect. When a chemical is used as a neuropharmaceutical (a drug intentionally introduced into the body by a physician to alter the function of the CNS in a desired manner), tolerance is often referred to as neuroadaptation. If the drug being used is a recreational substance, the same process is usually called tolerance. However, neuroadaptation and tolerance are essentially the same biological adaptation.
The only difference is that one involves a pharmaceutical and the other involves a recreational chemical.

Chapter Six

The concepts of a drug agonist and antagonist. To understand how the drugs of abuse work, it is necessary to introduce the twin concepts of the drug agonist and the drug antagonist. Essentially, a drug agonist mimics the effect(s) of a chemical that is naturally found in the body (Shannon, Wilson, & Stang, 1995). The agonist either tricks the body into reacting as if the endogenous chemical were present, or it enhances the effect(s) of the naturally occurring chemical. For example, as we will discuss in the chapter on the abuse of opiates, there are morphine-like chemicals found in the human brain that help to control the level of pain the individual is experiencing. Heroin, morphine, and the other narcotic analgesics mimic the actions of these chemicals, and for this reason they might be classified as agonists of the naturally occurring painkilling chemicals.

The antagonist, in contrast, blocks the effects of a chemical already working within the body. In a sense, aspirin might be classified as a prostaglandin antagonist because aspirin blocks the normal actions of the prostaglandins. Antagonists may also block the effects of certain chemicals introduced into the body for one reason or another. For example, the drug Narcan blocks the receptor sites in the CNS to which opiates normally bind in order to have their effect. Narcan thus is an antagonist for opiates and is of value in reversing the effects of an opiate overdose.

Because the drugs of abuse either simulate the effects of actual neurotransmitters or alter the action of existing neurotransmitters, they either enhance or retard the frequency with which the neurons of the brain "fire" (Ciancio & Bourgault, 1989). The constant use of any of the drugs of abuse forces the neurons to go through the process of neuroadaptation as they struggle to maintain normal function in spite of the artificial stimulation or inhibition caused by these drugs. In other words, depending on whether the drugs of abuse cause a surplus or a deficit of neurotransmitter molecules, the neurons in many regions of the brain will upregulate or downregulate the number of receptor sites in an attempt to maintain normal function. This will cause the individual's responsiveness to that drug to change over time, one part of the process of tolerance.
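For readers who find it helpful to see the logic of upregulation and downregulation worked through numerically, the following toy sketch captures the idea. The set point, the adaptation rate, and the linear update rule are illustrative assumptions invented for this sketch; it is not a physiological model.

```python
# Toy model of receptor up/down-regulation (illustrative only).
# Stimulation = receptor count * transmitter level. Each step, the
# neuron nudges its receptor count toward whatever number would
# restore stimulation to a fixed set point.

SET_POINT = 100.0   # "normal" level of stimulation the neuron defends (arbitrary)
ADAPT_RATE = 0.1    # fraction of the mismatch corrected each step (arbitrary)

def adapt(receptors, transmitter_level, steps):
    """Repeatedly up- or down-regulate receptors toward the set point."""
    for _ in range(steps):
        target = SET_POINT / transmitter_level  # receptor count restoring the set point
        receptors += ADAPT_RATE * (target - receptors)
    return receptors

# A drug that floods the synapse (level 2.0) produces downregulation;
# a drug that depletes transmitter (level 0.5) produces upregulation.
flooded = adapt(receptors=100.0, transmitter_level=2.0, steps=50)
depleted = adapt(receptors=100.0, transmitter_level=0.5, steps=50)
print(flooded, depleted)  # drifts toward 50 and toward 200, respectively
```

The drift toward a new receptor count under a constant drug level is the homeostatic pattern the text describes as underlying tolerance: the same dose produces a smaller effect once the receptor population has adapted.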
When the body begins to adapt to the presence of one chemical, it will often also become tolerant to the effects of other drugs that share the same mechanism of action. This is the process of cross-tolerance. For example, a chronic alcohol user will often require higher doses of CNS depressants than a nondrinker in order to achieve a given level of sedation. Physicians have long noticed this effect in the surgical theater: Because anesthetics and alcohol are both classified as CNS depressants, chronic alcohol users will require larger doses of anesthetics than nondrinkers to achieve a given level of unconsciousness.



The individual's tolerance to the effects of alcohol will, through the development of cross-tolerance, cause him or her to require a larger dose of many anesthetics in order to allow the surgery to proceed.

The Blood-Brain Barrier

The blood-brain barrier (BBB) is a unique structure in the human body. Its role is to function as a "gateway" to the brain: the BBB admits only certain molecules needed by the brain while protecting it from toxins and infectious organisms (Angier, 1990). For example, oxygen, essential to life, passes easily through the BBB, while other essential nutrients such as glucose are admitted through specialized transport systems described below. To this end, the endothelial cells that form the lining of the BBB have established tight seals, with overlapping cells.

Initially, students of neuroanatomy may be confused by the term blood-brain barrier, for when we speak of a barrier we usually mean a single structure. The BBB is actually the result of a unique feature of the cells that form the capillaries through which cerebral blood flows. Unlike capillary walls throughout the rest of the body, those of the cerebral circulatory system are securely joined together: each endothelial cell is tightly joined to its neighbors, forming a tube-like structure that protects the brain from direct contact with the general circulation. Thus, many chemicals in the general circulation are blocked from entering the CNS.

However, the individual cells of the brain require nutritional support, and some of the very substances needed by the brain are among those blocked by the endothelial cell boundary. Water-soluble substances like glucose or iron, needed by the neurons of the brain for proper function, cannot cross the lining of the endothelial cells on their own. To overcome this problem, specialized "transport systems" have evolved in the endothelial cells of the cerebral circulatory system. These transport systems selectively allow needed nutrients to pass through the

BBB to reach the brain (Angier, 1990). Each of these transport systems selectively allows one specific type of water-soluble molecule, such as glucose, to pass through the lining of the endothelial cell to reach the brain. Lipids, however, pass through the lining of the endothelial cells without the aid of such transport systems and are able to reach the central nervous system beyond. Lipids are essentially molecules of fat, and they are essential elements of cell walls, which are made up of lipids, carbohydrates, and protein molecules arranged in a specific order. As a lipid molecule reaches the endothelial cell wall, it gradually merges with the molecules of the cell wall and passes through into the interior of the endothelial cell. Later, it passes through the lining of the far side of the endothelial cell to reach the neurons beyond the lining of the BBB.

Summary

In this chapter, we have examined some of the basic components of pharmacology. It is not necessary for students in the field of substance abuse to have the same depth of knowledge possessed by pharmacists to begin to understand how the recreational chemicals achieve their effects. However, it is important for the reader to understand at least some of the basic concepts of pharmacology in order to understand the ways that the drugs of abuse achieve their primary and secondary effects. Basic information regarding drug forms, methods of drug administration, and biotransformation/elimination was discussed in this chapter. Other concepts discussed include a drug's bioavailability, the therapeutic half-life of a chemical, the effective dose and lethal dose ratios, the therapeutic dose ratio, and how drugs need receptor sites in order to work. The student should have at least a basic understanding of these concepts before starting to review the different drugs of abuse covered in the next chapters.


Alcohol
Humans' Oldest Recreational Chemical

Introduction

Klatsky (2002) suggested that fermentation occurs naturally and that early humans discovered, but did not invent, alcohol-containing beverages such as wine and beer. Most certainly, this discovery occurred well before the development of writing, and scientists believe that man's use of alcohol dates back at least 10,000 to 15,000 years (Potter, 1997). Prehistoric humans probably learned about the intoxicating effects of fermented fruit by watching animals eat such fruit from the forest floor and then act strangely. Curiosity may have compelled one or two brave souls to try some of the fermented fruits that the animals seemed to enjoy, introducing prehistoric humans to the intoxicating effects of alcohol (R. Siegel, 1986). Having discovered alcohol's intoxicating action and desiring to repeat the experience, prehistoric humans started to experiment and eventually discovered how to produce alcohol-containing beverages at will.

It is not unrealistic to say that "alcohol, and the privilege of drinking have always been important to human beings" (Brown, 1995, p. 4). Indeed, it has been suggested that humans have an innate drive to alter their awareness through the use of chemical compounds, and that one of the reasons early hominids may have climbed out of the trees of Africa was to gain better access to mushrooms with a hallucinogenic potential that grew in the dung of savanna-dwelling grazing animals (Walton, 2002). Although this theory remains controversial, it is true that (a) virtually every known culture discovered or developed a form of alcohol production and (b) every substance that could be fermented has been made into a beverage at one time or another (Klatsky, 2002; Levin, 2002). Almost every culture discovered by anthropologists has advocated the use of certain compounds to alter the individual's perception of reality (Glennon, 2004; Walton, 2002). In this context, alcohol is the prototype intoxicant.

Some anthropologists now believe that early civilization came about in response to the need for a stable home base from which to ferment a form of beer known as mead (Stone, 1991). Most certainly, the brewing and consumption of beer was a matter of considerable importance to the inhabitants of Sumer,1 for many of the clay tablets that have been found there are devoted to the process of brewing beer (Cahill, 1998). If this theory is correct, it would seem that human civilization owes much to alcohol, which is also known as ethanol, or ethyl alcohol.2

A Brief History of Alcohol

The use of fermented beverages dates back before the invention of writing. Anthropologists believe that the process of making mead, a form of beer made from fermented honey, was discovered during the late Paleolithic era, or what is commonly called the latter part of the Stone Age. Historical evidence suggests that mead was in common use around the year 8000 B.C.E.3 (Ray & Ksir, 1993). This thick liquid was quite nutritious and provided both vitamins and amino acids to the drinker's diet. By comparison, modern beer is very thin and appears almost anemic. Both beer and

1. See Glossary.
2. Technically, at least 45 other forms of alcohol exist, but these are not normally used for human consumption and will not be discussed further in this text.
3. Which stands for "before common era." Remember that 8000 B.C.E. was actually 10,000 years ago.




wine are mentioned in Homer's epic stories the Iliad and the Odyssey, legends that date back thousands of years. Given the casual way these substances are mentioned in the epics, it is clear that their use was commonplace for an unknown period before the stories were developed.

Scientists have discovered that ethyl alcohol is an extraordinary source of energy. The human body is able to obtain almost as much energy from alcohol as it can from fat, and far more energy, gram for gram, than it can obtain from carbohydrates or proteins (Lieber, 1998). Although ancient people did not understand these facts, they did recognize that alcohol-containing beverages such as wine and beer were an essential part of the individual's diet, a belief that persisted until well into modern times.4

The earliest written record of wine making was found in an Egyptian tomb that dates back to around 3000 B.C.E. ("A Very Venerable Vintage," 1996), although scientists have uncovered evidence suggesting that ancient Sumerians might have used wine made from fermented grapes around 5400 B.C.E. ("A Very Venerable Vintage," 1996). The earliest written records of how beer is made date back to approximately 1800 B.C.E. (Stone, 1991). These findings suggest that alcohol played an important role in the daily life of early peoples, since only important information was recorded in early writing.

Ethyl alcohol, especially in the form of wine, was central to daily life in both ancient Greece and Rome5 (Walton, 2002). Indeed, ancient Greek prayers for warriors suggested that they would enjoy continual intoxication in the afterlife, and in pre-Christian Rome intoxication was seen as a religious experience (Walton, 2002). When the Christian church began to play a major role in the Roman Empire in the fourth century C.E., it began to stamp out the use of large amounts of alcohol in religious celebrations as a reflection of pagan religions and began to force its own morality onto the


4. When the Puritans set sail for the New World, for example, they carried with them 14 tons of water and 42 tons of beer (Freeborn, 1996). The fact that they ran out of beer was one reason they decided to settle where they did (McAnalley, 1996).
5. This is perhaps best reflected in the Roman proverb "Bathing, wine, and Venus exhaust the body, but are what life is about."

inhabitants of the Empire6 (Walton, 2002). The Puritan ethic that evolved in England in the 14th and 15th centuries placed further restrictions on drinking, and by the start of the 19th century public intoxication was seen not as a sign of religious ecstasy, as it had been in the pre-Christian Roman Empire, but as a public disgrace.

How Alcohol Is Produced

As we saw in the last section, humans discovered early that if you crush certain forms of fruit and allow them to stand for a period of time in a container, alcohol will sometimes appear. We now know that unseen microorganisms called yeast settle on the crushed fruit, find that it is a suitable food source, and begin to digest the sugars in the fruit through a chemical process called fermentation. The yeast breaks down the carbon, hydrogen, and oxygen atoms it finds in the sugar for food and in the process produces molecules of ethyl alcohol and carbon dioxide as waste. Waste products are often toxic to the organism that produces them, and so it is with alcohol: when the concentration of alcohol in a container reaches about 15%, it becomes toxic to the yeast, and fermentation stops. Thus, the highest alcohol concentration that one might achieve by natural fermentation is about 15%.

Several thousand years elapsed before humans learned how to obtain alcohol concentrations above this 15% limit. Although Plato had noted that a "strange water" would form when one boiled wine (Walton, 2002), it was not until around the year 800 C.E. that an unknown person thought to collect this fluid and explore its uses. This is the process of distillation, which historical evidence suggests was developed in the Middle East and which had reached Europe by around 1100 C.E. (Walton, 2002). Because ethyl alcohol boils at a much lower temperature than water, when wine is boiled some of the alcohol content boils off as a vapor. This vapor contains proportionally more ethyl alcohol than water. If it is collected and allowed to cool, the resulting liquid will have a higher concentration of alcohol and a lower concentration of water than did the original mixture. Over

6. Just 300 years later, around A.D. 700, the Qur'an was written, with an injunction against the use of alcohol by adherents of the Islamic faith, upon the threat of thrashing (Walton, 2002).


Chapter Seven

time, people discovered that the cooling process could take place in a metal coil, allowing the liquid to drip from the end of the coil into a container of some kind. This device is the famous "still" of lore and legend.

Around the year 1000 C.E., Italian wine growers started using the distillation process to produce different beverages by mixing the "spirits" obtained through distillation with various herbs and spices. This produced various combinations of flavors, and physicians of the era were quick to draw on these new alcohol-containing fluids as potent medicines. The flavorful beverages also became popular for recreational consumption. Unfortunately, many of the vitamins and minerals in the original wine and beer are lost in the process of distillation. It is for this reason that many dietitians refer to alcohol as a source of "empty" calories. Over time, the chronic ingestion of alcohol-containing beverages can contribute to a state of vitamin depletion called avitaminosis, which will be discussed in the next chapter.

Alcohol Today

Over the nine hundred years since the development of the distillation process, assorted forms of fermented wines using various ingredients, different forms of beer, and distilled spirits combined with diverse flavorings have emerged. The widespread use of alcohol has resulted in multiple attempts to control or eliminate its use over the years, but these programs have had little success.

Given the widespread, ongoing debate over the proper role of alcohol in society, it is surprising to learn that there is no definition of what constitutes a "standard" drink or of the alcohol concentrations that might be found in different alcoholic beverages (Dufour, 1999). At this time in the United States, most beer has an alcohol content of between 3.5% and 5% (Dufour, 1999; Herman, 1993). However, some brands of "light" beer might have less than 3% alcohol content, and "specialty" beers or malt liquors might contain up to 9% alcohol (Dufour, 1999).

In the United States, wine continues to be made by allowing fermentation to take place in vats containing various grapes or other fruits. Occasionally, especially in other countries, the fermentation involves products other than grapes, such as the famous "rice wine" from Japan called sake. In the United States, wine usually has an alcohol content of

8% to 17% (Herman, 1993), although what are classified as "light" wines might be about 7% alcohol by content, and wine "coolers" as a general rule contain 5% to 7% alcohol (Dufour, 1999). In addition to wine, there are the "fortified" wines, produced by a process in which distilled wine is mixed with fermented wine to raise the total alcohol content to 20% to 24% (Dufour, 1999). Examples of fortified wines include various brands of sherry and port (Herman, 1993). Finally, there are the "hard liquors," the distilled spirits, which generally contain 40% to 50% alcohol by volume (Dufour, 1999). However, there are exceptions to this rule, and some beverages, such as the famous "Everclear" distilled in the southern United States, contain alcohol concentrations of 80% or higher.
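To make these percentages concrete, the mass of pure ethanol in a serving can be estimated from its volume, its alcohol content, and the density of ethanol (about 0.789 g/mL, a standard physical constant). The sketch below is illustrative: the function name is our own, and the 1.5-ounce spirits serving is an assumption added for comparison rather than a figure from the text.

```python
# Rough grams of pure ethanol in a serving.
ETHANOL_DENSITY_G_PER_ML = 0.789  # density of ethanol at room temperature
ML_PER_FL_OZ = 29.57              # milliliters per U.S. fluid ounce

def ethanol_grams(volume_fl_oz, percent_alcohol_by_volume):
    """Serving volume, times its alcohol fraction, times ethanol's density."""
    ethanol_ml = volume_fl_oz * ML_PER_FL_OZ * (percent_alcohol_by_volume / 100)
    return ethanol_ml * ETHANOL_DENSITY_G_PER_ML

# A 12-ounce beer at 5%, 4 ounces of wine at 12%, and 1.5 ounces of
# 80-proof (40%) spirits:
for name, oz, abv in [("beer", 12, 5), ("wine", 4, 12), ("spirits", 1.5, 40)]:
    print(f"{name}: {ethanol_grams(oz, abv):.1f} g of ethanol")
```

These servings work out to roughly 14 grams of ethanol for the beer and spirits and about 11 grams for the wine, which is one reason such different beverages are often treated as roughly interchangeable "drinks."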

Scope of the Problem of Alcohol Use

Beverages that contain alcohol are moderately popular drinks. In the year 2001, 101 million adults (49% of the adult population) in the United States consumed alcohol at least once (Naimi et al., 2003). For much of the last quarter of the 20th century there was a gradual decline in the per capita amount of alcohol consumed until 1996. Since then, the annual per capita consumption of alcohol in the United States has gradually increased each year (Naimi et al., 2003). Currently, the average adult in the United States consumes 8.29 liters (or 2.189 gallons)7 of pure alcohol a year, compared with 12.34 liters per year for adults in Greenland, 9.44 liters per year for the average adult in Finland, and 16.01 liters per year for the average adult in the Republic of Ireland (Schmid et al., 2003).

Note that these figures are averages and that there is significant inter-individual variation in the amount of alcohol consumed. For example, by some estimates just 10% of those who drink alcohol in the United States consume 60% of all the alcohol ingested, and the top 30% of drinkers consume 90% of all of the alcohol ingested (Kilbourne, 2002). Beer is the most common form of alcohol-containing beverage utilized in the United States (Naimi et al., 2003). Unfortunately, with an increase in the individual's frequency of alcohol

7. See Glossary.



use and the amount of alcohol ingested, the person becomes more likely to develop some of the complications induced by excessive alcohol use. Some of these complications might be encountered after consuming a surprisingly small amount of alcohol, a matter of some concern as "binge" drinking seems to be on the increase (Motluk, 2004). The impact of excessive alcohol use will be discussed further in the next chapter. In this chapter, we will focus on the casual, nonabusive drinker.

Pharmacology of Alcohol

Ethyl alcohol, or simply alcohol, may be introduced into the body intravenously or inhaled as a vapor,8 but the most common means is oral ingestion as a liquid. The alcohol molecule is quite small and is soluble in both water and lipids, although it shows a preference for water (Jones, 1996). When consumed in sufficient quantities, alcohol molecules are rapidly distributed to all blood-rich tissues throughout the body, including the brain. Indeed, because alcohol also dissolves easily in lipids, high concentrations of alcohol in the brain are achieved very rapidly. Although alcohol does diffuse into adipose9 and muscle tissues, it does not do so as easily as it does into water-rich tissues such as the brain. Thus, a very obese or very muscular person will achieve a slightly lower blood alcohol level than would a leaner person after ingesting a given dose of alcohol.

The main route of alcohol absorption is through the small intestine (Baselt, 1996). But when alcohol is ingested in the absence of food, about 10% (Kaplan, Sadock, & Grebb, 1994) to 25% (Baselt, 1996; Levin, 2002) of the alcohol is immediately absorbed through the stomach lining, with the first molecules of alcohol appearing in the drinker's blood in as little as one minute (Rose, 1988). Although the liver is the primary organ where alcohol is biotransformed in the human body, people also produce an enzyme in the gastrointestinal tract known as gastric alcohol dehydrogenase, which

8. One company actually has introduced this as a way for the individual to consume alcohol without the carbohydrates found in the typical alcohol-containing beverage. Fortunately, this practice remains virtually unknown in the United States.
9. See Glossary.

begins the process of alcohol biotransformation in the stomach (Frezza et al., 1990). Levels of gastric alcohol dehydrogenase are highest in those who drink only rarely and are significantly lower in regular or chronic drinkers and in people who have ingested an aspirin tablet before drinking (Roine, Gentry, Hernandez-Munoz, Baraona, & Lieber, 1990).

Alcohol consumed with food is absorbed more slowly than alcohol consumed on an empty stomach. A person who consumes alcohol without food will experience peak blood levels 30 to 120 minutes after a single drink (Baselt, 1996); when alcohol is consumed with food, peak blood levels will not be reached until 1 to 6 hours after a single drink (Baselt, 1996). In either case, all of the alcohol consumed will eventually be absorbed into the drinker's circulation. Researchers have also long known that men tend to have lower blood alcohol levels than women after consuming a given amount of alcohol, for several reasons. First, males tend to produce more gastric alcohol dehydrogenase than do females (Frezza et al., 1990). Also, women tend to have lower body weights, lower muscle-to-body-mass ratios, and about 10% less water volume in their bodies than do men (Zealberg & Brady, 1999).

In the early 20th century, alcohol's effects were thought to be caused by its ability to disrupt the structure and function of the lipids in the cell walls of neurons (Tabakoff & Hoffman, 1992). This theory, known as the membrane fluidization theory, or the membrane hypothesis, suggested that since alcohol was known to disrupt the structure of lipids, it might make it more difficult for neurons in the brain to maintain normal function. As scientists have come to better understand the molecular functioning of neurons, however, this theory has gradually fallen into disfavor.
Scientists now believe that the alcohol molecule “binds” to specific protein molecules located in the walls of neurons that act as receptor sites for neurotransmitter molecules, altering their sensitivity and function (Tabakoff & Hoffman, 2004). However, alcohol’s effects are not limited to one neurotransmitter system or to neurons located in just one region of the brain. One of the neurotransmitter receptor sites in the brain that is affected by alcohol is utilized by the amino acid neurotransmitter N-methyl-D-aspartate (NMDA). NMDA functions as an excitatory amino acid within the brain


(Hobbs, Rall, & Verdoorn, 1995; Valenzuela & Harris, 1997). Alcohol blocks the influx of calcium ions through the ion channels normally activated when NMDA binds at those sites, slowing the rate at which that neuron can "fire." It is for this reason that ethyl alcohol might be said to be an NMDA antagonist (Tsai, Gastfriend, & Coyle, 1995).

At the same time, alcohol enhances the influx of chloride ions through one of the subtypes of the gamma-aminobutyric acid (GABA) receptor site, known as the GABAa1 receptor subtype (Tabakoff & Hoffman, 2004). This subform of the GABA receptor is found only in certain regions of the brain, which seems to explain why alcohol does not affect all neurons in the brain equally. GABA is the main inhibitory neurotransmitter in the brain, and approximately 20% of all neurotransmitter receptors in the brain utilize GABA (Mosier, 1999). Neurons that utilize GABA are found in the cortex,10 the cerebellum, the hippocampus, the superior and inferior colliculi, the amygdala, and the nucleus accumbens. By blocking the effects of the excitatory amino acid NMDA while facilitating the inhibitory neurotransmitter GABA in these various regions of the brain, alcohol is able to depress the action of the central nervous system.

Scientists disagree on how alcohol causes the drinker to feel a sense of euphoria. One theory suggests that the euphoria some drinkers experience after drinking alcohol is brought on by alcohol's ability to directly activate the endorphin reward system within the brain. Evidence does suggest that at moderate to high blood levels alcohol promotes the binding of opiate agonists11 at the Mu opioid receptor site12 (Tabakoff & Hoffman, 2004). However, other researchers believe that alcohol's euphoric effects are brought on by its ability to stimulate the release of the neurotransmitter dopamine.
This theory is supported by evidence suggesting that alcohol ingestion forces the neurons to empty their stores of dopamine back into the synaptic junction (Heinz et al., 1998). When dopamine is released in the

10. See Glossary.
11. See Glossary.
12. The various types of opiate receptor sites are discussed in Chapter 14.


nucleus accumbens region of the brain, the user experiences a sense of pleasure, or euphoria. A third possibility is that alcohol's ability to potentiate the effects of the neurotransmitter serotonin at the 5-HT3 receptor site plays a role in its euphoric and intoxicating effects (Hobbs, Rall, & Verdoorn, 1995; Tabakoff & Hoffman, 2004). This receptor site is located on certain neurons that inhibit behavioral impulses, and this action seems to account, at least in part, for alcohol's disinhibitory effects.

As is obvious from the above material, alcohol's effects on the function of the neurons of the central nervous system (CNS) are widespread and complex. It is thought that alcohol affects both the function of the primary neurotransmitters and various "secondary" messengers within the neurons it affects (Tabakoff & Hoffman, 2004).

The Biotransformation of Alcohol

In spite of its popularity as a recreational drink, ethyl alcohol is essentially a toxin, and after it has been ingested the body works to remove it from the circulation before it can cause widespread damage. Depending on the individual's blood alcohol level, between 2% and 10% of the alcohol ingested will be excreted unchanged through the lungs, skin, and urine, with higher percentages being excreted unchanged in those individuals with higher blood alcohol levels (Sadock & Sadock, 2003; Schuckit, 1998). But the liver is the primary site where foreign chemicals such as ethyl alcohol are broken down and removed from the blood (Brennan, Betzelos, Reed, & Falk, 1995).

Alcohol biotransformation is accomplished in two steps. First, the liver produces an enzyme known as alcohol dehydrogenase (or ADH), which breaks the alcohol down into acetaldehyde. Evolution is thought to have equipped our ancestors with ADH to give them the ability to biotransform fermented fruits that might be ingested, as well as the small amount of alcohol produced endogenously (Jones, 1996).
In high concentrations, acetaldehyde is quite toxic to the body, although there is evidence to suggest that small amounts might function as a stimulant (Schuckit, 1998). Fortunately, many different parts of the body produce a second enzyme,
aldehyde dehydrogenase, which breaks acetaldehyde down into acetic acid.13 Ultimately, alcohol is biotransformed into carbon dioxide, water, and fatty acids.

The speed of alcohol biotransformation. There is some individual variation in the speed at which alcohol is biotransformed in the body (Garriott, 1996). However, a rule of thumb is that the liver can biotransform about one mixed drink made with 80-proof liquor, 4 ounces of wine, or one 12-ounce can of beer every 60 to 90 minutes (Fleming, Mihic, & Harris, 2002; Renner, 2004). As was discussed in the last chapter, alcohol is biotransformed through a zero-order biotransformation process, and the rate at which alcohol is biotransformed by the liver is relatively independent of the concentration of alcohol in the blood (Levin, 2002). Thus, if a person consumes more than one standard drink per hour, the alcohol concentration in the blood will increase, possibly to the point that the drinker becomes intoxicated.

The alcohol-flush reaction. After drinking even a small amount of alcohol, 3% to 29% of people of European descent and 47% to 85% of people of Asian descent experience what is known as the alcohol-flush reaction (Collins & McNair, 2002). This reaction is caused by a genetic mutation found predominantly in people of Asian descent. Because of this mutation, the liver is unable to manufacture enough aldehyde dehydrogenase to rapidly biotransform the acetaldehyde produced in the first stage of alcohol biotransformation. People with this syndrome experience symptoms such as facial flushing, heart palpitations, dizziness, and nausea as their blood levels of acetaldehyde climb to 20 times the level seen in normal individuals who have consumed the same amount of alcohol. Acetaldehyde is a toxin, and a person with a significant amount of it in his or her blood will become quite ill.
This phenomenon is thought to be one reason that heavy drinking is so rare in persons of Asian descent.
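The zero-order elimination described above can be illustrated with a short sketch. The elimination rate used here, a drop of about 0.015 BAL units per hour (roughly one standard drink's worth), is a commonly cited average and is an assumption, not a figure given in this chapter:

```python
# Sketch of zero-order (constant-rate) alcohol elimination.
# ELIMINATION_PER_HOUR is an assumed average value, not from the text:
# most adults clear roughly 0.015 BAL units (about one drink) per hour.
ELIMINATION_PER_HOUR = 0.015

def bal_after(initial_bal: float, hours: float) -> float:
    """Zero-order decay: BAL falls by a fixed amount per hour, regardless
    of how high it currently is, and cannot drop below zero."""
    return max(0.0, initial_bal - ELIMINATION_PER_HOUR * hours)

# Unlike first-order (exponential) elimination, the hourly drop is the
# same whether the drinker starts at a high or a low BAL.
print(round(bal_after(0.15, 2.0), 3))  # drops by 0.03, to about 0.12
print(round(bal_after(0.05, 2.0), 3))  # also drops by 0.03, to about 0.02
```

The constant rate is what makes drinking faster than about one drink per hour cumulative: intake outpaces a fixed-speed elimination process.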

The Blood Alcohol Level

Because it is not yet possible to measure the alcohol level in the brain of a living person, physicians have to settle for a measurement of the amount of alcohol in a person’s body known as the blood alcohol level (BAL).14 The BAL is essentially a measure of the level of alcohol actually in a given person’s bloodstream. It is reported in terms of milligrams of alcohol per 100 milliliters of blood (or mg/mL); a BAL of 0.10 is thus one-tenth of a milligram of alcohol per 100 milliliters of blood.

The BAL provides a rough approximation of the individual’s subjective level of intoxication. For reasons that are still not clear, the individual’s subjective sense of intoxication is strongest while the BAL is still rising, a phenomenon known as the Mellanby effect (Drummer & Odell, 2001; Lehman, Pilich, & Andrews, 1994). Further, as will be discussed in the next chapter, individuals who drink on a chronic basis become somewhat tolerant to the intoxicating effects of alcohol. For these reasons, a person who is tolerant to the effects of alcohol might have a rather high BAL while appearing relatively normal.

The BAL achieved by two people who consume the same amount of alcohol will vary as a result of a number of factors, such as each individual’s body size (or volume). To illustrate this confusing characteristic of alcohol, consider the hypothetical example of a person who weighs 100 pounds and who consumes two regular drinks in one hour’s time. Blood tests would reveal that this individual had a BAL of 0.09 mg/mL, slightly above legal intoxication in most states (Maguire, 1990). But an individual who weighs 200 pounds would, after consuming the same amount of alcohol, have a measured BAL of only 0.04 mg/mL. Each person would have consumed the same amount of alcohol, but it would be more concentrated in the smaller individual, resulting in a higher BAL.
A variety of other factors influence the speed with which alcohol enters the blood and the individual’s blood alcohol level. However, Figure 7.1 provides a rough estimate of the blood alcohol levels that might be achieved through the consumption of different


13The medication Antabuse (disulfiram) works by blocking the enzyme aldehyde dehydrogenase. This allows acetaldehyde to build up in the individual’s blood, causing the individual to become ill from the toxic effects of the acetaldehyde.


14The term blood alcohol concentration (BAC) will be used in place of the blood alcohol level.


Chapter Seven

[FIGURE 7.1 Approximate blood alcohol levels. The chart, omitted here, estimates the BAL (rounded off) reached by drinkers of various weights in pounds for a given number of drinks in one hour. A line marks the level of legal intoxication, a measured blood alcohol level of 0.08 mg/mL; individuals at or below this line are legally too intoxicated to drive. Note: This figure is intended only to illustrate the cumulative effects of alcohol ingestion. It is not intended to serve as a guide for alcohol use and should not be used for such a purpose.]

amounts of alcohol. This chart is based on the assumption that one “drink” is either one can of standard beer or one regular mixed drink. It should be noted that although the BAL provides an estimate of the individual’s current level of intoxication, it is of little value in screening individuals for alcohol abuse problems (Chung et al., 2000).
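The body-weight arithmetic behind such estimates can be approximated with Widmark's formula, a standard forensic estimate that is not given in the text; the standard-drink size (14 g of ethanol) and the distribution ratio r = 0.68 used below are assumed typical values, not the book's figures:

```python
# Hedged sketch: Widmark-style estimate of BAL from body weight.
# Assumptions (not from the text): one standard U.S. drink contains about
# 14 g of ethanol, and the Widmark distribution ratio r is 0.68, a value
# often cited for an average adult male.
LB_TO_KG = 0.45359237
GRAMS_PER_DRINK = 14.0   # assumed ethanol content of one standard drink
WIDMARK_R = 0.68         # assumed body-water distribution ratio

def estimate_bal(weight_lb: float, drinks: float) -> float:
    """Rough BAL in grams per 100 mL of blood, ignoring elimination
    (the text reports this quantity as mg/mL)."""
    distribution_mass_g = weight_lb * LB_TO_KG * 1000 * WIDMARK_R
    return drinks * GRAMS_PER_DRINK / distribution_mass_g * 100

# The text's example: two drinks consumed in one hour.
print(f"{estimate_bal(100, 2):.3f}")  # about 0.09, the text's 100-lb figure
print(f"{estimate_bal(200, 2):.3f}")  # about 0.045; the text reports 0.04
```

The same dose divides into roughly twice the body water in the 200-pound drinker, which is why the estimate is about half that of the 100-pound drinker.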

Subjective Effects of Alcohol on the Individual: At Normal Doses in the Average Drinker

Both as a toxin and as a psychoactive agent, alcohol is quite weak. To appreciate alcohol’s potency relative to morphine: to achieve the same effects as a 10 mg intravenous dose of morphine, the individual must ingest 15,000–20,000 mg of alcohol (Jones, 1996).15 However, when it is consumed in sufficient quantities, alcohol does have an effect on the user, and it is for its psychoactive effects that most people consume alcohol.



15This is the approximate amount of alcohol found in one standard drink.

At low to moderate dosage levels, people’s expectations play a role both in how they interpret the effects of alcohol and in their drinking behavior (Brown, 1990; Smith, Goldman, Greenbaum, & Christiansen, 1995). These expectations about alcohol’s effects begin to form early in life, perhaps as early as 3 years of age, and solidify between the ages of 3 and 7 (Jones & McMahon, 1998). This is clearly seen in the observation that adolescents who abused alcohol were more likely to anticipate a positive experience when they drank than were their nondrinking counterparts (Brown, Creamer, & Stetson, 1987).

After a person has had one or two drinks, alcohol causes a second effect, known as the disinhibition effect. Researchers now believe that the disinhibition effect is caused when alcohol interferes with the normal function of inhibitory neurons in the cortex. This is the part of the brain most responsible for “higher” functions, such as abstract thinking and speech, and it is also the part of the brain where much of our voluntary behavior is planned. As alcohol interferes with cortical nerve function, the drinker tends to temporarily “forget” social inhibitions (Elliott, 1992; Julien, 1992). During periods of alcohol-induced



disinhibition, the individual may engage in some behavior that, under normal conditions, he or she would never carry out. It is this disinhibition effect that may contribute to the relationship between alcohol use and aggressive behavior. For example, 40% to 50% of those who commit homicide (Parker, 1993) and up to two-thirds of those who engage in self-injurious acts (McClosky & Berman, 2003) used alcohol prior to or during the act itself. Individuals with either developmental or acquired brain damage are especially at risk for the disinhibition effects of alcohol (Elliott, 1992). This is not to say, however, that the disinhibition effect is seen only in individuals with some form of neurological trauma. Individuals without any known form of brain damage may also experience alcohol-induced disinhibition.

Effects of Alcohol at Intoxicating Doses: For the Average Drinker

For a 160-pound person, 2 drinks in an hour’s time would result in a BAL of 0.05 mg/mL. At this level of intoxication, the individual’s reaction time and depth perception become impaired (Hartman, 1995). The individual will feel a sense of exhilaration and a loss of inhibitions (Renner, 2004). Four drinks in an hour’s time will cause a 160-pound person to have a BAL of 0.10 mg/mL or higher (Maguire, 1990). At about this level of intoxication, the individual’s reaction time is approximately 200% longer than the nondrinker’s (Garriott, 1996), and she or he will have problems coordinating muscle actions (a condition called ataxia). The drinker’s speech will be slurred, and he or she will stagger rather than walk (Renner, 2004).

If our hypothetical 160-pound drinker were to consume more than four drinks in an hour’s time, his or her blood alcohol level would be even higher. Research has shown that individuals with a BAL between 0.10 and 0.14 mg/mL are 48 times as likely as the nondrinker to be involved in a fatal car accident (Alcohol Alert, 1996). A person with a BAL of 0.15 mg/mL would be above the level of legal intoxication in every state and would definitely be experiencing some alcohol-induced physical problems. Also, because of alcohol’s effects on reaction time, individuals with a BAL of 0.15 mg/mL are between 25 times (Hobbs et al., 1995) and 380 times (National Institute on Alcohol Abuse

and Alcoholism, 1996) as likely as a nondrinker to be involved in a fatal car accident. The person who has a BAL of 0.20 mg/mL will experience marked ataxia (Garriott, 1996; Renner, 2004). The person with a BAL of 0.25 mg/mL would stagger around and have difficulty making sense out of sensory data (Garriott, 1996; Kaminski, 1992). The person with a BAL of 0.30 mg/mL would be stuporous and confused (Renner, 2004). With a BAL of 0.35 mg/mL, the stage of surgical anesthesia is achieved (Matuschka, 1985). At higher concentrations, alcohol’s effects are analogous to those seen with the anesthetic ether (Maguire, 1990). Unfortunately, the amount of alcohol in the blood necessary to bring about a state of unconsciousness is only a little less than the level necessary for a fatal overdose. This is because alcohol has a therapeutic index (TI) of between 1:4 and 1:10 (Grinspoon & Bakalar, 1993). In other words, the minimal effective dose of alcohol (i.e., the dose at which the user becomes intoxicated) is a significant fraction of the lethal dose. Thus, when a person drinks to the point of losing consciousness she or he is dangerously close to the point of overdosing on alcohol. Because of alcohol’s low TI, it is very easy to die from an alcohol overdose, or acute alcohol poisoning, something that happens 200 to 400 times a year in the United States (Garrett, 2000). Even experienced drinkers have been known to die from an overdose of alcohol. About 1% of drinkers who achieve a BAL of 0.35 mg/mL will die without medical treatment (Ray & Ksir, 1993).16 At or above a BAL of 0.35 mg/mL, alcohol is thought to interfere with the activity of the nerves that control respiration (Lehman et al., 1994). Note that since a BAL of 0.35 mg/mL or above may result in death, all cases of known or suspected alcohol overdose should be immediately treated by a physician. 
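Alcohol's narrow safety margin can be seen with a little arithmetic. The intoxicating and lethal BAL pairs below are illustrative values chosen to be consistent with the levels this chapter cites; they are assumptions for the sketch, not data from the text:

```python
# Therapeutic index (TI) arithmetic: the ratio of the lethal dose to the
# minimal effective (here, intoxicating) dose. The BAL pairs below are
# illustrative assumptions consistent with the chapter's figures.
pairs = [
    (0.10, 0.40),  # intoxicating BAL 0.10, lethal BAL near 0.40
    (0.05, 0.50),  # intoxicating BAL 0.05, lethal BAL near 0.50
]
for effective, lethal in pairs:
    print(f"TI = 1:{lethal / effective:.0f}")
# Both ratios fall inside the 1:4 to 1:10 range the text reports; many
# modern medications, by contrast, have far wider safety margins.
```

A ratio this small means the gap between "drunk" and "dead" is only a handful of additional drinks, which is why drinking to unconsciousness is so dangerous.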
A BAL of 0.40 mg/mL will cause the drinker to fall into a coma and carries about a 50% death rate without medical intervention (Bohn, 1993). The LD50 is thus around 0.40 mg/mL. Segal and Sisson (1985) reported that the approximate lethal BAL in human beings was 0.5 mg/mL, while Renner (2004) suggested that it is 0.60 mg/mL. In theory, for the nontolerant drinker the LD100 is reached at a BAL between 0.5 and 0.8 mg/mL. However, there are cases on record in which an alcohol-tolerant person was still conscious and able to

16Thus, the LD01 dosage level for alcohol is about 0.35 mg/mL.



TABLE 7.1 Effects of Alcohol on the Infrequent Drinker

Blood alcohol level (BAL)    Behavioral and physical effects

                             Feeling of warmth, relaxation

                             Skin becomes flushed. Drinker is more talkative, feels euphoria. At this level, psychomotor skills are slightly to moderately impaired, and ataxia develops. Loss of inhibitions, increased reaction time, and visual field disturbances

                             Slurred speech, severe ataxia, mood instability, drowsiness, nausea, staggering gait, confusion

                             Lethargy, combativeness, stupor, incoherent speech, vomiting

Above 0.40                   Coma, respiratory depression

Sources: Based on material provided by Baselt (1996); Lehman, Pilich, & Andrews (1994), pp. 305–309; and Morrison, Rogers, & Thomas (1995), pp. 371–389.

talk with a BAL as high as 0.78 mg/mL (Bohn, 1993; Schuckit, 2000). The effects of alcohol on the rare drinker are summarized in Table 7.1.

At high doses, the stomach will begin to secrete more mucus than normal and will also close the pyloric valve between the stomach and the small intestine to try to slow the absorption of the alcohol that is still in the stomach (Kaplan et al., 1994). These actions contribute to feelings of nausea, which will reduce the drinker’s desire to consume more alcohol, and they might also contribute to the urge to vomit that many drinkers report at the higher levels of intoxication. Vomiting allows the body to rid itself of some of the alcohol the drinker has ingested. But alcohol interferes with the normal vomit reflex and might even cause the drinker to vomit while unconscious, running the risk of aspirating some of the material being regurgitated. This can contribute to the condition known as aspirative pneumonia,17 or can cause death by blocking the airway with stomach contents.

17See Glossary.


Medical Complications of Alcohol Use in the Average Drinker

The hangover. Although there is evidence that humans have known about the alcohol-induced hangover for thousands of years, the exact mechanism by which alcohol causes the drinker to suffer a hangover is still unknown (Swift & Davidson, 1998). Indeed, researchers are still divided over whether the condition is caused by the alcohol ingested by the drinker, by a metabolite of alcohol (such as acetaldehyde), or by some of the compounds found in the alcoholic beverage that give it its flavor and aroma (called congeners) (Swift & Davidson, 1998). Some researchers believe that the hangover is a symptom of an early alcohol withdrawal syndrome (Ray & Ksir, 1993; Swift & Davidson, 1998). Other researchers suggest that the alcohol-induced hangover is caused by the lower levels of ß-endorphin that result during alcohol withdrawal (Mosier, 1999).

What is known about the alcohol-induced hangover is that 75% of individuals who drink to excess will experience a hangover at some point in their lives, although there is evidence that some drinkers are more prone to this alcohol-use aftereffect than others (Swift & Davidson, 1998). Some of the physical manifestations of the hangover include fatigue, malaise, sensitivity to light, thirst, tremor, nausea, dizziness, depression, and anxiety (Swift & Davidson, 1998). Although the hangover may, at least in severe cases, make the victim wish for death (O’Donnell, 1986), there usually is little physical risk for the individual, and in general the symptoms resolve in 8 to 24 hours (Swift & Davidson, 1998). Conservative treatments such as antacids, bed rest, solid foods, fruit juice, and over-the-counter analgesics are usually all that is required for an alcohol-induced hangover (Kaminski, 1992; Swift & Davidson, 1998).

The effects of alcohol on sleep.
Alcohol, like the other CNS depressants, may induce a form of sleep, but it does not allow for a normal dream cycle. Alcohol-induced sleep disruption is strongest in the chronic drinker, but alcohol can disrupt the sleep of even the rare social drinker. The impact of chronic alcohol use on the normal sleep cycle will be discussed in the next chapter. Even moderate amounts of alcohol consumed within 2 hours of going to sleep can contribute to


episodes of sleep apnea.18 The use of alcohol prior to going to sleep can weaken pharyngeal muscle tone, increasing the chances that the sleeper will experience increased snoring and sleep breathing problems (Qureshi & Lee-Chiong, 2004). Thus, people with a respiratory disorder, especially sleep apnea, should discuss their use of alcohol with their physician to avoid alcohol-related sleep breathing problems.

Alcohol use and cerebrovascular accidents. There is mixed evidence that alcohol use increases the individual’s risk of a cerebrovascular accident (CVA, or stroke). J. W. Smith (1997) concluded that even light alcohol use, defined as ingesting 1–14 ounces of pure alcohol per month, more than doubled the risk for hemorrhagic stroke. Note that the lower limit of this range, one ounce of pure alcohol per month, is less than the amount of alcohol found in just a single can of beer. Yet Jackson, Sesso, Buring, and Gaziano (2003) concluded that the moderate use of alcohol (defined as no more than 1 standard drink in 24 hours) reduced the risk of both ischemic and hemorrhagic strokes in a sample of male physicians who had already suffered one CVA. The reason for these apparently contradictory findings is not known at this time.

Drug interactions involving alcohol.19 There has been little research into the effects of moderate alcohol use (defined as 1–2 standard drinks per day) on the action of pharmaceutical agents (Weathermon & Crabb, 1999). It is known that alcohol functions as a CNS depressant, and thus it may potentiate the action of other CNS depressants such as antihistamines, opiates, barbiturates, anesthetic agents, and benzodiazepines; it should therefore not be used by patients taking these agents (Weathermon & Crabb, 1999; Zernig & Battista, 2000).
Patients who take nitroglycerin, a medication often used in the treatment of heart conditions, frequently develop significantly reduced blood pressure, possibly to the point of dizziness and loss of consciousness, if they drink while using this medication (Zernig & Battista, 2000). Patients

18See Glossary.

19The list of potential alcohol-drug interactions is quite extensive. Patients who are taking either a prescription or over-the-counter medication should not consume alcohol without first checking with a physician or pharmacist to determine if there is a danger of an interaction between the two substances.



taking the antihypertensive medication propranolol should not drink, as the alcohol will decrease the effectiveness of this antihypertensive medication (Zernig & Battista, 2000). Further, patients taking the anticoagulant medication warfarin should not drink, as moderate to heavy alcohol use can cause the user’s body to biotransform the warfarin more quickly than normal (Alcohol Alert, 1995a; Graedon & Graedon, 1995).

There is some evidence that the antidepressant amitriptyline might enhance alcohol-induced euphoria (Ciraulo, Creelman, Shader, & O’Sullivan, 1995). The mixture of alcohol and certain antidepressant medications such as amitriptyline, desipramine, or doxepin might also cause the user to experience problems concentrating, as alcohol will potentiate the sedation caused by these medications, and the interaction between alcohol and the antidepressant might contribute to rapid blood pressure changes (Weathermon & Crabb, 1999). A person who drinks while under the influence of one of the selective serotonin reuptake inhibitors (SSRIs) may experience the serotonin syndrome as a result of the alcohol-induced release of serotonin within the brain and the blockade effect of the SSRIs (Brown & Stoudemire, 1998).

Surprisingly, there is some animal research to suggest that individuals who take beta carotene and drink to excess on a chronic basis might experience a greater degree of liver damage than the heavy drinker who does not take this supplement (Graedon & Graedon, 1995). When combined with aspirin, alcohol might contribute to bleeding in the stomach: both alcohol and aspirin are irritants to the stomach lining, and when used together they increase the chances of damage to it (Sands, Knapp, & Ciraulo, 1993).
Although acetaminophen does not irritate the stomach lining, the chronic use of alcohol causes the liver to produce enzymes that transform the acetaminophen into a poison, even when the drug is used at recommended dosage levels (Zernig & Battista, 2000). Patients taking oral medications for diabetes should not drink, as the antidiabetic medication may interfere with the body’s ability to biotransform alcohol. This may result in acute alcohol poisoning from even moderate amounts of alcohol for the individual who combines alcohol and oral antidiabetic medications. Further, because the antidiabetic medication prevents


the body from being able to biotransform alcohol, the individual will remain intoxicated far longer than he or she normally would. In such a case, the individual might underestimate how long it will be before it is safe to drive a motor vehicle.

Patients who are on the antidepressant medications known as monoamine oxidase inhibitors (MAO inhibitors, or MAOIs) should not consume alcohol under any circumstances. The fermentation process produces a compound called tyramine along with the alcohol. Normally, this is not a problem; indeed, tyramine is found in certain foods. But tyramine interacts with the MAO inhibitors, causing dangerously high, and possibly fatal, blood pressure levels (Brown & Stoudemire, 1998). Patients who take MAO inhibitors are provided a list of foods that they should avoid while they are taking their medication, lists that usually include alcohol.

Researchers have found that the calcium channel blocker verapamil inhibits the process of alcohol biotransformation, increasing the period of time during which alcohol might keep the user intoxicated (Brown & Stoudemire, 1998). Although early research studies suggested that the medications Zantac (ranitidine)20 and Tagamet (cimetidine) interfered with the biotransformation of alcohol, subsequent research failed to support this hypothesis (Jones, 1996).

Patients who are taking the antibiotic medications chloramphenicol, furazolidone, or metronidazole, or the antimalarial medication quinacrine, should not drink alcohol. The combination of these medications with alcohol may produce a painful reaction very similar to that seen when a patient on disulfiram (to be discussed in a later chapter) consumes alcohol (Meyers, 1992). Individuals taking the antibiotic erythromycin should not consume alcohol, as this medication can contribute to abnormally high blood alcohol levels due to enhanced gastric emptying (Zernig & Battista, 2000).
Persons taking the antibiotic doxycycline should not drink, since alcohol can decrease the blood levels of this medication, possibly to the point that it will no longer be effective (Brown & Stoudemire, 1998). People who are taking the antitubercular drug

20The most common brand name is given first, with the generic name in parentheses.


isoniazid (or INH, as it is often called) should also avoid the use of alcohol. The combination of these two chemicals will reduce the effectiveness of the isoniazid and may increase the individual’s chances of developing hepatitis. Although there has been little research into the possible interaction between alcohol and marijuana, as the latter substance is illegal, preliminary evidence does suggest that alcohol’s depressant effects might exacerbate the CNS depressant effects of marijuana (Garriott, 1996).

Alcohol interacts with a great many chemicals, and it is not possible to list all of the potential interactions between alcohol and the various medications currently in use. Thus, before mixing alcohol with any medication, people should consult a physician or pharmacist to avoid potentially dangerous interactions between pharmaceutical agents and alcohol.

Alcohol Use and Accidental Injury or Death

Advertisements in the media proclaim the benefits of recreational alcohol use at parties, social encounters, or occasions to celebrate good news, but they rarely mention alcohol’s role in accidental injury. The grim reality is that there is a known relationship between alcohol use and accidental injury. For example, in 2002, 17,970 people were killed on U.S. roads in alcohol-related motor vehicle accidents, 41% of the total number of traffic-related deaths that year (“National Traffic Death Toll,” 2003). A BAL between 0.05 and 0.079, which is below the legal limit of 0.08, still increases the individual’s risk of being involved in a motor vehicle accident by 546%, and a BAL above 0.08 increases the risk at least 1,500% above that of a nondrinking driver (Movig et al., 2004).

In addition to its role in motor vehicle deaths, alcohol use has been found to be a factor in 51% of all boating fatalities (Smith, Keyl, et al., 2001), and an estimated 70% of the motorcycle drivers who are killed in an accident are thought to have been drinking prior to the accident (Colburn, Meyer, Wrigley, & Bradley, 1993). Alcohol use is a factor in 17% to 53% of all falls, and in 40% to 64% of all fatalities associated with fires (Lewis, 1997). Thirty-two percent of the adults who die



in bicycling accidents were found to have alcohol in their systems (Li, Baker, Smialek, & Soderstrom, 2001). Indeed, 52% of those individuals treated at one major trauma center had alcohol in their blood at the time of admission (Cornwell et al., 1998). No matter how you look at it, even casual alcohol use carries with it a significantly increased risk of accidental injury or death. Indeed, the recommendation has been made that any patient involved in an alcohol-related accident, or who suffered an injury while under the influence of alcohol, should be examined to determine whether he or she has an alcohol use disorder (Reynaud, Schwan, Loiseaux-Meunier, Albuisson, & Deteix, 2001), a topic discussed in the next chapter.

Summary

This chapter has briefly explored the history of alcohol, including its early history as humans’ first recreational chemical. The process of distillation was discussed, as was the manner in which wine is obtained from fermented fruit. The use of distillation to achieve concentrations of alcohol above 15% was reviewed, and questions surrounding the use of alcohol were discussed. Alcohol’s effects on the rare social drinker were reviewed, as were some of the more significant interactions between alcohol and pharmaceutical agents. The history of alcohol consumption in the United States and the pattern of alcohol use in the United States were also examined.


Chronic Alcohol Abuse and Addiction

alcohol will demonstrate at least one transient alcohol-related problem (such as blackouts, which will be discussed later in this chapter) at some point in their lives (American Psychiatric Association, 2000; Sadock & Sadock, 2003). The physical addiction to alcohol (alcohol dependence, or alcoholism) is the most extreme form of an alcohol-use disorder. Estimates of the scope of alcohol-use problems in the United States include an estimated 9 million alcohol-dependent people and another 6 million alcohol abusers (Ordorica & Nace, 1998). In this country, alcoholism is predominantly a male disease, with alcohol-dependent males outnumbering alcohol-dependent females by a ratio of about 2:1 (Blume, 1994).

Clinicians who specialize in the area of substance abuse often hear clients deny that they are alcoholic, claiming that they are “only problem drinkers.” Unfortunately, there is little evidence to suggest that “problem drinkers” are significantly different from alcohol-dependent individuals (Prescott & Kendler, 1999; Schuckit, Zisook, & Mortola, 1985). At best, research data suggest that the so-called problem drinker differs from the alcohol-dependent individual only in the number and severity of the person’s alcohol-related problems.

Alcohol dependence usually develops after 10 (Meyer, 1994) to 20 years (Alexander & Gwyther, 1995) of heavy drinking, and once established it can have lifelong implications for the individual. For example, once a person does become dependent upon alcohol, even if that person should stop drinking for a period of time, the physical addiction can reassert itself “in a matter of days to weeks”

Introduction

The focus of the last chapter was on the acute effects of alcohol on the “average” or rare social drinker. Unfortunately, alcohol abuse/addiction is the third leading preventable cause of death in the United States, causing between 85,000 and 175,000 premature deaths each year (Mokdad, Marks, Stroup, & Gerberding, 2004; Schuckit & Tapert, 2004). Alcohol abuse/addiction can also cause or exacerbate a wide range of physical, social, financial, and emotional problems for drinkers and their families. Indeed, given the potential of alcohol for harm, one could argue that if it were to be discovered only today, its use might never be legalized (Miller & Hester, 1995). In this chapter, some of the manifestations and consequences of alcohol addiction will be discussed.

Scope of the Problem

At some point in their lives, fully 90% of all adults in the United States will consume alcohol (Schuckit & Tapert, 2004). Most of those who consume alcohol do so in a responsible manner, but the alcohol-use disorders are still the most common psychiatric disorders encountered by mental health professionals (Gold & Miller, 1997b). Approximately 10% of people who consume alcohol develop an alcohol-use problem (Fleming, Mihic, & Harris, 2001). Alcohol abuse has been shown to impact the individual’s social life, interpersonal relationships, and educational or vocational activities; it often causes or contributes to legal problems for the drinker. But not every person who experiences a single alcohol-related problem is dependent on this chemical. Indeed, 60% of the men and 30% of the women who consume




if he or she should resume drinking (Meyer, 1994, p. 165). Thus, once people become dependent upon alcohol, it would appear unlikely that they could return to nonabusive drinking.

Is There a “Typical” Alcohol-Dependent Person?

The “binge” drinker. A “binge” is defined as the consumption of five or more cans of beer or regular mixed drinks during a single episode of alcohol consumption by a person who is not a daily drinker (Naimi et al., 2003). Using this definition, the authors determined that 14.3% of the adults in the United States had engaged in at least one period of binge drinking in any given 30-day period of the year 2001. Males accounted for the greatest share of binge drinking, being involved in 81% of the estimated 1.5 billion annual episodes of binge drinking that take place in the United States (Naimi et al., 2003). Not surprisingly, heavy drinkers were more likely to engage in binge drinking and were more likely to consume more alcohol during a binge than were light to moderate drinkers.

Alcohol abusers/addicts are frequently “masters of denial” (Knapp, 1996, p. 19), able to offer a thousand and one rationalizations as to why they cannot possibly have an alcohol-use problem: They always go to work, never go to the bar to drink, know 10 people who drink as much as if not more than they do, and so on. One of the most common rationalizations offered by the person with an alcohol-use problem is that he or she has nothing in common with the stereotypical “skid row” derelict. In reality, only about 5% of those who are dependent on alcohol fit the image of the skid row alcoholic (Knapp, 1996). The majority of those with alcohol-use problems might best be described as “high functioning” (Knapp, 1996, p. 12) individuals, with jobs, responsibilities, families, and public images to protect. In many cases, individuals’ growing dependence on alcohol is hidden from virtually everybody, including themselves. It is only in secret moments of introspection that the alcohol-dependent person wonders why he or she cannot drink “like a normal person.”

Alcohol Tolerance, Dependence, and “Craving”: Signposts of Alcoholism

Certain symptoms, when present, suggest that the drinker has moved past the point of simple social drinking to the place where there might be an alcohol-use problem or even physical dependence on alcohol and its effects. The first of these signs is known as tolerance. As people repeatedly consume alcohol, their bodies will begin to make certain adaptations to try to maintain normal function in spite of their use of alcohol. One reflection of this process occurs when the individual’s liver becomes more efficient at the biotransformation of alcohol. This improvement in the liver’s ability to biotransform alcohol is seen in the earlier stages of the individual’s drinking career and is known as metabolic tolerance. As metabolic tolerance to alcohol develops, drinkers notice that they must consume more alcohol to achieve a desired level of intoxication (Nelson, 2000). In clinical interviews, a drinker might admit that when he was 21, it “only” took 6 to 8 beers before he became intoxicated; now it takes 12 to 15 beers consumed over the same period of time before he is drunk.

Another expression of tolerance to alcohol’s effects is known as behavioral tolerance. Where a novice drinker might appear quite intoxicated after five or six beers, the experienced drinker might show few outward signs of intoxication even after consuming a significant amount of alcohol. On occasion, even skilled law-enforcement or health care professionals are shocked to learn that the apparently sober person in their care has a BAL well into the range of legal intoxication, which is why objective test data are used to determine whether the individual is legally intoxicated at the time he or she is stopped by the police.

Pharmacodynamic tolerance is the last subform of tolerance that will be discussed in this chapter.
As the cells of the central nervous system attempt to carry out their normal function in spite of the continual presence of alcohol, they become less and less sensitive to the intoxicating effects of the chemical. Over time, the individual has to consume more and more alcohol to achieve the same effect on the CNS. As pharmacodynamic tolerance develops, the individual might switch


Chapter Eight

TABLE 8.1 Effects of Alcohol on the Chronic Drinker

Blood alcohol level (BAL)    Behavioral and physical effects
…                            None to minimal effect
…                            Mild ataxia, euphoria
…                            Mild emotional changes, ataxia
…                            Drowsiness, lethargy, stupor
Above 0.40                   Coma, death

Sources: Based on material provided by Baselt (1996); Lehman, Pilich, & Andrews (1994), pp. 305–309; Morrison, Rogers, & Thomas (1995), pp. 371–389.

from beer to “hard” liquor, or increase the amount of beer ingested, to achieve a desired state of intoxication. If any of these subtypes of tolerance has developed, the patient will simply be said to be “tolerant” to the effects of alcohol. Compare the effects of alcohol for the chronic drinker in Table 8.1 with those of Table 7.1 (in the preceding chapter).

Tolerance requires great effort on the part of the individual’s body, and eventually the different organs prove unequal to the task of maintaining normal function in the face of the constant presence of alcohol. When this happens, the individual actually becomes less tolerant to alcohol’s effects. It is not uncommon for chronic drinkers to admit that, in contrast to the past, they now can become intoxicated on just a few beers or mixed drinks. An assessor would say that this individual’s tolerance is “on the downswing,” a sign that the drinker has entered the later stages of alcohol dependence.

Another warning sign that suggests drinkers are addicted to alcohol is their growing dependence on alcohol. There are two subforms of alcohol dependence. First, there is psychological dependence: people repeatedly self-administer alcohol because they find it rewarding or because they believe that the alcohol is necessary to help them socialize, relax, sleep better, and so on. These people use alcohol as a “crutch,” believing that they are unable to be sexual, to sleep, or to socialize without first using alcohol. Often, the alcohol-dependent person believes that he or she deserves a drink, for one reason or another. The second form of dependence is known as physical dependence. Remember that the chronic use of alcohol

will force the body to attempt to adapt to the constant presence of the chemical. Indeed, in a very real sense the body might be said to now need the foreign chemical in order to maintain normal function. When the chemical is suddenly removed from the body, the body will go through a period of readjustment as it relearns how to function without the foreign chemical. This period of readjustment is known as the withdrawal syndrome. Like many drugs of abuse, alcohol has a characteristic withdrawal syndrome. But unlike many of the other drugs of abuse, the alcohol withdrawal syndrome involves not only some degree of subjective discomfort for the individual but also the potential for life-threatening medical complications. It is for this reason that all cases of alcohol withdrawal should be evaluated and treated by a physician. Several factors influence the severity of the alcohol withdrawal syndrome, including (a) how frequently, and (b) in what amounts, the individual consumed alcohol, and (c) his or her state of health. The longer the period of alcohol use and the greater the amount ingested, the more severe the alcohol withdrawal syndrome will be. The symptoms of alcohol withdrawal for the chronic alcoholic will be discussed in more detail in a later section of this chapter.

Often, the recovering alcoholic will speak of a “craving” for alcohol that continues long after drinking has stopped. Some describe this as a feeling of being “thirsty”; others find themselves preoccupied with the possibility of drinking. At this point, it is not known why alcohol-dependent persons “crave” alcohol. However, the fact that the individual does become preoccupied with alcohol use or craves a drink is a sign that he or she has become dependent upon alcohol. The topic of “craving” will be discussed in more detail in the chapter on treatment problems.

The TIQ hypothesis.
In the late 1980s Trachtenberg and Blum (1987) suggested that chronic alcohol use significantly reduces the brain’s production of the endorphins, the enkephalins, and the dynorphins. These neurotransmitters function in the brain’s pleasure center to help moderate an individual’s emotions and behavior. They proposed that a byproduct of alcohol metabolism and neurotransmitters normally found within the brain combined to form the compound tetrahydroisoquinoline (or TIQ) (Blum, 1988). They also suggested that the TIQ was capable of binding to opiate-like receptor sites within the brain’s pleasure center, causing the individual to


Chronic Alcohol Abuse and Addiction

experience a sense of well-being (Blum & Payne, 1991; Blum & Trachtenberg, 1988). However, TIQ’s effects were thought to be short-lived, forcing the individual to drink more alcohol in order to regain or maintain the initial feeling of euphoria achieved through the use of alcohol. Over time, it was thought that the individual’s chronic use of alcohol would cause his or her brain to reduce its production of enkephalins, as the ever-present TIQ was substituted for these naturally produced opiate-like neurotransmitters (Blum & Payne, 1991; Blum & Trachtenberg, 1988). The cessation of alcohol intake was thought to result in a neurochemical deficit, which the individual would then attempt to relieve through further chemical use (Blum & Payne, 1991; Blum & Trachtenberg, 1988). Subjectively, this deficit was experienced as the “craving” for alcohol commonly reported by recovering alcoholics, according to the authors. While the TIQ theory had a number of strong adherents in the late 1980s and early 1990s, it has gradually fallen into disfavor. A number of research studies have failed to find evidence to support the TIQ hypothesis, and there are few researchers in the field of alcohol addiction who believe that TIQ plays a major role in the phenomenon of alcohol “craving.”

Complications of Chronic Alcohol Use

Alcohol is a mild toxin, and over time its chronic use will often result in damage to one or more organ systems. It is important to recognize that chronic alcohol abuse includes both “weekend” and “binge” drinking: repeated episodic alcohol abuse may bring about many of the same effects seen with continuous alcohol use. Unfortunately, there is no simple formula by which to calculate the risk of alcohol-related organ damage or to predict which organs will be affected (Segal & Sisson, 1985). As the authors noted two decades ago: Some heavy drinkers of many years’ duration appear to go relatively unscathed, while others develop complications early (e.g., after five years) in their drinking careers. Some develop brain damage; others liver disease; still others, both. The reasons for this are simply not known. (p. 145)

These observations remain true today. However, it is known that in different individuals, the chronic use of

alcohol will have an impact on virtually every body system. We will briefly discuss the effects of chronic alcohol use on various organ systems below.

The Effects of Chronic Alcoholism on the Digestive System

As was discussed in the last chapter, during distillation many of the vitamins and minerals that were in the original wine are lost. Thus, where the original wine might have contributed something to the nutritional requirements of the individual, even this modest contribution is lost through the distillation process. Further, when the body breaks down alcohol, it obtains only “empty calories”: carbohydrates without the protein, vitamins, calcium, and other minerals needed by the body. Also, the frequent use of alcohol interferes with the absorption of needed nutrients from the gastrointestinal tract and may cause the drinker to experience chronic diarrhea (Fleming et al., 2001). These factors may contribute to a state of vitamin depletion called avitaminosis.

Although alcohol does not appear to cause cancer directly, it does seem to facilitate the development of some forms of cancer and thus can be classified as a cocarcinogenic agent (Bagnardi et al., 2001). The chronic use of alcohol is associated with higher rates of cancer of the upper digestive tract, the respiratory system, the mouth, pharynx, larynx, esophagus, and the liver (Bagnardi et al., 2001). Alcohol use is associated with 75% of all deaths due to cancer of the esophagus (Rice, 1993). Further, although the exact mechanism is not known, there is an apparent relationship between chronic alcohol use and cancer of the large bowel in both sexes, and cancer of the breast in women (Bagnardi et al., 2001). The combination of cigarettes and alcohol is especially dangerous. Chronic alcoholics experience an almost sixfold increase in their risk of developing cancer of the mouth or pharynx (Garro, Espina, & Lieber, 1992, p. 83).
For comparison, consider that cigarette smokers have slightly over a sevenfold increased risk of developing cancer of the mouth or pharynx. Surprisingly, however, alcoholics who also smoke have a 38-fold increased risk of cancer in these regions, according to the authors.1

1. The relationship between tobacco use and drinking is discussed in Chapter 19.


The body organ most heavily involved in the process of alcohol biotransformation is the liver, which bears the brunt of alcohol-induced organ damage (Sadock & Sadock, 2003). Unfortunately, scientists do not know how to determine the level of exposure necessary to cause liver damage for any given individual, but it is known that chronic exposure to even limited amounts of alcohol may result in liver damage (Frezza et al., 1990; Lieber, 1996; Schenker & Speeg, 1990). Indeed, chronic alcohol use is the most common cause of liver disease in both the United States (Hill & Kugelmas, 1998) and the United Kingdom (Walsh & Alexander, 2000). Approximately 80% to 90% (Ordorica & Nace, 1998; Walsh & Alexander, 2000) of heavy drinkers will develop the first manifestation of alcohol-related liver problems: a “fatty liver” (also called “steatosis”). This is a condition in which the liver becomes enlarged and does not function at full efficiency (Nace, 1987). There are few indications of a “fatty” liver that would be noticed without a physical examination, but blood tests would detect characteristic abnormalities in the patient’s liver enzymes (Schuckit, 2000). This condition will reverse itself with abstinence (Walsh & Alexander, 2000).

Approximately 35% of individuals with alcohol-induced “fatty” liver who continue to drink go on to develop a more advanced form of liver disease: alcoholic hepatitis. In alcohol-induced hepatitis, the cells of the liver become inflamed as a result of the body’s continual exposure to alcohol. Symptoms of alcoholic hepatitis may include a low-grade fever, malaise, jaundice, an enlarged tender liver, and dark urine (Nace, 1987). Blood tests would also reveal characteristic changes in the blood chemistry (Schuckit, 2000), and the patient might complain of abdominal pain (Hill & Kugelmas, 1998). Even with the best of medical care, 20% to 65% of the individuals with alcohol-induced hepatitis will die (Bondesson & Sapperston, 1996).
Doctors do not know why some chronic drinkers develop alcohol-induced hepatitis and others do not, although the individual’s genetic inheritance is thought to play a role in this process. Alcohol-induced liver damage usually develops after 15 to 20 years of heavy drinking (Walsh & Alexander, 2000). Individuals who have alcohol-induced hepatitis should avoid having surgery, if possible, as they are poor surgical risks. Unfortunately, if the patient were to be examined by a physician who was not aware of the patient’s history of alcoholism, symptoms such as abdominal pain might be misinterpreted as being caused by other conditions such as appendicitis, pancreatitis, or an inflammation of the gall bladder. If the physician were to attempt surgical interventions, the patient’s life might be placed at increased risk because of the complications caused by the undiagnosed alcoholism.

Alcoholic hepatitis is “a slow, smoldering process which may proceed or coexist with” (Nace, 1987, p. 25) another form of liver disease, known as cirrhosis of the liver. As a result of alcohol-induced hepatitis, the cells of the liver begin to die because of their chronic exposure to alcohol. Eventually, these dead liver cells are replaced by scar tissue. A physical examination of the patient with cirrhosis of the liver will reveal a hard, nodular liver, an enlarged spleen, “spider” angiomas on the skin, tremor, jaundice, mental confusion, signs of liver disease on various blood tests, and possibly a number of other symptoms such as testicular atrophy in males (Nace, 1987). Although some researchers believe that alcoholic hepatitis precedes the development of cirrhosis of the liver, this has not been proven. Indeed, “alcoholics may progress to cirrhosis without passing through any visible stage resembling hepatitis” (National Institute on Alcohol Abuse and Alcoholism, 1993b, p. 1). Thus, many chronic alcoholics never appear to develop alcoholic hepatitis, and the first outward sign of serious liver disease is the development of cirrhosis of the liver. Statistically, only about 20% of alcohol-dependent persons develop cirrhosis, but this still means that there are about 3 million people in the United States with alcohol-related liver disease (Karsan, Rojter, & Saab, 2004). Cirrhosis can develop in people who consume as little as 2 to 4 drinks a day for just 10 years (Karsan et al., 2004). A number of different theories have been advanced to explain the phenomenon of alcohol-induced liver disease.
One theory suggests that “free radicals” generated during the process of alcohol biotransformation might contribute to the death of individual liver cells, initiating the development of alcohol-induced cirrhosis (Walsh & Alexander, 2000). It is known that as individual liver cells are destroyed, they are replaced by scar tissue. Over time, large areas of the liver may be replaced by scar tissue as significant numbers of liver cells die. Unfortunately, scar tissue is essentially nonfunctional. As more and more liver cells die, the liver becomes unable to effectively cleanse the blood, allowing various toxins to accumulate in the circulation. Some toxins, like ammonia, are thought to then damage the cells of the CNS (Butterworth, 1995).

At one point, it was thought that malnutrition was a factor in the development of alcohol-induced liver disease. However, research has found that the individual’s dietary habits do not seem to influence the development of alcohol-induced liver disease (Achord, 1995). Recently, scientists have developed blood tests capable of detecting one of the viruses known to infect the liver: the hepatitis C virus (HCV), which is normally found in about 1.6% of the general population. But between 25% and 60% of chronic alcohol users have been found to be infected with HCV (Achord, 1995). This fact suggests that there may be a relationship between HCV infection, chronic alcohol use, and the development of liver disease.

Whatever its cause, cirrhosis itself can bring about severe complications, including liver cancer and sodium and water retention (Nace, 1987; Schuckit, 2000). As the liver becomes enlarged, it begins to squeeze the blood vessels that pass through it, causing the blood pressure to build up within the vessels and adding to the stress on the drinker’s heart. This condition is known as portal hypertension, which can cause the blood vessels in the esophagus to swell from the back pressure. Weak spots form on the walls of the vessels, much as weak spots form on the inner tube of a tire. These weak spots in the walls of the blood vessels of the esophagus are called esophageal varices, which may rupture. Ruptured esophageal varices constitute a medical emergency that, even with the most advanced forms of medical treatment, results in death for 20% to 30% of those who develop this disorder (Hegab & Luketic, 2001).
Between 50% and 60% of those who survive will develop a second episode of bleeding, resulting in an additional 30% death rate. Ultimately, 60% of those afflicted with esophageal varices will die as a result of blood loss from a ruptured varix2 (Giacchino & Houdek, 1998). As if that were not enough, alcohol has been identified as the most common cause of a painful inflammation of the pancreas, known as pancreatitis (Fleming

et al., 2001). Although pancreatitis can be caused by other toxic agents, such as the venom of scorpions or certain insecticides, chronic exposure to ethyl alcohol is the most common cause of toxin-induced pancreatitis in this country, accounting for 66% to 75% of the cases of pancreatitis (McCrady & Langenbucher, 1996; Steinberg & Tenner, 1994). Pancreatitis develops slowly, usually requiring “10 to 15 years of heavy drinking” (Nace, 1987, p. 26).

Even low concentrations of alcohol appear to inhibit the stomach’s ability to produce sufficient levels of the prostaglandins necessary to protect it from digestive fluids (Bode et al., 1996), and there is evidence that beverages containing just 5% to 10% alcohol can contribute to damage of the lining of the stomach (Bode et al., 1996). This process seems to explain why about 30% of chronic drinkers develop gastritis,3 as well as bleeding from the stomach lining and gastric ulcers (McAnalley, 1996; Willoughby, 1984). If an ulcer forms over a major blood vessel, the stomach acid will eat through the stomach lining and blood vessel walls, causing a “bleeding ulcer.” This is a severe medical emergency, which may be fatal. Physicians will try to “seal” a bleeding ulcer with lasers, but in extreme cases conventional surgery is necessary to save the patient’s life. The surgeon may remove part of the stomach to stop the bleeding. This, in turn, will contribute to the body’s difficulties in absorbing suitable amounts of vitamins from the food that is ingested (Willoughby, 1984). This, either by itself or in combination with further alcohol use, helps to bring about a chronic state of malnutrition in the individual.
Unfortunately, the vitamin malabsorption syndrome that develops following the surgical removal of the majority of the individual’s stomach will, in turn, make the drinker a prime candidate for the development of tuberculosis (TB) if he or she continues to drink (Willoughby, 1984). The topic of TB is discussed in more detail in Chapter 33. However, at this point it should be pointed out that upwards of 95% of alcohol-dependent individuals who had a portion of their stomach removed secondary to bleeding ulcers and who continue to drink ultimately develop TB (Willoughby, 1984).



2. Varix is the singular form of varices.




The chronic use of alcohol can cause or contribute to a number of vitamin malabsorption syndromes, in which the individual’s body is no longer able to absorb needed vitamins or minerals from food. Some of the minerals that might not be absorbed by the body of the chronic alcoholic include zinc (Marsano, 1994) as well as sodium, calcium, phosphorus, and magnesium (Lehman, Pilich, & Andrews, 1994). The chronic use of alcohol will also interfere with the body’s ability to absorb or properly utilize vitamin A, vitamin D, vitamin B-6, thiamine, and folic acid (Marsano, 1994). Chronic drinking is also a cause of a condition known as glossitis,4 as well as possible stricture of the esophagus (Marsano, 1994). Each of these conditions can indirectly keep the individual from ingesting an adequate diet, further contributing to alcohol-related dietary deficiencies within the drinker’s body. Further, as was noted in the last chapter, alcohol-containing beverages are a source of “empty” calories. Many chronic drinkers obtain up to one-half of their daily caloric intake from alcoholic beverages rather than from more traditional food sources (Suter, Schultz, & Jequier, 1992).

Alcohol-related dietary problems can contribute to a decline in the immune system’s ability to protect the individual from various infectious diseases such as pneumonia and tuberculosis (TB). Alcohol-dependent individuals, for example, are three to seven times as likely to die from pneumonia as are nondrinkers (Schirmer, Wiedermann, & Konwalinka, 2000). The chronic use of alcohol is a known risk factor in the development of a number of different metabolic disorders.
For example, although there is mixed evidence to suggest that limited alcohol use5 might serve a protective function against the development of Type 2 diabetes in women, heavy chronic alcohol use is a known risk factor for the development of Type 2 diabetes (National Institute on Alcohol Abuse and Alcoholism, 1993c; Wannamethee, Camargo, Manson, Willett, & Rimm, 2003). Between 45% and 70% of alcoholics with liver disease are also either glucose intolerant (a condition that suggests that the body is having trouble dealing with sugar in the blood) or diabetic (National Institute on Alcohol Abuse and

4. See Glossary.
5. Defined as 1 standard drink or 4 ounces of wine in a 24-hour period.


Alcoholism, 1994). Many chronic drinkers experience episodes of abnormally high (hyperglycemic) or abnormally low (hypoglycemic) blood sugar levels. These conditions are caused by alcohol-induced interference with the secretion of digestive enzymes from the pancreas (National Institute on Alcohol Abuse and Alcoholism, 1993c, 1994). Further, chronic alcohol use may interfere with the way the drinker’s body utilizes fats. When the individual reaches the point that he or she obtains 10% or more of the daily energy requirements from alcohol rather than more traditional foods, the individual’s body will go through a series of changes (Suter et al., 1992). First, the chronic use of alcohol will slow down the body’s energy expenditure (metabolism), which, in turn, causes the body to store the unused lipids as fatty tissue. This is the mechanism by which the so-called beer belly commonly seen in the heavy drinker is formed.

The Effects of Chronic Alcohol Use on the Cardiopulmonary System

Researchers have long been aware of what is known as the “French paradox,” which is to say a lower-than-expected rate of heart disease in spite of a diet rich in the foods that supposedly are associated with an increased risk of heart disease (Goldberg, 2003).6 For reasons that are not well understood, the moderate use of alcohol-containing beverages has been found to bring about a 10% to 40% reduction in the individual’s risk of developing coronary heart disease (CHD) (Fleming et al., 2001; Klatsky, 2002, 2003). Mukamal et al. (2003) suggested that the actual form of the alcohol-containing beverage was not as important as the regular use of a moderate amount,7 although there is no consensus on this issue (Klatsky, 2002). However, this effect was moderated by the individual’s genetic heritage, with some drinkers gaining more benefit from moderate alcohol use than others (Hines et al., 2001).

6. For reasons that are not well understood, advocates of the moderate use of alcohol point to the lower incidence of heart disease experienced by the French, who consume wine on a regular basis, but they overlook the significantly higher incidence of alcohol-related liver disease experienced by the French (Walton, 2003).
7. “Moderate” alcohol use is defined as no more than 2 twelve-ounce cans of beer, 2 five-ounce glasses of wine, or 1.5 ounces of vodka, gin, or other “hard” liquor in a 24-hour period (Klatsky, 2003).


One theory for the reduced risk of CHD is that alcohol may function as an anticoagulant. Within the body, alcohol inhibits the ability of blood platelets to “bind” together (Klatsky, 2003; Renaud & DeLorgeril, 1992). This may be a result of alcohol’s ability to facilitate the production of prostacyclin and reduce fibrinogen levels in the body when it is used at moderate levels (Klatsky, 2002, 2003). By inhibiting the ability of blood platelets to start the clotting process, the moderate use of alcohol may lower the risk of heart attack and certain kinds of stroke by 30% to 40% (Stoschitzky, 2000). It is theorized that moderate alcohol consumption also “significantly and consistently raises the plasma levels of the antiatherogenic HDL cholesterol” (Klatsky, 2002, p. ix), making it more difficult for atherosclerotic plaque to build up. However, physicians still hesitate to recommend that nondrinkers turn to alcohol as a way of reducing their risk of heart disease (Goldberg, 2003).

Alcohol use to reduce one’s risk of disease is a “double-edged sword” (Klatsky, 2002, p. ix). Although the moderate use of alcohol might provide a limited degree of protection against coronary artery disease, it also increases the individual’s risk of developing alcohol-related brain damage (Karhunen, Erkinjuntti, & Laippala, 1994). There also is mixed evidence suggesting that a woman’s risk of breast cancer might increase by about 10% for each drink she consumes per day8 (Ellison, 2002). Thus, the role of alcohol in reducing the risk of heart attack is limited at best and carries with it other forms of health risks.

When used to excess, alcohol not only loses its protective action but may actually harm the cardiovascular system. The excessive use of alcohol results in the suppression of normal red blood cell formation, and both blood clotting problems and anemia are common complications of alcoholism (Nace, 1987).
Alcohol abuse is thought to be a factor in the development of cerebral vascular accidents (strokes, or CVAs). Light drinkers (2–3 drinks/day) have a twofold higher risk of a stroke, whereas heavy drinkers (4 or more drinks/day) almost triple their risk of a CVA (Ordorica & Nace,

8. Thus, a woman who consumed 2 glasses of wine per day would have a 20% higher risk of breast cancer than a nondrinking woman of the same age.


1998). Approximately 23,500 strokes each year in the United States are thought to be alcohol-related (Sacco, 1995). In large amounts (defined as more than the 1 to 2 drinks a day identified above), alcohol is known to be cardiotoxic. Animal research has shown that the chronic use of alcohol inhibits the process of muscle protein synthesis, especially the myofibrillar protein necessary for normal cardiac function (Ponnappa & Rubin, 2000). In humans, chronic alcohol use is considered the most common cause of heart muscle disease (Rubin & Doria, 1990). Prolonged exposure to alcohol (6 beers a day, or a pint of whiskey a day, for 10 years) may result in permanent damage to the heart muscle tissue, hypertension, inflammation of the heart muscle, and a general weakening of the heart muscle known as alcohol-induced cardiomyopathy (Figueredo, 1997).

This condition appears to be a special example of a more generalized process in which chronic alcohol use results in damage to all striated muscle tissues, not just those of the heart (Fernandez-Sola et al., 1994). The authors examined a number of men who were and were not alcohol dependent. They found that the alcoholic men in general had less muscle strength and greater levels of muscle tissue damage than did the nonalcoholic men in this study. The authors concluded that alcohol is toxic to muscle tissue and that the chronic use of alcohol will result in a loss of muscle tissue throughout the body.

Cardiomyopathy itself develops in between 25% (Schuckit, 2000) and 40% of chronic alcohol users (Figueredo, 1997), and accounts for 20% to 50% of all cases of cardiomyopathy in the United States (Zakhari, 1997). But even this figure might not reflect the true scope of alcohol-induced heart disease. Rubin and Doria (1990) suggested that “the majority of alcoholics” (p.
279), which they defined as those individuals who obtained between 30% and 50% of their daily caloric requirement through alcohol, will ultimately develop “pre-clinical heart disease” (p. 279). Because of the body’s compensatory mechanisms, many chronic alcoholics do not show evidence of heart disease except on special tests designed to detect this disorder (Figueredo, 1997; Rubin & Doria, 1990). However, 40% to 50% of those individuals with alcohol-induced cardiomyopathy will die within four years if they continue to drink (Figueredo, 1997; Stoschitzky, 2000).



Although many individuals take comfort in the fact that they drink to excess only occasionally, even binge drinking is not without its dangers. Binge drinking may result in a condition known as the “holiday heart syndrome” (Figueredo, 1997; Klatsky, 2003; Stoschitzky, 2000; Zakhari, 1997). When used on an episodic basis, such as when the individual consumes larger-than-normal quantities of alcohol during a holiday break from work, alcohol can interfere with the normal flow of electrical signals within the heart. This might then contribute to an irregular heartbeat known as atrial fibrillation, which can be fatal if it is not diagnosed and properly treated. Thus, even episodic alcohol use is not without some degree of risk.

The Effects of Chronic Alcoholism on the Central Nervous System (CNS)

Alcohol is a neurotoxin, as evidenced by the fact that at least half of heavy drinkers show evidence of cognitive deficits (Schuckit & Tapert, 2004). A common example of the toxic effects of alcohol is seen in its ability to interfere with memory formation. Neuropsychological testing has revealed that alcohol may begin to affect memory formation after as little as one drink. Fortunately, one normally needs to consume more than five drinks in an hour’s time before alcohol significantly impacts the process of memory formation (Browning, Hoffer, & Dunwiddie, 1993). The extreme form of alcohol-induced memory dysfunction is the blackout.9 A blackout is a period of alcohol-induced amnesia that may last from less than an hour to several days (White, 2003). During a blackout, the individual may appear to be conscious to others, be able to carry on a coherent conversation, and be able to carry out many complex tasks. However, afterward, the drinker will not have any memory of what she or he did during the blackout. In a sense, the alcohol-induced blackout is similar to another condition known as transient global amnesia (Kaplan, Sadock, & Grebb, 1994; Rubino, 1992).

9. White (2003) suggested that alcohol-induced blackouts might be experienced by social drinkers as well as alcohol-dependent persons. However, as alcohol-induced memory impairment is seen after the blood alcohol level reaches 0.14 to 0.20, according to White, it is suggested in this text that heavy drinkers are most prone to alcohol-induced blackouts.

Scientists believe that alcohol prevents the individual from being able to form (encode) memories during the period of acute intoxication (Browning, Hoffer, & Dunwiddie, 1993). The alcohol-induced blackout is “an early and serious indicator of the development of alcoholism” (Rubino, 1992, p. 360). Current theory suggests that alcohol-induced blackouts are caused by alcohol in the brain blocking the normal function of the gamma-aminobutyric acid (GABA) and N-methyl-D-aspartate (NMDA) neurotransmitter systems (Nelson et al., 2004). The individual’s vulnerability to alcohol-induced memory disturbances is influenced by the manner in which she or he consumed alcohol and his or her genetic vulnerability (Nelson et al., 2004). A majority of heavy drinkers will admit to having alcohol-induced blackouts if they are asked about this experience (Schuckit, Smith, Anthenelli, & Irwin, 1993).

Although it has long been known that the chronic use of alcohol can result in brain damage, the exact mechanism by which alcohol causes damage to the brain remains unknown (Roehrs & Roth, 1995). Unfortunately, for about 15% of heavy drinkers, the first organ to show damage from their drinking is not the liver but the brain (Berg, Franzen, & Wedding, 1994; Bowden, 1994; Volkow et al., 1992). Alcohol-induced dementia is the single most preventable cause of dementia in the United States (Beasley, 1987) and is the “second most common adult dementia after Alzheimer’s disease” (Nace & Isbell, 1991, p. 56). Up to 75% of chronic alcohol drinkers show evidence of alcohol-induced cognitive impairment following detoxification (Butterworth, 1995; Hartman, 1995; Tarter, Ott, & Mezzich, 1991). This alcohol-induced brain damage might become so severe that institutionalization becomes necessary when the drinker is no longer able to care for himself or herself. It is estimated that between 15% and 30% of all nursing home patients are there because of permanent alcohol-induced brain damage (Schuckit, 2000).
A limited degree of improvement in cognitive function is possible in some alcohol-dependent persons who remain abstinent from alcohol for extended periods of time (Grant, 1987; Løberg, 1986). But research suggests that only 20% of chronic drinkers may return to their previous level of intellectual function after abstaining from alcohol for an extended period of time (Nace & Isbell, 1991). Some limited degree of recovery is possible in perhaps 60% of the cases, and virtually no recovery of lost intellectual function is seen in 20% of the cases, according to the authors.

The chronic use of alcohol is thought to be a cause of cerebellar atrophy, a condition in which the cerebellum withers away as individual cells in this region of the brain die as a result of chronic alcohol exposure. Fully 30% of alcohol-dependent individuals eventually develop this condition, which is marked by characteristic psychomotor dysfunction, gait disturbance, and loss of muscle control (Berger, 2000). Another central nervous system complication seen as a result of chronic alcohol abuse is vitamin deficiency amblyopia. This condition will cause blurred vision, a loss of visual perception in the center of the visual field known as central scotomata, and in extreme cases, atrophy of the optic nerve (Mirin, Weiss, & Greenfield, 1991). The alcohol-induced damage to the visual system may be permanent.

Wernicke-Korsakoff’s syndrome. In 1881, Carl Wernicke first described a brain disorder that subsequently came to bear his name. Wernicke’s encephalopathy is recognized as the most serious complication of chronic alcohol use (Day, Bentham, Callaghan, Kuruvilla, & George, 2004). If not treated, it can cause death in up to 20% of individuals who develop this disorder (Ciraulo, Shader, Ciraulo, Greenblatt, & von Moltke, 1994b; Day et al., 2004; Zubaran, Fernandes, & Rodnight, 1997). About 20% of chronic drinkers develop Wernicke’s encephalopathy, which is thought to be caused by alcohol-induced avitaminosis (Bowden, 1994). As a result of the alcohol-related vitamin malabsorption, the reserves of thiamine (one of the “B” family of vitamins) in an individual’s body will gradually be depleted, contributing to the development of various neurological problems such as Wernicke’s encephalopathy. Between 30% and 80% of chronic drinkers show evidence of clinical/subclinical thiamine deficiency (Day et al., 2004).
Chronic thiamine deficiency results in characteristic patterns of brain damage, often detected on post-mortem examination of the brain. The patient who is suffering from Wernicke’s encephalopathy will often appear confused, possibly to the point of being delirious and disoriented, and will be apathetic and unable to sustain physical or mental activities (Day et al., 2004; Victor, 1993). A physical examination would reveal a characteristic pattern of abnormal eye movements known as nystagmus and such symptoms of brain damage as gait disturbances and ataxia (Lehman et al., 1994).


Before physicians developed a method to treat Wernicke’s encephalopathy, up to 80% of the patients who developed this condition went on to develop a condition known as Korsakoff’s syndrome. Another name for Korsakoff’s syndrome is the alcohol amnestic disorder (Charness, Simon, & Greenberg, 1989; Day et al., 2004; Victor, 1993). Even when Wernicke’s encephalopathy is properly treated through the most aggressive thiamine replacement procedures known to modern medicine, fully 25% of the patients who develop Wernicke’s disease will go on to develop Korsakoff’s syndrome (Sagar, 1991). For many years, scientists thought that Wernicke’s encephalopathy and Korsakoff’s syndrome were separate disorders. It is now known that Wernicke’s encephalopathy is the acute phase of the Wernicke-Korsakoff syndrome.

One of the most prominent symptoms of the Korsakoff phase of this syndrome is a memory disturbance in which the patient is unable to remember the past accurately. In addition, the individual will also have difficulty in learning new information. This should not be surprising, in that magnetic resonance imaging (MRI) studies of the brains of alcohol-dependent persons reveal atrophy far beyond what one would expect as a result of normal aging (Bjork, Grant, & Hommer, 2003). The observed loss of brain tissue is most conspicuous in the anterior superior temporal cortex region of the brain, which seems to correspond to the behavioral deficits observed in the Wernicke-Korsakoff syndrome (Pfefferbaum, Sullivan, Rosenbloom, Mathalon, & Kim, 1998). However, there are subtle differences between the pattern of brain damage seen in male and female alcohol abusers as compared to normal adults of the same age (Hommer, Momenan, Kaiser, & Rawlings, 2001; Pfefferbaum, Rosenbloom, Deshmukh, & Sullivan, 2001). It is not unusual to observe that in spite of clear evidence of cognitive impairment, the patient frequently appears indifferent to his or her memory loss (Ciraulo et al., 1994b).
In the earlier stages, the person might be confused by his or her inability to remember the past clearly and will often “fill in” these memory gaps by making up answers to questions. This process is called confabulation. Confabulation is not always found in cases of Korsakoff’s syndrome, but when it is found, it is most common in the earlier stages of Korsakoff’s syndrome (Parsons & Nixon, 1993; Victor, 1993). Later on, as the individual adjusts to the memory loss, he or she will not be as likely to use confabulation to cover up the memory problem (Blansjaar & Zwinderman, 1992; Brandt & Butters, 1986). In rare cases, people will lose virtually all memories after a certain period of their lives and will almost be “frozen in time.” For example, Sacks (1970) offered an example of a man who, when examined, was unable to recall anything that happened after the late 1940s. The patient was examined in the 1960s, but when asked, would answer questions as if he were still living in the 1940s. This example of confabulation, while extremely rare, can result from chronic alcoholism. More frequent are the less pronounced cases, in which significant portions of the memory are lost but the individual retains some ability to recall the past.

Unfortunately, the exact mechanism of Wernicke-Korsakoff’s syndrome is unknown at this time. The characteristic nystagmus seems to respond to massive doses of thiamine.10 It is possible that victims of Wernicke-Korsakoff’s syndrome possess a genetic susceptibility to the effects of the alcohol-induced thiamine deficiency (Parsons & Nixon, 1993). While this is an attractive theory, in that it explains why some chronic drinkers develop Wernicke-Korsakoff’s syndrome and others do not, it remains just a theory.

Several different theories about how chronic drinking contributes to brain damage have been advanced over the years. Jensen and Pakkenberg (1993) suggested, after conducting post-mortem examinations of the brains of 55 individuals who were active alcoholics prior to their death, that alcohol causes a disconnection syndrome between neurons. This then prevents those nerve pathways from being activated. If not stimulated, neurons wither and die, a mechanism by which alcohol might cause damage to the brain, according to the authors.
Another theory was offered by Pfefferbaum, Rosenbloom, Serventi, and Sullivan (2004), who suggested that the liver dysfunction found in chronic alcohol abusers, the poor nutrition, and the chronic exposure to alcohol itself all combine to cause the characteristic pattern of brain damage seen in alcohol-dependent individuals. These are only theories that remain to be proven. It is known that once Wernicke-Korsakoff’s syndrome has developed, only a minority of its victims will escape without lifelong neurological damage. It is estimated that even with the most aggressive of vitamin replacement therapy, only 20% (Nace & Isbell, 1991) to 25% (Brandt & Butters, 1986) of its victims will return to their previous level of intellectual function. The other 75% to 80% will experience greater or lesser degrees of neurological damage, and at least 10% of the patients with this disorder will be left with permanent memory impairment (Vik, Cellucci, Jarchow, & Hedt, 2004).

There is evidence to suggest that chronic alcohol abuse/addiction is a risk factor in the development of a movement disorder known as tardive dyskinesia (TD) (Lopez & Jeste, 1997). This condition may result from alcohol’s neurotoxic effect, according to the authors. Although TD is a common complication in patients who have used neuroleptic drugs for the control of psychotic conditions for long periods of time, there are cases in which the alcohol-dependent individual has developed TD in spite of the fact that she or he had no prior exposure to neuroleptic agents (Lopez & Jeste, 1997). The exact mechanism by which alcohol causes the development of tardive dyskinesia remains to be identified, and scientists have no idea why some alcohol abusers develop TD while others do not. But TD usually develops in chronic alcohol users who have a history of drinking for 10 to 20 years, according to the authors.

Alcohol’s effects on the sleep cycle. Although alcohol might induce a form of sleep, the chronic use of alcohol interferes with the normal sleep cycle (Karam-Hage, 2004). Chronic alcohol users tend to require more time to fall asleep11 and as a group they report that their sleep is less sound and less restful than that of nondrinkers (Karam-Hage, 2004). Although the exact mechanism by which chronic alcohol use interferes with sleep is still unknown, scientists believe that the chronic use of alcohol suppresses melatonin production in the brain, which in turn interferes with the normal sleep cycle (Karam-Hage, 2004; Pettit, 2000).
Clinicians often encounter patients who complain of sleep problems without revealing their alcohol abuse. By some estimates, 17% to 30% of the general population might suffer from insomnia, but fully 60% of alcohol-dependent persons will experience symptoms of insomnia (Brower, Aldrich, Robinson, Zucker, & Greden, 2001). Indeed, insomnia symptoms might serve as a relapse trigger for the newly recovered alcohol-dependent person unless this problem is addressed through appropriate interventions. Karam-Hage (2004) suggested that gabapentin (sold under the brand name of Neurontin) is quite useful as a hypnotic agent in alcohol-dependent persons.

Chronic alcohol ingestion causes the drinker to experience a reduction in the amount of time spent in the rapid eye movement (or REM) phase of sleep. There is a relationship between REM sleep and dreaming. Scientists don’t know why we dream, but they do know that we need to dream and that anything that reduces the amount of time spent in REM sleep will interfere with normal waking cognitive function. When chronic drinkers stop drinking, they will spend an abnormal amount of time in REM sleep, a phenomenon known as REM rebound. The dreams that former drinkers might experience during this period may be so frightening that they are tempted to return to the use of alcohol in order to “get a decent night’s sleep.” The phase of REM rebound can last for up to 6 months after the person has stopped drinking (Brower, 2001; Schuckit & Tapert, 2004). Scientists know that the chronic use of alcohol interferes with the normal sleep process, but they do not know whether the individual’s sleep will return to a more normal pattern with continued abstinence. The effects of alcohol can interfere with the normal sleep cycle for one to two years after detoxification (Brower, 2001; Karam-Hage, 2004). In addition to disrupting the normal sleep cycle, the chronic use of alcohol can trigger episodes of sleep apnea both during the period of heavy drinking and for weeks after the individual’s last drink (Berger, 2000; Brower, 2001; Le Bon et al., 1997).

10. See Glossary.

11. Known as sleep latency.

The Effects of Chronic Alcohol Use on the Peripheral Nervous System

The human nervous system is usually viewed as two interconnected systems.
The brain and spinal cord make up the central nervous system; the nerves that are found in the outer regions of the body are classified as the peripheral nervous system. Unfortunately, the effects of alcohol-induced avitaminosis are sufficiently widespread to include the peripheral nerves, especially those in the hands and feet. This is a condition known as peripheral neuropathy. This condition is found in 10% (Schuckit, 1995a) to 33% of chronic alcohol users (Monforte et al., 1995). Some of the symptoms of a peripheral neuropathy include feelings of weakness, pain, and a burning sensation in the afflicted region of the body (Lehman et al., 1994). Eventually, the person will lose all feeling in the affected region of the body. Approximately 30% of all cases of peripheral neuropathy are thought to be alcohol induced (Hartman, 1995).

At this point, the exact cause of alcohol-induced peripheral neuropathies is not known. Some researchers believe that peripheral neuropathy is the result of a deficiency of the “B” family of vitamins in the body (Charness et al., 1989; Levin, 2002; Nace, 1987). In contrast to this theory, Monforte et al. (1995) suggested that peripheral neuropathies might be the result of chronic exposure to either alcohol itself or its metabolites. Again, as was discussed in the last chapter, some of the metabolites of alcohol are themselves quite toxic to the body. The authors failed to find evidence of a nutritional deficit for those hospitalized alcoholics who had developed peripheral neuropathies. But they did find evidence of a dose-related relationship between the use of alcohol and the development of peripheral neuropathies.

Surprisingly, in light of alcohol’s known neurotoxic effects, there is evidence to suggest that at some doses it might suppress some of the involuntary movements of Huntington’s disease (Lopez & Jeste, 1997). This is not to suggest that alcohol is an acceptable treatment for this disorder, but this effect of alcohol might account for the finding that patients with movement disorders such as essential tremor or Huntington’s disease tend to abuse alcohol more often than close relatives who do not have a movement disorder, according to the authors.

The Effects of Chronic Alcohol Use on the Person’s Emotional State

The chronic use of alcohol can simulate the symptoms of virtually every form of neurosis, even those seen in psychotic conditions.
These symptoms are thought to be secondary to the individual’s malnutrition and the toxic effects of chronic alcohol use (Beasley, 1987). These symptoms might include depressive reactions (Blondell, Frierson, & Lippmann, 1996; Schuckit, 1995a), generalized anxiety disorders, and panic attacks (Beasley, 1987).

There is a complex relationship between anxiety symptoms and alcohol-use disorders. For example, without medical intervention, almost 80% of alcohol-dependent individuals will experience panic episodes during the acute phase of withdrawal from alcohol (Schuckit, 2000). The chronic use of alcohol causes a paradoxical stimulation of the autonomic nervous system (ANS). The drinker will often interpret this ANS stimulation as a sign of anxiety, and then turn to alcohol or antianxiety medications to control this apparent anxiety. A cycle is then started in which the chronic use of alcohol actually sets the stage for further anxiety-like symptoms, resulting in the perceived need for more alcohol/medication. Stockwell and Town (1989) discussed this aspect of chronic alcohol use and concluded: “Many clients who drink heavily or abuse other anxiolytic drugs will experience substantial or complete recovery from extreme anxiety following successful detoxification” (p. 223). The authors recommend a drug-free period of at least 2 weeks in which to assess the need for pharmacological intervention for anxiety.

But this is not to discount the possibility that the individual has a concurrent anxiety disorder and an alcohol-use disorder. Indeed, researchers have discovered that 10% to 40% of those individuals who are alcohol dependent also have an anxiety disorder of some kind. Between 10% and 20% of those patients being treated for some form of an anxiety disorder also have some kind of alcohol-use disorder (Cox & Taylor, 1999). For these individuals, the anxiety coexists with their alcohol-use disorder and does not reflect alcohol withdrawal as is often the case. The diagnostic dilemma for the clinician is to determine which patients have withdrawal-induced anxiety and which patients have a legitimate anxiety disorder in addition to their substance-use problem. This determination is made more difficult by the fact that chronic alcohol use can cause the drinker to experience feelings of anxiety for many months after he or she stops drinking (Schuckit, 1998).
The differentiation between “true” anxiety disorders and alcohol-related anxiety-like disorders is thus quite complex. The team of Kushner, Sher, and Beitman (1990) concluded that alcohol withdrawal symptoms may be “indistinguishable” (p. 692) from the symptoms of panic attacks and generalized anxiety disorder (GAD). One diagnostic clue is found in the observation that in general, problems such as agoraphobia and social phobias usually predate alcohol use, according to the authors. Victims of these disorders usually attempt self-medication through the use of alcohol and only later develop alcohol-use problems. On the other hand, Kushner et al. (1990) concluded that the symptoms of simple panic attacks and generalized anxiety disorder are more likely to reflect the effects of alcohol withdrawal than a psychiatric disorder. Another form of phobia that frequently coexists with alcoholism is the social phobia (Marshall, 1994). Individuals with social phobias fear situations in which they are exposed to other people and are twice as likely to have alcohol-use problems as people from the general population. However, social phobia usually precedes the development of alcohol abuse/addiction.

Unfortunately, it is not uncommon for alcohol-dependent individuals to complain of anxiety symptoms when they see their physician, who may then prescribe a benzodiazepine to control the anxiety. This, in turn, allows the chronic drinker to control his or her withdrawal symptoms during the day without having the smell of alcohol on the breath. (One alcohol-dependent individual explained, for example, that the effects of 10 mg of diazepam were similar to the effects of having had 3–4 quick drinks.) Given this tendency for alcohol-dependent individuals to use benzodiazepines, it should not be surprising to learn that 25% to 50% of alcoholics are also addicted to these drugs (Sattar & Bhatia, 2003). If the physician fails to obtain an adequate history and physical (or if the patient lies about his or her alcohol use), there is also a risk that the alcohol-dependent person might combine the use of antianxiety medication, which is a CNS depressant, with alcohol (which is also a CNS depressant). There is a significant potential for an overdose when two different classes of CNS depressants are combined.
Thus, the use of alcohol with CNS depressants such as the benzodiazepines or antihistamines presents a very real danger to the patient. The interaction between benzodiazepines and alcohol has been implicated as one cause of the condition known as the paradoxical rage reaction (Beasley, 1987). This is a drug-induced reaction in which a CNS depressant brings about an unexpected period of rage in the individual. During the paradoxical rage reaction, these individuals might engage in assaultive or destructive behavior toward either themselves or others and would later have no conscious memory of what they had done during the paradoxical rage reaction (Lehman et al., 1994).

If antianxiety medication is needed for long-term anxiety control in recovering drinkers, buspirone should be used first (Kranzler et al., 1994). Buspirone is not a benzodiazepine and thus does not present the potential for abuse seen with the latter family of drugs. The authors found that those alcoholic subjects in their study who suffered from anxiety symptoms and who received buspirone were both more likely to remain in treatment and to consume less alcohol than those anxious subjects who did not receive buspirone. This suggests that buspirone might be an effective medication in treating alcohol-dependent persons with concurrent anxiety disorders.

Chronic alcohol use has been known to interfere with sexual performance for both men and women (Jersild, 2001; Schiavi, Stimmel, Mandeli, & White, 1995). Although the chronic use of alcohol has been shown to interfere with the erectile process for men, Schiavi et al. (1995) found that once the individual stopped drinking, the erectile dysfunction usually resolved itself. However, there is evidence that disulfiram (often used in the treatment of chronic alcoholism) itself may interfere with a man’s ability to achieve an erection.

Although it was once thought that primary depression was rare in chronic drinkers, it is now believed that there is a relationship between alcohol-use disorders and depression. Indeed, Hasin and Grant (2002) examined the history of 6,050 recovering alcohol abusers and found that former drinkers had a fourfold increased incidence of depression compared to nondrinkers. Further, depression was found to have a negative impact on the individual’s ability to benefit from rehabilitation programs and might contribute to higher dropout rates from substance-use treatment (Charney, 2004; Mueller et al., 1994).
The individual’s use of alcohol was found to interfere with the treatment of depressive disorder (Mueller et al., 1994). Even limited alcohol use has been found to exacerbate depression, with the depressing effects of even a 1–2 day alcohol binge lasting for several weeks after abstinence is achieved (Segal & Sisson, 1985). The potential is thus present for a cycle in which the alcohol abuse might ultimately cause more depression than would be expected in a person with a depressive disorder, leading him or her to abuse alcohol even more. Alcohol-induced depressive episodes will usually clear after 2–5 weeks of abstinence. Some researchers do not recommend formal treatment other than abstinence and recommend that antidepressant medication be used only if the symptoms of depression continue after that period of time (Decker & Ries, 1993; Miller, 1994; Satel, Kosten, Schuckit, & Fischman, 1993). However, Charney (2004) recommended that depressive disorders be aggressively treated with the appropriate medications as soon as they are detected.

There is a strong relationship between depression and suicide (Nemeroff, Compton, & Berger, 2001). Because alcohol-dependent people are vulnerable to the development of depression as a consequence of their drinking, it is logical to assume that as a group they are at high risk for suicide. Indeed, research has demonstrated that alcohol-dependent individuals are 58 to 85 times more likely to commit suicide than those who are not alcohol dependent (Frierson, Melikian, & Wadman, 2002). Various researchers have suggested that the suicide rate among alcohol-dependent people is 5% (Preuss et al., 2003), 7% (Conner, Li, Meldrum, Duberstein, & Conwell, 2003), or even as high as 18% (Bongar, 1997; Preuss & Wong, 2000). Each year, 25% of those who commit suicide in the United States are alcohol dependent (Harwitz & Ravizza, 2000). It has been suggested that alcohol-related suicide is most likely to occur late in middle adulthood, when the effects of the chronic use of alcohol begin to manifest as cirrhosis of the liver and other disorders (Nisbet, 2000). The team of Preuss et al. (2003) followed a cohort of 1,237 alcohol-dependent people for 5 years, and found that individuals in their sample were more than twice as likely to commit suicide as were nonalcoholic individuals in the course of the study.
Although the authors carried out an extensive evaluation of their subjects prior to the start of the study in an attempt to identify potential predictors of suicide, they concluded that they had failed to do so successfully. There was only a modest correlation between the identified risk factors and completed suicide, and the authors concluded that those factors that had the greatest impact on suicidality had not been identified.

Almost a decade earlier, the research team of Murphy, Wetzel, Robins, and McEvoy (1992) attempted to isolate the factors that seemed to predict suicide in the chronic male alcoholic. On the basis of their research, the authors identified seven different factors that appeared to be suggestive of a possible suicide risk in the male chronic drinker:

1. The victim was drinking heavily in the days and weeks just prior to the act of suicide.
2. The victim had talked about the possibility of committing suicide prior to the act.
3. The victim had little social support.
4. The victim suffered from a major depressive disorder.
5. The victim was unemployed at the time of the suicide.
6. The victim was living alone.
7. The victim was suffering from a major medical problem at the time of the act of suicide.

Although the authors failed to find any single factor that seemed to predict a possible suicide in the chronic male alcoholic, they did conclude that “as the number of risk factors increases, the likelihood of a suicidal outcome does likewise” (p. 461).

Roy (1993) also identified several factors that seemed to be associated with an increased risk of suicide for adult alcoholics. Like Murphy et al. (1992), he failed to find a single factor that seemed to predict the possibility of suicide for the adult alcoholic. However, Roy (1993) did suggest that the following factors were potential indicators for an increased risk:

1. Gender: Men tend to commit suicide more often than women, and the ratio of male:female suicides for alcoholics may be about 4:1.
2. Marital status: Single/divorced/widowed adults are significantly more likely to attempt suicide than are married adults.
3. Co-existing depressive disorder: Depression is associated with an increased risk of suicide.
4. Adverse life events: The individual who has suffered an adverse life event such as the loss of a loved one, or a major illness, or legal problems is at increased risk for suicide.
5. Recent discharge from treatment for alcoholism: The first 4 years following treatment were found to be associated with a significantly higher risk for suicide, although the reason for this was not clear.
6. A history of previous suicide attempts: Approximately one-third of alcoholic suicide victims had attempted suicide at some point in the past.


7. Biological factors: Factors such as decreased levels of serotonin in the brain are thought to be associated with increased risk for violent behavior, including suicide.

One possible mechanism through which chronic drinking might cause or contribute to depressive disorders is that chronic alcohol use causes an increase in dopamine turnover in the brain and a down-regulation in the number of dopamine receptors within the neurons (Heinz et al., 1998). The chronic use of alcohol has also been found to be associated with reduced serotonin turnover, with a 30% reduction in serotonin transporters being found in chronic drinkers by the authors. Low levels of both dopamine and serotonin have been implicated by researchers as causing depression, so this mechanism might account for how chronic alcohol use contributes to increased levels of depression in heavy drinkers.

Alcohol Withdrawal for the Chronic Alcoholic

Each year in the United States up to 2 million people experience symptoms of the alcohol withdrawal syndrome, of whom only 10% to 20% are hospitalized (Bayard, McIntyre, Hill, & Woodside, 2004). In most cases the symptoms of alcohol withdrawal subside quickly without the need for medical intervention, and they might not even be attributed by the individual to the use of alcohol. But the alcohol withdrawal syndrome (AWS) is potentially life threatening, and even with the best of medical care there is a significant risk of death from the AWS. For reasons that are not known, chronic drinkers vary in terms of their risk for developing AWS (Saitz, 1998). However, there is evidence to suggest that repeated cycles of alcohol dependence and withdrawal might contribute to a pattern in which the AWS becomes progressively worse each time for the individual (Kelly & Saucier, 2004; Littleton, 2001).
In 90% of the cases the symptoms of AWS develop within 4–12 hours after the individual’s last drink, although in some cases the AWS develops simply because a chronic drinker significantly reduced his or her level of drinking (McKay, Koranda, & Axen, 2004; Saitz, 1998). In a small percentage of cases AWS symptoms do not appear until 96 hours after the last drink or reduction in alcohol intake (Lehman et al., 1994; Weiss & Mirin, 1988). In extreme cases, the person will not begin to experience the symptoms of AWS until 10 days after the last drink (Slaby, Lieb, & Tancredi, 1981). The AWS is an acute brain syndrome that might, at first, be mistaken for such conditions as a subdural hematoma, pneumonia, meningitis, or an infection involving the CNS (Saitz, 1998). The severity of AWS depends on (a) the intensity with which the individual used alcohol, (b) the duration of time during which the individual drank, and (c) the individual’s state of health. Symptoms of AWS include agitation, anxiety, tremor, diarrhea, hyperactivity, exaggerated reflexes, insomnia, vivid dreams, nausea, vomiting, loss of appetite, restlessness, sweating, tachycardia, headache, and vertigo (Kelly & Saucier, 2004; Lehman et al., 1994; Saitz, 1998).

One factor that might exacerbate the AWS is concurrent nicotine withdrawal (Littleton, 2001). The withdrawal process from nicotine is discussed in Chapter 19. Note, however, that concurrent withdrawal from nicotine and alcohol may result in a more intense AWS than withdrawal from alcohol alone (Littleton, 2001). For this reason, the author recommends that the patient’s nicotine addiction be controlled through the use of transdermal nicotine patches until after he or she has completed the withdrawal process from alcohol.

In the hospital setting, the Clinical Institute Withdrawal Assessment for Alcohol Scale–Revised (CIWA-Ar) is the most common assessment tool used to determine the severity of the AWS (Kelly & Saucier, 2004; McKay et al., 2004). This noncopyrighted tool measures 10 symptoms of alcohol withdrawal such as anxiety, nausea, and visual hallucinations, and it takes 3–5 minutes to administer. It has a maximum score of 67 points, with each symptom being weighted in terms of severity. A score of 0–4 points indicates minimal withdrawal discomfort, whereas a score of 5–12 points indicates mild alcohol withdrawal.
Patients who earn a score of 13–19 points on the CIWA-Ar are likely to be in moderately severe alcohol withdrawal, whereas a score of 20 or more points is indicative of severe alcohol withdrawal. The CIWA-Ar might be administered repeatedly over time to provide a measure of the patient’s recovery from the acute effects of alcohol intoxication. Patients who earn a score of 0–4 points on the CIWA-Ar may experience few symptoms of alcohol withdrawal, and depending on their alcohol use history they might either remain at this level of withdrawal discomfort or


progress to more severe levels of alcohol withdrawal. In more advanced cases, the above symptoms may become more intense over the first 6 to 24 hours following the individual’s last use of alcohol. The patient may also begin to experience alcoholic hallucinosis. Alcoholic hallucinosis is seen in up to 10% of patients experiencing the AWS and usually begins 1–2 days after the individual’s last drink. In rare cases, alcoholic hallucinosis may develop after drinkers cut back on their alcohol intake (Olmedo & Hoffman, 2000). The hallucinations may be visual, tactile, or auditory, and they occur when the patient is conscious (Kelly & Saucier, 2004). The exact mechanism that causes alcoholic hallucinosis is not understood at this time, but it is known that in 10% to 20% of the cases, the individual enters a chronic psychotic stage (Soyka, 2000). Alcoholic hallucinosis can be quite frightening and may prompt the person experiencing it to attempt suicide or become violent in an attempt to escape from the hallucinations (Soyka, 2000). In extreme cases of alcohol withdrawal, these symptoms will continue to become more intense over the next 24 to 48 hours, and by the 3rd day following the last drink the patient will start to experience fever, incontinence, and/or tremors in addition to the above-noted symptoms. Approximately 10% to 16% of heavy drinkers will experience a seizure as part of the withdrawal syndrome (Berger, 2000; D’Onofrio, Rathlev, Ulrich, Fish, & Freedland, 1999; McRae, Brady, & Sonne, 2001). In 90% of such cases, the first seizure takes place within 48 hours after the last drink, although in 2% to 3% of the cases the seizure might occur as late as 5 to 20 days after the last drink (Renner, 2004; Trevisan, Boutros, Petrakis, & Krystal, 1998). Approximately 60% of adults who experience alcohol withdrawal seizures will have multiple seizures (D’Onofrio et al., 1999).
Alcohol withdrawal seizures are seen in individuals who both do and do not experience alcoholic hallucinosis. The most severe form of withdrawal, the delirium tremens (DTs), develops in 1% (McRae et al., 2001) to 10% (Weiss & Mirin, 1988) of chronic drinkers. Once the DTs develop, the condition is extremely difficult to control (Palmstierna, 2001). Some of the medical and behavioral symptoms of the DTs include delirium, hallucinations, delusional beliefs that one is being followed, fever, and tachycardia (Lieveld & Aruna,


Chapter Eight

1991). During the period the individual is going through the DTs, she or he is vulnerable to the development of rhabdomyolysis12 as a result of alcohol-induced muscle damage (Richards, 2000; Sauret, Marinides, & Wang, 2002). Drawing upon the experiences of 334 patients in Stockholm, Palmstierna (2001) identified five factors that seemed to identify patients at risk for the development of the DTs: (a) concurrent infections such as pneumonia, (b) tachycardia, (c) signs of autonomic nervous system overactivity in spite of an alcohol concentration at or above 1 gram per liter of body fluid, (d) a previous epileptic seizure, and (e) a history of a previous delirious episode. The author suggested that such patients receive aggressive treatment with benzodiazepines to minimize the risk that the full DTs will develop. In some cases of the DTs, the individual will experience a disruption of normal fluid levels in the brain (Trabert, Caspari, Bernhard, & Biro, 1992). This results when the mechanism in the drinker’s body that regulates normal fluid levels is disrupted by the alcohol withdrawal process. The individual might become dehydrated or may retain too much fluid in the body. During alcohol withdrawal, some individuals become hypersensitive to the antidiuretic hormone (ADH). This hormone is normally secreted by the body to slow the rate of fluid loss through the kidneys when the person is somewhat dehydrated. The resulting excess fluid may contribute to the damage that the alcohol has caused to the brain, possibly by bringing about a state of cerebral edema (Trabert et al., 1992). Researchers have found that only patients going through the DTs have the combination of higher levels of ADH and low body fluid levels. This finding suggests that a body fluid dysregulation process might somehow be involved in the development of the DTs (Trabert et al., 1992). In the past, 5% to 25% of people who developed the DTs died from exhaustion (McKay et al., 2004; Schuckit, 2000).
However, improved medical care has decreased the mortality from DTs to about 1% (Enoch & Goldman, 2002) to 5% (Kelly & Saucier, 2004; Weaver, Jarvis, & Schnoll, 1999; Yost, 1996). The main causes of death for people going through the DTs include sepsis, cardiac and/or respiratory arrest, cardiac

arrhythmias, hyperthermia, and cardiac and/or circulatory collapse (Kelly & Saucier, 2004; Lieveld & Aruna, 1991). These individuals are also a high-risk group for suicide as they struggle to come to terms with the emotional pain and terror associated with this condition (Hirschfeld & Davidson, 1988). Although a number of different chemicals have been suggested as being of value in controlling the symptoms of alcohol withdrawal, the benzodiazepines, especially chlordiazepoxide or diazepam, are considered the drugs of choice for treating the AWS (McKay et al., 2004). The use of pharmaceutical agents to control alcohol withdrawal symptoms will be discussed in more detail in Chapter 32.
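The CIWA-Ar severity bands described earlier in this section amount to a simple lookup on the total score. The sketch below is illustrative only and is not part of the original text: the function name is invented, and a score-to-band mapping is of course no substitute for clinical judgment.

```python
def ciwa_ar_severity(score: int) -> str:
    """Map a CIWA-Ar total score (0-67) to the severity bands
    described in the chapter. Illustrative sketch only."""
    if not 0 <= score <= 67:
        raise ValueError("CIWA-Ar totals range from 0 to 67 points")
    if score <= 4:
        return "minimal"            # 0-4: minimal withdrawal discomfort
    if score <= 12:
        return "mild"               # 5-12: mild alcohol withdrawal
    if score <= 19:
        return "moderately severe"  # 13-19: moderately severe withdrawal
    return "severe"                 # 20+: severe alcohol withdrawal
```

Because the scale takes only a few minutes to administer, repeated administrations yield a series of such band labels over time, mirroring the chapter’s point that the CIWA-Ar may be re-administered to follow the patient’s recovery.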




Other Complications From Chronic Alcohol Use

Either directly or indirectly, alcohol contributes to more than half of the 500,000 head injuries that occur each year in the United States (Ashe & Mason, 2001). For example, it is not uncommon for the intoxicated individual to fall and strike his or her head on coffee tables, magazine stands, or whatever happens to be in the way. Unfortunately, the chronic use of alcohol contributes to the development of three different bone disorders (Griffiths, Parantainen, & Olson, 1994): (a) osteoporosis (loss of bone mass), (b) osteomalacia (a condition in which new bone tissue fails to absorb minerals appropriately), and (c) secondary hyperparathyroidism.13 Even limited regular alcohol use can double the speed at which the body excretes calcium (Jersild, 2001). These bone disorders in turn contribute to the higher than expected level of injury and death seen when alcoholics fall or are involved in automobile accidents. Alcohol is also a factor in traumatic brain injury. Chronic alcohol use is thought to be the cause of 40% to 50% of deaths in motor vehicle accidents, up to 67% of home injuries, and 3% to 5% of cancer-related deaths (Miller, 1999). Chronic alcohol users are 10 times more likely to develop cancer than nondrinkers (Schuckit, 1998), and 4% of all cases of cancer in men and 1% of all cases of cancer in women are thought to be alcohol related (Ordorica & Nace, 1998). Approximately 5% of the total deaths that occur each year in

13 See Glossary.



the United States are thought to be alcohol related (Miller, 1999). In addition, women who drink while pregnant run the risk of causing alcohol-induced birth defects, a condition known as the fetal alcohol syndrome (to be discussed later in Chapter 20). Chronic alcoholism has been associated with a premature aging syndrome, in which the individual appears much older than he or she actually is (Brandt & Butters, 1986). In many cases, the overall physical and intellectual condition of the individual corresponds to that of a person between 15 and 20 years older than the person’s chronological age. One such person, a man in his 50s, was told by his physician that he was in good health . . . for a man about to turn 70! Admittedly, not every alcohol-dependent person will suffer from every consequence reviewed in this chapter. Some chronic alcohol users will never have stomach problems, for example, but they may suffer from advanced heart disease as a result of their drinking. However, Schuckit (1995a) noted that in one research study, 93% of alcohol-dependent individuals admitted to treatment had at least one important medical problem in addition to their alcohol use problem. Research has demonstrated that in most cases, the first alcohol-related problems are experienced when drinkers are in their late 20s or early 30s. The team of Schuckit, Smith, Anthenelli, and Irwin (1993) outlined a progressive course for alcoholism, based on their

study of 636 male alcoholics. The authors admitted that their subjects experienced wide differences in the specific problems caused by their drinking, but as a group the alcoholics began to experience severe alcohol-related problems in their late 20s. By his mid-30s, the typical subject was likely to have recognized that he had a drinking problem and to have begun to experience more severe problems as a result of his continued drinking. However, as the authors pointed out, there is wide variation in this pattern, and some subgroups of alcoholics might fail to follow it.

Summary

This chapter explored the many facets of alcoholism. The scope of alcohol abuse/addiction in this country was reviewed, as was the fact that alcoholism accounts for approximately 85% of the drug-addiction problem in the United States. In this chapter, the different forms of tolerance and the ways that the chronic use of alcohol can affect the body were discussed. The impact of chronic alcohol use on the central nervous system, the cardiopulmonary system, the digestive system, and the skeletal bone structure was reviewed. In addition, the relationship between chronic alcohol use and physical injuries and how chronic alcohol use can lead to premature death and premature aging were examined. Finally, the process of alcohol withdrawal for the alcohol-dependent person was discussed.


Abuse and Addiction to the Barbiturates and Barbiturate-like Drugs

Early Pharmacological Therapy of Anxiety Disorders and Insomnia

Introduction

The anxiety disorders are, collectively, the most common form of mental illness found in the United States (Blair & Ramones, 1996). At any point in time, between 7% and 23% of the general population is thought to be suffering from anxiety in one form or another (Baughan, 1995), and over the course of their lives, approximately one-third of all adults will experience at least transient periods of anxiety intense enough to interfere with their daily lives (Spiegel, 1996). Further, at least 35% of the adults in the United States will experience at least transitory insomnia (Brower, Aldrich, Robinson, Zucker, & Greden, 2001; Lacks & Morin, 1992). For thousands of years, alcohol was the only agent that could reduce people’s anxiety level or help them fall asleep. However, as was discussed in the last chapter, the effectiveness of alcohol as an antianxiety1 agent2 is quite limited. Thus, for many hundreds of years, there has been a very real demand for effective antianxiety or hypnotic3 medications. In this chapter, we will review the various medications that were used to control anxiety or promote sleep prior to the introduction of the benzodiazepines in the early 1960s. In the next chapter, we will focus on the benzodiazepine family of drugs and on medications that have been introduced since the benzodiazepines first appeared.

In the year 1870,4 chloral hydrate was introduced as a hypnotic. It was found that chloral hydrate was rapidly absorbed from the digestive tract and that an oral dose of 1–2 grams would cause the typical person to fall asleep in less than an hour. The effects of chloral hydrate were found to usually last 8 to 11 hours, making it appear to be ideal for use as a hypnotic. However, physicians quickly discovered that chloral hydrate had several major drawbacks, not the least of which was that it was quite irritating to the stomach lining and that chronic use could result in significant damage to this lining. In addition, it was soon discovered that chloral hydrate is quite addictive and that at high doses it could exacerbate preexisting cardiac problems in patients with heart disease (Pagliaro & Pagliaro, 1998). Further, as physicians became familiar with its pharmacological properties, they discovered that chloral hydrate had a narrow therapeutic window of perhaps 1:2 or 1:3 (Brown & Stoudemire, 1998), making it quite toxic to the user. Finally, after it had been in use for a while, physicians discovered that withdrawal from chloral hydrate after extended periods of use could result in life-threatening withdrawal seizures. Technically, chloral hydrate is a prodrug.5 After ingestion, it is rapidly biotransformed into trichloroethanol, which is the metabolite of chloral hydrate that causes the drug to be effective as a hypnotic. Surprisingly, in

1 Occasionally, mental health professionals will use the term anxiolytic rather than antianxiety. For the purpose of this section, however, the term antianxiety will be utilized.
2 Such medications are often called sedatives.
3 See Glossary.


4 Pagliaro and Pagliaro (1998) said that this happened in 1869, not 1870.
5 See Glossary.




spite of its known dangers, chloral hydrate continues to have a limited role in modern medicine. Its relatively short biological half-life makes it of value in treating some elderly patients who suffer from insomnia. Thus, even with all of the newer medications available to physicians, there are still patients who will receive chloral hydrate to help them sleep. Paraldehyde was isolated in 1829 and first used as a hypnotic in 1882. As a hypnotic, paraldehyde is quite effective. It produces little respiratory or cardiac depression, making it a relatively safe drug to use with patients who have some forms of pulmonary or cardiac disease. However, it tends to have a very noxious taste, and users develop a strong odor on their breath after use. Paraldehyde is quite irritating to the mucous membranes of the mouth and throat and must be diluted in a liquid before use. The half-life of paraldehyde ranges from 3.4 to 9.8 hours, and about 70% to 80% of a single dose is biotransformed by the liver prior to excretion. Between 11% and 28% of a single dose leaves the body unchanged, usually by being exhaled, causing the characteristic odor on the user’s breath. Paraldehyde has an abuse/addiction potential similar to that of alcohol, and intoxication on paraldehyde resembles alcohol-induced intoxication. After the barbiturates were introduced, paraldehyde gradually fell into disfavor, and by the start of the 21st century it had virtually disappeared (Doble, Martin, & Nutt, 2004). The bromide salts were first used for the treatment of insomnia in the mid-1800s. They were available without a prescription and were used well into the 20th century. Bromides are indeed capable of causing the user to fall asleep, but it was soon discovered that they tend to accumulate in the chronic user’s body, causing a drug-induced depression after as little as a few days’ continuous use.
The bromide salts have been totally replaced by newer drugs, such as the barbiturates and the benzodiazepines. Despite superficial differences in their chemical structure, the compounds discussed above are all central nervous system (CNS) depressants, and they share many common characteristics, such as the ability to potentiate the effects of other CNS depressants. Another characteristic these CNS depressants share is their significant potential for abuse. Still, in spite of these shortcomings, these agents were the treatment of choice for anxiety and insomnia until the barbiturates were introduced. The relative potencies of the barbiturate-like drugs are reviewed in Table 9.1.

TABLE 9.1 Dosage Equivalency for Barbiturate-like Drugs

Generic name of drug of abuse    Dose equivalent to 30 mg of phenobarbital
Chloral hydrate                  500 mg
                                 350 mg
                                 400 mg
                                 300 mg
                                 250 mg
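As an arithmetic illustration of how the equivalencies in Table 9.1 work, a dose of a barbiturate-like drug can be converted into its approximate phenobarbital equivalent by simple proportion. The sketch below is not from the text (the helper name is invented) and is meant only to show the proportion, not to serve as dosing guidance.

```python
PHENOBARBITAL_REFERENCE_MG = 30  # each Table 9.1 entry equals 30 mg of phenobarbital

def phenobarbital_equivalent_mg(dose_mg: float, table_equiv_mg: float) -> float:
    """Convert a dose of a barbiturate-like drug to its approximate
    phenobarbital equivalent, using a Table 9.1 entry as the ratio.
    Illustrative only; not dosing guidance."""
    return dose_mg * PHENOBARBITAL_REFERENCE_MG / table_equiv_mg

# Example: 1,000 mg of chloral hydrate (table entry: 500 mg)
# works out to roughly 60 mg of phenobarbital.
print(phenobarbital_equivalent_mg(1000, 500))
```

The same proportion underlies the table itself: each listed dose is simply the amount of that drug judged comparable to 30 mg of phenobarbital.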

History and Current Medical Uses of the Barbiturates

Late in the 19th century, chemists discovered the barbiturates. Experimentation quickly revealed that depending upon the dose, the barbiturates were able to act either as a sedative or, at a higher dosage level, as a hypnotic. In addition, it was discovered that the barbiturates were safer and less noxious than the bromides, chloral hydrate, or paraldehyde (Greenberg, 1993). It was in 1903 that the first barbiturate, Veronal, was introduced for human use, and the barbiturates were marketed as over-the-counter medications (Nelson, 2000; Peluso & Peluso, 1988). Since the time of their introduction, some 2,500 different barbiturates have been isolated by chemists. Most of these barbiturates were never marketed and have remained only laboratory curiosities. Perhaps 50 barbiturates were marketed at one point or another in the United States, of which 20 are still in use (Charney, Mihis, & Harris, 2001; Nishino, Mignot, & Dement, 1995). The relative potencies of the most common barbiturates are reviewed in Table 9.2. In the United States, the barbiturates have been classified as Category II


Chapter Nine

TABLE 9.2 Normal Dosage Levels of Commonly Used Barbiturates

Barbiturate    Sedative dose*     Hypnotic dose**
               50–150 mg/day      65–200 mg
               120 mg/day         40–60 mg
               45–120 mg/day      50–100 mg
               96–400 mg/day      Not used as hypnotic
               60–80 mg/day       100 mg
               30–120 mg/day      100–320 mg
               90–200 mg/day      50–200 mg
               30–120 mg/day      120 mg

Source: Table based on information provided in Uhde & Trancer (1995).
*Administered in divided doses.
**Administered as a single dose at bedtime.

controlled substances6 and are available only by a physician’s prescription. After the introduction of the benzodiazepines in the 1960s, the barbiturates previously in use gradually fell into disfavor. At this point, the barbiturates have no role in the routine treatment of anxiety or insomnia (Uhde & Trancer, 1995). In spite of the pharmacological revolution that took place in the latter half of the 20th century, there are still some areas of medicine where certain barbiturates remain the pharmaceutical of choice. Some examples of these specialized uses for a barbiturate include (but are not limited to) certain surgical procedures and the control of epilepsy. As newer drugs have all but replaced the barbiturates in modern medicine, it is surprising to learn that controversy still rages around the appropriate use of many of these chemicals. For example, in the last decade of the 20th century physicians thought that the barbiturates could be used to control the fluid pressure within the brain following trauma. Physicians now question the effectiveness of barbiturates in the control of intracranial hypertension (Lund & Papadakos, 1995). Another area of controversy

6 See Appendix 4.

surrounding the barbiturates is the use of one barbiturate to execute criminals by lethal injection (Truog, Berde, Mitchell, & Brier, 1992). Another equally controversial use of the barbiturates is in the sedation of terminally ill cancer patients who are in extreme pain (Truog et al., 1992). Thus, although the barbiturates have been in use for more than a century, they remain the agents of choice for certain medical conditions, and controversy still surrounds their use. The abuse potential of barbiturates. The barbiturates have a considerable abuse potential. Indeed, between 1950 and 1970, the barbiturates were, as a group, second only to alcohol as drugs of abuse (Reinisch, Sanders, Mortensen, & Rubin, 1995). Remarkably, the first years of the 21st century have witnessed a minor resurgence in the popularity of the barbiturates as drugs of abuse (Doble et al., 2004). Indicative of this resurgence, 8.9% of the graduating seniors of the class of 2002 admitted to having abused barbiturates at least once (Johnston, O’Malley, & Bachman, 2003a). A number of older individuals, usually those over the age of 40, became addicted to the barbiturates when they were younger. For people of this generation, the barbiturates were the most effective treatment for anxiety and insomnia, and many users became, and remain, addicted (Kaplan, Sadock, & Grebb, 1994). Finally, a small number of physicians have turned to the barbiturates as antianxiety and hypnotic agents to avoid the extra paperwork imposed on benzodiazepine prescriptions. Fortunately, the majority of physicians have not followed this practice.

Pharmacology of the Barbiturates

Chemically, the barbiturates are remarkably similar. The only major difference between the various members of the barbiturate family of drugs is the length of time it takes the individual’s body to absorb, biotransform, and then excrete the specific form of barbiturate that has been used. One factor that influences the absorption of barbiturates is the drug’s lipid solubility. The different barbiturates vary in terms of their lipid solubility, with those forms that are easily soluble in lipids being rapidly distributed to all blood-rich tissues such as the brain. Thus, pentobarbital,


which is very lipid soluble, may begin to have an effect in 10 to 15 minutes. In contrast, phenobarbital is poorly lipid soluble and does not begin to have an effect until 60 minutes or longer after the user has ingested the medication. Although the various barbiturates differ in their speed of onset and duration of effect, the basic mechanism by which they work is similar for all of these agents. Barbiturates have been found to inhibit the ability of the GABAA chloride channel to close, thus slowing the rate at which the cell can fire (Doble et al., 2004; Olmedo & Hoffman, 2000). However, the mechanism by which barbiturates accomplish this effect is different from that utilized by the benzodiazepines (Doble et al., 2004). Barbiturates can be classified on the basis of their duration of action.7 First, there are the ultrashort-acting barbiturates. When injected, the effects of the ultrashort-duration barbiturates begin in a matter of seconds and last for less than one-half hour. Examples of such ultrashort-duration barbiturates include Pentothal and Brevital. The ultrashort-duration barbiturates are extremely lipid soluble, pass through the blood-brain barrier quickly, and when injected into a vein have an effect on the brain in just a few seconds. These medications are often utilized in surgical procedures when a rapid onset of effects and a short duration of action are desirable. Then there are the short-acting barbiturates. When injected, the short-acting barbiturates have an effect in a matter of minutes that lasts for 3–4 hours (Zevin & Benowitz, 1998). Nembutal is an example (Sadock & Sadock, 2003). In terms of lipid solubility, the short-acting barbiturates fall between the ultrashort-acting barbiturates and the next group, the intermediate-acting barbiturates.
The effects of the intermediate-acting barbiturates begin within an hour when the drug is ingested orally and last 6–8 hours (Zevin & Benowitz, 1998). Included in this group are Amytal (amobarbital) and Butisol (butabarbital) (Schuckit, 2000). Finally, there are the long-acting

7 Other researchers might use classification systems different from the one used in this text. For example, some researchers use the chemical structure of the different forms of barbiturate as the defining criterion for classification. This text follows the classification system suggested by Zevin and Benowitz (1998).


barbiturates. These are absorbed slowly and their effects last for 6–12 hours (Zevin & Benowitz, 1998). Phenobarbital is perhaps the most commonly encountered drug in this class. One point of confusion that must be addressed is that the short-acting barbiturates do not have extremely short half-lives. As was discussed in Chapter 3, the biological half-life of a drug provides only a rough estimate of the time a specific chemical will remain in the body. The shorter-acting barbiturates might have an effect on the user for only a few hours and still have a half-life of 8–12 hours, or even longer. This is because their effects are limited not by the speed at which they are biotransformed by the liver but by the speed with which they are removed from the blood and distributed to the various organs in the body. Significant levels of some shorter-acting barbiturates are stored in different body organs and are still present long after the drug has stopped having its desired effect. The barbiturate molecules stored in the different body organs will slowly be released back into the general circulation, possibly causing a barbiturate hangover (Uhde & Trancer, 1995). Overall, the chemical structures of the various forms of barbiturates are quite similar, and once in the user’s body, these drugs all tend to have similar effects. There are few significant differences in relative potency between various barbiturates. As a general rule, the shorter-term barbiturates are almost fully biotransformed by the liver before being excreted from the body (Nishino et al., 1995). In contrast, a significant proportion of the longer-term barbiturates are eliminated from the body essentially unchanged. Thus, for phenobarbital, which may have a half-life of 2–6 days, between 25% and 50% of the drug will be excreted by the kidneys virtually unchanged. 
Another barbiturate, methohexital, has a half-life of only 3–6 hours, and virtually all of it is biotransformed by the liver before it is excreted from the body (American Society of Health-System Pharmacists, 2002). An additional difference among the various barbiturates is the degree to which the drug molecules become protein-bound. As a general rule, the longer the drug’s half-life, the stronger the degree of protein binding for that form of barbiturate. When used on an outpatient basis, the barbiturates are typically administered orally. On occasion,


especially when used in a medical setting, an ultrashort-acting barbiturate might be administered intravenously, as when it is used as an anesthetic in surgery or for seizure control. On rare occasions, the barbiturates are administered rectally through suppositories. However, the typical patient will take barbiturates in oral form. When taken orally, the barbiturate molecule is rapidly and completely absorbed from the small intestine (Julien, 1992; Levin, 2002; Winchester, 1990). Once it reaches the blood, the barbiturate will be distributed throughout the body, but the concentrations will be highest in the liver and the brain (American Society of Health-System Pharmacists, 2002). The barbiturates are all lipid soluble, but they vary in their ability to form bonds with blood lipids. As a general rule, the more lipid soluble a barbiturate is, the more quickly it will pass through the blood-brain barrier (Levin, 2002). The behavioral effects of the barbiturates are very similar to those of alcohol (Levin, 2002). Once the barbiturate reaches the bloodstream, it is distributed throughout the body just like alcohol, depressing not only brain activity but also, to a lesser degree, the activity of the muscle tissues, the heart, and respiration (Matuschka, 1985). Although high concentrations of barbiturates are quickly achieved in the brain, the drug is rapidly redistributed to other body organs (Levin, 2002). The speed at which this redistribution process is carried out varies from one barbiturate to another; thus, different barbiturates have different therapeutic half-lives. Following the redistribution process, the barbiturate is metabolized by the liver and eventually excreted by the kidneys. It is within the central nervous system (CNS) that the barbiturates have their strongest effect (Rall, 1990). In the brain, the barbiturates are thought to simulate the effects of the neurotransmitter gamma aminobutyric acid (GABA) (Carvey, 1998; Hobbs, Rall, & Verdoorn, 1995).
At the same time, the barbiturates are thought to block the effects of the neurotransmitter glutamate. GABA is thought to be the most important “inhibitory” neurotransmitter in the brain, whereas glutamate functions as a stimulating neurotransmitter (Bohn, 1993; Nutt, 1996; Tabakoff & Hoffman, 1992). Within the


neuron, barbiturates reduce the frequency at which one of the GABA receptor sites, known as the GABAA site, is activated. At the same time they increase the time that the GABAA site remains activated, even in the absence of GABA itself (Carvey, 1998). This action reduces the electrochemical potential of the cell, reducing the frequency with which that neuron can fire (Cooper, Bloom, & Roth, 1996). At the regional level within the brain, the barbiturates have their greatest impact on the cortex and the reticular activating system (RAS) (which is responsible for awareness) as well as the medulla oblongata (which controls respiration) (American Society of Health-System Pharmacists, 2002). At low dosage levels, the barbiturates will reduce the function of the nerve cells in these regions of the brain, bringing on a state of relaxation and, at slightly higher doses, a drug-induced sleep. At extremely high dosage levels, the barbiturates will interfere with the normal function of the neurons of the central nervous system to such a degree that death is possible. The therapeutic dose of any barbiturate is very close to the lethal dose for that compound, and history has shown that barbiturate-induced death is not uncommon. Some barbiturates have a therapeutic-dose to lethal-dose ratio of only 1:3, reflecting the narrow therapeutic window of these agents. In the past, when barbiturate use was more common, a pattern of 118 deaths per one million prescriptions was noted for them (Drummer & Odell, 2001). This low safety margin, combined with the significantly higher safety margin offered by the benzodiazepines, is one reason the barbiturates have for the most part been replaced by newer medications in the treatment of anxiety and for inducing sleep.

Subjective Effects of the Barbiturates at Normal Dosage Levels

At low doses, the barbiturates reduce feelings of anxiety or possibly bring on a sense of euphoria. Some users also report a feeling of sedation or fatigue, possibly to the point of drowsiness, and a decrease in motor activity. This results in an increase in the individual’s reaction time, and he or she might have trouble


coordinating muscle movements, almost as if intoxicated by alcohol (Peluso & Peluso, 1988; “Sleeping Pills and Antianxiety Drugs,” 1998). This is to be expected, as both alcohol and the barbiturates affect the cortex of the brain through a similar pharmacological mechanism. The disinhibition effects of the barbiturates, like those of alcohol, may cause a state of “paradoxical” excitement or possibly even a paradoxical rage reaction. Patients who have received barbiturates for medical reasons have reported unpleasant side effects such as nausea, dizziness, and a feeling of mental slowness. Anxious patients report that their anxiety is no longer as intense, whereas patients who are unable to sleep report that they are able to slip into a state of drug-induced sleep quickly.

Complications of the Barbiturates at Normal Dosage Levels

For almost 60 years, the barbiturates were the treatment of choice for insomnia. Because they were so extensively prescribed to help people sleep, it is surprising to learn that tolerance rapidly develops to their hypnotic effects. Indeed, research suggests that they are not effective as hypnotics after just a few days of regular use (Drummer & Odell, 2001; Rall, 1990). In spite of their traditional use as a treatment for insomnia, barbiturate-induced sleep is not the same as normal sleep. The barbiturates suppress a portion of the sleep cycle known as the rapid eye movement (REM) state of sleep (Peluso & Peluso, 1988). Scientists who study sleep believe that the individual needs to experience REM sleep for emotional well-being. As an indication of its importance, about one-quarter of a young adult’s total sleep time is normally spent in REM sleep (Kaplan et al., 1994). Barbiturate-assisted sleep results in a reduction in the total amount of time the individual spends in REM sleep (Rall, 1990). Thus, by interfering with the normal sleep pattern, barbiturate-induced sleep may affect the emotional and physical health of the individual. After a period of continuous use, the user will experience “REM rebound” when he or she discontinues a barbiturate. In this condition, the person will dream


more intensely and more vividly for a period of time as the body tries to catch up on lost REM sleep time. These dreams have been described by individuals as nightmares that were strong enough to tempt the individual to return to the use of drugs to get a “good night’s sleep again.” This rebound effect might last for 1 to 3 weeks, although in rare cases it has been known to last for up to 2 months (Tyrer, 1993). Barbiturates can cause a drug-induced hangover the day after the person used the drug (Shannon, Wilson, & Stang, 1995). Subjectively, the individual who is going through a barbiturate hangover simply feels that he or she is “unable to get going” the next day. This is because barbiturates often require an extended period of time for the body to completely biotransform and excrete the drug. As was discussed in Chapter 3, in general, it takes five half-life periods to completely eliminate a single dose of a chemical from the blood. Because many of the barbiturates have extended biological half-life periods, some small amounts of a barbiturate might remain in the person’s bloodstream for hours, or even days, after just a single dose. In some cases, the effects of the barbiturates on judgment, motor skills, and behavior might last for several days after a single dose of the drug (Kaminski, 1992). If the person continually adds to this reservoir of unmetabolized drug by ingesting additional doses of the barbiturate, there is a greater chance of experiencing a drug hangover. However, whether from one single dose or repeated doses, the drug hangover is caused by the same mechanism: traces of unmetabolized barbiturates remaining in the individual’s bloodstream for extended periods of time after he or she stops taking the medication. Subjectively, the individual might feel “not quite awake,” or “drugged,” the next day. The elderly or those with impaired liver function are especially likely to have difficulty with barbiturates. 
This is because the liver’s ability to metabolize many drugs, such as the barbiturates, declines with age. Consequently, Sheridan, Patterson, and Gustafson (1982) advised that older individuals who receive barbiturates be started at one-half the usual adult dosage, and that the dosage level gradually be increased until the patient reaches the point that the medication is having the desired effect.
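The five-half-life elimination rule mentioned above lends itself to a quick calculation. The sketch below is illustrative only: the 48-hour half-life is a hypothetical value chosen to represent a long-acting barbiturate, not a figure taken from this chapter.

```python
# Illustrative calculation of the "five half-lives" elimination rule.
# The 48-hour half-life is a hypothetical example value, not a figure
# reported in the text.

def fraction_remaining(half_life_hours: float, elapsed_hours: float) -> float:
    """Fraction of a single dose still unmetabolized after elapsed_hours."""
    return 0.5 ** (elapsed_hours / half_life_hours)

HALF_LIFE = 48.0  # hours (hypothetical long-acting barbiturate)

for n in range(6):
    elapsed = n * HALF_LIFE
    print(f"{n} half-lives ({elapsed:5.0f} h): "
          f"{fraction_remaining(HALF_LIFE, elapsed):.1%} of dose remains")
```

After five half-lives (here, 240 hours, or 10 days) only about 3% of the original dose remains, which illustrates why traces of a long-acting barbiturate can linger in the bloodstream and produce hangover effects long after a single dose.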


One side effect of long-term phenobarbital use is a possible loss in intelligence. Researchers have documented a drop of approximately 8 IQ points in patients who have been receiving phenobarbital for control of seizures for extended periods of time, although it is not clear whether this reflects a research artifact, a drug effect, or the cumulative impact of the seizure disorder (Breggin, 1998). It is also not clear whether this observed loss of 8 IQ points on intelligence testing might be reversed, or whether a similar reduction in measured IQ develops as a result of the chronic use of other barbiturates. However, this observation does emphasize that the barbiturates are potent CNS agents that will affect the normal function of the brain. Another consequence of barbiturate use, even when the drug is used in a medical setting, is that this class of pharmaceuticals can cause sexual performance problems, such as decreased desire in either partner and, in the male, both erectile problems and delayed ejaculation (Finger, Lund, & Slagel, 1997). Also, hypersensitivity reactions have been reported with the barbiturates. Such hypersensitivity reactions are most common in (but not limited to) individuals with asthma. Other complications occasionally seen at normal dosage levels include nausea, vomiting, diarrhea, and in some cases, constipation. Some patients have developed skin rashes while receiving barbiturates, although the reason for this is not clear. Finally, some patients who take barbiturates develop an extreme sensitivity to sunlight known as photosensitivity. Thus, patients who receive barbiturates must take special precautions to avoid sunburn or even limited exposure to the sun’s rays. Because of these problems, and because medications are now available that do not share the dangers associated with barbiturate use, the barbiturates are not considered to have any role in the treatment of anxiety or insomnia (Tyrer, 1993).
Children who suffer from attention deficit-hyperactivity disorder (ADHD) (or what was once called “hyperactivity”) who also receive phenobarbital are likely to experience a resurgence of their ADHD symptoms. This effect would seem to reflect the ability of the barbiturates to suppress the action of the reticular activating system (RAS) in the brain. Currently, it is thought that the RAS of children

Chapter Nine

with ADHD is underactive, so any medication that further reduces the effectiveness of this neurological system will contribute to the development of ADHD symptoms. Drug Interactions Between the Barbiturates and Other Medications Research has found that the barbiturates are capable of interacting with numerous other chemicals, increasing or decreasing the amount of these drugs in the blood through various mechanisms. Because of the potentiation effect, patients should not use barbiturates if they are using other CNS depressants such as alcohol, narcotic analgesics, phenothiazines, or benzodiazepines, except under a physician’s supervision (Barnhill, Ciraulo, Ciraulo, & Greene, 1995). Another class of CNS depressants that might unexpectedly cause a potentiation effect with barbiturates is the antihistamines (Rall, 1990). Because many antihistamines are available without a prescription, there is a very real danger of an unintentional interaction between these two types of medications. Patients who are taking barbiturates should not use antidepressants known as monoamine oxidase inhibitors (MAOIs, or MAO inhibitors) as the MAOI may inhibit the biotransformation of the barbiturates (Ciraulo, Creelman, Shader, & O’Sullivan, 1995). Patients using barbiturates should not take the antibiotic doxycycline except under a physician’s supervision. The barbiturates will reduce the effectiveness of this antibiotic, which may have serious consequences for the patient (Meyers, 1992). If the patient is using barbiturates and tricyclic antidepressants concurrently, the barbiturate will cause the blood plasma levels of the tricyclic antidepressant to drop by as much as 60% (Barnhill et al., 1995). The barbiturates in such cases increase the speed with which the antidepressants are metabolized by activation of the liver’s microsomal enzymes. 
This is the same process through which the barbiturates will speed up the metabolism of many oral contraceptives, corticosteroids, and the antibiotic Flagyl (metronidazole) (Kaminski, 1992). Thus, when used concurrently, barbiturates will reduce the effectiveness of these medications, according to the author. Women who are taking both oral

Abuse and Addiction to the Barbiturates and Barbiturate-like Drugs

contraceptives and barbiturates should be aware of the potential for barbiturates to reduce the effectiveness of oral contraceptives (Graedon & Graedon, 1995, 1996). Individuals who are taking the anticoagulant medication warfarin should not use a barbiturate except under a physician’s supervision. Barbiturate use can interfere with the normal biotransformation of warfarin, resulting in abnormally low blood levels of this anticoagulant medication (Graedon & Graedon, 1995). Further, if the patient should stop taking barbiturates while on warfarin, it is possible for the individual’s warfarin levels to rebound to dangerous levels. Thus, these two medications should not be mixed except under a physician’s supervision. When the barbiturates are biotransformed by the liver, they activate a region of the liver that also is involved in the biotransformation of the asthma drug theophylline (sold under a variety of brand names). Thus, patients who use a barbiturate while taking theophylline might experience abnormally low blood levels of the latter drug, a condition that might result in less than optimal control of the asthma. These two medications should not be used by the same patient at the same time except under a physician’s supervision (Graedon & Graedon, 1995). Finally, in one research study, 5 of 7 patients on pentobarbital who smoked marijuana began to hallucinate (Barnhill et al., 1995). This would suggest that individuals who use barbiturates should not risk possible interactions between these medications and marijuana. As is obvious from this list of potential interactions between barbiturates and other pharmaceuticals, the barbiturates are a family of powerful drugs. As in every case of concurrent chemical use, the individual should always consult a physician or pharmacist before taking different medications simultaneously.

Effects of the Barbiturates at Above-Normal Levels When used above the normal dosage levels, barbiturates may cause a state of intoxication that is similar to that seen with alcohol intoxication. Patients who are intoxicated by barbiturates will demonstrate such behaviors as slurred speech and unsteady gait without the characteristic smell of alcohol (Jenike, 1991).


Chronic abusers are at risk for the development of bronchitis and/or pneumonia, as these medications interfere with the normal cough reflex. Individuals under the influence of a barbiturate will not test positive for alcohol on blood or urine toxicology tests (unless they also have alcohol in their systems). Specific blood or urine toxicology screens must be carried out to detect barbiturate intoxication if the patient has used these drugs. Unfortunately, because barbiturates can cause a state of intoxication similar to that induced by alcohol, some barbiturate users will ingest more than the normal dose of the drug. The small “therapeutic window” of the barbiturates gives these drugs a significant overdose potential. The barbiturates cause a dose-dependent reduction in respiration as the increasing drug blood levels interfere with the normal function of the medulla oblongata (the part of the brain that maintains respiration and body temperature). Thus the barbiturates can cause both respiratory depression and hypothermia either when abused at higher than normal doses or when intermixed with other CNS depressants (Pagliaro & Pagliaro, 1998). Other complications of larger-than-normal doses include a progressive loss of reflex activity, and, if the dose is large enough, coma and ultimately, death (Jenike, 1991) (see Figure 9.1). In past decades, prior to the introduction of the benzodiazepines, the barbiturates accounted for about three-fourths of all drug-related deaths in the United States (Peluso & Peluso, 1988). Even now, intentional or unintentional barbiturate overdoses are not unheard of. Mendelson and Rich (1993) found in their study of successful suicides in the San Diego, California, area that approximately 10% of those who committed suicide with a drug overdose used barbiturates either exclusively or as one of the chemicals that were ingested. Thus, the barbiturates present a danger of either intentional or unintentional overdose. 
Fortunately, the barbiturates do not directly cause any damage to the central nervous system. If the overdose victim reaches medical support before he or she develops shock or hypoxia, he or she may recover completely from a barbiturate overdose (Sagar, 1991). It is for this reason that any suspected barbiturate overdose should be treated by a physician immediately.



FIGURE 9.1 Symptoms observed at different levels of intoxication:
• Sedation: slurred speech, disorientation, ataxia, nystagmus
• Coma (but person may be aroused by pain): hypoventilation, depression of deep tendon reflexes
• Deep coma: gag reflex absent, apnea episodes (may progress to respiratory arrest), hypotension, shock, hypothermia
• Death

Neuroadaptation, Tolerance to, and Dependence on the Barbiturates The primary use for barbiturates today is quite limited as newer, safer, and more effective drugs have been introduced that have replaced them for the most part. Even so, barbiturates continue to have a limited range of medical applications, including the control of epilepsy and the treatment of some forms of severe head injury (Julien, 1992). Even when barbiturates are used in a medical setting, one unfortunate characteristic of them is that with regular use, neuroadaptation to many of their effects will develop quite rapidly. The process of barbiturate-induced neuroadaptation is not uniform, however. For example, when barbiturates are used for the control of seizures, tolerance may not be a significant problem. A patient who is taking phenobarbital for the control of seizures will eventually become somewhat tolerant to the sedative effect of the medication, but she or he will not develop a significant tolerance to the anticonvulsant effect of the phenobarbital. But if the patient were to take a barbiturate for its sedating or hypnotic effects, over time and with chronic use she or he would become less responsive to this drug-induced effect.

Patients have been known to try to overcome the process of neuroadaptation to the barbiturates by increasing their dosage of the drug without consulting their physician. Unfortunately, this attempt at self-medication has resulted in a large number of unintentional barbiturate overdoses, some of which have been fatal. This is because of a marked difference between the barbiturates and the narcotic family of drugs: While the individual taking barbiturates might experience some degree of neuroadaptation and become less responsive to the original dose, there is no concomitant increase in the lethal dose (Jenike, 1991). Many barbiturate abusers report that the chemical can bring about a drug-induced feeling of euphoria. But as the user becomes tolerant of the euphoric effects of barbiturates following a period of chronic use, she or he will experience less and less euphoria from the drug. In such cases, it is not uncommon for the abuser to increase the dose in order to maintain the drug-induced euphoria. Unfortunately, as stated in the last paragraph, the lethal dose of barbiturates remains relatively stable in spite of the user’s growing tolerance or neuroadaptation to the drug. Thus, as the barbiturate abuser increases the daily dosage level to continue to experience the drug-induced euphoria, she or he will come closer and closer to the lethal dose.



In addition to the phenomenon of tolerance to the barbiturate family of drugs, cross tolerance is also possible between barbiturates and similar chemical agents. With cross tolerance, once people have become tolerant of one family of chemicals, they will also become tolerant of the effects of other, similar drugs. Cross tolerance between alcohol and the barbiturates is common, as is some degree of cross tolerance between the barbiturates and the opiates, and barbiturates and the hallucinogen PCP (Kaplan et al., 1994). Historically, the United States went through a wave of barbiturate abuse and addiction in the 1950s. Thus, physicians have long been aware that once the person is addicted, withdrawal from barbiturates is potentially life threatening and should be attempted only under the supervision of a physician (Jenike, 1991). The barbiturates should never be abruptly withdrawn, as to do so might bring about an organic brain syndrome that could include such symptoms as confusion, seizures, possible brain damage, and even death. Approximately 80% of barbiturate addicts who abruptly discontinue the drug will experience withdrawal seizures, according to the author. Unfortunately, there is no set formula to estimate the danger period for barbiturate withdrawal problems. Indeed, the exact period during withdrawal when the barbiturate addict is most at risk for such problems as seizures depends on the specific barbiturate being abused (Jenike, 1991). As a general rule, however, the longer-lasting forms of barbiturates tend to have longer withdrawal periods. When an individual abruptly stops taking a short-acting to intermediate-acting barbiturate, one may normally expect withdrawal seizures to begin on the second or third day. Barbiturate-withdrawal seizures are rare after the 12th day following cessation of the drug. 
When the individual was abusing one of the longer-acting barbiturates, he or she might not have a withdrawal seizure until as late as the 7th day after the last dose of the drug (Tyrer, 1993). The person who is physically dependent on barbiturates will experience a number of symptoms as a result of the withdrawal process. Virtually every barbiturate-dependent patient will experience a feeling of apprehension, which will last for the first 3–14 days of withdrawal (Shader, Greenblatt, & Ciraulo,

1994). Other symptoms that the patient will experience during withdrawal include muscle weakness, tremors, anorexia, muscle twitches, and a possible state of delirium, according to the authors. All of these symptoms will pass after 3–14 days, depending on the individual. Physicians are able to utilize many other medications to minimize these withdrawal symptoms; however, the patient should be warned that there is no such thing as a symptom-free withdrawal.

Barbiturate-like Drugs Because of the many adverse side effects of the barbiturates, pharmaceutical companies have long searched for substitutes that might be effective yet safe to use. During the 1950s, a number of new drugs were introduced to treat anxiety and insomnia in place of the barbiturates. These drugs included Miltown (meprobamate), Quaalude and Sopor (both brand names of methaqualone), Doriden (glutethimide), Placidyl (ethchlorvynol), and Noludar (methyprylon). Although these drugs were thought to be nonaddicting when they were first introduced, research has shown that barbiturate-like drugs have an abuse potential very similar to that of barbiturates. This should not be surprising, as the chemical structures of some of the barbiturate-like drugs such as glutethimide and methyprylon are very similar to that of the barbiturates themselves (Julien, 1992). Like the barbiturates, glutethimide and methyprylon are metabolized mainly in the liver. Both Placidyl (ethchlorvynol) and Doriden (glutethimide) are considered to be especially dangerous, and neither drug should be used except in rare, special circumstances (Schuckit, 2000). The prolonged use of ethchlorvynol may result in a drug-induced loss of vision known as amblyopia. Fortunately, this drug-induced amblyopia is not permanent but will gradually clear when the drug is discontinued (Michelson, Carroll, McLane, & Robin, 1988). Since its introduction, the drug glutethimide has become “notorious for its high mortality associated with overdose” (Sagar, 1991, p. 304). This overdose potential is a result of the drug’s narrow therapeutic range. The lethal dose of glutethimide


is only 10 grams, a dose only slightly above the normal dosage level (Sagar, 1991). Meprobamate was a popular sedative in the 1950s, when it was sold under at least 32 different brand names, including Miltown or Equanil (Lingeman, 1974). However, it is considered obsolete by current standards (Rosenthal, 1992). Surprisingly, this medication is still quite popular among older patients, and older physicians often continue to prescribe it. An over-the-counter prodrug, Soma (carisoprodol), which is sold in many states, is biotransformed into meprobamate after being ingested, and there have been reports of physical dependence on it just as there were on meprobamate in the 1950s and 1960s (Gitlow, 2001). Fortunately, although meprobamate is quite addictive, it has generally not been used since the early 1970s. But some older patients have been using this medication continuously since that period, and quite a few remain addicted to it as a result of their initial prescriptions for the drug dating from the 1960s and 1970s (Rosenthal, 1992). Also, in spite of its reputation and history, meprobamate still has a minor role in medicine, especially for patients who are unable to take benzodiazepines (Cole & Yonkers, 1995). The peak blood levels of meprobamate following an oral dose are seen in 1–3 hours, and the half-life is 6–17 hours following a single dose. The chronic use of meprobamate may result in the half-life being extended to 24–48 hours (Cole & Yonkers, 1995). The LD50 of meprobamate is estimated to be about 28,000 mg. However, some deaths have been noted following overdoses of 12,000 mg, according to the authors. Physical dependence on this drug is common when patients take 3,200 mg/day or more. Methaqualone was a drug that achieved significant popularity among illicit drug abusers in the late 1960s and early 1970s. It was originally intended as a nonaddicting substitute for the barbiturates in the mid-1960s.
Depending on the dosage level being used, physicians prescribed it both as a sedative and a hypnotic (Lingeman, 1974). Illicit drug users quickly discovered that when they resisted the sedative or hypnotic effects of methaqualone, they would experience a sense of euphoria. Methaqualone is rapidly absorbed from the gastrointestinal tract following an oral dose and the


individual begins to feel its effects in 15–20 minutes. The usual dose for methaqualone, when used as a sedative, was 75 mg, and the hypnotic dose was between 150 and 300 mg. Tolerance to the sedating and hypnotic effects of methaqualone developed rapidly, and many abusers gradually increased their daily dosage levels in an attempt to recapture the initial effect. Some individuals who abused methaqualone were known to use upward of 2,000 mg in a single day (Mirin, Weiss, & Greenfield, 1991), a dosage level that was quite dangerous. Indeed, the lethal dose of methaqualone was estimated to be approximately 8,000 mg for a typical 150-pound user (Lingeman, 1974). Shortly after it was introduced, reports began to appear suggesting that methaqualone was being abused. It was purported to have aphrodisiac properties (which has never been proven) and to provide a mild sense of euphoria for the user (Mirin et al., 1991). People who have used methaqualone report feelings of euphoria, well-being, and behavioral disinhibition. As with the barbiturates, although tolerance to the drug’s effects develops quickly, the lethal dosage of methaqualone remains the same. Death from methaqualone overdose was common, especially when the drug was taken with alcohol. The typical cause of death was heart failure, according to Lingeman (1974). In the United States, methaqualone was withdrawn from the market in the mid-1980s, although it is still manufactured by pharmaceutical companies in other countries. It is often smuggled into this country or manufactured in illicit laboratories and sold on the street. Thus, the substance-abuse counselor must have a working knowledge of methaqualone and its effects.
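The dosage figures cited above can be combined into a rough margin-of-safety calculation. This is a back-of-the-envelope sketch using only the historical estimates reported in the text (Lingeman, 1974; Mirin et al., 1991):

```python
# Rough margin-of-safety arithmetic for methaqualone, using the dosage
# figures reported in the text (all values are historical estimates).
lethal_dose_mg = 8000          # estimated lethal dose for a 150-pound user
hypnotic_dose_mg = 300         # upper end of the hypnotic dosage range
heavy_abuse_mg_per_day = 2000  # daily intake reported for some abusers

therapeutic_margin = lethal_dose_mg / hypnotic_dose_mg
abuse_margin = lethal_dose_mg / heavy_abuse_mg_per_day

print(f"lethal dose is ~{therapeutic_margin:.0f}x the hypnotic dose")
print(f"heavy abusers were within {abuse_margin:.0f}x of the lethal dose")
```

Because tolerance raised the dose abusers took without raising the lethal dose, this already narrow margin kept shrinking as daily intake escalated, which is the mechanism behind the overdose deaths described above.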

Summary For thousands of years, alcohol was the only chemical that was even marginally effective as an antianxiety or hypnotic agent. Although a number of chemicals with hypnotic action were introduced in the mid-1800s, each was of limited value in the fight against anxiety or insomnia. Then in the early 1900s, the barbiturates were introduced. The barbiturates, which have a mechanism of action very similar to that of


alcohol, were found to have an antianxiety and a hypnotic effect. The barbiturates rapidly became popular and were widely used both for the control of anxiety and to help people fall asleep. However, like alcohol, the barbiturates were found also to have a significant potential for addiction. This resulted in a search for nonaddictive medications that could replace them. In the post–World War II era, a number of synthetic drugs with chemical structures


very similar to the barbiturates were introduced, often with the claim that these drugs were nonaddicting. However, they were ultimately found to have an addiction potential similar to that of the barbiturates. Since the introduction of the benzodiazepines (to be discussed in the next chapter), the barbiturates and similar drugs have fallen into disfavor. However, there is evidence to suggest that they might be making a comeback.


Abuse of and Addiction to Benzodiazepines and Similar Agents


Medical Uses of the Benzodiazepines

In 1960, the first of a new class of antianxiety1 drugs, chlordiazepoxide, was introduced in the United States as a treatment for anxiety symptoms. Chlordiazepoxide is a member of a family of chemicals known as the benzodiazepines. Since its introduction, some 3,000 different benzodiazepines have been developed, of which about 50 have been marketed around the world and about 12 are used in the United States (Dupont & Dupont, 1998). Benzodiazepines have been found to be effective in treating a wide range of conditions, including insomnia, muscle strains, anxiety symptoms, and seizures. Because they are far safer than the barbiturates, they have become the most frequently prescribed psychotropic medications in the world (Gitlow, 2001). Each year, approximately 10% to 20% of the adults in the Western world will use a benzodiazepine at least once (Jenkins & Cone, 1998). Legally, the benzodiazepines are a Category II controlled substance.2 The benzodiazepines were initially introduced as nonaddicting substitutes for the barbiturates or barbiturate-like drugs. However, in the time since their introduction, serious questions have been raised about their abuse potential. Indeed, misuse and abuse of benzodiazepines result in hundreds of millions of dollars in unnecessary medical costs each year in the United States (Benzer, 1995). In this chapter, the history of the benzodiazepines, their medical applications, and the problem of abuse of and addiction to them and similar agents in the United States will be examined.

Although the benzodiazepines were originally introduced as antianxiety agents, and they remain valuable aids to the control of specific anxiety disorders, the selective serotonin reuptake inhibitors (SSRIs) have become the “mainstay of drug treatment for anxiety disorders” (Shear, 2003, p. 28). The benzodiazepines, however, remain the treatment of choice for acute anxiety (such as panic attacks or short-term anxiety resulting from a specific stressor) and continue to have a role in the treatment of such conditions as generalized anxiety disorder (GAD). Unfortunately, many physicians continue to view the benzodiazepines as the best medication to control anxiety in spite of the introduction of newer, safer medications. Because the mechanism of action of the benzodiazepines is more selective than that of the barbiturates, they are able to reduce anxiety without causing the same degree of sedation and fatigue seen with the barbiturates. The most frequently prescribed benzodiazepines for the control of anxiety are shown in Table 10.1. In addition to the control of anxiety, some benzodiazepines have been useful in the treatment of other medical problems such as seizure disorders and muscle strains (Ashton, 1994; Shader & Greenblatt, 1993). The benzodiazepine clonazepam (Clonopin)3 is especially effective in the long-term control of seizures and is occasionally used as an antianxiety agent (Shader & Greenblatt, 1993). Researchers estimate that 10% of the adults in the United States suffer from chronic insomnia (Report of the Institute of Medicine Committee on the Efficacy and Safety of Halcion, 1999). Some members

1Some authors use the term anxiolytic in place of the term antianxiety. For the purposes of this text, the term antianxiety will be used. 2See Appendix 4.



3Some authors spell the name of this medication Klonopin.


TABLE 10.1 Selected Pharmacological Characteristics of Some Benzodiazepines (equivalent dose and average half-life, in hours, by generic name)

Sources: Based on Hyman (1988) and Reiman (1997).

of the benzodiazepine family of drugs have been found useful as a short-term treatment for insomnia, including temazepam (Restoril), triazolam (Halcion), flurazepam (Dalmane), and quazepam (Doral) (Gillin, 1991; Hussar, 1990). Two different benzodiazepines— alprazolam (Xanax) and adinazolam (Deracyn)— are reportedly of value in the treatment of depression. Although it does not have antidepressant effects, alprazolam is often used to treat the anxiety that frequently accompanies depression and thus would indirectly help the patient to feel better. It is also used to treat panic disorder, although there are rare case reports of alprazolam-induced panic attacks (Bashir & Swartz, 2002). Unlike the other benzodiazepines, adinazolam (Deracyn) does seem to have a direct antidepressant effect. Researchers believe that adinazolam (Deracyn) works by increasing the sensitivity of certain neurons within the brain to serotonin (Cardoni, 1990). A deficit of or insensitivity to serotonin is thought to be the cause of at least some forms of depression. Thus, by increasing the sensitivity of the neurons of the brain

to serotonin, Deracyn (adinazolam) would seem to have a direct antidepressant effect, which is lacking in most benzodiazepines. Benzodiazepines and suicide attempts. The possibility of suicide through a drug overdose is a very real concern for the physician, especially when the patient is depressed. Because of their high therapeutic index (discussed in Chapter 6), the benzodiazepines have traditionally held the reputation of being “safe” drugs to use with patients who were potentially suicidal. Unlike the barbiturates (see Chapter 9), the benzodiazepines have a therapeutic index estimated to be above 1:200 (Kaplan & Sadock, 1996) and possibly as high as 1:1,000 (Carvey, 1998). In terms of overdose potential, animal research suggests that the LD50 for diazepam is around 720 mg per kilogram of body weight for mice and 1,240 mg/kg for rats (Physicians’ Desk Reference, 2004). The LD50 for humans is not known, but these figures do suggest that diazepam is an exceptionally safe drug. However, other benzodiazepines have smaller therapeutic indexes than diazepam. Many physicians recommend the benzodiazepine Serax (oxazepam) for use in cases when the patient is at risk for an overdose because of its greater margin of safety (Buckley, Dawson, Whyte, & O’Connell, 1995). Note, however, that the benzodiazepine margin of safety is drastically reduced when an individual ingests one or more additional CNS depressants in an attempt to end his or her life. This is because of the synergistic4 effect that develops when different CNS depressants are intermixed, one reason that any known or suspected overdose should be evaluated and treated by medical professionals. If the attending physician suspects that the individual has ingested a benzodiazepine in an attempt to end his or her life, that physician might consider the use of Mazicon (flumazenil) to counteract the effects of the benzodiazepine in the brain.
Mazicon occupies the benzodiazepine receptor site without activating that site, thus helping to protect the individual from the effects of a benzodiazepine overdose. Although this medication has provided physicians with a powerful new tool in treating the benzodiazepine overdose, it is effective for only 20 to 45 minutes and will block the effects of benzodiazepines only (Brust, 1998).

4See Glossary.


Pharmacology of the Benzodiazepines The benzodiazepines are very similar in their effects, differing mainly in their duration of action (“Sleeping Pills and Antianxiety Drugs,” 1988). Table 10.1 reviews the relative potency and biological half-lives of some of the benzodiazepines currently in use in the United States. Like many pharmaceuticals, the benzodiazepines can be classified on the basis of their pharmacological characteristics. Tyrer (1993), for example, adopted a classification system based not on the duration of the effects of the benzodiazepines but on the basis of their elimination half-lives (discussed in Chapter 6), separating the benzodiazepines into four groups:5 (a) very short half-lives (4 hours or less), (b) short half-lives (4–12 hours), (c) intermediate half-life (12–20 hours), and (d) long half-life (20 or more hours). The various benzodiazepines currently in use range from moderately to highly lipid soluble (Ayd, 1994). Lipid solubility is important because the more lipid soluble a chemical is, the faster it is absorbed through the small intestine after being taken orally (Roberts & Tafure, 1990). Also, highly lipid soluble drugs can easily pass through the blood-brain barrier to enter the brain (Ballenger, 1995). Once in the general circulation, the benzodiazepines are all protein bound. However, there is some degree of variation between the various forms of benzodiazepines as to what percentage of the medication will be protein bound. Diazepam, for example, is more than 99% protein bound (American Psychiatric Association, 1990) whereas 92% to 97% of chlordiazepoxide is protein bound (Ayd, Janicak, Davis, & Preskorn, 1996) and alprazolam is only about 80% protein bound (Physicians’ Desk Reference, 2004). This variability in protein binding is one factor that influences the duration of effect for each benzodiazepine after a single dose (American Medical Association, 1994). 
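Tyrer's (1993) four-group scheme described above amounts to a simple set of half-life cutoffs, which can be sketched as follows. The example values in the usage lines are illustrative only, since actual half-lives vary by source and by patient:

```python
def tyrer_half_life_group(elimination_half_life_hours: float) -> str:
    """Classify a benzodiazepine by elimination half-life, using the
    four-group scheme attributed to Tyrer (1993) in the text."""
    t = elimination_half_life_hours
    if t <= 4:
        return "very short half-life (4 hours or less)"
    elif t <= 12:
        return "short half-life (4-12 hours)"
    elif t <= 20:
        return "intermediate half-life (12-20 hours)"
    else:
        return "long half-life (20 or more hours)"

# Illustrative values only; actual half-lives vary by source and patient:
print(tyrer_half_life_group(2))   # very short group
print(tyrer_half_life_group(30))  # long group
```

Note that the text's group boundaries overlap at 4, 12, and 20 hours; the sketch assigns each boundary value to the shorter group, an arbitrary choice made only so the classification is unambiguous.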
The benzodiazepines are poorly absorbed from intramuscular or subcutaneous injection sites (American Medical Association, 1994). This characteristic makes it difficult to predict in advance the degree of drug bioavailability when a benzodiazepine is injected. For

5To complicate matters, the distribution half-life for benzodiazepines is often far different from the elimination half-life, or the therapeutic half-life. For a discussion of these concepts, see Chapter 3.

Chapter Ten

this reason these medications are usually administered orally. One exception is when the patient is experiencing uncontrolled seizures; in such cases, intravenous injections of diazepam or a similar benzodiazepine might be used to help control the seizures.

Most benzodiazepines must be biotransformed before elimination can proceed, and in the process of biotransformation some benzodiazepines will produce metabolites that are biologically active. These biologically active metabolites may contribute to the duration of a drug’s effects and may require extended periods of time before they are eliminated from the body. Thus, the duration of effect of many benzodiazepines is far different from the elimination half-life of the parent compound, a factor that physicians must keep in mind when prescribing these medications (Hobbs, Rall, & Verdoorn, 1995). For example, during the process of biotransformation, the benzodiazepine flurazepam will produce five different metabolites, each of which has a psychoactive effect of its own. Because of normal variation in the speed with which the individual’s body can biotransform or eliminate flurazepam and its metabolites, this benzodiazepine might continue to have an effect on the user for as long as 280 hours after a single dose. Fortunately, the benzodiazepines lorazepam, oxazepam, and temazepam are either eliminated without biotransformation or produce metabolites that have minimal physical effects on the user. As will be discussed later in this chapter, these benzodiazepines are often preferred for older patients, who may experience over-sedation as a result of the long half-lives of some benzodiazepine metabolites.

Although the benzodiazepines are often compared with the barbiturates, they are actually far different from the barbiturates in the way they function in the brain.
The barbiturates simulate the action of the neurotransmitter gamma aminobutyric acid (GABA), which is thought to be the most important “inhibitory” neurotransmitter in the brain (Bohn, 1993; Nutt, 1996; Tabakoff & Hoffman, 1992). This causes the barbiturates to nonselectively depress the activity of neurons in the cortex and many other parts of the brain. Subjectively, this effect is interpreted as a reduction in anxiety levels and possibly as an enhanced ability to fall asleep (although benzodiazepines actually interfere with normal sleep, as discussed below).

Abuse of and Addiction to Benzodiazepines and Similar Agents


In contrast to the barbiturates, the benzodiazepine molecule is thought to bind to one of the GABA receptor sites and also to a chloride channel on the neuron surface, making the cell more sensitive to the GABA that already exists. In support of this theory, in the absence of GABA the benzodiazepines have no apparent effect on the neuron (Charney, Mihis, & Harris, 2001; Hobbs et al., 1995; Pagliaro & Pagliaro, 1998).

Neurons that utilize GABA are especially common in the locus ceruleus (Cardoni, 1990; Johnson & Lydiard, 1995). Nerve fibers from the locus ceruleus connect with other parts of the brain thought to be involved in fear and panic reactions. Animal research has suggested that stimulation of the locus ceruleus causes behaviors similar to those seen in humans who are having a panic attack (Johnson & Lydiard, 1995). By enhancing the effects of GABA, the benzodiazepines seem to reduce the level of neurological activity in the locus ceruleus, lowering the individual’s anxiety level. Unfortunately, this theory does not provide any insight into the ability of the benzodiazepines to help muscle tissue relax or to stop seizures (Hobbs et al., 1995). Thus, there is still much to be discovered about how these drugs work.

Surprisingly, there is little information about the long-term effectiveness of these compounds as antianxiety agents (Ayd, 1994). Some researchers believe that the antianxiety effects of the benzodiazepines last about 1–2 months and that these drugs are not useful in treating anxiety continuously over a long period of time (Ashton, 1994; Ayd et al., 1996). One study found that after 4 weeks the subjects who received the medication had fewer panic attacks than did those patients who received the placebo.
However, this same research study found that after 8 weeks of continuous use, patients who received Xanax had just as many panic attacks as the patients who received only a placebo, a finding that was not shared with the physicians (Leavitt, 2003; Walker, 1996). On the other hand, some researchers do believe that the benzodiazepines are effective agents for the long-term control of anxiety. For example, the Harvard Medical School Mental Health Letter (“Sleeping Pills and Antianxiety Drugs,” 1988) suggested that while patients might develop some tolerance to the sedative effects of benzodiazepines, they did not become tolerant to the antianxiety effects of these medications. Thus, within the

medical community, there is some degree of uncertainty as to the long-term effectiveness of benzodiazepines in the control of anxiety symptoms.

Side Effects of the Benzodiazepines When Used at Normal Dosage Levels

Some degree of sedation is common following the ingestion of a benzodiazepine (Ballenger, 1995); however, excessive sedation is uncommon unless the patient ingested a dose that was too large for him or her (Ayd et al., 1996). Advancing age is one factor that may make the individual more susceptible to the phenomenon of benzodiazepine-induced over-sedation (Ashton, 1994; Ayd, 1994). Because of an age-related decline in blood flow to the liver and kidneys, elderly patients often require more time to biotransform and/or excrete many drugs than do younger adults who receive the same medication (Bleidt & Moss, 1989). This contributes to over-sedation or, in some cases, a state of paradoxical excitement as the bodies of older patients struggle to adjust to the effects of a benzodiazepine. To illustrate this process, an elderly patient might require three times as long to fully biotransform a dose of diazepam or chlordiazepoxide as would a young adult (Cohen, 1989). If a benzodiazepine is required for an older individual, physicians tend to rely on lorazepam or oxazepam (Ashton, 1994; Graedon & Graedon, 1991) because these compounds have a shorter half-life and are more easily biotransformed than diazepam and similar benzodiazepines.

Both Deracyn (adinazolam) and Doral (quazepam) are exceptions to the rule that the older patient is more likely than a younger patient to experience excessive sedation. It is not uncommon for patients to experience sedation from adinazolam: as many as two-thirds of those who receive this medication might experience some degree of drowsiness, at least until their bodies adapt to the drug’s effects (Cardoni, 1990). Further, since the active metabolites of Doral (quazepam) have a half-life of 72 hours or more, a strong possibility exists that the user will experience a drug-induced hangover the next day (Hartmann, 1995).
Drug-induced hangovers are possible with benzodiazepine use, especially with some of the longer-lasting benzodiazepines (Ashton, 1992, 1994). As the data in Table 10.1 show, the average half-life



of many benzodiazepines can be as long as 100 hours. In some cases, the half-life of some of the longer-acting benzodiazepines might be as much as 280 hours, depending on the user’s biochemistry. Further, it usually requires 5 half-life periods before virtually all of a drug is biotransformed and eliminated from the body. If a patient were to take a second or even a third dose of the medication before the first dose was fully biotransformed, he or she would begin to accumulate the unmetabolized medication in body tissues. The unmetabolized medication would continue to have an effect on the individual’s function well past the time that he or she thought the drug’s effects had ended.

Even a single 10 mg dose of diazepam can result in visual motor disturbances for up to 7 hours after the medication was ingested (Gitlow, 2001), a finding that might account for the observation that younger adults who use a benzodiazepine are at increased risk for motor vehicle accidents (Barbone et al., 1998). Thus, even therapeutic doses of diazepam contribute to prolonged reaction times in users, increasing their risk for motor vehicle accidents by up to 500% (Gitlow, 2001).
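The "5 half-life" rule and the accumulation problem described above are straightforward exponential arithmetic. A sketch (the 100-hour half-life and daily 1-unit dose are illustrative numbers only, not clinical guidance):

```python
def fraction_remaining(hours_elapsed, half_life_hours):
    """Fraction of a single dose still unmetabolized after a given time,
    assuming simple first-order (exponential) elimination."""
    return 0.5 ** (hours_elapsed / half_life_hours)

# After 5 half-lives only about 3% of a dose remains -- hence
# "virtually all" of the drug is considered eliminated by then.
print(fraction_remaining(5 * 100, 100))  # 0.03125

# Repeated dosing before elimination is complete causes accumulation:
# amount on board just after each daily 1-unit dose, half-life 100 h.
amount = 0.0
for day in range(7):
    amount = amount * fraction_remaining(24, 100) + 1.0
    print(f"day {day}: {amount:.2f} dose-equivalents on board")
```

With a 100-hour half-life, the amount on board keeps climbing day after day, which is exactly the mechanism by which a long-acting benzodiazepine can continue to affect the user well past the point where he or she believes the drug's effects have ended.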

Neuroadaptation to Benzodiazepines and Abuse/Addiction to These Agents

Within a few years of the benzodiazepines’ introduction, reports of abuse and addiction began to surface. Although these drugs were presented as nonaddicting agents, clinical evidence suggests that patients will experience a discontinuance syndrome after using them at recommended dosage levels for just a few months (Smith & Wesson, 2004). This effect occurs because patients experience a process of “neuroadaptation” (Sellers et al., 1993, p. 65) in which the CNS becomes tolerant to the drug’s effects. If people using the medication were to suddenly discontinue it, they would experience a rebound or “discontinuance” syndrome as their bodies adjusted to its sudden absence. Some researchers believe that this state of pharmacological tolerance to the benzodiazepines is evidence that the patient has become addicted to the medication. The discontinuance syndrome may develop “within a few weeks, perhaps days” (Miller & Gold, 1991a, p. 28), and there is a great deal of disagreement as to whether this process reflects the patient’s growing

dependence on benzodiazepines. Some researchers view the rebound or discontinuance symptoms as a natural consequence of benzodiazepine use. For example, Sellers et al. (1993) argued that neuroadaptation “is not sufficient to define drug-taking behavior as dependent” (p. 65). Thus, while the patient might experience a discontinuance syndrome after using a benzodiazepine at recommended doses for an extended period of time, this is seen as a natural process. Advocates of this position note that the body must go through a period of adjustment whenever any medication is discontinued.

Researchers disagree as to the percentage of patients who will develop a discontinuance syndrome after using the benzodiazepines for an extended period of time. Ashton (1994) suggested that approximately 35% of patients who take a benzodiazepine continuously for 4 or more weeks will become physically dependent on the medication. These individuals will experience withdrawal symptoms when they stop taking it, according to the author. But in rare cases, pharmacological dependence on the benzodiazepines might develop in just days or weeks (American Psychiatric Association, 1990; Miller & Gold, 1991a). On the other hand, Blair and Ramones (1996) suggested that in most cases in which the benzodiazepines are used at normal dosage levels for less than 4 months, the risk of a patient’s becoming dependent is virtually nonexistent. However, the Royal College of Psychiatrists in Great Britain now recommends that the benzodiazepines be used continuously for no longer than 4 weeks (Gitlow, 2001).

If the individual were using (or abusing) a benzodiazepine at high dosage levels and then were to discontinue the use of that compound, she or he would be at risk for the development of a sedative-hypnotic withdrawal syndrome (Smith & Wesson, 2004).
This is an extreme form of the discontinuance syndrome and without timely medical intervention might include such symptoms as anxiety, tremors, anorexia, nightmares, vomiting, postural hypotension, seizures, delirium, and possibly death (Smith & Wesson, 2004).

Although the abuse potential of the benzodiazepines is viewed as being quite low, one group of patients for whom the benzodiazepines are known to be potentially addictive consists of those who struggle with other forms of chemical dependence (Fricchione, 2004; Sattar & Bhatia, 2003). There is only limited evidence



that these drugs might be used safely with individuals with substance-use problems (Sattar & Bhatia, 2003). Clark, Xie, and Brunette (2004) found, for example, that while benzodiazepines are often used as an adjunct to the treatment of severe mental illness, “benzodiazepine treatment did not improve outcomes, and persons [with concurrent substance-use disorders and mental illness] were more likely to abuse them” (p. 151). For this reason, these medications should be used with individuals recovering from alcohol/drug addiction “only after safer alternative therapies have proved ineffective” (Ciraulo & Nace, 2000, p. 276), and physicians should attempt to use benzodiazepines such as Klonopin (clonazepam) that are known to have lower abuse potentials when a benzodiazepine must be used by a patient with a substance-use problem.

Fully 80% of benzodiazepine abuse is seen in a pattern of polydrug abuse (Longo, Parran, Johnson, & Kinsey, 2000; Sattar & Bhatia, 2003). Such polydrug abuse seems to take place to (a) enhance the effects of other compounds, (b) control some of the unwanted side effects of the primary drug of abuse, or (c) help the individual withdraw from the primary drug of abuse (Longo et al., 2000). Finally, a small percentage of abusers will utilize the benzodiazepines to escape the feelings of dysphoria or anxiety that they face on a daily basis (Cole & Kando, 1993; Wesson & Ling, 1996). Thus, while the benzodiazepines do not bring about a state of euphoria such as that induced by many of the other drugs of abuse, they retain a significant abuse potential in their own right (Spiegel, 1996; Walker, 1996). This abuse potential might best be seen in the observation that approximately 25% of recovering alcoholics relapse after receiving a prescription for a benzodiazepine (Gitlow, 2001).
Abusers seem to prefer the shorter-acting benzodiazepines such as lorazepam or alprazolam (Longo & Johnson, 2000; Sellers et al., 1993; Walker, 1996), although there is evidence that the long-acting benzodiazepine clonazepam is also frequently abused by illicit drug users (Longo & Johnson, 2000). Because of the potential for abuse of the benzodiazepines, it has been recommended that they not be routinely administered to people with known substance-use disorders (Minkoff, 2001). Further, physicians are warned that these drugs are contraindicated in patients

with severe mental illness who also have substance-use disorders (Brunette, Noordsy, Xie, & Drake, 2003). These medications do not bring about a reduction in anxiety or depression levels in this population and place the patient at increased risk for either abusing or becoming addicted to the prescribed benzodiazepine, according to the authors.

Even in cases in which the medications were used as prescribed, withdrawal from the benzodiazepine can be quite difficult. Individuals who have been using or abusing benzodiazepines for months or years might require a gradual tapering in daily dosage levels over periods as long as 8 to 12 weeks (Miller & Gold, 1998). To complicate the withdrawal process, patients tend to experience an upsurge in anxiety symptoms when their daily dosage levels reach 10% to 25% of their original daily dose (Prater, Miller, & Zylstra, 1999). To combat these anxiety symptoms and increase the individual’s chances of success, the authors recommended the use of “mood stabilizing” agents such as carbamazepine or valproic acid during the withdrawal process. Winegarden (2001) suggested that Seroquel (quetiapine fumarate) might provide adequate control of patients’ anxiety while they are being withdrawn from benzodiazepines.

Factors influencing the benzodiazepine withdrawal process. The research team of Rickels, Schweizer, Case, and Greenblatt (1990) examined the phenomenon of benzodiazepine withdrawal and concluded that its severity was dependent on five different drug treatment factors, plus several patient factors.
According to the authors, the drug treatment factors included (a) the total daily dose of benzodiazepines being used, (b) the total time during which benzodiazepines had been used, (c) the half-life of the benzodiazepine being used (short half-life benzodiazepines tend to produce more withdrawal symptoms than do long half-life benzodiazepines), (d) the potency of the benzodiazepine being used, and (e) the rate of withdrawal (gradual, tapered withdrawal, or abrupt stopping). Some of the patient factors that influence the withdrawal from benzodiazepines include (a) the patient’s premorbid personality structure, (b) expectations for the withdrawal process, and (c) individual differences in the neurobiological structures within the brain thought to be involved in the withdrawal process. Interactions between these two sets of factors were thought


to determine the severity of the withdrawal process, according to Rickels et al. (1990). Thus, for the person who is addicted to these medications, withdrawal can be a complex, difficult process.
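The tapered-withdrawal arithmetic mentioned earlier (a gradual dosage reduction over roughly 8 to 12 weeks, with anxiety symptoms often spiking once the dose falls to 10% to 25% of the original) can be sketched as a simple schedule. This is purely illustrative arithmetic, not a clinical protocol; the function name, the fixed-fraction reduction, and the example dose are assumptions of mine:

```python
def taper_schedule(start_dose_mg, weekly_reduction=0.10, weeks=12):
    """Illustrative taper: reduce the daily dose by a fixed fraction of
    the STARTING dose each week, never going below zero.

    Returns the planned daily dose for each week of the taper.
    """
    doses = []
    dose = float(start_dose_mg)
    for _ in range(weeks):
        doses.append(round(dose, 2))
        dose = max(0.0, dose - start_dose_mg * weekly_reduction)
    return doses

# Hypothetical 20 mg/day starting dose tapered over 12 weeks.
schedule = taper_schedule(20.0)
print(schedule)
# The weeks in which the dose sits at 10%-25% of the original (here,
# 2-5 mg) are where Prater et al. (1999) report anxiety tends to surge.
```

A real taper is individualized by a physician; the point of the sketch is only that a fixed-step schedule necessarily passes through the low-dose range where rebound anxiety is most likely.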

Complications Caused by Benzodiazepine Use at Normal Dosage Levels

The benzodiazepines are not perfect drugs. For example, because of tolerance to the anticonvulsant effects of benzodiazepines, they are of only limited value in the long-term control of epilepsy (Morton & Santos, 1989). Another shortcoming of the benzodiazepines is that in rare cases they cause excessive sedation even at normal dosage levels. This effect is most often noted in the older patient or in people with significant levels of liver damage. The fact that the elderly are most likely to experience excessive sedation is unfortunate, considering that two-thirds of those who receive prescriptions for benzodiazepines are above the age of 60 (Ayd, 1994).

Some of the known side effects attributed to benzodiazepines include hallucinations, a feeling of euphoria, irritability, tachycardia, sweating, and disinhibition (Hobbs et al., 1995). Even when used at normal dosage levels, the benzodiazepines may occasionally bring about a degree of irritability, hostility, rage, or outright aggression, which is called a paradoxical rage reaction (Drummer & Odell, 2001; Hobbs et al., 1995; Walker, 1996). This paradoxical rage reaction is thought to be the result of benzodiazepine-induced disinhibition. A similar effect is often seen in people who drink alcohol, and the combination of alcohol and benzodiazepines is also thought to cause a paradoxical rage reaction in some individuals (Beasley, 1987). The combination of the two chemicals may lower the individual’s inhibitions to the point that he or she is unable to control anger that had previously been repressed.

Although the benzodiazepines are very good at the short-term control of anxiety, evidence would suggest that antidepressant medications such as imipramine or paroxetine are more effective than benzodiazepines after 8 weeks of continual use (Fricchione, 2004).
One benzodiazepine, alprazolam, is marketed as an antianxiety agent, but there is evidence to suggest that its duration of effect is too short to provide optimal control of anxiety (Bashir & Swartz, 2002). Further, there is evidence of


alprazolam-induced anxiety, according to the authors, a previously unreported side effect that might contribute to long-term dependence on alprazolam as the patient takes more and more medication in an attempt to avoid what is, in effect, drug-induced anxiety.

One benzodiazepine, Dalmane (flurazepam), tends to cause confusion and over-sedation, especially in the elderly. This medication is often used as a treatment for insomnia. One of the metabolites of flurazepam is desalkylflurazepam, which, depending on the individual, might have a half-life of 40 to 280 hours (Doghramji, 2003). Thus, the effects of a single dose might last for up to 12 days in some patients. Obviously, with such an extended half-life, if the person should use flurazepam for even a few days he or she might continue to experience significant levels of CNS depression for some time after the last dose of the drug. Further, if a person should ingest alcohol or possibly even an over-the-counter cold remedy before the flurazepam is fully biotransformed, the unmetabolized drug could combine with the depressant effects of the alcohol or cold remedy to produce serious levels of CNS depression.

Cross-tolerance between the benzodiazepines, alcohol, the barbiturates, and meprobamate is possible (Sands, Creelman, Ciraulo, Greenblatt, & Shader, 1995; Snyder, 1986). The benzodiazepines may also potentiate the effects of other CNS depressants such as antihistamines, alcohol, or narcotic analgesics, presenting a danger of over-sedation, or even death6,7 (Barnhill, Ciraulo, Ciraulo, & Greene, 1995). Many of the benzodiazepines have been found to interfere with normal sexual function, even when used at normal dosage levels (Finger, Lund, & Slagel, 1997). The benzodiazepines interfere with normal rapid eye movement (REM) sleep at night, and when used for extended periods of time they may cause rebound insomnia when discontinued (Qureshi & Lee-Chiong, 2004).
The phenomenon of rebound insomnia following treatment with a benzodiazepine has not been studied in detail (Doghramji, 2003). In theory, following an extended period of benzodiazepine use, patients

6When in doubt about whether two or more medications should be used together, always consult a physician, pharmacist, or the local poison control center.
7For example, the movie star Judy Garland reportedly died as a result of the combined effects of alcohol and the benzodiazepine diazepam (Snyder, 1986).



might experience symptoms on discontinuation that mimic the anxiety or sleep disorder for which they originally started to use the medication (Gitlow, 2001; Miller & Gold, 1991a). The danger is that the patient might begin to take benzodiazepines again in the mistaken belief that the withdrawal symptoms indicated that the original problem still existed.

Although it might be so slight as to escape notice by the patient, normal memory function is sometimes affected when benzodiazepines are used at normal dosage levels (Ayd, 1994; Gitlow, 2001; Juergens, 1993; O’Donovan & McGuffin, 1993). This drug-induced anterograde amnesia8 is more pronounced when large doses of a benzodiazepine are ingested or when a benzodiazepine is used by an older person, and some benzodiazepines are more likely to produce this effect than others. Indeed, fully 10% of older patients referred for evaluation of a memory impairment suffer from drug-induced memory problems, with benzodiazepines being the most common cause of such problems in this population (Curran et al., 2003). Benzodiazepine-induced memory problems appear to be similar to the alcohol-induced blackout (Juergens, 1993), and they last for the duration of the drug’s effects on the user (Drummer & Odell, 2001).

Even when used at normal dosage levels, the benzodiazepines might interfere with the normal psychomotor skills necessary to operate mechanical devices, such as power tools or motor vehicles. For example, the individual’s risk of being involved in a motor vehicle accident was found to be 50% higher after a single dose of diazepam (Drummer & Odell, 2001). These drug-induced psychomotor coordination problems might persist for several days and are more common after the initial use of a benzodiazepine (Drummer & Odell, 2001; Woods, Katz, & Winger, 1988). Further, the benzodiazepines occasionally will produce mild respiratory depression, even at normal therapeutic dosage levels, especially in people with pulmonary disease.
Because of this, the benzodiazepines should be avoided in patients who suffer from sleep apnea, chronic lung disease, or other sleep-related breathing disorders in order to avoid serious, possibly fatal, respiratory depression (Charney et al., 2001; Drummer & Odell, 2001). Also, benzodiazepines should not be used

with patients who suffer from Alzheimer’s disease, as they might potentiate preexisting sleep apnea problems (Doghramji, 1989).

In rare cases, therapeutic doses of a benzodiazepine have induced a depressive reaction in the patient (Ashton, 1992, 1994; Drummer & Odell, 2001; Juergens, 1993). The exact mechanism by which the benzodiazepines might cause or at least contribute to depressive episodes is not clear at this time. To further complicate matters, there is evidence to suggest that benzodiazepine use might contribute to actual thoughts of suicide on the part of the user (Ashton, 1994; Drummer & Odell, 2001; Juergens, 1993). Although it is not possible to list every reported side effect of the benzodiazepines, the above list should clearly illustrate that these medications are extremely potent and have a significant potential to cause harm to the user.

The trials of Halcion. Halcion (triazolam) was first introduced as a hypnotic, but it has generated a great deal of controversy. Within a short time of its introduction, numerous reports of adverse reactions, as well as the admission by the manufacturer that there were “errors” in the original supporting research, resulted in triazolam’s being banned in the United Kingdom and elsewhere (Charney et al., 2001). An independent review of the safety and effectiveness of triazolam was carried out by the Institute of Medicine, located in Washington, D.C., and the results were summarized in a report issued in April of 1999. The authors concluded that Halcion (triazolam) was “effective in achieving the defined end points in the general adult population with insomnia when used as directed (in the current labeling) at doses of 0.25 mg for as long as 7 to 10 days” (Report of the Institute of Medicine Committee on the Efficacy and Safety of Halcion, 1999, p. 350). However, this committee also suggested that further research be conducted into the long-term effects of current hypnotic agents, including Halcion (triazolam).
Thus, it would appear that triazolam will be a controversial drug for many years to come.

Drug interactions involving the benzodiazepines. There have been a “few anecdotal case reports” (Sarid-Segal, Creelman, Ciraulo, & Shader, 1995, p. 193) of patients who have suffered adverse effects from the use of benzodiazepines while taking lithium. The authors reviewed a single case report of “profound hypothermia resulting from the combined use of lithium and




diazepam” (p. 194). In this case, lithium was implicated as the agent that caused the individual to suffer a progressive loss of body temperature. Further, the authors noted that diazepam and oxazepam appear to cause increased levels of depression in patients who are also taking lithium. The reason for this increased level of depression in patients who are using one of these benzodiazepines as well as lithium is not known at this time.

Patients who are on Antabuse (disulfiram) should use benzodiazepines with caution, as disulfiram reduces the speed at which the body can metabolize benzodiazepines such as diazepam and chlordiazepoxide (DeVane & Nemeroff, 2002). When a patient must use both medications concurrently, Zito (1994) recommended the use of a benzodiazepine such as oxazepam or lorazepam, which does not produce any biologically active metabolites.

Surprisingly, grapefruit juice has been found to alter the P-450 metabolic pathway in the liver, slowing the rate of benzodiazepine biotransformation (Charney et al., 2001). There is evidence that blood levels of Halcion (triazolam) might be as much as doubled when the patient also takes the antibiotic erythromycin (sold under a variety of brand names) (DeVane & Nemeroff, 2002; Graedon & Graedon, 1995). Further, probenecid might slow the biotransformation of the benzodiazepine lorazepam, thus causing excess sedation in some patients (Sands, Creelman, Ciraulo, Greenblatt, & Shader, 1995).

Patients who are taking a benzodiazepine should not use the antipsychotic medication clozapine (Zito, 1994). There have been reports of severe respiratory depression caused by the combination of these two medications, possibly resulting in several deaths. Patients with heart conditions who are taking the medication digoxin as well as a benzodiazepine should have frequent tests to check the digoxin level in their blood (Graedon & Graedon, 1995).
There is some evidence that benzodiazepine use might cause the blood levels of digoxin to rise, possibly to the level of digoxin toxicity, according to the authors. The use of benzodiazepines with anticonvulsant medications such as phenytoin, mephenytoin, and ethotoin, the antidepressant fluoxetine, or medications for the control of blood pressure such as propranolol and metoprolol might cause higher than normal blood levels of such benzodiazepines as diazepam (DeVane & Nemeroff, 2002; Graedon & Graedon, 1995). Patients


using St. John’s wort may experience more anxiety, as this herbal medication lowers the blood level of alprazolam (DeVane & Nemeroff, 2002). Thus, it is unwise for a patient to use these medications at the same time. Women who are using oral contraceptives should discuss their use of a benzodiazepine with a physician prior to taking one of these medications. Zito (1994) noted that oral contraceptives will reduce the rate at which the body will be able to metabolize some benzodiazepines, thus making it necessary to reduce the dose of these medications. Patients who are taking antitubercular medications such as isoniazid might need to adjust their benzodiazepine dosage (Zito, 1994). Further, patients who take antacids may not absorb chlordiazepoxide as quickly as they would have had they taken the chlordiazepoxide without an antacid (Ciraulo, Shader, Greenblatt, & Barnhill, 1995).

Because of the possibility of excessive sedation, the benzodiazepines should never be intermixed with other CNS depressants, except under the supervision of a physician. One medication that has emerged as being potentially dangerous when mixed with a benzodiazepine is buprenorphine (Smith & Wesson, 2004). This finding is consistent with the general prohibition against mixing benzodiazepines with CNS depressants such as alcohol, narcotic analgesics, and antihistamines (Graedon & Graedon, 1995). Individuals taking a benzodiazepine should also discontinue their use of the herbal medicine kava (Cupp, 1999). The combined effects of these two classes of compounds may result in excessive, if not dangerous, levels of sedation.

This list is not exhaustive, but it does illustrate that there is a potential for an interaction between the benzodiazepines and a number of other medications. People should consult a physician or pharmacist prior to taking two or more medications at the same time to rule out the possibility of an adverse interaction between the medications being used.

Subjective Experience of Benzodiazepine Use

When used as an antianxiety agent at normal dosage levels, benzodiazepines induce a gentle state of relaxation in the user. In addition to their effects on the cortex, the benzodiazepines have an effect on the spinal cord, which contributes to muscle relaxation through some unknown mechanism (Ballenger, 1995). When used in



the treatment of insomnia, the benzodiazepines initially reduce the sleep latency period, and users report a sense of deep and refreshing sleep. However, the benzodiazepines interfere with the normal sleep cycle, almost suppressing stage III and IV/REM sleep for reasons that are not clear (Ballenger, 1995). When people use them for extended periods of time as hypnotics, they may experience REM rebound (Hobbs et al., 1995; Qureshi & Lee-Chiong, 2004).9 There are cases on record of individuals who had used a benzodiazepine as a hypnotic for only 1–2 weeks, yet still experienced significant rebound symptoms when they tried to discontinue the medication (“Sleeping Pills and Antianxiety Drugs,” 1988; Tyrer, 1993). To help the individual return to normal sleep, the hormone melatonin may be used during the period of benzodiazepine withdrawal (Garfinkel, Zisapel, Wainstein, & Laudon, 1999; Pettit, 2000).

In addition to REM rebound, patients who have used a benzodiazepine for daytime relief from anxiety have reported symptoms such as anxiety, agitation, tremor, fatigue, difficulty concentrating, headache, nausea, gastrointestinal upset, a sense of paranoia, depersonalization, and impaired memory after stopping the drug (Graedon & Graedon, 1991). There have been reports of people experiencing rebound insomnia for as long as 3 to 21 days after their last benzodiazepine use (Graedon & Graedon, 1991).

The benzodiazepines with shorter half-lives are most likely to cause rebound symptoms (Ayd, 1994; O’Donovan & McGuffin, 1993; Rosenbaum, 1990). Such symptoms might be common when the patient experiences an abrupt drop in medication blood levels. For example, alprazolam has a short half-life, and blood levels drop rather rapidly just before it is time for the next dose. It is during this period of time that the individual is most likely to experience an increase in anxiety levels. This process results in a phenomenon known as “clock watching” (Rosenbaum, 1990, p.
1302) by the patient, who waits with increasing anxiety until the time comes for the next dose. To combat rebound anxiety, it has been suggested that a long-acting benzodiazepine such as clonazepam be substituted for the shorter-acting drug (Rosenbaum, 1990). The transition between alprazolam

and clonazepam takes about one week, after which time the patient should be taking only clonazepam. This medication may then be gradually withdrawn, resulting in a slower decline in blood levels. The patient should still be warned to expect some rebound anxiety symptoms. Although the patient might believe otherwise, these symptoms are not a sign that the original anxiety is still present. Rather, as Rosenbaum (1990) noted, these anxiety-like symptoms are simply a sign that the body is adjusting to the gradual reduction in clonazepam blood levels.



Long-Term Consequences of Chronic Benzodiazepine Use

Although introduced as safe and nonaddicting substitutes for the barbiturates, the benzodiazepines do indeed have a significant abuse potential. Benzodiazepine abuse/addiction is most common in people with preexisting substance-use disorders, and for this reason these medications “should rarely, if ever” be administered on a chronic basis to patients with chemical-use disorders (O’Brien, 2001, p. 629). Some of the signs of benzodiazepine abuse include (a) taking the drug after the medical/psychiatric need for its use has passed, (b) symptoms of physical or psychological dependence on one of the benzodiazepines, (c) taking the drug in amounts greater than the prescribed amount, (d) taking the drug to obtain a euphoriant effect, and (e) using the drug to decrease self-awareness or the possibility of change (Dietch, 1983). During withdrawal, the benzodiazepine-dependent individual might experience symptoms of anxiety, insomnia, dizziness, nausea and vomiting, muscle weakness, tremor, confusion, convulsions (seizures), irritability, sweating, and a drug-induced withdrawal psychosis (Brown & Stoudemire, 1998). There have been rare reports of depression, manic reactions, and obsessive-compulsive symptoms as a result of benzodiazepine withdrawal (Juergens, 1993). In extreme cases, patients have been known to experience transient feelings of depersonalization, muscle pain, and a hypersensitivity to light and noise during the benzodiazepine withdrawal process (Spiegel, 1996). In addition to the problems of physical dependence, it is possible to become psychologically dependent on benzodiazepines (Dietch, 1983). Dietch (1983) noted


that “psychological dependence on benzodiazepines appears to be more common than physical dependence” (p. 1140). People with a psychological dependence might take the drug continuously or intermittently because of their belief that they need it, in spite of their actual medical requirements. When used as hypnotics, the benzodiazepines are useful for short periods of time. However, researchers believe that the process of neuroadaptation limits the effectiveness of the benzodiazepines as sleep-inducing (hypnotic) medications to just a few days (Ashton, 1994), to a week (Carvey, 1998), or to 2–4 weeks (American Psychiatric Association, 1990; Ayd, 1994) of continual use. Given this fact, it is recommended that the benzodiazepines be used only for the short-term treatment of insomnia (Taylor, McCracken, Wilson, & Copeland, 1998). Surprisingly, many users continue to use benzodiazepines as a sleep aid for months or even years. This might indicate that the medication has become part of a psychological ritual the individual follows to ensure proper sleep, rather than evidence of a continuing pharmacological effect (Carvey, 1998). There is a tendency, at least among some users of the benzodiazepines, to increase their dosage levels above those prescribed by their physician. O’Brien (2001) noted that whereas 5–20 mg of diazepam might cause sedation in the typical person, some abusers have reached the point of taking more than 1,000 mg per day in divided doses in an attempt to overcome their tolerance to diazepam-induced euphoria. All of the CNS depressants, including the benzodiazepines, are capable of producing a toxic psychosis, especially in overdose situations. This condition is also called an organic brain syndrome by some professionals.
Some of the symptoms seen with a benzodiazepine-related toxic psychosis include visual and auditory hallucinations and/or paranoid delusions, as well as hyperthermia, delirium, convulsions, and possible death (Jenike, 1991). With proper treatment, this drug-induced psychosis will usually clear in 2 to 14 days (Miller & Gold, 1991a), but withdrawal from benzodiazepines should be attempted only under the supervision of a physician. Benzodiazepines as a substitute for other drugs of abuse. There is little factual information available on the phenomenon of benzodiazepine abuse/addiction. It is known that because of the similarity between the effects

Chapter Ten

of alcohol and those of the benzodiazepines, alcohol-dependent people often will substitute a benzodiazepine for alcohol in situations where they cannot drink. The author of this text has met a number of recovering alcoholics who reported that 10 mg of diazepam had the same subjective effect for them as 3–4 “stiff” drinks. Further, the long half-life of diazepam often is sufficient to allow the individual to work the entire day without starting to go into alcohol withdrawal, thus allowing the user to avoid the telltale smell of alcohol on the breath while at work. Finally, research has shown that up to 90% of patients in methadone maintenance programs will abuse benzodiazepines, often at high dosage levels (Sattar & Bhatia, 2003). Patients will take a single, massive dose of a benzodiazepine (the equivalent of 100–300 mg of diazepam) about 30 minutes after ingesting their methadone in order to boost the effect of the latter drug (Drummer & Odell, 2001; O’Brien, 2001). There is evidence that the experimental narcotic buprenorphine may, when mixed with benzodiazepines, offer the user less of a high, thus reducing the incentive for the narcotics user to mix the medications (Sellers et al., 1993).

Buspirone

In 1986, a new medication by the name of BuSpar (buspirone) was introduced. Buspirone is a member of a class of medications known as the azapirones, which are chemically different from the benzodiazepines. Buspirone was discovered during a search by pharmaceutical companies for antipsychotic drugs that did not have the harsh side effects of the phenothiazines or similar chemicals (Sussman, 1994). Although the antipsychotic effect of buspirone was quite limited, researchers found that it was approximately as effective in controlling anxiety as the benzodiazepines (Drummer & Odell, 2001). In addition, buspirone was found to only rarely cause sedation or fatigue for the user (Rosenbaum & Gelenberg, 1991; Sussman, 1994), and there was no evidence of potentiation between buspirone and selected benzodiazepines, or between buspirone and alcohol (Drummer & Odell, 2001; Feighner, 1987; Manfredi et al., 1991).10

10This is not, however, a suggestion that the user try to use alcohol and buspirone at the same time. The author does not recommend the use of alcohol with any prescription medication.



The advantages of buspirone over the benzodiazepines are offset by the fact that the patient must take this medication for up to 2 weeks before it becomes effective (Doble, Martin, & Nutt, 2004). Some of the more common side effects of buspirone include gastrointestinal problems, drowsiness, decreased concentration, dizziness, agitation, headache, feelings of lightheadedness, nervousness, diarrhea, excitement, sweating/clamminess, nausea, depression, nasal congestion, and rarely, feelings of fatigue (Cadieux, 1996; Cole & Yonkers, 1995; Feighner, 1987; Graedon & Graedon, 1991; Manfredi et al., 1991; Newton, Marunycz, Alderdice, & Napoliello, 1986; Pagliaro & Pagliaro, 1998). Buspirone has also been found to cause decreased sexual desire in some users as well as sexual performance problems in some men (Finger, Lund, & Slagel, 1997). In contrast to the benzodiazepine family of drugs, buspirone has no significant anticonvulsant action. It also lacks the muscle relaxant effects of the benzodiazepines (Cadieux, 1996; Eison & Temple, 1987). Indeed, buspirone has been found to be of little value in cases of anxiety that involve insomnia, which make up a significant proportion of anxiety cases (Manfredi et al., 1991). It has value in controlling the symptoms of generalized anxiety disorder, but it does not seem to control the discomfort of acute anxiety/panic attacks. On the positive side, buspirone has been found to be effective in treating many patients who suffer from an anxiety disorder with a depressive component (Cadieux, 1996; Cohn, Wilcox, Bowden, Fisher, & Rodos, 1992). Indeed, there is evidence that buspirone might be of value in the treatment of some forms of depression, both as the primary form of treatment and as an agent to potentiate the effects of other antidepressants (Sussman, 1994).
In addition, buspirone is valuable in treating obsessive-compulsive disorder, social phobias, posttraumatic stress disorder, and possibly alcohol withdrawal symptoms (Sussman, 1994). Physicians who treat geriatric patients have found that buspirone is effective in controlling aggression in anxious, confused older adults without exacerbating the psychomotor stability problems that can contribute to falls (Ayd et al., 1996). However, when prescribed for older adults, it should be given in smaller doses because of age-related changes in how quickly the drug is removed from the circulation (Drummer

& Odell, 2001). It has also been found to reduce the frequency of self-abusive behaviors (SAB) in mentally retarded subjects (Ayd et al., 1996). Researchers have found that the addition of buspirone to antidepressant medications seems to bring many resistant or nonresponsive cases of depression under control (Cadieux, 1996). There also is limited evidence to suggest that buspirone might be useful as an adjunct to smoking cessation for smokers who have some form of an anxiety disorder (Covey et al., 2000).

The Pharmacology of Buspirone

The mechanism of action for buspirone is different from that of the benzodiazepines (Eison & Temple, 1987). Whereas the benzodiazepines tend to bind to receptor sites that utilize the neurotransmitter GABA, buspirone tends to bind to one of the many serotonin receptor sites, known as the 5-HT1A site (Ayd et al., 1996; Cadieux, 1996; Sussman, 1994). Further, researchers have found that buspirone binds to dopamine and serotonin type 1 receptors in the hippocampus, a different portion of the brain from the site where the benzodiazepines exert their effect (Manfredi et al., 1991). Within the brain, buspirone appears to function in a manner that moderates the level of serotonin (Cadieux, 1996). If there is a deficit of serotonin, as there is in depressive disorders, buspirone seems to stimulate its production (Anton, 1994; Sussman, 1994). If there is an excess of serotonin, as there appears to be in many forms of anxiety states, buspirone seems to lower the serotonin level (Cadieux, 1996). Unfortunately, when used with someone who has a history of addictive behavior, it may require 3–4 weeks before any significant improvement in the patient’s status is noticed, and the user might have to take high doses of buspirone before achieving any relief from anxiety (Renner, 2001). The half-life of buspirone is only 1–10 hours (Cole & Yonkers, 1995).
This short half-life requires that the individual take 3–4 divided doses of buspirone each day, whereas the half-life of benzodiazepines like diazepam makes it possible for that drug to be used only 1–2 times a day (Schweizer & Rickels, 1994). Finally, unlike many other sedating chemicals, there does not appear to be any degree of cross tolerance
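This dosing difference follows from simple first-order elimination, in which the fraction of a dose remaining after time t is 0.5^(t / half-life). The sketch below is illustrative only: it uses a midpoint of 5 hours for buspirone's quoted 1–10 hour range and an assumed 40-hour half-life for diazepam, with an 8-hour interval corresponding to a three-doses-per-day schedule; none of these figures is a dosing recommendation.

```python
# Fraction of a dose still in the body after t hours, assuming simple
# first-order (exponential) elimination: f = 0.5 ** (t / half_life).
def fraction_remaining(t_hours: float, half_life_hours: float) -> float:
    return 0.5 ** (t_hours / half_life_hours)

# Illustrative half-lives: ~5 h for buspirone (midpoint of the 1-10 h
# range cited in the text), ~40 h assumed here for diazepam.
interval = 8  # hours between doses on a 3-doses-per-day schedule
print(f"buspirone: {fraction_remaining(interval, 5):.0%} remains after {interval} h")
print(f"diazepam: {fraction_remaining(interval, 40):.0%} remains after {interval} h")
```

With a short half-life, only about a third of each buspirone dose remains by the time the next dose is due, which is why divided daily doses are needed; a long-half-life drug such as diazepam loses comparatively little between doses.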


between buspirone and the benzodiazepines, alcohol, the barbiturates, or meprobamate (Sussman, 1994). Buspirone’s abuse potential is quite limited (Smith & Wesson, 2004). There is no evidence of a significant withdrawal syndrome similar to that seen after protracted periods of benzodiazepine use or abuse (Anton, 1994; Sussman, 1994). Further, unlike the benzodiazepines, buspirone does not seem to have an adverse impact on memory (Rickels, Giesecke, & Geller, 1987). Unfortunately, buspirone has not been shown to lessen the intensity of withdrawal symptoms experienced by patients who were addicted to benzodiazepines (Rickels, Schweizer, Csanalosi, Case, & Chung, 1988). Indeed, there is evidence that patients currently taking a benzodiazepine might be slightly less responsive to buspirone while they are taking both medications (Cadieux, 1996). But unlike the benzodiazepines, there is no evidence of tolerance to buspirone’s effects, nor any evidence of physical dependence or a withdrawal syndrome when the medication is used as directed for short periods of time (Cadieux, 1996; Rickels et al., 1988). One very rare complication of buspirone use is the development of a drug-induced neurological condition known as the serotonin syndrome, especially when buspirone is used with the antidepressants fluoxetine or fluvoxamine (Sternbach, 2003). Although the serotonin syndrome might develop as long as 24 hours after the patient ingests a medication that affects the serotonin neurotransmitter system, in 50% of cases the patient develops the syndrome within 2 hours of starting the medication (Mills, 1995). A limited number of cases have been reported in which patients who were taking buspirone and an antidepressant among the monoamine oxidase inhibitors (MAOIs, or MAO inhibitors) developed abnormally high blood pressure (Ciraulo, Creelman, Shader, & O’Sullivan, 1995).
However, there are also numerous cases in which patients have taken these two medications at the same time without apparent ill effect. Thus, the possible role of either medication in the development of the observed hypertension remains unknown. Unfortunately, the manufacturer’s claim that buspirone offers many advantages over the benzodiazepines in the treatment of anxiety states has not been fully realized. Indeed, Rosenbaum and Gelenberg


(1991) cautioned that “many clinicians and patients have found buspirone to be a generally disappointing alternative to benzodiazepines” (p. 200). Despite this caveat, however, the authors recommended a trial of buspirone for “persistently anxious patients” (p. 200). Further, at this point in time, buspirone would seem to be the drug of choice in the treatment of anxiety states in the addiction-prone individual.

Zolpidem

The drug zolpidem was used as a sleep-inducing (hypnotic) drug in Europe for 5 years before it was introduced to the United States in 1993 (Hobbs et al., 1995). In the United States it is sold as an orally administered hypnotic under the brand name Ambien, which is marketed as a short-term (defined as less than 4 weeks) treatment of insomnia available only by a physician’s prescription. Pharmacology of zolpidem. Technically, zolpidem is the first of a new family of sleep-inducing chemicals known as the imidazopyridines. In contrast to the benzodiazepines, which bind to a number of receptor sites in the brain, zolpidem binds to just one of these receptor sites, which is also used by the benzodiazepines. Thus it is more selective than the benzodiazepines, a feature that also gives zolpidem only a minor anticonvulsant effect. Indeed, research has demonstrated that zolpidem’s anticonvulsant action is seen only at doses significantly above those that bring about sleep in the user (Doble, Martin, & Nutt, 2004). This selective method of action is also the reason zolpidem has minimal to no muscle relaxant effect. The biological half-life of a single dose of zolpidem is about 1.5–2.4 hours in the healthy adult, whereas in geriatric patients the half-life is approximately 2.5 hours (Charney et al., 2001; Doble et al., 2004; Folks & Burke, 1998; Kryger, Steljes, Pouliot, Neufeld, & Odynski, 1991). Most of a single dose of zolpidem is biotransformed by the liver into inactive metabolites before excretion by the kidneys. There is little evidence of neuroadaptation to zolpidem’s hypnotic effects when the drug is used at normal dosage levels, even after the drug has been used for as long as one year (Folks & Burke, 1998; Holm & Goa, 2000). But there are rare reports of patients who have become tolerant to the hypnotic effects of zolpidem after using this medication



at very high dosage levels for a period of several years (Holm & Goa, 2000). Unlike the benzodiazepines or barbiturates, zolpidem causes only a minor reduction in REM sleep patterns at normal dosage levels (Hobbs et al., 1995). Further, it does not interfere with the other stages of sleep, allowing for a more natural and restful night’s sleep for the patient (Doble, Martin, & Nutt, 2004; Hartmann, 1995). When used as prescribed, the most common adverse effects include nightmares, headaches, gastrointestinal upset, agitation, and some daytime drowsiness (Hartmann, 1995). There have also been a few isolated cases of a zolpidem-induced psychosis (Ayd, 1994; Ayd et al., 1996) and of rebound insomnia when the medication is discontinued (Gitlow, 2001). Side effects are more often encountered at higher dosage levels, and it is for this reason that the recommended dosage of zolpidem should not exceed 10 mg per day (Holm & Goa, 2000; Merlotti et al., 1989). Zolpidem has been found to cause some cognitive performance problems similar to those seen with the benzodiazepines, although this medication appears less likely to cause memory impairment than the older hypnotics (Ayd et al., 1996). Further, alcohol enhances the effects of zolpidem and thus should not be used by patients on this medication because of the potentiation effect (Folks & Burke, 1998). Zolpidem is contraindicated in patients with obstructive sleep apnea, as it increases the duration and frequency of apneic episodes (Holm & Goa, 2000). Effects of zolpidem at above-normal dosage levels. At dosage levels of 20 mg per day or above, zolpidem has been found to significantly reduce REM sleep. Also, at dosage levels of 20 mg per day or more, zolpidem was found to cause rebound insomnia when the drug is discontinued. At dosage levels of 50 mg per day, volunteers who received zolpidem reported such symptoms as visual perceptual disturbances, ataxia, dizziness, nausea, and vomiting.
Patients who have ingested up to 40 times the maximum recommended dosage have recovered without significant aftereffects. It should be noted, however, that the effects of zolpidem will combine with those of other CNS depressants if the patient has ingested more than one medication in an overdose attempt, and such multiple-drug overdoses might prove fatal. As with all medications, any suspected overdose of

zolpidem, either by itself or in combination with other medications, should be treated by a physician. Abuse potential of zolpidem. There are only limited reports of zolpidem abuse, and such reports appear to be limited to individuals who have histories of sedative-hypnotic abuse (Gitlow, 2001; Holm & Goa, 2000). The abuse potential is rated as about the same as that of the benzodiazepine family of drugs (Charney et al., 2001). Thus, the prescribing physician must balance the potential for abuse against the potential benefit that this medication would bring to the patient. Because of zolpidem’s sedating effects, this medication should not be used in people with substance-use problems, as its effects may trigger thoughts of returning to active chemical use (Jones, Knutson, & Haines, 2003).

Zaleplon

The drug Sonata (zaleplon) is a member of the pyrazolopyrimidine class of chemicals and was introduced as the first of a new class of hypnotic agents intended for the short-term symptomatic treatment of insomnia. Animal research suggests that zaleplon has some sedative and anticonvulsant effects, although it is approved for use only as a hypnotic in the United States (Danjou et al., 1999). When used to induce sleep it is administered orally in capsules containing 5 mg, 10 mg, or 20 mg of the drug. In most cases, the 10 mg dose is thought to be sufficient to induce sleep, although in individuals with low body weight, 5 mg might be more appropriate (Danjou et al., 1999). Once in the body, approximately 30% of a dose of zaleplon is biotransformed by the liver through the first-pass metabolism process. Less than 1% of the total dose is excreted in the urine unchanged, with the majority of the medication being biotransformed by the liver into less active compounds that are eventually eliminated from the body either in the urine or the feces. The half-life of zaleplon is approximately one hour (Doble et al., 2004). In the brain, zaleplon binds at the same receptor site as zolpidem (Charney et al., 2001; Danjou et al., 1999; Walsh, Pollak, Scharf, Schweitzer, & Vogel, 2000). There is little evidence of a drug hangover effect, although it is recommended that the patient not attempt to operate machinery for 4 hours after taking the last dose (Danjou et al., 1999; Doble et al., 2004; Walsh et al., 2000).



As noted earlier, this medication is intended for the short-term treatment of insomnia, in part because tolerance to its effects develops rapidly. Individuals who have used zaleplon nightly for extended periods of time have reported rebound insomnia upon discontinuation of this medication, although this might be more common when the drug is used at higher dosage levels than at the lowest dosage level of 5 mg per night. Because of the rapid onset of sleep, users are advised to take this medication just before going to sleep or after being unable to fall asleep naturally. Patients using zaleplon have reported such side effects as headache, rhinitis, nausea, myalgia, periods of amnesia while under the effects of this medication, dizziness, depersonalization, drug-induced hangover, constipation, dry mouth, gout, bronchitis, asthma attacks, nervousness, depression, problems in concentration, ataxia, and insomnia. The abuse potential of zaleplon is similar to that of the benzodiazepines, especially triazolam (Smith & Wesson, 2004). When used for extended periods of time, possibly periods as short as 2 weeks of regular use, zaleplon has been implicated in withdrawal symptoms such as muscle cramps, tremor, vomiting, and on rare occasions, seizures. Because zaleplon is a sedating agent, Jones et al. (2003) do not recommend its use in persons with substance-use problems, as its effects may trigger thoughts of returning to active chemical use.

Rohypnol

Rohypnol (flunitrazepam) was first identified as a drug of abuse in the United States in the mid-1990s. It is a member of the benzodiazepine family of pharmaceuticals, used in more than 60 other countries around the world as a presurgical medication, a muscle relaxant, and a hypnotic, but it is not manufactured or used as a pharmaceutical in the United States (Gahlinger, 2004; Klein & Kramer, 2004; Palmer & Edmunds, 2003). Because it is not manufactured as a pharmaceutical in the United States, there was little abuse of flunitrazepam by U.S. citizens prior to the mid-1990s. Substance-abuse rehabilitation professionals in this country had virtually no experience with Rohypnol (flunitrazepam) when people first began to bring it into this country. Rohypnol was classified as an illegal

substance by the U.S. government in October 1996, and individuals convicted of trafficking in or distributing this drug may be incarcerated for up to 20 years (“Rohypnol and Date Rape,” 1997). Although it is used for medicinal purposes around the world, in the United States Rohypnol has gained a reputation as a “date-rape” drug (Gahlinger, 2004; Saum & Inciardi, 1997). This is because the pharmacological characteristics of flunitrazepam, especially when it is mixed with alcohol, can produce a state of drug-induced amnesia that lasts 8 to 24 hours. To combat its use as a date-rape drug, the manufacturer now includes a harmless compound in the tablet that turns a drink blue if the tablet is added to it (Klein & Kramer, 2004). Because of this history of abuse and the fact that flunitrazepam is not detected on standard urine toxicology tests, the company that manufactures Rohypnol, Hoffmann-La Roche Pharmaceuticals, has instituted a program of free urine drug testing to provide law-enforcement officials with a means to detect flunitrazepam in the urine of suspected victims of date rape (Palmer & Edmunds, 2003). In addition to its use in date-rape situations, there are reports of cocaine abusers ingesting flunitrazepam to counteract the unwanted effects of cocaine, and of heroin abusers mixing flunitrazepam with low-quality heroin to enhance its effect (Saum & Inciardi, 1997). Some drug abusers will mix Rohypnol (flunitrazepam) with other compounds to enhance the effects of those compounds. Illicit users may also take flunitrazepam while smoking marijuana or drinking alcohol (Lively, 1996). The combination of Rohypnol (flunitrazepam) and marijuana is said to produce a sense of “floating” in the user. There are reports of abusers inhaling flunitrazepam powder and of physical addiction developing to this substance following periods of continuous use.
There are also reports of adolescents abusing flunitrazepam as an alternative to marijuana or LSD (Greydanus & Patel, 2003). Chemically, flunitrazepam is a derivative of the benzodiazepine chlordiazepoxide (Eidelberg, Neer, & Miller, 1965) and is reportedly 10 times as powerful as diazepam (Gahlinger, 2004; Klein & Kramer, 2004). When used as a medication, the usual method of administration is by mouth, in doses of 0.5–2 mg. Flunitrazepam is well absorbed from the gastrointestinal tract, with between 80% and 90% of a single 2 mg dose



being absorbed by the user’s body (Mattila & Larni, 1980). Following a single oral dose, peak blood levels are reached in 30 minutes (Klein & Kramer, 2004) to 1–2 hours (Saum & Inciardi, 1997). Once in the blood, 80% to 90% of the flunitrazepam is briefly bound to plasma proteins, but the drug is rapidly transferred from the plasma to body tissues. Because of this characteristic, flunitrazepam has an elimination half-life that is significantly longer than its duration of effect. Indeed, depending upon the individual’s metabolism, the elimination half-life can range from 15 to 66 hours (Woods & Winger, 1997), whereas the effects last only 8 to 10 hours (Klein & Kramer, 2004). During the process of biotransformation, flunitrazepam produces a number of different metabolites, some of which are themselves biologically active (Mattila & Larni, 1980). Less than 1% of the drug is excreted unchanged. About 90% of a single dose is eliminated by the kidneys after biotransformation, whereas about 10% is eliminated in the feces. Because of this characteristic elimination pattern, patients with kidney disease in countries where flunitrazepam is legal require modification of their dosage level, as the main route of elimination is through the kidneys. Although the usual pharmaceutical dose of Rohypnol (flunitrazepam) is less than 2 mg, illicit users will often take 4 mg of the drug in one dose, which will begin to produce sedation in 20 to 30 minutes. The drug’s effects normally last for 8 to 12 hours. The effects of flunitrazepam are similar to those of the other benzodiazepines, including sedation, dizziness, memory problems and/or amnesia, ataxia, slurred speech, impaired judgment, nausea, and loss of sleep or consciousness (Klein & Kramer, 2004). Like the benzodiazepines used in the United States, flunitrazepam is capable of causing paradoxical rage reactions in the user (Klein & Kramer, 2004).
Flunitrazepam has an anticonvulsant effect (Eidelberg et al., 1965) and is capable of bringing about a state of pharmacological dependence. Side effects of

flunitrazepam include excessive sedation, ataxia, mood swings, headaches, tremor, and drug-induced amnesia (Calhoun, Wesson, Galloway, & Smith, 1996). Although flunitrazepam has a wide safety margin, concurrent use with alcohol or other CNS depressants may increase the danger of overdose. Withdrawal from flunitrazepam is potentially serious for the chronic abuser, and there have been reports of withdrawal seizures taking place as late as 7 days after the last use of flunitrazepam (“Rohypnol Use Spreading,” 1995). For this reason, patients should be withdrawn from flunitrazepam only under the supervision of a physician.
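The delayed appearance of withdrawal symptoms is consistent with flunitrazepam's long elimination half-life (15 to 66 hours, as noted earlier). Assuming simple first-order elimination, one can estimate how long the drug lingers in the body; the 10% threshold below is an arbitrary illustrative cutoff, not a clinical marker.

```python
import math

# Hours for a drug level to decay to a target fraction of the original
# dose, assuming first-order elimination: solve 0.5 ** (t / half_life) = target.
def hours_to_fraction(target_fraction: float, half_life_hours: float) -> float:
    n_half_lives = math.log(target_fraction, 0.5)  # half-lives needed to reach target
    return n_half_lives * half_life_hours

# With the longest half-life quoted in the text (66 h), falling to 10% of
# the last dose takes roughly nine days, which is consistent with withdrawal
# symptoms emerging as late as a week after the final dose.
days = hours_to_fraction(0.10, 66) / 24
print(f"~{days:.1f} days to fall to 10% of the last dose")
```

At the short end of the quoted range (a 15-hour half-life), the same 10% level is reached in about two days, which helps explain the wide variability in when withdrawal symptoms appear.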

Summary

In the time since their introduction in the 1960s, the benzodiazepines have become some of the most frequently prescribed medications. As a class, the benzodiazepines are the treatment of choice for the control of anxiety and insomnia as well as many other conditions. They have also become a significant part of the drug-abuse problem. Even though many of the benzodiazepines were first introduced as “nonaddicting and safe” substitutes for the barbiturates, there is evidence to suggest that they have an abuse potential similar to that of the barbiturate family of drugs. A new series of pharmaceuticals, including buspirone, which is sold under the brand name BuSpar, and zolpidem, has been introduced in the past decade. Buspirone is the first of a new class of antianxiety agents, which works through a different mechanism from that of the benzodiazepines. Although buspirone was introduced as nonaddicting, this claim has been challenged by at least one team of researchers. Zolpidem has an admitted potential for abuse; however, research at this time suggests that its abuse potential is less than that of the benzodiazepine most commonly used as a hypnotic: triazolam. Researchers are actively discussing the potential benefits and liabilities of these new medications at this time.


Abuse of and Addiction to Amphetamines and CNS Stimulants

Introduction
The use of central nervous system (CNS) stimulants dates back several thousand years. There is historical evidence that gladiators in ancient Rome used CNS stimulants at least 2,000 years ago to help them overcome the effects of fatigue so that they could fight longer (Wadler, 1994). Not surprisingly, people still use chemicals that act as CNS stimulants to counter the effects of fatigue so they can work, or in times of conflict, fight longer. Currently, there are several different families of chemicals that might be classified as CNS stimulants, including cocaine, the amphetamines, amphetamine-like drugs such as Ritalin (methylphenidate), and ephedrine. The behavioral effects of these drugs are remarkably similar (Gawin & Ellinwood, 1988). For this reason, the amphetamine-like drugs will be discussed only briefly; the amphetamines will be reviewed in greater detail in this chapter. Cocaine will be discussed in the next chapter. However, because the CNS stimulants are controversial and the source of much confusion, this chapter will be subdivided into two sections. In the first, the medical uses of the CNS stimulants, their effects, and complications from their use will be discussed. In the second, the complications of CNS stimulant abuse will be explored.

I. THE CNS STIMULANTS AS USED IN MEDICAL PRACTICE

The Amphetamine-like Drugs

Ephedrine
Scientists have found ephedra plants at Neanderthal burial sites in Europe that are thought to be 60,000 years old (Karch, 2002). It is not known whether the plants were used for medicinal purposes in the Paleolithic era, but it is known that by five thousand years ago, Chinese physicians were using ephedra plants for medicinal purposes (Ross & Chappel, 1998). The active agent of these plants, ephedrine, was not isolated by chemists until 1897 (Mann, 1992), and it remained nothing more than a curiosity until 1930. Then a report appeared in a medical journal suggesting that ephedrine was useful in treating asthma (Karch, 2002), and it quickly became the treatment of choice for this condition. In the 1930s the intense demand for ephedrine soon raised concern that the demand might exceed the supply of plants. The importance of this fear will be discussed in the section “History of the Amphetamines” (below).

In the United States, ephedrine continued to be sold as an over-the-counter agent marketed as a treatment for asthma, sinus problems, and headaches as well as a “food supplement” used to assist weight-loss programs and as an aid to athletic performance. In February 2004 the Food and Drug Administration (FDA) issued a ban on the over-the-counter sale of ephedrine that took effect on 12 April 2004 (Neergaard, 2004). After that time, ephedrine could only be prescribed by a physician.

Medical uses of ephedrine. Ephedrine’s uses include the treatment of bronchial asthma and respiratory problems associated with bronchitis, emphysema, or chronic obstructive pulmonary disease (American Society of Health-System Pharmacists, 2002). Although ephedrine was once considered a valid treatment for nasal congestion, it is no longer used for this purpose after questions were raised as to its effectiveness. In hospitals it might also be used to control the symptoms of shock and in some surgical procedures where low blood pressure is a problem (Karch, 2002). Ephedrine might modify the cardiac rate; however, with the introduction


of newer, more effective medications, it is rarely used in cardiac emergencies now (American Society of Health-System Pharmacists, 2002). Ephedrine may, in some situations, be used as an adjunct to the treatment of myasthenia gravis (Shannon, Wilson, & Stang, 1995).

Pharmacology of ephedrine. In the human body, ephedrine’s primary effects are strongest in the peripheral regions of the body rather than in the central nervous system (CNS), and it is known that ephedrine stimulates the sympathetic nervous system in a manner similar to that of adrenaline (Laurence & Bennett, 1992; Mann, 1992). This makes sense, as ephedrine blocks the reuptake of norepinephrine at the receptor sites in the body. When used in the treatment of asthma, ephedrine improves pulmonary function by causing the smooth muscles surrounding the bronchial passages to relax (American Society of Health-System Pharmacists, 2002). It also binds at the alpha-2 receptor sites that modulate the constriction and dilation of blood vessels (Rothman et al., 2003). When blood vessels constrict, the blood pressure increases as the heart compensates for the increased resistance by pumping with more force.

Depending on the patient’s condition, ephedrine might be taken orally, injected, or smoked. This last route of administration was the preferred method of ephedrine abuse in the Philippines for many years, but the practice is gradually declining (Karch, 2002). Oral, intramuscular, or subcutaneous doses are completely absorbed. Peak blood levels from a single oral dose are achieved in about one hour (Drummer & Odell, 2001). Surprisingly, given the fact that it has been in use for more than three-quarters of a century, there is very little research into the way that ephedrine is distributed within the body. The serum half-life has been estimated at between 2.7 and 3.6 hours (Samenuk et al., 2002). The drug is eliminated from the body virtually unchanged, with only a small percentage being biotransformed before the remainder is eliminated by the kidneys. The exact percentage that is eliminated unchanged depends on how acidic the urine is, with a greater percentage being eliminated without biotransformation when the urine is more acidic (American Society of Health-System Pharmacists, 2002).
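Half-life figures such as the 2.7 to 3.6 hours cited above imply simple first-order elimination: the serum level falls by half with each half-life. As an illustrative sketch only (the half-life range comes from the passage; the 12-hour elapsed time is an arbitrary example, and this is not clinical guidance), the arithmetic can be expressed as:

```python
# Illustrative first-order elimination arithmetic -- not clinical guidance.
# The 2.7-3.6 hour half-life range for ephedrine is the one cited in the
# text (Samenuk et al., 2002); elapsed time is an arbitrary example.

def fraction_remaining(hours_elapsed: float, half_life_hours: float) -> float:
    """Fraction of an initial serum level left after hours_elapsed."""
    return 0.5 ** (hours_elapsed / half_life_hours)

if __name__ == "__main__":
    for t_half in (2.7, 3.6):
        left = fraction_remaining(12.0, t_half)
        print(f"half-life {t_half} h: {left:.1%} remains after 12 hours")
```

With a 2.7-hour half-life, under 5% of a dose remains in the serum after 12 hours; with a 3.6-hour half-life, roughly 10% remains, which is consistent with the drug being largely cleared within half a day.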


Tolerance to its bronchodilator action develops rapidly, and because of this, physicians recommend that ephedrine be used as a treatment for asthma for only short periods of time. The chronic use of ephedrine may contribute to cardiac or respiratory problems in the user, and for this reason the medication is recommended for only short-term use except under a physician’s supervision. As an over-the-counter diet aid, ephedrine appears to have a modest, short-term effect. Shekelle et al. (2003) found in their meta-analysis of the medical literature that ephedrine can help the user lose about 0.9 kilograms of weight over the short term. There is no information on its long-term effectiveness as an aid to weight loss, and there is no evidence to suggest that it is able to enhance athletic ability (Shekelle et al., 2003).

Side effects of ephedrine at normal dosage levels. The therapeutic index of ephedrine is quite small, which suggests that this chemical may cause toxic effects at relatively low doses. A meta-analysis of the efficacy and safety of ephedrine suggests that even users who take ephedrine at recommended doses are 200% to 300% more likely to experience psychiatric problems, autonomic nervous system problems, upper gastrointestinal irritation, and heart palpitations (Shekelle et al., 2003). Some of the side effects of ephedrine include anxiety, feelings of apprehension, insomnia, and urinary retention (Graedon & Graedon, 1991). The drug may also cause a throbbing headache, confusion, hallucinations, tremor, seizures, cardiac arrhythmias, stroke, euphoria, hypertension, coronary artery spasm, angina, intracranial hemorrhage, and death (American Society of Health-System Pharmacists, 2002; Cupp, 1999; Karch, 2002; Samenuk et al., 2002).

Complications of ephedrine use at above-normal dosage levels. When ephedrine is used at higher than normal dosage levels, the side effects can include those noted above as well as coronary artery vasoconstriction, myocardial infarction, cerebral vascular accidents (CVAs, or strokes), and death (Samenuk et al., 2002). Over-the-counter ephedrine use and abuse had been linked to at least 155 deaths and “dozens of heart attacks and strokes” by the time its sale was restricted in February 2004 (Neergaard, 2004, p. 3).

Medication interactions involving ephedrine. It is recommended that patients using ephedrine not use any of the tricyclic antidepressants, as these medications will


Chapter Eleven

add to the stimulant effect of the ephedrine (DeVane & Nemeroff, 2002). Patients using ephedrine should check with a physician or pharmacist before the concurrent use of different medications.

Ritalin (Methylphenidate)
Ritalin (methylphenidate) is a controversial pharmaceutical agent, frequently prescribed for children who have been diagnosed as having attention deficit hyperactivity disorder (ADHD) (Breggin, 1998; Sinha, 2001). Fully 90% of the methylphenidate produced is consumed in the United States (Breggin, 1998; Diller, 1998). Although there are many advocates for the use of methylphenidate in the control of ADHD, there are also critics of this practice. Indeed, there are serious questions about whether children are being turned into chemical “zombies” through the use of methylphenidate or similar agents in the name of behavioral control. Most certainly, the use of methylphenidate does not represent the best possible control of ADHD symptoms, as evidenced by the fact that about half of the prescriptions for this medication are never renewed (Breggin, 1998). Given the strident arguments for and against the use of methylphenidate, it is safe to say that this compound will remain quite controversial for many decades to come.

Medical uses of methylphenidate. Methylphenidate has been found to function as a CNS stimulant and is of value in the treatment of a rare neurological condition known as narcolepsy. In addition to its use with ADHD, it is also of occasional use as an adjunct to the treatment of depression (Fuller & Sajatovic, 1999).

Pharmacology of methylphenidate. Methylphenidate was originally developed by pharmaceutical companies looking for a nonaddicting substitute for the amphetamines (Diller, 1998). Chemically, it is a close cousin to the amphetamines, and some pharmacologists classify methylphenidate as a true amphetamine. In this text, it will be considered an amphetamine-like drug. When methylphenidate is used in the treatment of attention deficit hyperactivity disorder, patients will take between 15 and 90 mg of the drug per day, in divided doses (Wender, 1995). Oral doses of methylphenidate are rapidly absorbed from the gastrointestinal tract, and the drug is thought to be approximately half as potent as D-amphetamine (Wender, 1995). Peak blood levels are achieved in 1.9 hours following a single dose, although extended-release forms of the drug might not reach peak blood levels until 4–7 hours after the medication was ingested (Shannon et al., 1995). The half-life of methylphenidate is from 1 to 3 hours, and the effects of a single oral dose last for 3 to 6 hours. The effects of a single dose of an extended-release form of methylphenidate might continue for 8 hours. About 80% of a single oral dose is biotransformed to ritalinic acid in the intestinal tract, which is then excreted by the kidneys (Karch, 2002).

Within the brain, methylphenidate blocks, in a dose-dependent manner, the action of the molecular “transporter” system by which free dopamine molecules are absorbed back into the neuron. This allows the dopamine to remain in the synapse longer and thus enhances its effect (Volkow & Swanson, 2003; Volkow et al., 1998). At normal therapeutic doses, methylphenidate is able to block 50% or more of the dopamine transporters within 60 to 90 minutes of the time that the drug is administered (Volkow et al., 1998).

Side effects of methylphenidate. Even though methylphenidate is identified as the treatment of choice for ADHD, very little is known about the long-term effects of this medication, and most follow-up studies designed to identify side effects of methylphenidate have continued for only a few weeks (Schachter, Pham, King, Langford, & Moher, 2002; Sinha, 2001). Surprisingly, the long-term effectiveness and safety of methylphenidate as a treatment for ADHD have not been established (Breggin, 1998; Diller, 1998; Schachter et al., 2002). Researchers do know that even when it is used at therapeutic dosage levels, methylphenidate can cause anorexia, insomnia, weight loss, failure to gain weight, nausea, heart palpitations, angina, anxiety, liver problems, dry mouth, hypertension, headache, upset stomach, enuresis, skin rashes, dizziness, or exacerbation of the symptoms of Tourette’s syndrome (Fuller & Sajatovic, 1999). Other side effects of methylphenidate include stomach pain, blurred vision, leukopenia, possible cerebral hemorrhages, hypersensitivity reactions, anemia, and perseveration, a condition in which the individual continues to engage in the same task long after it ceases to be a useful activity (Breggin, 1998). Methylphenidate has been implicated as a cause of liver damage in some patients (Karch, 2002). It has the potential to lower the seizure threshold in patients with


a seizure disorder, and the manufacturer recommends that if the patient has a seizure, the drug should be discontinued immediately. There are reports suggesting the possibility of methylphenidate-induced damage to the tissue of the heart, a frightening possibility in light of the frequency with which it is prescribed to children (Henderson & Fischer, 1994). When used at recommended dosage levels, methylphenidate can, on rare occasions, cause a drug-induced psychosis (Breggin, 1998). There are reports that methylphenidate can cause a reduction in cerebral blood flow when used at therapeutic doses, an effect that may have long-term consequences for the individual taking this medication (Breggin, 1998). These findings suggest a need for further research into the long-term consequences of methylphenidate use or abuse.

Children who are taking methylphenidate at recommended dosage levels have experienced a “zombie” effect, in which the drug dampens personal initiative on the part of the user (Breggin, 1998). This seems to be a common effect of methylphenidate, even when it is used by normal individuals, although in students with ADHD this effect is claimed to be beneficial (Diller, 1998). The zombie effect reported by Breggin (1998) and Diller (1998) was challenged by Pliszka (1998), who cited research to support his conclusion. Thus, the question of whether methylphenidate causes such a state in children has yet to be determined.

On rare occasions, methylphenidate has been implicated in the development of a drug-induced depression that might reach the level of suicide attempts (Breggin, 1998). Further, a long-term follow-up study of 5,000 adolescents with ADHD who were treated with methylphenidate found that in adulthood those adolescents who had received methylphenidate were three times as likely to have abused cocaine as were those whose ADHD was treated by other methods (“Ritalin May Increase Risk,” 1998). However, the results of this study have been challenged (Stocker, 1999b), and the relationship between ADHD, pharmacological treatment of this disorder, and possible predisposition toward substance-use disorders has not been clearly identified.

Medication interactions involving methylphenidate. Individuals on methylphenidate should not use tricyclic antidepressants, as these medications can combine with the methylphenidate to cause potentially toxic blood levels of the antidepressant medications (DeVane


& Nemeroff, 2002). Patients should not use any of the MAOI family of antidepressants while taking methylphenidate because of possible toxicity (DeVane & Nemeroff, 2002). The mixture of methylphenidate and the selective serotonin reuptake inhibitor family of antidepressants has been identified as a cause of seizures and thus should be avoided (DeVane & Nemeroff, 2002). Patients who are using antihypertensive medications while taking methylphenidate may find that their blood pressure control is less than adequate, as the latter drug interferes with the effectiveness of the antihypertensives (DeVane & Nemeroff, 2002).

Challenges to the use of methylphenidate as a treatment for ADHD. A small but vocal group of clinicians has started to express concern about the use of methylphenidate as a treatment for ADHD (Breggin, 1998; Diller, 1998). Other researchers have noted that the long-term efficacy of methylphenidate in treating ADHD has never been demonstrated in the clinical literature (Schachter et al., 2002). Indeed, in spite of what is told to children or their parents by physicians, the professional literature is filled with research studies that failed to demonstrate any significant positive effect from methylphenidate on ADHD (Breggin, 1998). In contrast to this pattern of reports in the clinical literature, parents (and teachers) are assured that methylphenidate is the treatment of choice for ADHD, mainly because the “material on [methylphenidate’s] lack of efficacy, while readily available in the professional literature, is not presented to the public” (Breggin, 1998, p. 111).

Breggin (1999) is a strong critic of the diagnosis of attention deficit hyperactivity disorder (ADHD), and although many clinicians dismiss his comments as being too extreme, some of his observations appear to have merit. For example, although the long-term benefits of methylphenidate use have never been demonstrated, the American Medical Association supports the long-term use of this medication to control the manifestations of ADHD. Research has also demonstrated that the child’s ability to learn new material improves at a significantly lower dose of methylphenidate than is necessary to eliminate behaviors that are not accepted in the classroom (Pagliaro & Pagliaro, 1998). When the student is drugged to the point that these behaviors are eliminated or controlled, learning suffers, according to the authors. Further, two ongoing studies into the long-term effects of methylphenidate have found evidence of



a progressive deterioration in the student’s performance on standardized psychological tests, as compared to the performance of age-matched peers on these same tests (Sinha, 2001). These arguments present thought-provoking challenges to the current forms of pharmacological treatment of ADHD and suggest a need for further research in this area.

The Amphetamines
History of the amphetamines. Chemically, the amphetamines are analogs1 of ephedrine (Lit, Wiviott-Tishler, Wong, & Hyman, 1996). The amphetamines were first discovered in 1887, but it was not until 1927 that one of these compounds was found to have medicinal value (Kaplan & Sadock, 1996; Lingeman, 1974). Following the introduction of ephedrine for the treatment of asthma, questions began to be raised as to whether the demand for it might not exceed the supply. Pharmaceutical companies began to search for synthetic alternatives to ephedrine and found that the amphetamines had an effect similar to that of ephedrine on asthma patients. In 1932 an amphetamine product called Benzedrine was introduced for use in the treatment of asthma and rhinitis (Derlet & Heischober, 1990; Karch, 2002). The drug was contained in an inhaler similar to “smelling salts.” The ampule, which could be purchased over the counter, would be broken, releasing the concentrated amphetamine liquid into the surrounding cloth. The Benzedrine ampule would then be held under the nose and the fumes inhaled, much as “smelling salts” are, to reduce the symptoms of asthma.

It was not long, however, before it was discovered that the Benzedrine ampules could be unwrapped, carefully broken open, and the concentrated Benzedrine injected,2 causing effects similar to those of cocaine. The dangers of cocaine were well known to drug abusers and addicts of the era, but because the long-term effects of the amphetamines were not known, they were viewed as a “safe” substitute for cocaine. Shortly afterward, the world was plunged into World War II, and amphetamines were used by personnel in the American, British, German, and Japanese armed forces to counteract fatigue

1See Glossary and Chapter 35.
2Amphetamines are no longer sold over the counter without a prescription.

and heighten endurance (Brecher, 1972). United States Army Air Corps crew members stationed in England alone took an estimated 180 million Benzedrine pills during World War II (Lovett, 1994), whereas British troops consumed an additional 72 million doses (Walton, 2002) to help them function longer in combat. It is rumored that Adolf Hitler was addicted to amphetamines (Witkin, 1995).

It is possible to excuse the use of amphetamines during World War II or Operation Desert Storm as being necessary to meet the demands of the war. But for reasons that are not well understood, there were waves of amphetamine abuse in both Sweden and Japan immediately following World War II (Snyder, 1986). The amphetamines were frequently prescribed to patients in the United States in the 1950s and 1960s, and President John F. Kennedy is rumored to have used methamphetamine, another member of the amphetamine family, during his term in office in the early 1960s (Witkin, 1995). The amphetamines continued to gain popularity as drugs of abuse, and by 1970 their use had reached “epidemic proportions” (Kaplan & Sadock, 1996, p. 305) in the United States. Physicians would prescribe amphetamines for patients who wished to lose weight or who were depressed, whereas illicit amphetamine users would take the drug because it helped them to feel good. Many of the pills prescribed by physicians for patients were diverted to illicit markets, and there is no way of knowing how many of the 10 billion amphetamine tablets manufactured in the United States in 1970 were actually used as prescribed.

The amphetamines occupy a unique position in history, for medical historians now believe that the arrival of large amounts of amphetamines, especially methamphetamine, contributed to an outbreak of drug-related violence that ended San Francisco’s “summer of love” of 1967 (Smith, 1997, 2001). Amphetamine abusers had also discovered that when used at high dosage levels the amphetamines would cause agitation and could induce death from cardiovascular collapse. They had also found that these compounds could induce a severe depressive state that might reach suicidal proportions and might last for days or weeks after the drug was discontinued. By the mid-1970s amphetamine abusers had come to understand that chronic amphetamine use would dominate the



users’ lives, slowly killing them. In San Francisco, physicians at the Haight-Ashbury Free Clinic coined the slogan “speed kills” to warn the general public of the dangers of amphetamine abuse (Smith, 1997, 2001). By this time, physicians had discovered that the amphetamines were not as effective as once thought in the treatment of depressive states or obesity. This fact, plus the development of new medications for the treatment of depression, reduced the frequency with which physicians prescribed amphetamines. The amphetamines were classified as Schedule II substances by the U.S. government, which also limited their legitimate use. However, they continue to have a limited role in the control of human suffering. Further, although the dangers of amphetamine use are well known, during the Desert Storm campaign of 1991 some 65% of United States pilots in the combat theater admitted to having used an amphetamine compound at least once during combat operations (Emonson & Vanderbeek, 1995). Thus, the amphetamines have never entirely disappeared either from the illicit drug world or from the physician’s handbag.

Medical uses of the amphetamines. The amphetamines improve the action of the smooth muscles of the body (Hoffman & Lefkowitz, 1990) and thus have a potential for improving athletic performance at least to some degree. However, these effects are not uniform, and the overuse of the CNS stimulants can actually bring about a decrease in athletic abilities in some users. Regulatory agencies for different sports routinely test for evidence of amphetamine use among athletes. For these reasons, amphetamine abuse in this population is limited.

The amphetamines have an anorexic side effect,3 and at one time this side effect was thought to be useful in the treatment of obesity. Unfortunately, subsequent research has demonstrated that the amphetamines are only minimally effective as weight-control agents. Tolerance to the appetite-suppressing side effect of the amphetamines develops in only 4 weeks (Snyder, 1986). After users have become tolerant to the anorexic effect of amphetamines, it is not uncommon for them to regain the weight that they initially lost. Indeed, research has demonstrated that after a 6-month period, there is no significant difference

in the amount of weight lost between patients using amphetamines and patients who simply dieted to lose weight (Maxmen & Ward, 1995).

Prior to the 1970s the amphetamines were thought to be antidepressants and were widely prescribed for the treatment of depression. However, research revealed that the antidepressant effect of the amphetamines was short-lived at best. With the introduction of more effective antidepressant agents the amphetamines fell into disfavor and are now used only rarely as an adjunct to the treatment of depression (Potter, Rudorfer, & Goodwin, 1987). However, they are the treatment of choice for a rare neurological condition known as narcolepsy.4 Researchers believe that narcolepsy is caused by a chemical imbalance within the brain in which the neurotransmitter dopamine is not released in sufficient amounts to maintain wakefulness. By forcing the neurons in the brain to release their stores of dopamine, the amphetamines are thought to at least partially correct the dopamine imbalance that causes narcolepsy (Doghramji, 1989).

The first reported use of an amphetamine, Benzedrine, for the control of hyperactive children occurred in 1938 (Pliszka, 1998). Surprisingly, although the amphetamines are CNS stimulants, they appear to have a calming effect on individuals who have attention deficit hyperactivity disorder (ADHD). Research has revealed that the amphetamines are as effective as methylphenidate in controlling the symptoms of ADHD in about 50% of patients with this disorder and that 25% of the patients will experience better symptom control through the use of an amphetamine (Spencer et al., 2001). However, the use of amphetamines to treat ADHD is quite controversial. They are recognized as being of value in the control of ADHD symptoms by some, but research is needed into their long-term effects, and there are those who suggest that these medications may do more harm than good (Breggin, 1998; Spencer et al., 2001).



3See Glossary.
4See Glossary.

Pharmacology of the Amphetamines
The amphetamine family of chemicals consists of several different variations of the parent compound. Each of these variations yields a molecule that is similar to the others, except for minor variations in potency


and pharmacological characteristics. The most common form of amphetamine is dextroamphetamine (d-amphetamine sulfate), which is considered twice as potent as the other common form of amphetamine (Lingeman, 1974), methamphetamine (or d-desoxyephedrine hydrochloride). Because of its longer half-life and ability to cross the blood-brain barrier, methamphetamine seems to be preferred over dextroamphetamine by illicit amphetamine abusers (Albertson, Derlet, & Van Hoozen, 1999).

Methods of administration in medical practice. There are several methods by which physicians might administer an amphetamine to a patient. The drug molecule tends to be basic and when taken orally is easily absorbed through the lining of the small intestine (Laurence & Bennett, 1992). However, although the amphetamines have been used in medical practice for generations, very little is known about their absorption from the GI tract in humans beyond this fact (Jenkins & Cone, 1998). It is known that a single oral dose of amphetamine will begin to have an effect on the user in 20 (Siegel, 1991) to 30 minutes (Mirin, Weiss, & Greenfield, 1991). The amphetamine molecule is also easily absorbed into the body when injected either into muscle tissue or a vein. In the normal patient who has received a single oral dose of an amphetamine, peak plasma levels are achieved in 1 to 3 hours (Drummer & Odell, 2001). The biological half-life of the different forms of amphetamine varies as a result of their different chemical structures. For example, the biological half-life of a single oral dose of dextroamphetamine is between 10 and 34 hours, whereas that of a single oral dose of methamphetamine is only 4 to 5 hours (Derlet & Heischober, 1990; Fuller & Sajatovic, 1999; Physicians’ Desk Reference, 2004; Shannon et al., 1995). However, when injected, the half-life of methamphetamine can be as long as 12.2 hours (Karch, 2002).

The chemical structure of the basic amphetamine molecule is similar to that of norepinephrine and dopamine, and thus it might be classified as an agonist of these neurotransmitters (King & Ellinwood, 1997). The effects of amphetamines in the peripheral regions of the body are caused by its ability to stimulate norepinephrine release, whereas its CNS effects are the result of its impact on the dopamine-using regions of the brain (Lit et al., 1996). Once in the brain, the


amphetamine molecule is absorbed into those neurons that use dopamine as a neurotransmitter, where it both stimulates those neurons to release their dopamine stores and blocks the reuptake pump that normally would remove the dopamine from the synapse (Haney, 2004). The mesolimbic region of the brain is especially rich in dopamine-containing neurons and is thought to be part of the “pleasure center” of the brain. This fact seems to account for the ability of the amphetamines to cause a sense of euphoria in the user. Another region of the brain in which the amphetamines have an effect is the medulla (involved in the control of respiration), causing the individual to breathe more deeply and more rapidly. At normal dosage levels, the cortex is also stimulated, resulting in reduced feelings of fatigue and possibly increased concentration (Kaplan & Sadock, 1996).

There is considerable variation in the level of individual sensitivity to the effects of the amphetamines. The estimated lethal dose of amphetamines for a nontolerant individual is 20 to 25 mg per kg (Chan, Chen, Lee, & Deng, 1994); there is one clinical report of a case in which a dose of only 1.5 mg per kg proved to be lethal, and there are rare reports of toxic reactions to amphetamines at dosage levels as low as 2 mg (Hoffman & Lefkowitz, 1990). There are also case reports of amphetamine-naive individuals5 surviving a total single dose of 400–500 mg (or 7.5 mg/kg body weight for a 160-pound person). However, the patients who ingested these dosage levels required medical support to overcome the toxic effects of the amphetamines. Individuals who are tolerant to the effects of the amphetamines may use massive doses “without apparent ill effect” (Hoffman & Lefkowitz, 1990, p. 212).

A part of each dose of amphetamine will be biotransformed by the liver, but a significant percentage will be excreted from the body essentially unchanged. For example, under normal conditions 45% of a single dose of methamphetamine will be excreted by the body unchanged (Karch, 2002). During the process of amphetamine biotransformation, a number of metabolites are formed as the process progresses from one step to the next.
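The per-kilogram figures above can be translated into total amounts for a given body weight with simple arithmetic. The sketch below is illustrative only, not clinical guidance: the 20–25 mg/kg estimated lethal range is the figure cited from Chan et al. (1994), while the 160-pound body weight is just a worked example of the pounds-to-kilograms conversion.

```python
# Illustrative mg/kg arithmetic only -- not clinical or dosing guidance.
# The 20-25 mg/kg estimated lethal range is the figure cited in the text
# (Chan et al., 1994); the body weight is an arbitrary example.

LB_PER_KG = 2.2046  # pounds in one kilogram


def pounds_to_kg(pounds: float) -> float:
    """Convert a body weight in pounds to kilograms."""
    return pounds / LB_PER_KG


def total_mg(mg_per_kg: float, weight_kg: float) -> float:
    """Total amount (mg) corresponding to a per-kilogram figure."""
    return mg_per_kg * weight_kg


if __name__ == "__main__":
    weight = pounds_to_kg(160)  # about 72.6 kg
    low, high = total_mg(20, weight), total_mg(25, weight)
    print(f"160 lb is about {weight:.1f} kg")
    print(f"20-25 mg/kg corresponds to roughly {low:.0f}-{high:.0f} mg")
```

This kind of conversion also makes clear why per-kilogram figures, rather than absolute doses, are used in the toxicology literature: the same total dose represents a very different exposure for individuals of different body weights.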

5See Glossary.



The exact number of metabolites will vary, depending on the specific form of amphetamine being used. For example, during the process of methamphetamine biotransformation, seven different metabolites are formed at various stages in the process before the drug is finally eliminated from the body. The percentage of a single dose of amphetamine that is eliminated from the body unchanged might be increased to as much as 75% if the users were to take steps to acidify their blood (Karch, 2002). However, if the individual’s urine is extremely alkaline, perhaps as little as 5% of a dose of amphetamine will be filtered out of the blood by the kidneys and excreted unchanged, according to the author. This is because the drug molecules tend to be reabsorbed by the kidneys when the urine is more alkaline. Thus, the speed at which a dose of amphetamines is excreted from the body varies in response to how acidic the individual’s urine is at the time that the drug passes through the kidneys. At one point, physicians were trained to try to make a patient’s urine more acidic in order to speed up the excretion of the amphetamine molecules following an overdose. However, in recent years it has been found that this treatment method increases the chances that the patient will develop cardiac arrhythmias or seizures, and physicians are less likely to utilize urine acidification as a treatment method for amphetamine overdose than they were 30 years ago (Albertson et al., 1999; Carvey, 1998). Neuroadaptation/tolerance to amphetamines. The steady use of an amphetamine by a patient will result in an incomplete state of neuroadaptation. For example, when a physician prescribes an amphetamine to treat narcolepsy, it is possible for the patient to be maintained on the same dose for years without any loss of efficacy (Jaffe, 2000a). 
However, patients become tolerant to the anorexic effects of the amphetamines after only a few weeks, and the initial drug-induced state of well-being does not last beyond the first few doses when used at therapeutic dosage levels.

Interactions between the amphetamines and other medications. Patients who are taking amphetamines should avoid taking them with fruit juices or ascorbic acid, as these substances will decrease the absorption of the amphetamine dose (Maxmen & Ward, 1995). Patients should avoid mixing amphetamines with opiates, as these drugs will increase the anorexic and analgesic effects of narcotic analgesics. Further, patients who are taking the class of antidepressants known as monoamine oxidase inhibitors (MAOIs or MAO inhibitors) should avoid amphetamines, as the combination of amphetamines and MAOIs can result in dangerous elevations in the person's blood pressure (Barnhill, Ciraulo, Ciraulo, & Greene, 1995). You should always consult a physician or pharmacist before taking two or more medications at the same time to make sure that there is no danger of harmful interactions between the chemicals being used.

Subjective Experience of Amphetamine Use

The effects of the amphetamines on any given individual will depend upon that individual's mental state, the dosage level utilized, the relative potency of the specific form of amphetamine, and the manner in which the drug is used. The subjective effect of a single dose of amphetamines is to a large degree very similar to that seen with cocaine or adrenaline (Kaminski, 1992). However, there are some major differences: (a) the effects of cocaine might last from a few minutes to an hour at most, but the effects of the amphetamines last many hours; (b) unlike cocaine, the amphetamines are effective when used orally; and (c) unlike cocaine, the amphetamines have only a very small anesthetic effect (Ritz, 1999).

When used in medical practice, the usual oral dosage level is between 15 and 30 mg per day (Lingeman, 1974); however, this depends on the potency of the amphetamine or amphetamine-like drug being used (Julien, 1992). At low to moderate oral dosage levels, the individual will experience feelings of increased alertness, an elevation of mood, a sense of mild euphoria, less mental fatigue, and an improved level of concentration (Kaplan & Sadock, 1996). Like many drugs of abuse, the amphetamines will stimulate the "pleasure center" in the brain. Thus, both the amphetamines and cocaine produce "a neurochemical magnification of the pleasure experienced in most activities" (Gawin & Ellinwood, 1988, p. 1174) when initially used. The authors noted that the initial use of amphetamines or cocaine would "produce alertness and a sense of well-being . . . lower anxiety and social inhibitions, and heighten energy, self-esteem, and the emotions aroused by interpersonal experiences. Although they magnify pleasure, they do not distort it; hallucinations are usually absent" (p. 1174).


Chapter Eleven

Side Effects of Amphetamine Use at Normal Dosage Levels

Patients who are taking amphetamines under a physician's supervision may experience such side effects as dryness of the mouth, nausea, anorexia, headache, insomnia, and periods of confusion (Fawcett & Busch, 1995). The patient's systolic and diastolic blood pressure will both increase, and the heart rate may reflexively slow down. More than 10% of the patients who take an amphetamine as prescribed will experience an amphetamine-induced tachycardia (Breggin, 1998; Fuller & Sajatovic, 1999). Amphetamine use, even at therapeutic dosage levels, has been known to cause or exacerbate the symptoms of Tourette's syndrome in some patients (Breggin, 1998; Fuller & Sajatovic, 1999). Other potential side effects at normal dosage levels include dizziness, agitation, a feeling of apprehension, flushing, pallor, muscle pains, excessive sweating, and delirium (Fawcett & Busch, 1995). Rarely, a patient will experience a drug-induced psychotic reaction when taking an amphetamine at recommended dosage levels (Breggin, 1998; Fuller & Sajatovic, 1999).

Surprisingly, in light of the fact that the amphetamines are CNS stimulants, almost 40% of patients on amphetamines experience drug-induced feelings of depression, which might become so severe that the individual attempts suicide (Breggin, 1998). Feelings of depression and a sense of fatigue or lethargy, lasting for a few hours or days, are common when the patient discontinues the amphetamines.

II. CNS STIMULANT ABUSE

Scope of the Problem of CNS Stimulant Abuse and Addiction

Globally, amphetamines and amphetamine-like compounds are the second most commonly abused class of illicit chemicals (cannabis is the first), with an estimated 34 to 35 million abusers around the world (Rawson, Gonzales, & Brethen, 2002; United Nations, 2003). In the United States, methamphetamine abuse is most popular among younger individuals, with the peak age of amphetamine abuse being the early 20s (Albertson et al., 1999; United Nations, 2003).

Methamphetamine continues to be a popular drug of abuse, especially among intravenous stimulant abusers, and orally administered methamphetamine is also popular. It is estimated that about 800,000 people in the United States have abused some form of amphetamine at least once a month (Lemonick, Lafferty, Nash, & Park, 1997), and close to 5 million have abused methamphetamine at least once in their lives (Karch, 2002). Users typically use amphetamines manufactured in clandestine laboratories, the majority of which are in California. It is estimated that a single ounce of methamphetamine manufactured in an illicit laboratory can provide about 110 doses of the drug. Another major source of illicit amphetamines is Mexican drug dealers, who manufacture the drug in that country and then smuggle it into the United States (Lovett, 1994; Witkin, 1995).
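The ounce-to-doses estimate above implies a rough per-dose weight; the following back-of-the-envelope check is an inference from the cited figure, not a value stated in the source.

```python
# Rough arithmetic check of the "one ounce ≈ 110 doses" estimate cited
# in the text. The implied per-dose weight is an inference, not a cited figure.
OUNCE_MG = 28_349.5        # milligrams in one avoirdupois ounce
DOSES_PER_OUNCE = 110      # estimate cited in the text

mg_per_dose = OUNCE_MG / DOSES_PER_OUNCE
print(f"Implied dose size: ~{mg_per_dose:.0f} mg per dose")
```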

Effects of the CNS Stimulants When Abused

Ephedrine

The frequency of ephedrine abuse in the United States is not known (Karch, 2002), because ephedrine was once available over the counter without restriction as a treatment for asthma and nasal congestion, and researchers have no way to determine how many people were legitimate users as opposed to abusers. Historically, ephedrine was abused by cross-country truckers, college students, and others who wanted to ward off the effects of fatigue. It was also occasionally sold in combination with other herbs under the label of "herbal ecstasy" (Schwartz & Miller, 1997), or sold either alone or in combination with other chemicals as a nutritional supplement to enhance athletic performance or aid weight-loss programs (Solotaroff, 2002). In addition, ephedrine is used in the manufacture of illicit amphetamine compounds. All of these factors made it impossible to determine how much of the ephedrine produced in this country was being abused, or how much was being diverted to the manufacture of illicit drugs.

Effects of ephedrine when abused. Ephedrine is usually abused for its ability to stimulate the CNS. Alcohol abusers often will ingest ephedrine in order to continue to drink longer, using the ephedrine to counteract the sedative effects of the alcohol. At very high doses, ephedrine can cause the user to experience a sense of euphoria.

Methods of ephedrine abuse. The most common method of ephedrine abuse is for the user to ingest ephedrine pills purchased over the counter. On rare occasions, the pills will be crushed and the powder either "snorted" or, even more infrequently, injected. Ephedrine and its chemical cousin pseudoephedrine were also used in the illicit production of methamphetamine, a fact that may have contributed to the decision to outlaw the use of the former compound in 2004 (Office of National Drug Control Policy, 2004).

Consequences of ephedrine abuse. Ephedrine abuse produces effects that are essentially an exaggeration of the side effects of ephedrine seen at normal dosage levels. Although adverse effects are possible at very low doses, a rule of thumb is that the higher the dosage level being used, the more likely the user is to experience an adverse effect from ephedrine (Antonio, 1997). There is mixed evidence suggesting that ephedrine can contribute to cardiac dysfunctions, including arrhythmias, when used at high dosage levels (Karch, 2002). Theoretically, when used at high dosage levels ephedrine can increase the workload of the cardiac muscle and cause the muscle tissue to utilize higher levels of oxygen. This is potentially dangerous if the user should have some form of coronary artery disease. Other complications from ephedrine abuse might include necrosis (death) of the tissues of the intestinal tract, potentially fatal arrhythmias, urinary retention, irritation of heart muscle tissue (especially in patients with damaged hearts), nausea, vomiting, stroke, drug-induced psychosis, formation of ephedrine kidney stones in rare cases, and possibly death (American Society of Health-System Pharmacists, 2002; Antonio, 1997; Karch, 2002; Solotaroff, 2002).
Ritalin (Methylphenidate)

Effects of methylphenidate when abused. There have been no case reports of methylphenidate being produced in illegal laboratories, and thus it is logical to assume that illicit methylphenidate is obtained by the diversion of legitimate supplies (Karch, 2002). It is interesting to note that Volkow and Swanson (2003) believed that the clinical characteristics of methylphenidate when used as prescribed would constrain its abuse. The therapeutic use of methylphenidate was thought to cause slow, steady states of dopamine levels in the brain, which would mimic the tonic firing pattern of the cells that utilize dopamine in the brain, characteristics that would prohibit its abuse in the opinion of the authors. Unfortunately, methylphenidate abusers do not follow recommended dosing patterns. It is rare for orally administered methylphenidate to be abused at normal dosage levels (Volkow & Swanson, 2003). But when the individual ingests a larger than normal dose, the abuser will experience a sense of mild euphoria (Diller, 1998). Students have also been known to abuse methylphenidate in order to help them study late before an exam ("Tip Sheet," 2004). In other common methods of abuse, users crush methylphenidate tablets and either inhale the powder or inject it into a vein (Karch, 2002; Volkow & Swanson, 2003). The strongest effects of methylphenidate abuse are thought to be achieved when it is injected intravenously. In contrast to the effects of methylphenidate when used at therapeutic doses, intravenously administered doses are able to bring about the blockage of more than 50% of the dopamine transporter system within a matter of seconds, causing the user to feel "high" (Volkow & Swanson, 2003; Volkow et al., 1998).

Consequences of methylphenidate abuse. The consequences of methylphenidate abuse are similar to those seen when its chemical cousins, the amphetamines, are abused. Even when used according to a physician's instructions, methylphenidate will occasionally trigger a toxic psychosis in the patient (Karch, 2002). Most certainly, when methylphenidate is abused it may trigger a toxic psychosis that is similar to paranoid schizophrenia. Unlike amphetamine abusers, methylphenidate abusers only rarely suffer CVAs, and cardiac problems associated with methylphenidate abuse are comparatively rare (Karch, 2002).
When drug abusers crush methylphenidate tablets and then mix the resulting powder with water for intravenous use (Volkow et al., 1998), "fillers" in the tablet are injected directly into the circulation. These fillers are used to give the tablet bulk and form, and when the medication is used according to instructions they pass harmlessly through the digestive tract. When a tablet is crushed and injected, however, these fillers gain admission to the bloodstream and may accumulate in the retina of the eye, causing damage to that tissue (Karch, 2002).

The Amphetamines

Effects of the amphetamines when abused. Scientists are only now starting to understand how an amphetamine such as methamphetamine affects the brain (Rawson et al., 2002). It is known that when the amphetamines are abused, the effects will vary, depending on the specific compound being abused and the route by which it was administered. At low doses, such as those achieved through a single oral dose of amphetamines, the user experiences a sense of well-being, energy, and gentle euphoria. Some abusers claim that the amphetamines function as an aphrodisiac; however, there is little scientific evidence to support this claim.

When methamphetamine is either injected into a vein or smoked, users experience an intense sense of euphoria, which has been called a "rush" or a "flash." This sensation was described as "instant euphoria" by the author Truman Capote (quoted in Siegel, 1991, p. 72). Other users have compared the "flash" to sexual orgasm. Researchers have not studied the "rush" in depth, but it appears to last for only a short period of time and is limited to the initial period of amphetamine abuse (Jaffe, 2000a). Subsequent doses of amphetamine do not bring about the same intense euphoria seen with the first dose, and following the initial rush, the user may experience a warm glow or gentle euphoria that may last for several hours.

The chronic use of amphetamines at high dosage levels has been implicated as the cause of violent outbursts, possibly resulting in the death of bystanders (King & Ellinwood, 1997). Animal research suggests that following periods of chronic abuse at high dosage levels, norepinephrine levels are depleted throughout the brain, and the brain's norepinephrine might not return to normal even after 6 months of abstinence (King & Ellinwood, 1997).
The effects of chronic amphetamine abuse on dopamine levels in the brain appear more limited to the region known as the caudate putamen; however, as with the norepinephrine economy within the brain, animal research suggests that the dopamine levels in the caudate putamen might not return to normal even after 6 months of abstinence (King & Ellinwood, 1997). Animal research also suggests that the chronic use of amphetamines at high dosage levels might be neurotoxic, possibly through amphetamine-induced release of large amounts of the neurotransmitter glutamate (Batki, 2001; Haney, 2004; King & Ellinwood, 1997). Finally, although the mechanism by which this is accomplished is not clear, there have been documented changes in the vasculature of the brain in chronic amphetamine abusers, and researchers do not know whether these changes are permanent (Breggin, 1998).

Scope of amphetamine abuse. Globally, the abuse of amphetamine or amphetamine-like compounds is estimated to be a $65 billion a year industry (United Nations, 2003). There are regional variations in the pattern of CNS stimulant abuse around the globe, but in the United States methamphetamine is the most commonly abused amphetamine compound (United Nations, 2003). The abuse of amphetamines, especially methamphetamine, increased dramatically in the last years of the 20th century and the first few years of the 21st century (Milne, 2003). Information on how to manufacture methamphetamine is available on the Internet, and there is evidence to suggest that organized crime cartels have started to manufacture and distribute methamphetamine in large quantities (Milne, 2003; United Nations, 2003). Unfortunately, there appear to be about as many formulas for producing methamphetamine as there are "chemists" who try to produce it, a matter that makes understanding the toxicology of illicit forms of methamphetamine quite difficult.

One measure of the popularity of amphetamines is seen in the increase in illegal laboratories manufacturing this substance that have been uncovered by law enforcement officials in the past few years. Most illicit labs are "mom and pop" operations that produce relatively small amounts of amphetamine (usually methamphetamine) for local consumption, although a few "superlabs" have also been discovered by law enforcement officials (United Nations, 2003).
It is estimated that about 410 tons of amphetamine compounds (usually methamphetamine) are produced annually by such illicit laboratories around the globe (United Nations, 2003). The phenomenal growth of amphetamine abuse might be seen in the fact that in Iowa, only two small amphetamine production laboratories were uncovered in 1994; by 1999, there were 803 (Milne, 2003).



One method of methamphetamine production is known as "Nazi Meth," for the Nazi symbols that decorated the paper on which the formula was written when it was discovered by police officials ("Nazi Meth Is on the Rise," 2003). This method does not rely on the use of red phosphorus but uses compounds easily obtained from lithium batteries, ammonia, and other sources ("Nazi Meth Is on the Rise," 2003). A $200 investment in the required materials will yield methamphetamine that might sell for $2,500 on the street, although there is a danger that some of the contaminants contained in the compound might prove toxic to the user (apparently a matter of little concern to the abuser).

Methods of amphetamine abuse. The amphetamines are well absorbed when taken orally or when injected into muscle tissue or a vein. The amphetamine molecule is also easily absorbed through the tissues of the nasopharynx, and thus amphetamine powder might be snorted. When smoked, the amphetamines are absorbed into the circulation through the lining of the lungs and reach the brain in just a matter of seconds; illicit drug chemists developed a smokable form of methamphetamine in the 1950s that is sold under the name of "Ice." In the United States, methamphetamine is commonly abused through the ingestion of tablets by mouth, by smoking, or by intravenous injection (Karch, 2002).

Subjective effects of amphetamine abuse. Because the amphetamines have a reputation for enhancing normal body functions (alertness, concentration, etc.), they are viewed as being less dangerous than other illicit compounds (United Nations, 2003). The subjective effects of the amphetamines are dependent upon (a) whether tolerance to the drug has developed, and (b) the method by which the drug was used.
Amphetamine abusers who are not tolerant to the drug's effects and who use oral forms of the drug, or who snort it, report experiencing a sense of euphoria that may last for several hours. Individuals who are not tolerant to the drug's effects and who inject amphetamines report an intense feeling of euphoria, followed by a less intense feeling of well-being that might last for several hours. It has been reported that the "high" produced by methamphetamine might last 8 to 24 hours, a feature of this drug that seems to make it more addictive than cocaine (Castro, Barrington, Walton, & Rawson, 2000).

Tolerance to the amphetamines. Amphetamine abusers quickly become tolerant to some of the euphoric effects of the drug (Haney, 2004). In an attempt to recapture the initial drug-induced euphoria, amphetamine abusers try to overcome their tolerance to the drug in one of three ways. First, amphetamine abusers will try to limit their exposure to the drug to isolated periods of time, allowing their bodies to return to normal before the next exposure to an amphetamine. The development of tolerance requires constant exposure to the compound; otherwise, the neuroadaptive changes that cause tolerance are reversed and the body returns to a normal state. Some individuals are able to abuse amphetamines for years by following a pattern of intermittent abuse followed by periods of abstinence (possibly by switching to other compounds that are then abused). Second, amphetamine abusers may attempt to recapture the initial drug-induced feeling of euphoria by embarking on a cycle of using higher and higher doses of amphetamine each time the drug is used, in an attempt to overcome tolerance to these chemicals (Peluso & Peluso, 1988). Finally, other abusers "graduate" from oral or intranasal methods of amphetamine abuse to intravenous injections to provide a more concentrated dose. When this fails to provide abusers with sufficient pleasure, they might embark on a "speed run," injecting some more amphetamine every few minutes to try to overcome their tolerance to the drugs. Some amphetamine addicts might inject a cumulative dose of 5,000 to 15,000 mg in a 24-hour time span while on a "speed run" (Chan et al., 1994; Derlet & Heischober, 1990). Such dosage levels would be fatal to the "naive" (inexperienced) drug user and are well within the dosage range found to be neurotoxic in animal studies. Speed runs might last for hours or days and are a sign that the individual has progressed from amphetamine abuse to addiction to these compounds.

Consequences of Amphetamine Abuse

There is a wide variation in what might be considered a "toxic" dose of amphetamines (Julien, 1992). However, a general rule is that the higher the concentration of amphetamines in the blood, the more likely the individual is to experience one or more adverse effects.


Whereas adverse effects of an amphetamine dose are rarely encountered when the drugs are used at therapeutic doses under the supervision of a physician, abusers are more likely to experience one or more amphetamine-induced side effects as their dosage level increases to overcome their tolerance to the drug.

Central nervous system. Researchers have discovered that amphetamine abuse can cause damage at both the cellular and the regional level of the brain. At the cellular level, up to 50% of the dopamine-producing cells in the brain might be damaged after prolonged exposure to even low levels of methamphetamine (Leshner, 2001a, b). This methamphetamine-induced neurological damage might be even more widespread than just the dopamine-producing neurons. For example, Thompson et al. (2004) utilized high-resolution magnetic resonance imaging (MRI) studies to find significant reductions in gray matter in the brains of methamphetamine addicts as compared to normal subjects. In addition, methamphetamine seems to be especially toxic to serotonin-producing neurons (Jaffe, 2000a; King & Ellinwood, 1997). There is evidence that methamphetamine-induced cellular damage might reflect the release of large amounts of glutamate within the brain. Eventually, the large levels of glutamate become toxic to the neurons, causing neuronal damage or even death (Fischman & Haney, 1999). Animal research suggests that methamphetamine-induced brain damage on the cellular level might persist for more than 3 years (Fischman & Haney, 1999).

It is thought that amphetamine-induced regional brain damage is caused by the ability of these compounds to bring about both temporary and permanent changes in cerebral blood flow patterns. Some of the more dangerous temporary changes in cerebral blood flow caused by amphetamine abuse include the development of hypertensive episodes, cerebral vasculitis, and vasospasm in the blood vessels in the brain.
All of these amphetamine-induced changes in cerebral blood flow can result in a cerebrovascular accident (CVA, or stroke), which may or may not be fatal (Albertson et al., 1999; Brust, 1997; King & Ellinwood, 1997). Further, reductions in cerebral blood flow were found in 76% of amphetamine abusers, changes that could persist for years after the individual had discontinued the use of these drugs (Buffenstein, Heaster, & Ko, 1999). Chronic amphetamine abusers might experience sleep disturbances for up to 4 weeks after their last use of the drug (Satel, Kosten, Schuckit, & Fischman, 1993). The authors also cited evidence that chronic amphetamine users might have abnormal EEG tracings (a measure of the electrical activity in the brain) for up to 3 months after their last drug use. Another very rare complication of amphetamine use or abuse is the development of the neurological condition known as the serotonin syndrome (Mills, 1995).6

Consequences of amphetamine abuse on the person's emotions. Researchers have also found that the effects of chronic amphetamine abuse on the individual's emotions might last for an extended period of time after the last actual drug use. The amphetamines are capable of causing both new and chronic users to experience increased anxiety levels (Satel et al., 1993). Indeed, up to 75% of amphetamine abusers report having experienced significant degrees of anxiety when they started using amphetamines (Breggin, 1998). Amphetamine-related anxiety episodes might reach the proportions of actual panic attacks, which have been known to persist for months or even years after the last actual use of amphetamines (Satel et al., 1993). That amphetamine abuse should cause such effects is not surprising in light of the research by London et al. (2004). The authors utilized radioactive atoms and positron emission tomography (PET scan) technology to measure the activity level of various regions of the brain in abstinent methamphetamine abusers. They found that chronic methamphetamine abuse alters the metabolism of brain structures thought to be involved in the generation of anxiety and depression, helping researchers to better understand that these conditions can indeed be methamphetamine-induced effects.
It is not uncommon for illicit amphetamine users to try to counteract the drug-induced anxiety and tension through the use of agents such as alcohol, marijuana, or benzodiazepines. For example, Peluso and Peluso (1988) estimated that half of all regular amphetamine users may also be classified as heavy drinkers. These individuals attempt to control the side effects of the amphetamines through the use of CNS depressants such as alcohol.7 Amphetamine users also might experience periods of drug-induced confusion, irritability, fear, suspicion, drug-induced hallucinations, and a drug-induced delusional state (King & Ellinwood, 1997; Julien, 1992). Other possible consequences of amphetamine abuse include assaultiveness, tremor, headache, irritability, weakness, insomnia, panic states, and suicidal and homicidal tendencies (Albertson et al., 1999; Derlet & Heischober, 1990). Physicians have found that the compounds haloperidol and diazepam are effective in helping the individual calm down from amphetamine-induced agitation (Albertson et al., 1999).

All amphetamine compounds are capable of inducing a toxic psychosis, although evidence suggests that methamphetamine is more likely to be involved in a drug-induced psychotic episode than other forms of amphetamine (Batki, 2001), because it is easier to achieve chronic high levels with methamphetamine than with other CNS stimulants (Kosten & Sofuoglu, 2004). Using PET scan data, Sekine et al. (2001) were able to document long-lasting reductions in the number of dopamine transporter sites in methamphetamine abusers. The authors suggested that this reduction might be associated with the onset of the methamphetamine-induced psychosis in users who develop this complication of methamphetamine abuse. In its early stages, this drug-induced psychosis is often indistinguishable from schizophrenia and might include such symptoms as confusion, suspiciousness, paranoia, auditory and visual hallucinations, delusional thinking (including delusions of being persecuted), anxiety, and periods of aggression (Beebe & Walley, 1995; Kaplan & Sadock, 1996; King & Ellinwood, 1997; United Nations, 2003). Less common symptoms of an amphetamine-induced psychotic episode include psychomotor retardation, incoherent speech, inappropriate or flattened affect, and depression (Srisurapanont, Marsden, Sunga, Wada, & Monterio, 2003).

6See Glossary.
The ability of the amphetamines to induce a psychotic state is reflected in the fact that 46% of amphetamine abusers reported hallucinations and 52% experienced significant degrees of paranoia when they first began to abuse amphetamines (Breggin, 1998). But where Kaplan and Sadock (1996) suggested that amphetamine-induced hallucinations tend to be mainly visual, which is not typical of a true schizophrenic condition, Srisurapanont et al. (2003) suggested that auditory hallucinations were more common in the amphetamine-induced psychosis. Under normal conditions, this drug-induced psychosis clears up within days to weeks after the drug is discontinued (Haney, 2004). However, in some cases, this drug-induced psychosis may continue for several months (Karch, 2002). Researchers in Japan following World War II noted that in 15% of the cases of amphetamine-induced psychosis, it took up to 5 years following the last amphetamine use before the drug-induced psychotic condition eased (Flaum & Schultz, 1996). For reasons that are not well understood, occasionally the amphetamine-induced psychosis does not remit, and the individual develops a chronic psychosis. It was once thought that the amphetamine-induced psychosis reflected the activation of a latent schizophrenia in a person who was vulnerable to this condition. Chen et al. (2003) assessed 445 amphetamine abusers in Taipei (Taiwan) and found a tendency for those individuals who subsequently developed a methamphetamine-induced psychosis to have been younger at the time of their first drug use, to have used larger amounts of methamphetamine, and to have premorbid schizoid or schizotypal personalities. Further, the authors found a positive relationship between the degree of personality dysfunction and the duration of the methamphetamine-induced psychotic reaction.

Prolonged use of the amphetamines may also result in people's experiencing a condition known as formication. Victims have been known to scratch or burn the skin in an attempt to rid themselves of what they believe are unseen bugs. Also, following prolonged periods of amphetamine abuse, many individuals become fatigued or depressed.

7The reverse is also true: Alcohol abusers may ingest an amphetamine, or other CNS stimulant, in an attempt to counteract the sedation that results from heavy drinking.
It is not uncommon for the individual's depression to reach suicidal proportions (Fawcett & Busch, 1995). The post-amphetamine depressive reaction can last for extended periods of time, possibly for months following cessation of amphetamine use.

The digestive system. Amphetamine abuse has been identified as causing such digestive system problems as diarrhea or constipation, nausea, and vomiting (Albertson et al., 1999; Derlet & Heischober, 1990). There have been a few reports of liver damage associated with amphetamine abuse (Jones, Jarvie, McDermid, & Proudfoot, 1994); however, the exact mechanisms by which illicit amphetamines are able to cause damage to the liver are still not clear. The consequences of prolonged amphetamine use, like those of cocaine, include the various complications seen in people who have neglected their dietary requirements. Vitamin deficiencies are a common consequence of chronic amphetamine abuse (Gold & Verebey, 1984). Prolonged use of the amphetamines may result in the user vomiting, becoming anorexic, or developing diarrhea (Kaplan & Sadock, 1996).

The cardiovascular system. Overall, the amphetamines appear to have less potential for causing cardiovascular damage than does cocaine (Karch, 2002). However, this does not mean that amphetamine abuse does not carry some risk of cardiovascular damage. In spite of its lower potential for cardiovascular damage, amphetamine abuse has been implicated as causing accelerated development of plaque in the coronary arteries, thus contributing to the development of coronary artery disease (CAD) in users (Karch, 2002). Amphetamine abuse can also result in hypertensive reactions, tachycardia, arrhythmias, and sudden cardiac death, especially when the drugs are used at high dosage levels (Karch, 2002; Wender, 1995). Amphetamine abusers have been known to suffer a number of serious, potentially fatal cardiac problems, including chest pain (angina), atrial and ventricular arrhythmias, congestive heart failure (Derlet & Horowitz, 1995), myocardial ischemia (Derlet & Heischober, 1990), cardiomyopathy (Brent, 1995; Fawcett & Busch, 1995), and myocardial infarction (Fawcett & Busch, 1995; Karch, 2002). The mechanism of an amphetamine-induced myocardial infarction is similar to that seen in cocaine-induced myocardial infarctions (Wijetunga, Bhan, Lindsay, & Karch, 2004).
The amphetamines appear to induce a series of spasms in the coronary arteries at the same time that the heart’s workload is increased by the drug’s effects on the rest of the body (Hong, Matsuyama, & Nur, 1991). Amphetamine abuse has also been identified as causing rhabdomyolysis in some users, although the exact mechanism by which the amphetamines might cause this disorder remains unclear (Richards, 2000).

Chapter Eleven

Amphetamine abuse can also result in impotence in the male (Albertson et al., 1999; Derlet & Heischober, 1990).

The pulmonary system. There has been very little research into the impact of amphetamine abuse on lung function (Albertson, Walby, & Derlet, 1995). Because smoking is a common method by which amphetamine is abused, it is reasonable to expect that the side effects of smoked amphetamine would be similar to those seen when the user smokes cocaine. Thus, amphetamine abuse might result in sinusitis, pulmonary infiltrates, pulmonary edema, exacerbation of asthma, pulmonary hypertension, and pulmonary hemorrhage or infarct (Albertson et al., 1995).

Other consequences of amphetamine abuse. One unintended consequence of any form of amphetamine abuse is that the amphetamine might interact with surgical anesthetics if the abuser should be injured and require emergency surgery (Klein & Kramer, 2004). Further, there is evidence that amphetamine use or abuse might exacerbate some medical disorders such as Tourette’s syndrome or tardive dyskinesia (Lopez & Jeste, 1997). Amphetamine abuse has also been implicated as a cause of sexual performance problems for both men and women (Finger, Lund, & Slagel, 1997). According to these authors, high doses or the chronic use of amphetamines can inhibit orgasm in the user, as well as delay or inhibit ejaculation in men. Finally, the practice of smoking methamphetamine has resulted in the formation of ulcers on the cornea of the eyes of some users (Chuck, Williams, Goldberg, & Lubniewski, 1996).

The addictive potential of amphetamines. At present, there is no test that will identify those who are most at risk for amphetamine addiction, and if only for this reason, the abuse of these chemicals is not recommended. Most amphetamine abusers do not become addicted, but some do become either emotionally or physically dependent on amphetamines (Gawin & Ellinwood, 1988).
When abused, these compounds stimulate the brain’s “reward system,” possibly with greater effect than natural reinforcers such as food or sex (Haney, 2004). This effect, in turn, helps create “vivid, long-term memories” (Gawin & Ellinwood, 1988, p. 1175) of the drug experience for the user. These memories

Abuse of and Addiction to Amphetamines and CNS Stimulants

help sensitize the individual to drug-use cues, causing the abuser to crave the drug when exposed to them.

“Ice”

In the late 1970s a smokable form of methamphetamine called “Ice” was introduced to the United States mainland (The Economist, 1989). Although it differs in appearance from methamphetamine tablets, on a molecular level it is simply methamphetamine (Wijetunga et al., 2004). Historical evidence suggests that this form of methamphetamine was brought to Hawaii from Japan by U.S. Army troops following World War II, and its use remained endemic to Hawaii for many years (“Drug Problems in Perspective,” 1990). Smoking methamphetamine is also endemic in Asia, where it is known as “shabu” (United Nations, 2003). The practice has slowly spread across the United States, but by 2003 only 3.9% of the high school seniors surveyed admitted to having used Ice at least once (Johnston, O’Malley, & Bachman, 2003a).

How Ice is used. Ice is a colorless, odorless form of concentrated crystal methamphetamine that resembles a chip of ice or clear rock candy. Some samples of Ice sold on the street have been 98%–100% pure methamphetamine (Kaminski, 1992). Although injection or inhalation of methamphetamine is common, smoking Ice is also quite popular in some regions of the United States (Kaminski, 1992; Karch, 2002). Ice is smoked much like crack cocaine, crossing into the blood through the lungs and reaching the brain in a matter of seconds.

Subjective effects of Ice abuse. In contrast to cocaine, which induces a sense of euphoria that lasts perhaps 20 minutes, the high from Ice lasts significantly longer. Estimates of the duration of Ice’s effects range from 8 hours (“Raw Data,” 1990) to 12 hours (“New Drug ‘Ice,’” 1989; “Drug Problems in Perspective,” 1990), 14 hours (“Ice Overdose,” 1989), 18 hours (McEnroe, 1990), and even 24 hours (Evanko, 1991). Kaminski (1992) suggested that the effects of Ice might last as long as 30 hours.
The long duration of its effect, while obviously in some dispute, is consistent with the pharmacological properties of the amphetamines as compared with those of cocaine. The stimulant effects of the amphetamines in general last for hours, whereas cocaine’s stimulant effects usually last for a shorter period of time.


The effects of Ice. In addition to the physical effects of the amphetamines, which were reviewed earlier in this chapter, users have found that Ice has several advantages over crack cocaine. First, although it is more expensive than crack, dose for dose Ice is actually cheaper than crack. Second, because of its duration of effect, it seems to be more potent than crack. Third, since Ice melts at a lower temperature than crack, it does not require as much heat to use; this means that Ice may be smoked without the elaborate equipment needed for crack smoking. Fourth, because it is odorless, Ice may be smoked in public without any characteristic smell alerting passersby that it is being used. Finally, if the user decides to stop smoking it for a moment or two, it will cool and re-form as a crystal. This makes Ice highly transportable and offers an advantage over crack cocaine in that the user can consume only part of a piece of the drug at any given time, rather than having to use it all at once.

Complications of Ice abuse. Essentially, the complications of Ice abuse are the same as those seen with other forms of amphetamine abuse. This is understandable, since Ice is simply a different form of methamphetamine than the powder or pills sold on the street for oral or intravenous use. However, in contrast to the dosage level achieved when methamphetamine is used by a patient under a physician’s care, the typical amount of methamphetamine delivered into the body when the user smokes Ice is between 150 and 1,000 times the maximum recommended therapeutic dosage (Hong et al., 1991). At such high dosage levels, abusers commonly experience one or more adverse effects from the drug. In addition to the adverse effects of amphetamine abuse, which are also experienced by Ice users, there are many problems specifically associated with the use of Ice.
Methamphetamine is a vasoconstrictor, which might be why some Ice users develop potentially dangerous elevations in body temperature (Beebe & Walley, 1995). When the body temperature rises above 104°F, the prognosis for recovery is quite poor. There have also been reports that women who received anesthesia in preparation for caesarean sections suffered cardiovascular collapse because of the interaction between the anesthetic and Ice. Methamphetamine abuse has been known to cause kidney and lung damage as well as permanent damage
to the structure of the brain itself, pulmonary edema, vascular spasm, cardiomyopathy, drug-induced psychotic reactions, acute myocardial infarctions (i.e., “heart attacks”), and cerebral arteritis (Albertson et al., 1995; Hong et al., 1991; Wijetunga et al., 2004). As these findings suggest, Ice is hardly safe.

“Kat”

In the late 1990s it appeared that methcathinone, or “Kat” (sometimes spelled “Cat,” “qat,” “Khat,” or “miraa”), might become a popular drug of abuse in the United States. Kat is found naturally in several species of evergreen plants native to east Africa and southern Arabia (Community Anti-Drug Coalitions of America, 1997; Goldstone, 1993; Monroe, 1994). The plant grows to between 10 and 20 feet in height, and its leaves produce the alkaloids cathinone and cathine; Kat leaves thus contain norephedrine and cathinone, the latter of which is biotransformed into norephedrine by the body. Illicit producers later developed an analog of cathinone, known as methcathinone, which has a chemical structure similar to that of the amphetamines and ephedrine (Karch, 2002).

The legal status of Kat. Kat was classified as a Category I controlled substance in 1992 (see Appendix 4), and because of this classification the manufacture or distribution of this drug is illegal (Monroe, 1994).

How Kat is produced. Methcathinone is easily synthesized by illicit laboratories, using ephedrine and such compounds as drain cleaner, epsom salts, battery acid, acetone, toluene, various dyes, and hydrochloric acid to alter the basic ephedrine molecule. The basic components from which it is produced are all legally available in the United States (Monroe, 1994). These chemicals are mixed in such a way as to add an oxygen atom to the original ephedrine molecule (“Other AAFS Highlights,” 1995c), producing a compound with the chemical structure 2-methylamino-1-phenylpropan-1-one.

The scope of Kat use.
After its introduction into the United States, Kat could be purchased in virtually any major city in this country by the mid 1990s (Finkelstein, 1997). However, by the start of the 21st century, methcathinone had virtually disappeared from the drug scene, except among sub-Saharan immigrants who continue the practice of chewing khat leaves even after arriving in the United States (Karch, 2002; “Khat Calls,” 2004).

The effects of Kat. Users typically either inhale or smoke Kat, although it can be injected (Monroe, 1994). The drug’s effects are similar to those of the amphetamines. Users report that the drug can cause a sense of euphoria (Community Anti-Drug Coalitions of America, 1997) as well as a more intense “high” than does cocaine (“’Cat’ Poses National Threat,” 1993). In contrast to cocaine, the effects of Kat can last from 24 hours (Community Anti-Drug Coalitions of America, 1997) up to 6 days (Goldstone, 1993; Monroe, 1994). Once in the body, Kat is biotransformed into ephedrine (“Other AAFS Highlights,” 1995). Thus, its effects on the user are very similar to those seen with the chronic use of ephedrine at high dosage levels. Following the period of drug use, it is not uncommon for Kat users to fall into a deep sleep that might last for as long as several days (Monroe, 1994). Chronic users have also reported experiencing periods of depression following the use of Kat (“’Cat’ Poses National Threat,” 1993).

Adverse effects of Kat abuse. Because this is a relatively new drug, much remains to be discovered about the effects of Kat on the user. To date, some of the reported adverse effects include drug-induced psychotic reactions; agitation; hyperactivity; a strong, offensive body odor; sores in the mouth and on the tongue; and depression. Death has been known to occur as a result of Kat use, although the exact mechanism of death has not been identified. Monroe (1994) suggested that Kat users are at increased risk for heart attack or stroke. Brent (1995) suggested that an overdose of Kat produces many of the same effects, and responds to the same treatment, as does an overdose of amphetamine.
At this time, Kat seems to be of less interest to the casual user of chemicals than to the occasional hardcore stimulant abuser (O’Brien, 2001).

Summary

Although they had been discovered in the 1880s, the amphetamines were first introduced as a treatment for asthma some 50 years later, in the 1930s. The early forms of amphetamine were sold over the counter in cloth-covered ampules that were used in much the same way as smelling salts are today. Within a short time, however, it was discovered that the ampules were a source of concentrated amphetamine, which could be injected. The resulting “high” was found to be similar to that of cocaine, which had gained a reputation as being a dangerous drug to use, but with the added “benefit” of lasting much longer.

The amphetamines were used extensively both during and after World War II. Following the war, American physicians prescribed amphetamines for the treatment of depression and as an aid for weight loss. By the year 1970, amphetamines accounted for 8% of all prescriptions written. Since then, however, physicians have come to understand that the amphetamines present a serious potential for abuse. The amphetamines
have come under increasingly strict controls, which limit the amount of amphetamine manufactured and the reasons an amphetamine might be prescribed. Unfortunately, the amphetamines are easily manufactured, and there has always been an underground manufacture and distribution system for these drugs. In the late 1970s and early 1980s street drug users drifted away from the amphetamines to the supposedly safe stimulant of the early 1900s: cocaine. In the late 1990s, the pendulum began to swing the other way, and illicit drug users began to use the amphetamines, especially methamphetamine, more and more frequently. This new generation of amphetamine addicts has not learned the dangers of amphetamine abuse so painfully discovered by amphetamine users of the late 1960s: “Speed” kills.

Introduction

Historically, the United States experienced a resurgence of interest in and abuse of cocaine in the early to mid 1980s. This wave of cocaine abuse peaked around 1986 and gradually declined in the mid to late 1990s; by the early years of the 21st century cocaine abuse levels in the United States were significantly lower than those seen 15 years earlier. However, cocaine abuse never entirely disappeared, and it remains a serious problem in this country. In this chapter, cocaine abuse and addiction will be discussed.

A Brief Overview of Cocaine

At some point in the distant past, a member of the plant species Erythroxylon coca began to produce a neurotoxin in its leaves that would destroy the nervous system of insects that might try to ingest them (Breiter, 1999). This neurotoxin, cocaine, was able to ward off most of the insects that would otherwise strip the coca plant of its leaves, allowing the plant to thrive in the higher elevations of Peru, Bolivia, and Java (DiGregorio, 1990). At least 5,000 years ago, it was discovered that chewing the leaves could ease feelings of fatigue, thirst, and hunger, enabling the individual to work for longer periods of time in the thin mountain air (Hahn & Hoffman, 2001). By the time the first European explorers arrived, the Inca empire was at its height, and the coca plant was used by the Incas not only in their religious ceremonies but as a medium of exchange (Ray & Ksir, 1993) and as part of the burial ritual (Byck, 1987). Prior to the arrival of the first European explorers the coca plant’s use was generally reserved for the upper classes (Mann, 1994). However, European explorers soon found that by giving native workers coca leaves to chew on, the workers would be more productive. The coca plant became associated with the exploitation of South America by European settlers, who encouraged its widespread use.

Even today, the practice of chewing coca leaves, or drinking a form of tea brewed from the leaves, has continued. Modern natives of the mountain regions of Peru chew coca leaves mixed with lime, which is obtained from sea shells (White, 1989). The lime works with saliva to release the cocaine from the leaves and helps to reduce its bitter taste. Also, chewing coca leaves is thought to help the chewer absorb some of the phosphorus, vitamins, and calcium contained in the mixture (White, 1989). Thus, although its primary use is to help the natives work more efficiently at high altitudes, there might also be some small nutritional benefit obtained from the practice of chewing coca leaves.

As European scientists began to explore the biosphere of South America, they took a passing interest in the coca plant and attempted to isolate the compounds that made it so effective in warding off hunger and fatigue. In 1859, a chemist named Albert Niemann isolated a compound that was later called cocaine (Scaros, Westra, & Barone, 1990). This accomplishment allowed researchers to produce large amounts of relatively pure cocaine for research. One of these experiments involved the injection of concentrated cocaine directly into the bloodstream with another new invention: the hypodermic needle. Before long, researchers discovered that even orally administered cocaine made the user feel good. Extracts from the coca leaf were used to make a wide range of popular drinks, wines, and elixirs (Martensen, 1996). Physicians of the era, lacking effective pharmaceuticals for most human ills, experimented with cocaine concentrate as a possible agent to treat disease. No less a figure than Sigmund Freud experimented with cocaine, at first thinking it a cure for depression (Rome, 1984) and later as a possible “cure” for narcotic withdrawal symptoms (Byck, 1987; Lingeman, 1974). However, when Freud discovered the drug’s previously unsuspected addictive potential, he discontinued his research on cocaine, as did many other scientists of the era.

Cocaine in Recent U.S. History

In response to the decision by the city of Atlanta to prohibit the use of alcohol, John Stith-Pemberton developed a new product that he thought would serve as a “temperance drink” (Martensen, 1996, p. 1615), one that until 1903 contained 60 mg of cocaine per 8-ounce serving (Gold, 1997). In time, the world would come to know Stith-Pemberton’s product by another name: Coca-Cola. Although this is surprising to modern readers, one must remember that consumer protection laws were virtually nonexistent at the time, and chemicals such as cocaine and morphine were readily available without a prescription. These compounds were used in a wide variety of products and medicines, usually as a hidden ingredient. This practice contributed to epidemics of cocaine abuse in Europe between 1886 and 1891, in both Europe and the United States between 1894 and 1899, and again in the United States between 1921 and 1929.

These waves of cocaine abuse and addiction, the use of cocaine in so many patent medicines, concern over its supposed “narcotic” qualities, and the fear that cocaine was corrupting southern blacks together prompted both the passage of the Pure Food and Drug Act of 1906 (Mann, 1994) and the classification of cocaine as a “narcotic” in 1914 (Martensen, 1996). The Pure Food and Drug Act of 1906 required makers to list the ingredients of a patent medicine or elixir on the label. As a result of this law, cocaine was removed from many patent medicines. With the passage of the Harrison Narcotics Act of 1914, nonmedical cocaine use in the United States was prohibited (Derlet, 1989). These regulations, the isolation of the United States during the First and Second World Wars, and the introduction of the amphetamines in the 1930s helped to virtually eliminate cocaine abuse in this country. Cocaine did not resurface as a major drug of abuse until the late 1960s.
By then, cocaine had the reputation in this country of being the “champagne of drugs” (White, 1989, p. 34) for those who could afford it. It again became popular
as a drug of abuse in the United States in the 1970s and early 1980s. There are many reasons for this resurgence in cocaine’s popularity. First, cocaine had been all but forgotten since the Harrison Narcotics Act of 1914. Stories of cocaine abusers sneezing out long tubes of damaged or dead cartilage in the latter years of the 19th century and the early years of the 20th were either forgotten or dismissed as “moralistic exaggerations” (Gawin & Ellinwood, 1988, p. 1173; Walton, 2002). Also, there had been a growing disillusionment with the amphetamines as drugs of abuse, starting in the mid 1960s. The amphetamines had acquired a reputation as known killers: Drug users would warn each other that “speed kills,” a reference to the fact that the amphetamines could kill the user in a number of different ways. For better or worse, cocaine had the reputation of bringing about many of the same sensations caused by amphetamine use without the dangers associated with the abuse of other CNS stimulants. Cocaine’s reputation as a special, glamorous drug, combined with increasing government-sanctioned restrictions on amphetamine production by legitimate pharmaceutical companies, helped focus drug abusers’ attention on cocaine as a substitute in the late 1960s.

By the middle of the 1980s, cocaine had again become a popular drug of abuse in a number of countries around the world. The United States did not always lead in the area of cocaine abuse. For example, by the mid 1970s, the practice of smoking coca paste was popular in parts of South America but had only started to gain popularity in the United States. But as cocaine became more popular in this country, it attracted the attention of what is loosely called “organized crime.” At the same time, cocaine dealers were eager to find new markets for their “product” in the United States, where the primary method of cocaine abuse was intranasal inhalation of the cocaine powder.
Cocaine “freebase” (to be discussed below) was known to induce an intense feeling of euphoria when smoked, but it required the use of elaborate equipment by the user to separate the cocaine base from the powder then being sold on the street (“The Men Who Created Crack,” 1991). After a period of experimentation, illicit drug manufacturers developed “crack,” a form of cocaine that could be smoked without elaborate preparations or equipment, and crack started to become the preferred form of cocaine in this country in the early 1980s.


Chapter Twelve

The epidemic of cocaine use/abuse that swept the United States in the 1980s and 1990s will not be discussed here; this topic is worthy of a book in its own right. But by the start of the 21st century, drug abusers had come full circle: The dangers of cocaine abuse were well known, and drug users were eager for an alternative to cocaine. Just as the then-new amphetamines replaced cocaine as the stimulant of choice in the 1930s, it would appear that the amphetamines, especially methamphetamine, are again replacing cocaine as the CNS stimulant of choice for drug abusers. Cocaine use/abuse appears to have peaked sometime around 1986 in the United States, and casual cocaine use has been on the decline since then (Kleber, 1991). However, cocaine has by no means disappeared. Recreational cocaine use has leveled off, but it remains a significant part of the drug-abuse problem in the United States (Gold, 1997).

Cocaine Today

At the start of the 21st century, Erythroxylon coca continues to thrive in the high mountain regions of South America, and the majority of the coca plants grown there are harvested for the international cocaine trade rather than for local use (Mann, 1994). Virtually 98% of the world’s cocaine is produced in South America, with Colombia producing 75% of the total (Karch, 2002). People who live in the high mountain plateaus continue to chew coca leaves to help them work and live.

Some researchers have pointed to this practice as evidence that cocaine is not as addictive as drug enforcement officials claim. For example, Jaffe (2000b) noted that while natives of Peru chew coca on a regular basis, “few progress to excessive use or toxicity” (p. 1003). This was thought to be possible because chewing the leaves is a rather inefficient method of ingesting cocaine, and much of the cocaine released by this method is destroyed by the acids of the digestive tract. As a result, the native who chewed coca leaves was not thought to obtain a significant level of cocaine in the blood, according to Jaffe. Other researchers have suggested that the natives of South America who chew coca leaves do indeed become addicted to the stimulant effect of the cocaine. These scientists point to studies that have revealed that the blood level of cocaine achieved when coca leaves are chewed

barely enters the lower range of blood levels achieved by those who “snort” cocaine in the United States; although this is barely enough to have a psychoactive effect, it is still a large enough dose to be addicting in the opinion of some scientists (Karch, 2002). Thus, the question of whether natives who chew coca leaves become addicted to the cocaine they absorb has not been resolved.

Current Medical Uses of Cocaine

Cocaine was once a popular pharmaceutical agent used in the treatment of a wide range of conditions. By the 1880s, physicians had discovered that cocaine was an effective local anesthetic (Byck, 1987; Mann, 1994). Cocaine was found to block the movement of sodium ions into the neuron, thus altering its ability to carry pain signals to the brain (Drummer & Odell, 2001). Because of this effect, cocaine was once commonly used by physicians as a topical analgesic for procedures involving the ear, nose, throat, rectum, and vagina. When used as a local anesthetic, cocaine would begin to take effect in about 1 minute, and its effects would last as long as 2 hours (Shannon, Wilson, & Stang, 1995). Cocaine was also included in a mixture called Brompton’s cocktail, which was used to control the pain of cancer. However, this mixture has fallen out of favor and is rarely, if ever, used today (Scaros et al., 1990).

As a pharmaceutical, cocaine’s usefulness was limited by its often undesirable side effects. Because of these side effects, physicians have turned to a number of other chemicals that offer the advantages of cocaine without its drawbacks or potential for abuse. Today, cocaine “has virtually no clinical use” (House, 1990, p. 41), although on rare occasions it is still used by physicians to control pain.
Scope of the Problem of Cocaine Abuse and Addiction

Researchers believe that global cocaine production peaked in 1999 and that it has since ebbed and flowed in response to market pressure and interdiction efforts. An estimated 13.3 million people abuse cocaine around the world, of whom 48% (5.9 million) are thought to live in the United States (United Nations, 2004). An estimated 3.3 million cocaine abusers are in Europe, and 2.3 million in South America (United



Nations, 2004). The remaining 1.8 million cocaine abusers live in areas of the globe where cocaine abuse is not a major problem. In the United States cocaine abusers consume 250 of the 700 metric tons of cocaine produced around the world each year (Office of National Drug Control Policy, 2004). Currently it is estimated that more than 30 million people in the United States have used cocaine at least once (Hahn & Hoffman, 2001), with 600,000 people trying it for the first time and an estimated 1.7 million people using cocaine each month (Craig, 2004). These people spend approximately $60 billion each year to purchase their illicit cocaine (“Cocaine Models,” 2003). Surprisingly, in spite of the fact that casual cocaine abuse in the United States peaked in the mid 1980s, the total amount of cocaine consumed each year in this country has remained at about the mid-1980s level (Karch, 2002). This apparent contradiction is explained by the fact that while there are fewer casual cocaine users, the number of regular cocaine abusers (those who use the drug once a week or more) has remained at about 640,000 persons for more than a decade (O’Brien, 2001). These individuals consume a disproportionate amount of all of the cocaine consumed in the United States.

Pharmacology of Cocaine

Cocaine is best absorbed into the body when it is administered as cocaine hydrochloride, a water-soluble compound. After entering the body, it quickly diffuses into the general circulation and is rapidly transported to the brain and other blood-rich organs such as the heart. In spite of its rapid distribution, the level of cocaine in the brain is usually higher than it is in the blood plasma, especially in the first 2 hours following use of the drug (“Cocaine in the Brain,” 1994).

In the last decade of the 20th century, scientists began to unravel the mystery of cocaine’s effects on the central nervous system (CNS). In part, cocaine seems to activate some of the same regions of the brain involved in sexual desire (Garavan et al., 2000). Cocaine also seems to activate the mu and kappa opioid receptors (Unterwald, 2001). These findings would help to account, in part, for the intensity of the craving that cocaine-dependent people report they experience

when they abstain from the drug. According to Garavan et al. (2000), the regions of the brain that seem to be affected by cocaine include the nucleus accumbens, the amygdala, and the anterior cingulate. Given the importance of sexual desire and reproduction for the species, it is reasonable to expect that anything that activates the same regions of the brain would cause the same intense desire found in sexual lust. Perhaps it is for this reason that cocaine addicts refer to their drug as the “white lady” and speak of it almost as if it were a human lover.

Researchers have also discovered that, unlike what was thought to be true in the late 1980s and early 1990s, cocaine does not cause the release of dopamine in the CNS. Rather, cocaine seems to block the reuptake of the dopamine that has already been released (Haney, 2004). Further, researchers have found at least five different subtypes of dopamine receptors in the brain, and the reinforcing effects of cocaine seem to reflect its ability to stimulate some of these receptor subtypes more strongly than others. For example, Romach et al. (1999) found that when the dopamine D1 receptor was blocked, their volunteers failed to experience the pleasure that cocaine usually induces when it is injected into the circulation. On the basis of this finding, the authors concluded that the dopamine D1 receptor site was involved in the experience of euphoria reported by cocaine abusers. In the human brain, the dopamine D1 receptors are concentrated in the “mesolimbic” system, which includes structures such as the nucleus accumbens and the amygdala. These structures are known to be involved in the pleasure response induced by the drugs of abuse.
Cocaine’s effects come from its ability to cause a massive buildup of the neurotransmitter dopamine in the synapses along the nerve pathways that connect the ventral tegmentum region of the brain with the nucleus accumbens, causing the abuser to experience intense pleasure (Beitner-Johnson & Nestler, 1992; Haney, 2004; Restak, 1994). Indeed, this drug-induced activation of the reward system of the brain might be more intense than rewards triggered by natural reinforcers such as food, drink, or sex (Haney, 2004). But cocaine’s effects are not limited to dopamine: It also blocks the reabsorption/reuptake of the neurotransmitters norepinephrine and serotonin (Reynolds & Bada, 2003; Unterwald, 2001). The significance of this cocaine-induced blockage of
the norepinephrine and serotonin reuptake systems is not known at this time, although the noradrenaline system is known to be involved in cardiac function, among other things, and thus might account for cocaine’s impact on the cardiovascular system.

On a cellular level, cocaine also alters the function of a protein known as postsynaptic density-95 (Sanna & Koob, 2004). Long-term changes in this protein, which helps the neuron adapt the synapse to changing neurotransmitter mixtures, are thought to be involved in the process of learning and memory formation, possibly accounting at least in part for cocaine’s ability to cause the user to form strong memories of the drug’s effects (Sanna & Koob, 2004).

After periods of prolonged abuse, the neurons within the brain will have released virtually all their stores of the neurotransmitter dopamine without being able to reabsorb any of the dopamine, norepinephrine, or serotonin that has been released. Low levels of these neurotransmitters are thought to be involved in the development of depression. This pharmacological effect of cocaine might explain the observed relationship between cocaine abuse and depression, which has been known to reach suicidal proportions in some cocaine abusers.

Tolerance to cocaine’s euphoric effect may develop within “hours or days” (Schuckit, 2000, p. 124). As tolerance develops, the individual will require more and more cocaine in order to experience a euphoric effect. This urge to increase the dosage and continue using the drug can reach the point that it “may become a way of life and users become totally preoccupied with drug-seeking and drug-taking behaviors” (Siegel, 1982, p. 731).

Another of the brain regions affected by cocaine is the diencephalon, which is responsible for temperature regulation. Cocaine’s effect on this region results in a higher than normal body temperature for the user.
At the same time that cocaine is altering the brain’s temperature regulation system, it will also cause the constriction of surface blood vessels. This combination of effects results in hyperthermia: excess body heat. The individual’s body will conserve body heat at just the time it needs to release the excess thermal energy caused by the cocaine-induced dysregulation of body temperature, possibly with fatal results (Hall, Talbert, & Ereshefsky, 1990).

Cocaine’s effects are very short-lived. When it is injected intravenously, the peak plasma levels are
reached in just 5 minutes, and after 20–40 minutes the effects begin to diminish (Weddington, 1993). This is because the half-life of a single dose of intravenously administered cocaine is only 30 to 90 minutes (Jaffe, 2000b; Marzuk et al., 1995; Mendelson & Mello, 1996).

Cocaine is biotransformed in the liver and produces about a dozen known metabolites (Karch, 2002). About 90% to 95% of a dose of intravenously administered cocaine is biotransformed into one of two primary metabolites: benzoylecgonine (BEG) or ecgonine methyl ester (Cone, 1993; Kerfoot, Sakoulas, & Hyman, 1996). The other metabolites are of minor importance and need not be considered further in this text. Only about 5% to 10% of a single dose of cocaine is excreted from the body unchanged. Neither of the major metabolites of cocaine has any known biological activity in the body. BEG has a half-life of 7.5 hours (Marzuk et al., 1995). Because the half-life of BEG is longer than that of the parent compound, and because it is stable in urine samples that have been frozen, this is the chemical that laboratories usually test for when they screen a urine sample for evidence of cocaine use.

Cocaine is known to autometabolize following the user’s death. That is, the body will continue to biotransform the cocaine in the blood even after the user has died. Thus, a post-mortem blood sample might not reveal any measurable amount of cocaine in the blood, even in cases where the user was known to have used cocaine prior to his or her death.

Drug interactions involving cocaine. There has been surprisingly little research into cocaine-drug interactions (Karch, 2002). It is known that cocaine has the potential to interact with a wide range of both pharmaceuticals and illicit drugs. Cross addiction is a common complication of chronic cocaine use.
For example, between 20% and 50% of alcohol- and heroin-dependent individuals are also dependent on cocaine (Gold & Miller, 1997a), whereas more than 75% of cocaine abusers are dependent on alcohol (Zealberg & Brady, 1999).

There is a great deal of debate among clinical toxicologists about whether the combination of alcohol and cocaine is inherently dangerous. When a person uses both cocaine and alcohol, a small amount (10%) of the cocaine is biotransformed into cocaethylene (Gold & Miller, 1997a; Karch, 2002). Cocaethylene is so toxic to the user’s body that when it is present in
significant amounts, it is estimated to be 25 to 30 times as likely to induce death as cocaine itself (Karan, Haller, & Schnoll, 1998). Because its half-life is longer than cocaine’s and because it functions as a powerful calcium channel blocker, clinicians suspected that cocaethylene was the cause of cocaine-induced heart problems or death (Hahn & Hoffman, 2001; Karch, 1996). As clinicians explored the relationship between concurrent alcohol and cocaine abuse and cocaethylene formation in the early years of the 21st century, they discovered that cocaethylene is formed only when high levels of alcohol are present in the user’s body (“Is Cocaethylene Cardiotoxic?,” 2002). Further, animal research found evidence of cocaethylene-related cardiotoxicity only when extremely high levels of cocaethylene were present (Wilson & French, 2002). Because (a) only a small amount of cocaine is biotransformed into cocaethylene, (b) this occurs only when the user has a high blood alcohol level, and (c) cardiotoxicity is seen only with high blood levels of cocaethylene, there is reason to doubt that cocaethylene is a major cause of cocaine-related cardiac death.

Research has also suggested a possible relationship between the concurrent use of cocaine and alcohol and death from pulmonary edema (Barnhill, Ciraulo, Ciraulo, & Greene, 1995). Unfortunately, cocaethylene may lengthen the period of cocaine-induced euphoria, making it more likely that the person will continue to use alcohol with cocaine in spite of the danger associated with this practice.

Some abusers will inject a combination of cocaine and an opiate, a process known as “speedballing.” However, for reasons that are not well understood, cocaine will actually enhance the respiratory depressive effect of the opiates, possibly resulting in episodes of respiratory arrest in extreme cases (Kerfoot et al., 1996). As discussed later in this chapter, cocaine abuse often results in a feeling of irritation or anxiety.
In order to control the cocaine-induced agitation and anxiety, users often ingest alcohol, tranquilizers, or marijuana. The combination of marijuana and cocaine appears capable of increasing the heart rate by almost 50 beats per minute in individuals who use both substances (Barnhill et al., 1995). There is one case report of a cocaine-abusing patient who took an over-the-counter cold medication that contained phenylpropanolamine. This person
developed what seems to have been a drug-induced psychosis that included homicidal thoughts (Barnhill et al., 1995). It is not clear at this time if this was just an isolated incident or if the interaction between cocaine and phenylpropanolamine might precipitate a psychotic reaction, but the concurrent use of these chemicals is not recommended.
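The elimination figures cited earlier in this section (a cocaine half-life of 30 to 90 minutes versus 7.5 hours for its metabolite BEG) can be turned into a rough back-of-the-envelope calculation. The sketch below is illustrative only: the function name is invented for this example, the decay is assumed to be simple first-order, the 1-hour cocaine half-life is one point drawn from the cited range, and continued formation of BEG from the parent drug is ignored. Even so, it shows why urine screens keyed to BEG have a far longer detection window than any test for the parent compound.

```python
def fraction_remaining(hours_elapsed: float, half_life_hours: float) -> float:
    """Fraction of an initial amount still present, assuming first-order decay."""
    return 0.5 ** (hours_elapsed / half_life_hours)

# Assumed half-lives, taken from the ranges cited in the text:
COCAINE_HALF_LIFE = 1.0   # hours (30 to 90 minutes reported)
BEG_HALF_LIFE = 7.5       # hours (Marzuk et al., 1995)

# Twelve hours after a dose, almost no parent cocaine remains,
# but roughly a third of the metabolite BEG is still present
# (ignoring BEG still being formed from remaining cocaine).
cocaine_left = fraction_remaining(12, COCAINE_HALF_LIFE)
beg_left = fraction_remaining(12, BEG_HALF_LIFE)
print(f"cocaine after 12 h: {cocaine_left:.4%}")
print(f"BEG after 12 h:     {beg_left:.1%}")
```

Under these simplifying assumptions, less than a tenth of a percent of the parent drug survives 12 hours, while roughly a third of the BEG does, which is consistent with the text’s observation that laboratories test for the more stable, longer-lived metabolite.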

How Illicit Cocaine Is Produced

The process of cocaine production has changed little in the past generation. First, the coca leaves are harvested. In some parts of Bolivia, this may be done as often as once every 3 months, as the climate is well suited to the plant. Second, the leaves are dried, usually by letting them sit in the open sunlight for a few hours or days. Although this process is illegal in many parts of South America, the local authorities are quite tolerant and do little to interfere with the drying of coca leaves.

In the next step, the dried leaves are put in a plastic-lined pit and mixed with water and sulfuric acid (White, 1989). The mixture is crushed by workers who wade into the pit in their bare feet. After the mixture has been crushed, diesel fuel and bicarbonate are added. After a period of time, during which workers reenter the pit several times to continue to stomp through the mixture, the liquids are drained off. Lime is then mixed with the residue, forming a paste (Byrne, 1989), which is known as cocaine base. It takes 500 kilograms of leaves to produce one kilogram of cocaine base (White, 1989).

In step four, water, gasoline, acid, potassium permanganate, and ammonia are added to the cocaine paste. This forms a reddish brown liquid, which is then filtered. A few drops of ammonia, added to the mixture, produce a milky solid that is filtered and dried. Then the dried cocaine base is dissolved in a solution of hydrochloric acid and acetone. A white solid forms, which settles to the bottom of the tank (Byrne, 1989; White, 1989). This solid material is the compound cocaine hydrochloride.

Eventually, the cocaine hydrochloride is filtered and dried under heating lights, forming a white, crystalline powder that is gathered up, packed, and shipped, usually in kilogram packages. Before sale to the individual cocaine user, each kilogram is adulterated and the
resulting compound is packaged in 1-gram units and sold to individual users.

How Cocaine Is Abused

Cocaine may be used in a number of ways. First, cocaine hydrochloride powder might be inhaled through the nose (intranasal use, also known as “snorting” or, more appropriately, insufflation). Second, it may be injected directly into a vein (an intravenous injection). Cocaine hydrochloride is a water-soluble form of cocaine and thus is well adapted to either intranasal or intravenous use (Sbriglio & Millman, 1987). Third, cocaine base might be smoked. Fourth, cocaine may be used orally (sublingually). We will examine each of these methods of cocaine abuse in detail.

Insufflation. Historical evidence suggests that the practice of “snorting” cocaine began around 1903, which is the year that case reports of septal perforation began to appear in medical journals (Karch, 2002). When snorted, cocaine powder is usually arranged on a piece of glass, such as a pocket mirror, in thin lines 3 to 5 cm long. Each of these lines contains between 25 and 100 mg of cocaine (Karch, 1996; Strang, Johns, & Caan, 1993). The powder is diced up, usually with a razor blade, to make the particles as small as possible and enhance absorption. A gram of cocaine prepared in this manner might yield 25 to 30 lines (Karan et al., 1998). The powder is then inhaled through a drinking straw or rolled paper tube. This practice is quite common: in the mid 1990s, 77% to 95% of cocaine abusers would snort the drug (Boyd, 1995; Hatsukami & Fischman, 1996). When it reaches the nasal passages, which are richly supplied with blood vessels, the cocaine is quickly absorbed. This allows some of the cocaine to gain rapid access to the bloodstream, usually in 30 to 90 seconds (House, 1990), where it is carried to the brain.
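As a quick arithmetic check on the figures just cited, dividing a 1-gram sample into the reported 25 to 30 lines yields average per-line amounts that fall inside the 25–100 mg range reported above. The sketch below is illustrative only; the function name is invented for this example, and the only added assumption is the gram-to-milligram conversion.

```python
GRAM_IN_MG = 1000  # assumed unit conversion: 1 g = 1,000 mg

def mg_per_line(total_mg: float, lines: int) -> float:
    """Average milligrams of powder per 'line' of an evenly divided sample."""
    return total_mg / lines

# 25 to 30 lines from one gram, per Karan et al. (1998):
low = mg_per_line(GRAM_IN_MG, 30)   # ~33 mg per line
high = mg_per_line(GRAM_IN_MG, 25)  # 40 mg per line
print(f"{low:.0f} to {high:.0f} mg per line")
```

The result, roughly 33 to 40 mg per line, sits at the lower end of the 25–100 mg range quoted from Karch (1996) and Strang et al. (1993), which is consistent with the heavier lines described in those sources containing more than an even share of the gram.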
The peak effects of snorted cocaine are reached within 15 to 30 minutes, and the effects wear off about 45 to 60 minutes after a single dose (Kosten & Sofuoglu, 2004; Weiss, Greenfield, & Mirin, 1994), or between 2 and 3 hours with chronic use (Hoffman & Hollander, 1997). Researchers disagree as to how much of the cocaine that is snorted will ultimately be absorbed into the user’s body. Because cocaine functions as a vasoconstrictor, it may limit its own absorption when it is
snorted. Estimates of the amount of cocaine absorbed through the nasal passages when it is snorted range from 5% (Strang et al., 1993) to 25% to 94% (Hatsukami & Fischman, 1996). Karch (2002) took a middle-of-the-road position on this issue, suggesting that because of its vasoconstrictive effect, cocaine takes longer to be absorbed when it is snorted but that virtually all of it is eventually absorbed. Thus the question of whether intranasally administered cocaine might limit its own absorption rate, and the degree to which it might do so, has not been fully answered.

Intravenous cocaine abuse. It is possible to introduce cocaine directly into the body through intravenous injection. Cocaine hydrochloride powder is mixed with water and then injected into a vein. This is actually the least common method by which cocaine is used, with only 7% of those individuals who use cocaine injecting it (Hatsukami & Fischman, 1996). Intravenously administered cocaine will reach the brain in as little as 3 to 5 seconds (Restak, 1994), although other estimates place this closer to 30 seconds (Kosten & Sofuoglu, 2004). In contrast to the limited amount of cocaine that is absorbed when it is snorted, intravenous administration results in 20 times as much cocaine reaching the brain as intranasal use (Strang et al., 1993).

Intravenous cocaine abusers have often reported a rapid, intense feeling of euphoria called the “rush” or “flash,” which is similar to a sexual orgasm but feels different from the rush reported by opiate abusers (Brust, 1998). Following the rush, the user will experience a feeling of euphoria that lasts 10 to 15 minutes. Researchers believe that the rush is the subjective experience of cocaine-induced changes in the ventral tegmentum in the midbrain and the basal forebrain. The rush will be discussed in more detail in the section on the subjective effects of cocaine (below).
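The disagreement over intranasal absorption can be made concrete with a small sketch. This is illustrative only: the function name and the 100-mg example dose are invented for this example, the percentages are simply the estimates quoted above applied to a fixed dose, and the notion of a single “absorbed dose” ignores absorption rate entirely. Even under the most generous intranasal estimate, injection delivers the full dose into circulation.

```python
def absorbed_mg(dose_mg: float, bioavailability: float) -> float:
    """Drug reaching the circulation, given a fractional absorption estimate."""
    return dose_mg * bioavailability

DOSE = 100.0  # mg; a single large 'line' per the ranges cited earlier

# Absorption estimates quoted in the text for each route:
estimates = [
    ("intranasal, low (Strang et al.)",       0.05),
    ("intranasal, low (Hatsukami/Fischman)",  0.25),
    ("intranasal, high (Hatsukami/Fischman)", 0.94),
    ("intravenous (essentially complete)",    1.00),
]
for label, fraction in estimates:
    print(f"{label:38s} {absorbed_mg(DOSE, fraction):6.1f} mg")
```

Across these cited estimates, the same 100 mg of powder yields anywhere from about 5 mg to about 94 mg in circulation when snorted, which helps explain both the research disagreement and why switching routes so sharply escalates the effective dose.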
Intravenously administered cocaine is biotransformed quite quickly, which is one reason its effects last only about 15 minutes (Weiss et al., 1994).

Sublingual cocaine use. This form of cocaine abuse, the third method of administration discussed thus far, is becoming increasingly popular, especially when the hydrochloride salt of cocaine is utilized (Jones, 1987). The tissues in the mouth, especially under the tongue, are richly supplied with blood, allowing large amounts of the drug to enter the bloodstream quickly. After entering the bloodstream, the cocaine is transported to the
brain, with results similar to those seen in the intranasal administration of cocaine.

Rectal cocaine use. The method of rectal cocaine administration has become increasingly popular, especially among male homosexuals (Karch, 2002). Cocaine’s local anesthetic properties provide the desired effects for the user, allowing for participation in otherwise painful forms of sexual activity. Unfortunately, the anesthetic properties of cocaine might mask signs of physical trauma to the tissues in the rectal area, increasing the individual’s risk of death from these activities (Karch, 2002).

Cocaine smoking. Historically, the practice of burning or smoking different parts of the coca plant dates back to at least 3000 B.C., when the Incas would burn coca leaves at religious festivals (Hahn & Hoffman, 2001). The practice of smoking cocaine resurfaced in the late 1800s, when coca cigarettes were used to treat hay fever and opiate addiction. By the year 1890, cocaine smoke was being used in the United States for the treatment of whooping cough, bronchitis, asthma, and a range of other conditions (Siegel, 1982). These medicinal uses of cocaine were gradually abandoned as other, more effective agents were introduced for the control of various illnesses, and in spite of this history of medicinal cocaine smoking, recreational cocaine smoking in the United States did not become popular until the early to mid 1980s.

When cocaine hydrochloride became a popular drug of abuse in the 1970s, users quickly discovered that it is not easily smoked. The high temperatures needed to vaporize cocaine hydrochloride also destroy it, making it of limited value to those who wish to smoke it. Although dedicated cocaine abusers of the 1970s and 1980s knew that it was possible to smoke the alkaloid base of cocaine, they also knew that the process of transforming cocaine hydrochloride into a smokable alkaloid base was a long, dangerous one.
This made the practice of smoking cocaine unpopular before around 1985. To transform cocaine hydrochloride into an alkaloid base, cocaine powder had to be mixed with a solvent such as ether and then with a base compound such as ammonia (Warner, 1995). The cocaine then forms an alkaloid base that can be smoked. This form of cocaine is called “freebase” (or simply “base”). The precipitated cocaine freebase is then passed through a filter,
which effectively removes some of the impurities and increases the concentration of the obtained powder. Unfortunately, the process of filtration does not remove all the impurities from the cocaine, and many of the impurities found in the original sample will still remain in the alkaloid base produced through this process (Siegel, 1982). The cocaine powder obtained through this process can then be smoked, but the process of transforming cocaine hydrochloride into this form is quite difficult. Further, the chemicals used to separate cocaine freebase from its hydrochloride salt are quite volatile, and there is a very real danger of fire or even an explosion. As a result, smoking cocaine freebase never became popular in the United States.

But when cocaine freebase was smoked, the fumes would reach the brain in just 7 seconds (Beebe & Walley, 1991; Hahn & Hoffman, 2001), with between 60% and 90% of the cocaine crossing over into the general circulation from the lungs (Beebe & Walley, 1991; Hatsukami & Fischman, 1996). Indeed, there is evidence that when it is smoked, cocaine reaches the brain more quickly than when it is injected (Hatsukami & Fischman, 1996), and smoked cocaine has been called “the most addictive substance used by humankind” (Wright, 1999, p. 47). This suggested to illicit drug producers that there would be a strong market for a form of cocaine that could easily be smoked, and by the mid 1980s such a product had reached U.S. streets. Called crack, this product was essentially a solid chunk of cocaine base prepared for smoking before it was delivered for sale at the local level. This is done in illicit factories or laboratories where cocaine hydrochloride is mixed with baking soda and water and then heated until the cocaine crystals begin to precipitate at the bottom of the container (Warner, 1995).
More than a generation ago, Breslin (1988) discussed how one crack factory worked:

Curtis and his girlfriend dropped the cocaine and baking soda into the water, then hit the bottle with the blowtorch. The cocaine powder boiled down to its oily base. The baking soda soaked up the impurities in the cocaine. When cold water was added to the bottle, the cocaine base hardened into white balls. Curtis and Iris spooned them out, placed them on a table covered with paper, and began to measure the hard white cocaine. (p. 212)

The crack produced in such illicit factories is sold in small, ready-to-use pellets packaged in containers that allow the user one or two inhalations for a relatively low price (Beebe & Walley, 1991). Although at first glance crack seems less expensive than other forms of cocaine, research has demonstrated that it actually is about as expensive as cocaine used for intravenous injection (Karch, 2002). But since it is sold in smaller quantities, it is attractive to the under-18 crowd and in low-income neighborhoods (Bales, 1988; Taylor & Gold, 1990). Since the introduction of crack, the practice of smoking cocaine has arguably become the most widely recognized method of cocaine abuse. However, researchers believe that just over one-third of cocaine abusers in the United States smoke the drug (Hatsukami & Fischman, 1996).

In 1996, substance abuse rehabilitation professionals noted a disturbing new trend among some crack users in both England and isolated cities of the United States. In these areas, limited numbers of users would dissolve the pellets of crack in alcohol, lemon juice, vinegar, or water, and then inject the solution into their bodies through large-bore needles (“Crack Injecting in Chicago,” 1996). Apparently, intravenous cocaine abusers were resorting to this practice when their traditional sources of cocaine hydrochloride were unable to provide them with the powder used for injection. It is not known how popular this practice will become, but it does represent a disturbing new twist to the ongoing saga of cocaine abuse/addiction.

Subjective Effects of Cocaine When It Is Abused

Two factors influence the individual’s subjective experience of cocaine. First, the individual’s expectations play a role in how he or she interprets the drug’s effects. Second, there are the actual physiological effects of the drug. These two factors interact to shape the individual’s subjective experience of cocaine abuse. Experienced cocaine users tend to experience both positive (euphoric) and negative (depressive) effects from the drug (Schafer & Brown, 1991). The experienced cocaine abuser expects (a) a generalized feeling of arousal, (b) some feelings of anxiety, and (c) feelings of relaxation and a reduction in the level of tension as a result of the drug use.

Both intravenous injection and cocaine smoking can cause the user to experience a feeling of intense euphoria that has been compared to the sexual orgasm in intensity and pleasure. It is so intense for some users that “it alone can replace the sex partner of either sex” (Gold & Verebey, 1984, p. 719). Some male abusers have reported having a spontaneous ejaculation, without direct genital stimulation, after either injecting or smoking cocaine. There also appears to be a link between chronic cocaine use and compulsive acting-out behavior for both men and women (Washton, 1995).

Within seconds, the initial rush is replaced by a period of excitation or euphoria that lasts for between 10 (Strang et al., 1993) and 20 minutes (Weiss et al., 1994). During this period, the individual will feel an increased sense of competence, energy (Gold & Verebey, 1984), or extreme self-confidence (Taylor & Gold, 1990). Some abusers report feeling powerful or “energized” while under the influence of cocaine, and the drug decreases the sense of fatigue and hunger and increases awareness of sexual stimuli (Schuckit, 2000).

Snorting cocaine powder yields a less intense high than smoking or injecting the drug. Still, intranasal cocaine will produce a sense of euphoria as well as many of the effects noted in the last paragraph. This sense of cocaine-induced euphoria might last only a few minutes for the individual who smokes cocaine (Byck, 1987), to an estimated 20 minutes to an hour for the individual who snorts cocaine powder. Then the effects begin to wane, and to regain the cocaine-induced pleasurable feelings, the user must again use cocaine. Tolerance to the euphoric effects of cocaine develops quickly.
To overcome their tolerance to the effects of cocaine, many users engage in a cycle of continuous cocaine use known as a “coke run.” The usual cocaine run lasts about 12 hours, although there have been cases lasting up to 7 days (Gawin, Khalsa, & Ellinwood, 1994). During this time, the user smokes or injects additional cocaine every few minutes, until the total cumulative dose might reach levels that would kill the inexperienced (naive) user. The coke run parallels the pattern seen when animals are given unlimited access to cocaine: rats that receive intravenous cocaine for pushing a bar set in the wall of their cage will do so repeatedly, ignoring food or even sex, until they die from convulsions or infection (Hall et al., 1990).



Complications of Cocaine Abuse/Addiction

Approximately 40% to 50% of people who die each year in the United States as a direct result of substance abuse do so because of their use of cocaine (Karch, 2002). In some cases death occurs so rapidly from a cocaine overdose that “the victim never receives medical attention other than from the coroner” (Estroff, 1987, p. 25). In addition, cocaine abuse might cause a wide range of other problems, including addiction.

In the 1960s and early 1970s there were those who believed that cocaine was not addictive, a belief fueled by the observation that few users in the late 1960s could afford enough cocaine to allow them to use it long enough to become addicted. But, as has been discussed, cocaine has a very real potential to cause physical and psychological addiction.1 About 15% of those who try cocaine will become addicted to it (Jaffe, Rawson, & Ling, 2005). The development of physical dependence varies among individuals, however, and there are reports of addiction developing in as little as 6 to 10 weeks (Lamar et al., 1986).

There also appears to be a progression in the methods by which cocaine abusers utilize the drug as their addiction grows in intensity. As the user’s need for the drug becomes more intense, he or she switches from the intranasal method of cocaine use to those methods that introduce greater concentrations of the drug into the body. For example, 79% to 90% of those who admitted to the use of crack cocaine started to use the drug intranasally (Hatsukami & Fischman, 1996).

Respiratory system dysfunctions. The cocaine smoker may experience chest pain, cough, and damage to the bronchioles of the lungs (O’Connor, Chang, & Shi, 1992). There have been reports that in some cases the alveoli of the user’s lungs have ruptured, allowing the escape of air (and bacteria) into the surrounding tissues.
This will establish the potential for infection to develop, while the escaping gas may contribute to the inability of the lung to fully inflate (a pneumothorax). Approximately one-third of chronic crack users develop wheezing sounds when they breathe, for reasons that are still not clear (Tashkin, Kleerup, Koyal, Marques, & Goldman, 1996). Other potential complications of cocaine smoking include the development of an asthmalike condition known as chronic bronchiolitis (also known as “crack lung”), hemorrhage, pneumonia, and chronic inflammation of the throat (Albertson, Walby, & Derlet, 1995; House, 1990; Taylor & Gold, 1990). There is evidence that cocaine-induced lung damage may be irreversible. There is also evidence that at least some of the observed increase in the incidence of fatal asthma cases might be caused by unsuspected cocaine abuse (“Asthma Deaths Blamed,” 1997). Although cocaine abuse might not be the cause of all asthma-induced deaths, it is known that smoking crack cocaine can irritate the air passages in the lungs, contributing to both fatal and nonfatal asthma attacks (Tashkin et al., 1996).

The chronic intranasal use of cocaine can also cause sore throats, inflamed sinuses, hoarseness, and, on occasion, a breakdown of the cartilage of the nose (Karch, 2002). Damage to the cartilage of the nose may develop after as little as 3 weeks of intranasal cocaine use (O’Connor et al., 1992). Other medical problems caused by intranasal cocaine use might include bleeding from the nasal passages and the formation of ulcers in the nasal passages (O’Connor et al., 1992).

Cardiovascular system damage. Cocaine abuse can result in damage to the cardiovascular system. Indeed, cocaine abuse is now thought to be the cause of approximately one-quarter of the nonfatal heart attacks in the 18- to 45-year-old population (“Cocaine link to heart attack bolstered,” 2001) and is the most common cause of chest pain in cocaine abusers (Hahn & Hoffman, 2001).

1While it is true that not everybody who uses cocaine will become addicted, it is not possible to determine at this time who will become addicted should they try this drug. If only for this reason, cocaine abuse should be discouraged.
Cocaine abuse is also associated with such cardiovascular problems as severe hypertension, sudden dissection of the coronary arteries, cardiac ischemia, tachycardia, myocarditis, and sudden death (Albertson et al., 1995; Brent, 1995; Derlet & Horowitz, 1995; Hahn & Hoffman, 2001; Hollander, 1995; Jaffe, 2000b; Karch, 2002; O’Connor et al., 1992). In addition to the above complications, cocaine use/abuse can cause, or at least speed up, the development of atherosclerotic plaques in the user (Hollander et al., 1997; Karch, 2002). Researchers still do not understand the exact mechanism by which cocaine
abuse can cause the development of atherosclerotic plaques in the coronary arteries of the user, but animal research has revealed that cocaine abuse can trick the body’s immune system into attacking the tissue of the heart and the endothelial cells that line the coronary arteries (Tanhehco, Yasojima, McGeer, & Lucchesi, 2000). Cocaine accomplishes this feat by triggering what is known as the “complement cascade,” which is part of the immune system’s response to invading microorganisms. This process causes protein molecules to form on the cell walls of invading microorganisms, eventually causing them to burst from internal pressure. The damaged cells are then attacked by the body’s “scavenger” cells, the macrophages. Some researchers believe that the macrophages are also involved in the process of atherosclerotic plaque formation. They have suggested that atherosclerotic plaque is formed when the macrophages mistakenly attack cholesterol molecules circulating in the blood, attaching these molecules to the endothelial cells of the coronary arteries. This provides a possible avenue through which cocaine abuse might result in the development of atherosclerotic plaques in the coronary arteries of the user.

At one time, researchers believed that cocaine abuse could cause increased platelet aggregation, which is to say that cocaine would somehow cause the user’s blood to form clots more easily. This possible side effect of cocaine seemed to account for clinical reports in which cocaine abusers were found to be at risk for many of the cardiovascular problems noted in the last paragraphs. However, research has failed to find support for this hypothesis (Heesch et al., 1996).

Researchers have found that cocaine can increase the heart rate while reducing the blood flow to the heart (Karch, 2002; Moliterno et al., 1994).
Unfortunately, cocaine seems to cause the coronary arteries to constrict at points where the endothelium is already damaged and the blood flow is already reduced by the buildup of plaque (Hahn & Hoffman, 2001). This effect is strongest in cigarette smokers: cigarette-smoking cocaine users experienced a 19% reduction in coronary artery blood flow, compared with the 7% decrease in coronary artery diameter seen in nonsmokers after smoking cocaine (Moliterno et al., 1994). Cigarette smokers with no known coronary artery disease experience a temporary 9% reduction in coronary artery blood flow after
smoking cocaine (Moliterno et al., 1994). This ability to reduce coronary artery blood flow at the moment of increased cardiac demand seems to be one mechanism by which cocaine abuse causes cardiac ischemia and myocardial injury (Hahn & Hoffman, 2001).

Cocaine has been implicated as causing a significant but transient increase in the individual’s risk for a myocardial infarction (MI). Research has demonstrated that the risk of an MI is 23.7 times higher in the first hour after the individual begins to use cocaine (Karch, 2002). Further, the individual may experience symptoms of cardiac ischemia up to 18 hours after the last use of cocaine because of the length of time it takes for the rupture of atherosclerotic plaque to manifest as coronary artery blockages (Karch, 2002; Kerfoot, Sakoulas, & Hyman, 1996). It is important for physicians to be aware of possible cocaine abuse by the patient, since physicians often use drugs known as beta-adrenergic antagonists to treat myocardial ischemia. If the patient has recently used cocaine, these drugs can contribute to cocaine-induced constriction of the blood vessels surrounding the heart, making the condition worse (Shih & Hollander, 1996).

Cocaine abuse has been associated with a number of other cardiac problems, including atrial fibrillation (Hahn & Hoffman, 2001). However, researchers have not identified the exact mechanism by which cocaine is able to cause cardiac arrhythmias. Researchers once believed that cocaine would alter the effects of the catecholamines in the body. For example, the team of Beitner-Johnson and Nestler (1992) suggested that cocaine could block the reuptake of norepinephrine in the cardiac tissue, thus accounting for the increased risk of cardiac stress and distress in the user (Karch, 2002). The team of Tuncel et al.
(2002) challenged these theories, noting that in rare cocaine abusers, a normal physiological response known as the baroreflex would block the release of excess norepinephrine, reducing the stress on the heart. Thus, the theory that the chronic use of cocaine raises norepinephrine levels in the blood, placing an increased workload on the heart (especially the left ventricle) and putting the individual at risk for sudden death, remains only a theory, and the exact mechanism by which chronic cocaine abuse might contribute to increased risk of heart problems in humans remains unknown.


In addition to being a known cause of all of the above conditions, some scientists believe that cocaine abuse might also cause “microinfarcts,” or microscopic areas of damage to the heart muscle (Gawin et al., 1994). These microinfarcts will ultimately reduce the heart’s ability to function effectively and may lead to further heart problems later on. There is also evidence to suggest that cocaine abuse might cause “silent” episodes of cardiac ischemia while the individual is withdrawing from cocaine, leaving the person at risk for sudden cardiac death (Kerfoot et al., 1996). Researchers have since found that in some settings fully 17% of the patients under the age of 60 seen in hospital emergency rooms for chest pain had cocaine metabolites in their urine (Hollander et al., 1995). There does not seem to be any pattern to cocaine-induced cardiovascular problems, and both first-time and long-term cocaine users have suffered cocaine-related cardiovascular problems. In a hospital setting, between 56% and 84% of those patients with cocaine-induced chest pain are found to have abnormal electrocardiograms (Hollander, 1995). Unfortunately, there is a very real danger that cocaine users who experience chest pain but do not seek medical help will ignore the symptoms of potentially fatal cocaine-related cardiac problems. Another rare but potentially fatal complication of cocaine abuse is a condition known as acute aortic dissection (Brent, 1995; Karch, 2002). This condition develops when the main artery of the body, the aorta, suddenly develops a weak spot in its wall. The exact mechanism by which cocaine might cause an acute aortic dissection is not known, and the condition does occasionally develop in persons other than cocaine abusers. Acute aortic dissection is a medical emergency that may require emergency surgery in order to save the patient’s life. 
Male cocaine abusers run the risk of developing erectile dysfunction, including a painful, potentially dangerous condition known as priapism (Finger, Lund, & Slagel, 1997; Karch, 2002). In contrast to patterns found with the intravenous injection of opiates, it is not common for intravenous cocaine abusers to develop scar tissue at the injection site. This is because the adulterants commonly found in powdered cocaine are mainly water soluble and are less irritating to the body than the adulterants found in opiates, and thus are less likely to cause scarring (Karch, 2002).


Cocaine abuse as a cause of liver damage. There is evidence that cocaine metabolites, especially cocaethylene, are toxic to the liver. However, the possibility that cocaine abuse can cause or contribute to liver disease remains controversial (Karch, 2002). Medical research has also discovered that a small percentage of the population simply cannot biotransform cocaine, no matter how small the dosage level used. In this pseudocholinesterase deficiency (Gold, 1989), the liver is unable to produce an enzyme essential to breaking down cocaine. For people with this condition, the use of even a small amount of cocaine could be fatal. Cocaine abuse as a cause of central nervous system damage. Research has now demonstrated that cocaine abuse causes a reduction in cerebral blood flow, and chronic cocaine abusers demonstrate cognitive deficits in the areas of verbal learning, memory, and attention (Kosten & Sofuoglu, 2004). This neurological damage has been classified as “moderate to severe” by Kaufman et al. (1998, p. 376). O’Malley, Adamse, Heaton, and Gawin (1992) found, for example, that 50% of the subjects who abused cocaine on a regular basis showed evidence of cognitive impairment on the neuropsychological tests used in their study, as compared to only 15% of the control subjects. The mechanism by which chronic cocaine use might cause cognitive dysfunction is thought to be another consequence of its vasoconstrictive effects on the blood vessels in the brain (Brust, 1997; Pearlson et al., 1993). The reduction in blood flow to the brain is called a state of cerebral ischemia, and if this state continues for too long, the neurons deprived of blood will begin to die (Kaufman et al., 1998). This process is known as a cerebral vascular accident (CVA, or stroke), of which there are two main types. Scientists have known since the mid-1980s that cocaine increases the user’s risk for a CVA, especially a hemorrhagic CVA (Vega, Kwoon, & Lavine, 2002). 
In the hemorrhagic CVA, a weakened section of an artery in the brain ruptures, both depriving the neurons dependent on that blood vessel of blood and placing the patient’s life at risk from the uncontrolled hemorrhage. Cocaine-induced strokes might be microscopic in size (the “microstroke”), or they might involve major regions of the brain. Scientists have estimated that cocaine abusers are 14 times more likely to suffer a stroke than are nonabusers (Johnson, Devous, Ruiz, &


Ait-Daoud, 2001), and cocaine-induced strokes have reached “epidemic proportions” (Kaufman et al., 1998, p. 376) in recent years. The risk for a cocaine-induced CVA appears to be cumulative, with long-term users being at greater risk than newer users. However, a cocaine-induced CVA is possible even in a first-time user. One possible mechanism by which cocaine might cause CVAs, especially in users without preexisting vascular disease, is through drug-induced periods of vasospasm and reperfusion between periods of drug use (Johnson et al., 2001; Karch, 2002). This cycle can damage the blood vessels within the brain, contributing to the development of a CVA in the user. Cocaine-induced strokes have been documented to occur in the brain, retina, and spinal cord (Brust, 1997; Derlet, 1989; Derlet & Horowitz, 1995; Jaffe, 2000b; Mendoza & Miller, 1992). Cocaine abusers may also experience transient ischemic attacks (TIAs) as a result of their cocaine use, a phenomenon that could very well be caused by the cocaine-induced vasoconstriction identified by Kaufman et al. (1998). Another very rare complication of cocaine use is the development of a drug-induced neurological condition known as the serotonin syndrome (Mills, 1995). Further, cocaine has been known to induce seizures in some users (Derlet, 1989; O’Connor et al., 1992). The mechanism by which cocaine contributes to or causes the development of seizures is not well understood, but the individual’s potential for cocaine-induced seizures appears to be significantly higher for the first 12 hours after using cocaine (O’Connor et al., 1992). The development of seizures does not appear to be associated with past cocaine use, for seizures have been noted in first-time users as well as in individuals who were using cocaine at doses that they had previously used without complications (Gold, 1997; Post et al., 1987). 
It is theorized that cocaine abuse might initiate a process of “kindling” through some unknown mechanism (Karch, 2002; Post et al., 1987). Although cocaine might have a short half-life, “the sensitization effects are long lasting” (Post et al., 1987, p. 113). The authors believe that the sensitizing effects of cocaine might thus lower the seizure threshold, at least in some individuals, observing that “repeated administration of a given dose of cocaine without resulting seizures would in no way assure the continued


safety of this drug even for that given individual” (p. 159; italics added for emphasis). The amygdala is known to be especially vulnerable to the kindling phenomenon (Taylor, 1993). Thus, cocaine’s effects on the amygdala can make this region of the brain hypersensitive, causing the user to experience cocaine-induced seizures. In addition to all of the above, there is evidence that chronic cocaine abuse can cause, or at least significantly contribute to, a disruption in body temperature regulation known as malignant hyperthermia (Karch, 2002). Individuals who develop this condition suffer extremely high, possibly fatal, body temperatures as a result of CNS damage. In children, cocaine seems to lower the seizure threshold for those predisposed to seizures (Mott, Packer, & Soldin, 1994). The relationship between cocaine abuse and seizures in children was so strong that the authors recommended that all children and adolescents brought to the hospital for a previously undiagnosed seizure disorder be tested for cocaine abuse. Cocaine’s effects on the user’s emotional state and perceptions. It has been suggested (Hamner, 1993) that cocaine abuse might exacerbate the symptoms of posttraumatic stress disorder (PTSD). The exact mechanism by which cocaine seems to be able to add to the emotional distress of PTSD is not clear at this time. However, there does appear to be evidence that individuals who suffer from PTSD might find their distress made worse by the psychobiological interaction between the effects of the drug and their traumatic experiences. There is evidence that cocaine use might cause an exacerbation of the symptoms of some medical disorders such as Tourette’s syndrome and tardive dyskinesia (Lopez & Jeste, 1997). Further, after periods of extended use, some people have experienced the so-called cocaine bugs, a hallucinatory experience in which users feel as if bugs were crawling on or just under their skin. This is known as formication (“Amphetamines,” 1990). 
Patients have been known to burn their arms or legs with matches or cigarettes, or scratch themselves repeatedly, in an attempt to rid themselves of these unseen bugs (Lingeman, 1974). Cocaine has also been implicated as one cause of drug-induced anxiety or panic reactions (DiGregorio, 1990). One study in the early 1990s found that one-quarter of the patients seen at one panic disorder clinic eventually admitted to the use of cocaine (Louie, 1990).



Up to 64% of cocaine users experience some degree of anxiety as a side effect of the drug, according to the author. There is a tendency for cocaine users to try to self-medicate this side effect through the use of marijuana. Other chemicals often used by cocaine abusers in an attempt to control the drug-induced anxiety include the benzodiazepines, narcotics, barbiturates, and alcohol. These cocaine-induced anxiety and panic attacks might continue for months (Gold & Miller, 1997a) or even years (Satel, Kosten, Schuckit, & Fischman, 1993) after the individual’s last use of cocaine. Between 53% (Decker & Ries, 1993) and 65% (Beebe & Walley, 1991) of chronic cocaine abusers will develop a drug-induced psychosis very similar in appearance to paranoid schizophrenia. This condition is sometimes called “coke paranoia” by illicit cocaine users. Although it is very similar to paranoid schizophrenia, the symptoms of a cocaine-induced psychosis tend to include more suspiciousness and a strong fear of being discovered or of being harmed while under the influence of cocaine (Rosse et al., 1994). Further, the cocaine-induced psychosis is usually of relatively short duration, lasting perhaps a few hours (Haney, 2004; Karch, 2002) to a few days (Kerfoot et al., 1996; Schuckit, 2000) after the person stops using cocaine. The mechanism by which chronic cocaine abuse might contribute to the development of a drug-induced psychosis remains unknown. Gawin et al. (1994) noted that the delusions found in a cocaine-induced psychotic reaction usually clear after the individual’s sleep pattern has returned to normal, suggesting that cocaine-induced sleep disturbances might be one factor in the evolution of this drug-induced psychosis. Another theory suggests that individuals who develop a cocaine-induced paranoia might possess a biological vulnerability for schizophrenia, which is then activated by chronic cocaine abuse (Satel & Edell, 1991). 
Kosten and Sofuoglu (2004) disputed this theory, however, stating that there was little evidence to suggest that cocaine-induced psychotic episodes are found mainly in people predisposed to these disorders. Approximately 20% of the chronic users of crack cocaine in one study were reported to have experienced drug-induced periods of rage, or outbursts of anger and violent assaultive behavior (Beebe & Walley, 1991), which may be part of a cocaine-induced

delirium that precedes death (Karch, 2002). This cocaine-induced delirium might reflect the effects of cocaine on the synuclein family of proteins within the neuron. Under normal conditions, these protein molecules are thought to help regulate the transportation of dopamine within the neuron. But recent evidence (Mash et al., 2003) suggests that cocaine can alter synuclein production within the cell, causing or contributing to the death of the affected neurons, if not the individual. Indeed, cocaine-induced changes in synuclein production and utilization in the brain might cause cocaine-induced delirium, which is occasionally fatal to the user. Finally, either a few hours after snorting the drug, or within 15 minutes if the person has injected it, the user slides into a state of depression. After periods of prolonged cocaine use, the individual’s post-cocaine depression might reach suicidal proportions (Maranto, 1985). Cocaine-induced depression is thought to be the result of cocaine’s depleting the nerve cells in the brain of the neurotransmitters norepinephrine and dopamine. After a period of abstinence, the neurotransmitter levels usually recover and the individual’s emotions return to normal. But there is a very real danger that the cocaine abuser might attempt or complete suicide as a result of a drug-induced depression. One recent study in New York City found that one-fifth of all suicides involving a victim under the age of 60 were cocaine related (Roy, 2001). Cocaine use as an indirect cause of death. In addition to its very real potential to cause death by a variety of mechanisms, cocaine use may indirectly cause, or at least contribute to, the premature death of the user. For example, cocaine abuse is a known cause of rhabdomyolysis as a result of its toxic effects on muscle tissue and its vasoconstrictive effects, which can cause muscle ischemia (Karch, 2002; Richards, 2000). 
There is also evidence that cocaine abuse may alter the blood-brain barrier, facilitating the entry of the human immunodeficiency virus (HIV) into the brain (see Chapter 33).

Summary

Cocaine has a long history, one that stretches back hundreds if not thousands of years. The active agent of the coca leaf, cocaine, was isolated only about 160 years ago, but people were using the coca


leaf for a long time before that. Coincidentally, at just about the time that cocaine was isolated, the hypodermic needle was developed, which allowed users to inject large amounts of relatively pure cocaine directly into the circulatory system where it was rapidly transported to the brain. Users quickly discovered that intravenously administered cocaine brought on a sense of euphoria, which immediately made it a popular drug of abuse. At the turn of the 20th century, government regulations limited the availability of cocaine, which was mistakenly classified as a narcotic at that time. The development of the amphetamine family of drugs in the 1930s, along with increasingly strict enforcement of the laws against cocaine use, allowed drug-addicted individuals to substitute amphetamines for the increasingly rare cocaine. In time, the dangers of cocaine use were forgotten by all but a few medical historians. But in the 1980s, cocaine again surfaced as a major drug of abuse in the United States as government regulations made it difficult for users to obtain amphetamines. To entice users, new forms of cocaine were introduced, including concentrated “rocks” of cocaine, known as crack. To the cocaine user of the 1980s, cocaine seemed to be a harmless drug, although historical evidence suggested otherwise. Cocaine has been a major drug of abuse ever since.


In the 1980s, users rediscovered the dangers associated with cocaine abuse, and the drug has gradually fallen into disfavor. At this point it would appear that the most recent wave of cocaine addiction in the United States peaked around 1986 and that fewer and fewer people are becoming addicted to cocaine. Because of the threat of HIV-1 infection (see Chapter 33) and the increased popularity of heroin in the United States, many cocaine abusers are smoking a combination of crack cocaine and heroin. When cocaine is smoked, either alone or in combination with heroin prepared for smoking, the danger of HIV transmission is effectively avoided, as intravenous needles are not involved. In the past few years, the reported number of cocaine- and heroin-related emergency room visits has significantly increased in this country. This increase would seem to reflect the growing popularity of a mixture of both cocaine and heroin that is usually smoked. Thus, it would appear that cocaine will remain a part of the drug abuse problem well into the 21st century. Schuckit (2000) reported that cocaine was isolated in 1857 rather than 1859. Surprisingly, recent research (Post et al., 1987) has cast doubt on the antidepressant properties of cocaine. Although it is true that not everybody who uses cocaine will become addicted, it is not possible at this time to determine who will become addicted if they should try this drug. If only for this reason, cocaine abuse should be discouraged.


Marijuana Abuse and Addiction

Introduction

For many generations, marijuana has been a most controversial substance of abuse, and it is the subject of many misunderstandings. For example, people talk about marijuana as if it were a chemical in its own right, when in reality it is a plant, a member of the Cannabis sativa family of plants. The name Cannabis sativa is Latin for “cultivated hemp” (Green, 2002), reflecting the fact that some strains of Cannabis sativa have long been cultivated for the hemp fiber they produce, used to manufacture a number of substances.¹ Other strains of the Cannabis sativa family have been found to contain high levels of certain compounds found to have medicinal properties and a psychoactive effect. Unfortunately, in the United States, the hysteria surrounding the use or abuse of Cannabis sativa has reached the point that any member of this plant family is automatically assumed to have an abuse potential (Williams, 2000). Indeed, to differentiate forms of Cannabis sativa producing compounds that might be abused from members of this plant family that have low levels of these same compounds and are potentially useful plants for manufacturing and industry, Williams (2000) suggested that the term hemp be used for the latter. Marijuana, he suggested, should be used only to refer to those strains of Cannabis sativa that have an abuse potential. This is the pattern that will be followed in this text. Unlike other substances such as alcohol, cocaine, or the amphetamines, marijuana is not in itself a drug of abuse. It is a plant that happens to contain some chemicals that, when introduced into the body, alter the individual’s perception of reality in a way some people find pleasurable. In this sense, marijuana is similar to the tobacco plant: Each contains compounds which, when introduced into the body, cause the user to experience certain effects that the individual deems desirable. In this chapter, the uses and abuses of marijuana will be discussed.

History of Marijuana Use in the United States

Almost 5,000 years ago, cannabis was in use by Chinese physicians as a treatment for malaria, constipation, and the pain of childbirth and, when mixed with wine, as a surgical anesthetic (Robson, 2001). Cannabis continued to be used for medicinal purposes throughout much of recorded history. As recently as the 19th century, physicians in the United States and Europe used marijuana as an analgesic, a hypnotic, a treatment for migraine headaches, and an anticonvulsant (Grinspoon & Bakalar, 1993, 1995). The anticonvulsant properties of cannabis were illustrated by an incident that took place in 1838, when physicians were able, through the use of hashish, to completely control the terror and “excitement” (Elliott, 1992, p. 600) of a patient who had contracted rabies. In the early years of the 20th century, cannabis came to be viewed with disfavor as a side effect of the hue-and-cry against opiate abuse (Walton, 2002). At the same time, researchers concluded that the chemicals in the marijuana plant were either ineffective or at least less effective than pharmaceuticals being introduced as part of the fight against disease. These two factors caused it to fall into disfavor as a pharmaceutical (Grinspoon & Bakalar, 1993, 1995), and by the 1930s, marijuana was removed from the doctor’s


¹ The Gutenberg and King James Bibles were first printed on paper manufactured from hemp, and Rembrandt and Van Gogh both painted on canvas made from hemp (Williams, 2000). George Washington cultivated cannabis to obtain hemp, but there is no direct evidence that he smoked marijuana (Talty, 2003).



pharmacopoeia. By a historical coincidence, during the same period when medicinal marijuana use was being viewed with suspicion, recreational marijuana smoking was being introduced into the United States by immigrants and itinerant workers from Mexico who had come north to find work (Mann, 1994). Recreational marijuana smoking was quickly adopted by others, especially jazz musicians (Musto, 1991). With the start of Prohibition in 1920, many members of the working class turned to growing or importing marijuana as a substitute for alcohol (Gazzaniga, 1988). Recreational cannabis use declined with the end of Prohibition, when alcohol use once more became legal in the United States. But a small minority of the population continued to smoke marijuana, and this alarmed government officials. Various laws were passed in an attempt to eliminate the abuse of cannabis, including the Marijuana Tax Act of 1937.² But the “problem” of marijuana abuse in the United States never entirely disappeared, and by the 1960s the use of marijuana again became popular. Indeed, by the start of the 21st century it had become the most commonly abused illicit drug in the United States (Martin, 2004), with more than 50% of the entire population of the United States having used it at least once (Gold, Frost-Pineda, & Jacobs, 2004; Gruber & Pope, 2002). Medicinal marijuana. Since the 1970s a growing number of physicians in the United States have again started to wonder whether one or more of the chemicals found in the marijuana plant might be of value in the fight against disease and suffering in spite of its legal status as a controlled substance. This interest was sparked by reports from marijuana smokers being treated for cancer that they experienced less nausea if they smoked marijuana after receiving

² Contrary to popular belief, the Marijuana Stamp Act of 1937 did not make possession of marijuana illegal but did impose a small tax on it. People who paid the tax would receive a stamp to show that they had paid it. Obviously, since the stamps would also alert authorities to the fact that the owners either had marijuana in their possession or planned to buy it, illegal users did not apply for the proper forms to pay the tax. The stamps are of interest to stamp collectors, however, and a few collectors have actually paid the tax in order to obtain the stamp for their collections. The Federal Marijuana Stamp Act was found to be unconstitutional by the U.S. Supreme Court in 1992. However, 17 states still have similar laws on the books (“Stamp Out Drugs,” 2003).


chemotherapy treatments (Robson, 2001). Physicians began to follow up on these reports and found that marijuana, or selected chemicals found in the plant, might control the nausea sometimes caused by cancer chemotherapy. The drug Marinol (dronabinol) was introduced as a synthetic version of one of the chemicals found in marijuana, THC (to be discussed below), to control severe nausea. Marinol has met with mixed success, possibly because marijuana’s antinausea effects are caused by a chemical other than THC (Smith, 1997). Preliminary research conducted in the 1980s suggested that the practice of smoking marijuana might be helpful in treating certain forms of otherwise unmanageable glaucoma (Green, 2002; Grinspoon & Bakalar, 1993; Jaffe, 1990; Voelker, 1994). Unfortunately, the initial promise of marijuana in the control of glaucoma was not supported by follow-up studies (Watson, Benson, & Joy, 2000). Although marijuana smoking does cause a temporary reduction in the fluid pressure within the eye, only 60% to 65% of patients who smoke marijuana experience this effect (Green, 1998). Further, in order to achieve and maintain an adequate reduction in eye pressure levels, the individual would have to smoke 9 to 10 marijuana cigarettes per day—one every 2 to 3 hours (Green, 1998). Research into the possible use of marijuana in the treatment of glaucoma continues at this time. There is evidence to suggest that marijuana can relieve at least some of the symptoms of amyotrophic lateral sclerosis (ALS) for short periods of time (Amtmann, Weydt, Johnson, Jensen, & Carter, 2004). Smoking marijuana also seems to help patients with multiple sclerosis, rheumatoid arthritis, and chronic pain conditions (Green, 2002; Grinspoon & Bakalar, 1997a; Robson, 2001; Watson, Benson, & Joy, 2000). An example of this is the work of the team of Karst et al. 
(2003), who utilized a synthetic analog of THC³ known as CT-3⁴ to explore whether this compound might be useful in the control of neuropathic pain. The authors found that CT-3 was not only effective in controlling neuropathic pain but also did not seem to have any adverse effects in the experimental subjects.

³ See the “Pharmacology of Marijuana” section.
⁴ Shorthand for 1′,1′-dimethylheptyl-Δ8-tetrahydrocannabinol-11-oic acid.




Preliminary evidence suggests that marijuana might help control the weight loss often seen in patients with late-stage AIDS or cancer (Green, 2002; Watson et al., 2000). There is also evidence, based on animal research, that a compound found in marijuana might function as a potent antioxidant, which might limit the amount of damage caused by cerebral vascular accidents (CVAs, or strokes) (Hampson et al., 2002), and this possibility is being actively explored by scientists eager to find a new tool for treating stroke victims. There is also limited evidence suggesting that marijuana might be useful in controlling the symptoms of asthma, Crohn’s disease, and anorexia as well as emphysema, epilepsy, and possibly hypertension (Green, 2002). One exciting possibility is that marijuana might also contain a compound that inhibits tumor growth (Martin, 2004), but research into possible medical applications of cannabis remains banned by the U.S. government (Green, 2002). Claims that marijuana has a medicinal value are dismissed on the grounds that they are only anecdotal in nature (Marmor, 1998), although the Institute of Medicine concluded that there was enough evidence to warrant an in-depth study of the plant’s medicinal value (Watson et al., 2000). Unfortunately, the Drug Enforcement Administration (DEA) has adopted the curious position that since it recognizes no legitimate medical use for marijuana, there is no need to look for any possible medical applications of the plant. For example, an administrative law judge ruled in 1988 that marijuana should be reclassified as a Schedule II substance (see Appendix IV). The DEA overruled its own judge and determined that marijuana would remain a Schedule I substance (Kassirer, 1997). Thus, in spite of evidence suggesting that at least some of the chemicals in marijuana might have medicinal value, attempts at careful, systematic research in this area have been blocked by the DEA (Stimmel, 1997b). 
In the late 1990s, a trend developed in which various state legislatures would debate the medicinal use of marijuana and put the matter to a vote. Several states, such as California, adopted measures approving the medicinal use of marijuana, often after a popular referendum on the subject had been approved by the voters. Unfortunately, the federal government continues to use bureaucratic mechanisms to block these efforts

(Sadock & Sadock, 2003). Thus, it would appear that marijuana will continue to remain a controversial recreational substance for many years to come.

A Question of Potency

Ever since the 1960s, marijuana abusers have sought ways to enhance the effects of the chemicals in the plant by adding other substances to the marijuana before smoking it, or by using strains with the highest possible concentrations of the compounds thought to cause marijuana’s effects. To this end, users have taken to growing strains of marijuana that have high concentrations of the compounds most often associated with pleasurable effects, and marijuana might be said to be the biggest cash crop in the United States at this time (Ross, 2002; Schlosser, 2003).⁵ There is strong evidence that much of the marijuana sold in the United States at this time is more potent than the marijuana commonly used in the 1960s and 1970s. The average marijuana sample seized by the police in 1992 had 3.08% THC; by 2002 this had increased to 5.11% THC (Compton, Grant, Colliver, Glantz, & Stinson, 2004).⁶ There have been reports of marijuana with THC levels of 15% (Segal & Duffy, 1999) and even up to 20% for some Sinsemilla and Netherwood strains (Hall & Solowij, 1998; Weiss & Millman, 1998). One strain developed in British Columbia, Canada, reportedly has a THC content of 30% (Shannon, 2000).

A Technical Point

THC is found throughout the marijuana plant, but the highest concentrations are found in the small upper leaves and flowering tops of the plant (Hall & Solowij, 1998). Historically, the term marijuana is used to identify preparations of the cannabis plant that are used for smoking or eating. The term hashish is used to identify the thick resin that is obtained from the flowers of the

⁵ This is to say that the estimated retail value of the marijuana being raised in the United States, not the amount being cultivated, makes it the most valuable cash crop in this country at this time.
⁶ Paradoxically, Schlosser (2003) suggested that the higher potency of the marijuana currently being sold might actually increase the safety of marijuana smoking, as the user would need to smoke less to achieve a desired level of intoxication than with less potent preparations.




[Figure: line graph, 1975–2004, showing lifetime illicit substance use and lifetime cannabis use as percentages (0–50%) by year.]

FIGURE 13.1 Comparison of marijuana abuse frequency with overall illicit drug abuse frequency: 1975–2004. Source: Based on Johnston, O’Malley, Bachman, & Schulenberg (2004a).

marijuana plant. This resin is dried, forming a brown or black substance that has a high concentration of THC. This is subsequently either ingested orally (often mixed with some sweet substance) or smoked. Hash oil is a liquid extracted from the plant, which is 25% to 60% THC; this is added to marijuana or hashish to enhance its effect. However, in this chapter, the generic term marijuana is used for any part of the plant that is to be smoked or ingested.

Scope of the Problem of Marijuana Abuse

Estimates of the number of marijuana abusers around the world range from 146 million (United Nations, 2004) to 200–300 million people (Macfadden & Woody, 2000). Fully 30% of all marijuana abusers live in Asia, whereas North America (both the United States and Canada) and Africa each have about 24% of the world’s marijuana abusers. Another 20% are found in Europe (United Nations, 2004). In the United States, marijuana is the most frequently abused illicit substance, a status it has held for a number of decades (Compton et al., 2004; Hall & Degenhardt, 2005; Sussman & Westreich, 2003). Figure 13.1 provides an overview of the proportion of the illicit drug use

problem that is caused by marijuana abuse. It is estimated that more than 50% of the entire population of this country has used marijuana at least once (Gold et al., 2004). There have been no statistically significant changes in the overall rate of marijuana abuse in the adult population in the United States since 1991, although some subgroups have shown an increase in the frequency of marijuana abuse and the percentage of abusers who are addicted has increased in that period (Compton et al., 2004). Marijuana use peaks in early adulthood and usually is discontinued by the time people are in their late 20s or early 30s (Gruber & Pope, 2002). About 46.1% of the seniors in the class of 2003 admitted to having used marijuana at least once (Johnston, O’Malley, & Bachman, 2003a), and the average age at which marijuana use begins is approximately 18 years (Hubbard, Franco, & Onaivi, 1999). Approximately 10% of those who use marijuana do so daily, and another 20% to 30% use it once a week (Hall & Solowij, 1998). Only a small percentage of marijuana abusers use more than 10 grams a month (about enough for 25–35 marijuana cigarettes) (MacCoun & Reuter, 2001). But marijuana is addictive, and it is estimated that 17% of those people who smoke marijuana more than five times


Marijuana Abuse and Addiction

In spite of its popularity as a drug of abuse, the mechanisms by which marijuana affects normal brain function remain poorly understood (Sussman & Westreich, 2003). The Cannabis sativa plant is known to contain at least 400 different compounds, of which an estimated 61 have some psychoactive effect (Gold et al., 2004; Sadock & Sadock, 2003; Weiss & Millman, 1998). The majority of marijuana’s psychoactive effects are apparently the result of a single compound, Δ-9-tetrahydrocannabinol7 (“THC”), which was first identified in 1964 (Mirin, Weiss, & Greenfield, 1991; Restak, 1994; Sadock & Sadock, 2003). A second compound, cannabidiol (CBD), is also inhaled when marijuana is smoked, but researchers are not sure whether this compound has a psychoactive effect on humans or not (Nelson, 2000). Once in the body, THC is biotransformed into the chemical 11-hydroxy-Δ-9-THC, a metabolite that actually is thought to cause its effects in the central nervous system (Sadock & Sadock, 2003). Only about 1% of the THC that is absorbed into the body is able to penetrate the blood-brain barrier to reach the brain, in part because the THC molecule is protein bound (Jenkins & Cone, 1998; Macfadden & Woody, 2000). Scientists have identified two receptor sites for THC in the body, the CB1 and CB2 receptors. Evidence also suggests the possibility that there are other THC receptor sites yet to be discovered (Karst et al., 2003). The CB1 receptor sites are located in the hippocampus, cerebral cortex, basal ganglia, and cerebellum regions of the brain (Gruber & Pope, 2002; Martin, 2004; Watson et al., 2000; Zajicek et al., 2003). In general,

the THC that binds to the CB1 receptor site seems to inhibit the release of excitatory neurotransmitters in these regions of the brain, possibly by opening the potassium ion channel in certain neurons while inhibiting the passage of calcium ions into the neurons, thus reducing the rate at which they might “fire” (Martin, 2004; Wingerchuk, 2004; Zajicek et al., 2003). The CB2 receptor sites are found mainly in peripheral tissues that help mediate the body’s immune response (Martin, 2004; Reynolds & Bada, 2003), which might explain why cannabis seems to have a mild immunosuppressant effect. Scientists have also identified a pair of molecules within the brain that “bind” to the same receptor sites that THC occupies when the individual smokes marijuana. The first of these molecules is anandamide and the second is called sn-2-arachidonoylglycerol (or 2-AG) (Martin, 2004). Scientists suspect that anandamide, which functions as a neurotransmitter in the brain, is involved in such activities as mood, memory, cognition, perception, muscle coordination, sleep, regulation of body temperature, and appetite; it possibly helps to regulate the immune system (Gruber & Pope, 2002; Nowak, 2004; Parrott, Morinan, Moss, & Scholey, 2004; Robson, 2001). Although THC uses this same receptor site, it seems to be 4–20 times as potent as anandamide, thus causing it to have a far stronger effect than this natural neurotransmitter (Martin, 2004). Sn-2-arachidonoylglycerol has not been studied in detail. It is thought to be manufactured in the hippocampus, a region of the brain known to be involved in the formation of memories (Parrott et al., 2004; Watson et al., 2000). Animal research would suggest that the brain uses these cannabinoid-type chemicals to help eliminate aversive memories (Marsicano et al., 2002; Martin, 2004). In addition, marijuana has been found to affect the synthesis and turnover of acetylcholine8 in the limbic system (Hartman, 1995) and the cerebellum (Fortgang, 1999).
This might be the mechanism by which marijuana causes the user to feel sedated and relaxed. Marijuana has a mild analgesic effect and is known to potentiate the analgesia induced by morphine (Martin, 2004). These effects appear to be caused by marijuana-induced inhibition of the enzyme adenylate



will become addicted to it (Johns, 2001). Each year in the United States 100,000 people seek treatment for marijuana addiction (Hubbard et al., 1999). Because of its popularity, the legal and the social sanctions against marijuana use have repeatedly changed in the past 30 years. In some states, possession of a small amount of marijuana was decriminalized, only to be recriminalized just a few years later (Macfadden & Woody, 2000). Currently, the legal status of marijuana varies from one state to another.

Pharmacology of Marijuana

7Δ is the Greek letter for “delta.”



cyclase, which is involved in the transmission of pain messages, although the exact mechanism by which this is accomplished remains to be identified. Marijuana is also able to inhibit the production of cyclooxygenase,9 which may play a role in its analgesic effects (Carvey, 1998). The analgesic effects of marijuana seem to peak at around 5 hours after it is used, and evidence suggests that marijuana is about as potent as codeine (Karst et al., 2003; Robson, 2001). Once in the circulation, THC is rapidly distributed to blood-rich organs such as the heart, lungs, and brain. It then slowly works its way into tissues that receive less blood, such as the fat tissues of the body, where unmetabolized THC will be stored. Repeated episodes of marijuana use over a short period of time allow significant amounts of THC to be stored in the body’s fat reserves. In between periods of active marijuana use, the fat-bound THC is slowly released back into the blood (Schwartz, 1987). In rare cases, this process results in heavy marijuana users testing positive for THC in urine toxicology screens for weeks after their last use of marijuana (Schwartz, 1987). However, this happens only with very heavy marijuana users, and casual users will usually have metabolites of THC in their urine for only about 3 days after the last use of marijuana.10 The primary site of THC biotransformation is the liver, and more than 100 metabolites are produced during the process of THC biotransformation (Hart, 1997). The half-life of THC appears to vary as a result of whether metabolic tolerance has developed. However, the liver is not able to biotransform THC very quickly, and in experienced users THC has a half-life of about 3 days (Schwartz, 1987) to a week for a single dose (Gruber & Pope, 2002). About 65% of the metabolites of THC are excreted in the feces, and the rest are excreted in the urine (Hubbard et al., 1999; Schwartz, 1987).
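The slow clearance described above can be pictured with a simple worked example. As an illustration only, assuming first-order (exponential) elimination and using the roughly 3-day half-life cited above for experienced users, the fraction of a dose still in the body after a given number of days is 0.5 raised to the power (days ÷ half-life):

```python
# Illustrative first-order elimination sketch. The exponential model
# and the 3-day half-life are simplifying assumptions taken from the
# estimates cited in the text, not a validated pharmacokinetic model.
def fraction_remaining(days, half_life_days=3.0):
    """Fraction of an absorbed dose remaining after `days`."""
    return 0.5 ** (days / half_life_days)

# After one half-life (3 days), half of the dose remains; after about
# 10 days, roughly 10% of a single dose would still be present.
print(round(fraction_remaining(3), 3))   # prints 0.5
print(round(fraction_remaining(10), 3))  # prints 0.099
```

Under these assumptions it takes several half-lives (on the order of 2 weeks or more) for a single dose to fall below a few percent, which is consistent with the text's observation that metabolites can linger long after use.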
Tolerance to the subjective effects of THC will develop rapidly, and once tolerance has developed users must either wait a few days until their tolerance begins

9See Glossary.
10Some individuals have claimed that their urine samples tested positive for THC because they had used a form of beer made from the hemp plant. Unfortunately, test data fail to support the claim that one might ingest THC from this beer.


to diminish or alter the manner in which they use the substance. For example, after tolerance to marijuana has developed the chronic marijuana smoker must use “more potent cannabis, deeper, more sustained inhalations, or larger amounts of the crude drug” (Schwartz, 1987, p. 307) in order to overcome tolerance.

Interactions between marijuana and other chemicals. There has been relatively little research into the possible interaction between marijuana and other chemicals. It was suggested that when patients taking lithium used marijuana, their blood lithium levels would increase (Ciraulo, Shader, Greenblatt, & Barnhill, 1995). The reason for this increase in blood lithium level is not clear. However, because lithium is quite toxic and has only a narrow “therapeutic window,” this interaction between marijuana and lithium is potentially dangerous to the person who uses both substances. There has also been one case report of a patient who smoked marijuana while taking Antabuse (disulfiram). The patient developed a hypomanic episode that subsided when he stopped using marijuana (Barnhill, Ciraulo, Ciraulo, & Greene, 1995). When the patient again resumed the use of marijuana while taking Antabuse, he again became hypomanic, according to the authors, suggesting that the episode of mania was due to some unknown interaction between these two chemicals. For reasons that are not clear, adolescents who use marijuana while taking an antidepressant medication such as Elavil (amitriptyline) run the risk of developing a drug-induced delirium. Thus, individuals who are taking antidepressants should not use marijuana. Cocaine users will often smoke marijuana concurrently with their use of cocaine because they believe that the sedating effects of marijuana will counteract the excessive stimulation caused by the cocaine. Unfortunately, cocaine is known to have a negative impact on cardiac function when it is abused.
There has been no research into the combined effects of marijuana and cocaine on cardiac function in either healthy volunteers or patients with some form of preexisting cardiovascular disease. Craig (2004) warned against the concurrent use of alcohol and marijuana. One of the body’s natural defenses against poisons is vomiting. Marijuana inhibits nausea and vomiting. If the person were to ingest too much alcohol while using marijuana, Craig (2004) suggested that his or her body would be less likely to


attempt to expel some of the alcohol through vomiting, raising the individual’s chance of an overdose of the alcohol. There has been no research to test this hypothesis, but the concurrent use of alcohol and cannabis should be avoided on general principles.

Methods of Administration

In the United States, marijuana is occasionally ingested by mouth, usually after it has been baked into a product such as cookies or brownies. This process will allow the user to absorb about 4% to 12% of the available THC, with a large part of the THC being destroyed by the chemicals of the digestive tract (Drummer & Odell, 2001; Gold et al., 2004; Stimmel, 1997). In contrast to smoked marijuana, oral ingestion results in a slower absorption into the general circulation, so that the user does not feel the effects of THC until 30–60 minutes (Mirin et al., 1991) or perhaps 2 hours (Schwartz, 1987) after ingesting it. The peak blood concentration of THC is usually seen 60–90 minutes after the person has ingested the cookie or brownie, although in rare cases this might be delayed for as long as 1 to 5 hours (Drummer & Odell, 2001). Estimates of the duration of marijuana’s effects when ingested orally range from 3 to 5 hours (Mirin et al., 1991; Weiss & Mirin, 1988) to 8 to 24 hours (Gruber & Pope, 2002). The most popular means by which marijuana is abused is smoking (Gruber & Pope, 2002), a practice that can be traced back at least 5,000 years (Walton, 2002). It has been estimated that almost 60% of the available THC is admitted into the body when marijuana is smoked (Drummer & Odell, 2001; Gold et al., 2004). Marijuana can be smoked alone or mixed with other substances. Most commonly, the marijuana is smoked by itself in the form of cigarettes commonly called “joints.” The typical marijuana cigarette usually contains between 500 mg and 750 mg of marijuana and provides an effective dose of approximately 2.5 mg to 20 mg of THC per cigarette (depending on potency). The amount of marijuana in the average joint weighs about 0.014 ounces (Abt Associates, Inc., 1995b).
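The difference between the oral (4% to 12%) and smoked (up to roughly 60%) absorption figures quoted above can be made concrete with simple arithmetic. The following sketch multiplies a hypothetical THC content by the cited bioavailability estimates; the 10 mg figure is an illustrative assumption, not a recommended or typical dose:

```python
# Illustrative unit arithmetic only: how much THC reaches the body
# under the bioavailability estimates quoted in the text.
def absorbed_mg(thc_content_mg, bioavailability):
    """THC absorbed (mg) = content (mg) x fractional bioavailability."""
    return thc_content_mg * bioavailability

# A hypothetical cigarette containing 10 mg of THC:
print(round(absorbed_mg(10, 0.60), 2))  # prints 6.0 (smoked, ~60%)
print(round(absorbed_mg(10, 0.08), 2))  # prints 0.8 (oral, midrange 8%)
```

On these cited estimates, smoking delivers several times more of the available THC than eating the same material, which helps explain why oral users compensate with larger amounts and experience a delayed onset.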
A variation on the marijuana cigarette is the “blunt.” Blunts are made by removing one of the outer leaves of a cigar, unrolling it, filling it with high potency marijuana mixed with chopped cigar tobacco, and then rerolling the mixture into the cigar’s outer


leaves so that the mixture assumes the shape of the original cigar (Gruber & Pope, 2002). Users report some degree of stimulation, possibly from the nicotine in the cigar tobacco entering the lungs along with the marijuana smoke. The technique by which marijuana is smoked is somewhat different from the normal smoking technique used for cigarettes or cigars (Schwartz, 1987). Users must inhale the smoke deeply into their lungs, then hold their breath for between 20 and 30 seconds in an attempt to get as much THC into the blood as possible (Schwartz, 1987). Because THC crosses through the lungs into the circulation very slowly, only 2% to 50% of the THC that is inhaled will actually be absorbed (Macfadden & Woody, 2000). But the effects of this limited amount of THC begin within seconds (Weiss & Mirin, 1988) to perhaps 10 minutes (Bloodworth, 1987). It has been estimated that to produce a sense of euphoria the user must inhale approximately 25 to 50 micrograms per kilogram of body weight when marijuana is smoked, and between 50 and 200 micrograms per kilogram of body weight when it is ingested orally (Mann, 1994). Doses of 200 to 250 micrograms per kilogram when marijuana is smoked or 300 to 500 micrograms per kilogram when taken orally may cause the user to hallucinate, according to the author. As these figures suggest, it takes an extremely large dose of THC for this to occur. Marijuana users in other countries often have access to high-potency sources of THC and thus may achieve hallucinatory doses. But it is extremely rare for marijuana users in this country to have access to such potent forms of the plant. Thus, for the most part, the marijuana being smoked in this country will not cause the individual to hallucinate. However, in many parts of the country, marijuana is classified as a hallucinogen by law-enforcement officials. The effects of smoked marijuana reach peak intensity within 30 minutes and begin to decline in an hour (Nelson, 2000).
Estimates of the duration of the subjective effects of smoked marijuana range from 2–3 (O’Brien, 2001) to 4 hours (Grinspoon & Bakalar, 1997a; Sadock & Sadock, 2003) after a single dose. The individual might suffer some cognitive and psychomotor problems for as long as 5–12 hours after a single dose, however, suggesting that the effects of smoking marijuana might last longer than the euphoria (Sadock & Sadock, 2003).
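The per-kilogram thresholds quoted above are easier to interpret when converted into a total dose for a given body weight. This is a simple unit conversion for illustration only (the 70-kg body weight is an assumed example, and none of this is dosing guidance):

```python
# Convert a micrograms-per-kilogram threshold into a total dose in
# milligrams for a given body weight. Pure unit arithmetic based on
# the per-kg estimates cited in the text (Mann, 1994).
def total_dose_mg(micrograms_per_kg, weight_kg):
    """Total dose in mg = (ug/kg x kg) / 1000."""
    return micrograms_per_kg * weight_kg / 1000.0

# For an assumed 70-kg adult, the 25-50 ug/kg euphoric threshold for
# smoked marijuana works out to roughly 1.75-3.5 mg of absorbed THC.
print(total_dose_mg(25, 70), total_dose_mg(50, 70))  # prints 1.75 3.5
```

Comparing these few milligrams with the 2.5 mg to 20 mg effective dose per cigarette cited earlier shows why a single joint is typically sufficient to produce euphoria, while the far larger hallucinatory thresholds are rarely reached.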



Proponents of the legalization of marijuana point out that in terms of immediate lethality, marijuana appears to be a “safe” drug. Various researchers have estimated that the effective dose is 1/10,000th (Science and Technology Committee Publications, 1998) to 1/20,000th, or even 1/40,000th, of the lethal dose (Grinspoon & Bakalar, 1993, 1995; Kaplan, Sadock, & Grebb, 1994). It was reported that a 160-pound person would have to smoke 900 marijuana cigarettes simultaneously to achieve a fatal overdose (Cloud, 2002). An even higher estimate was offered by Schlosser (2003), who suggested that the average person would need to smoke 100 pounds of marijuana a minute for 15 minutes to overdose on it. In contrast to the estimated 434,000 deaths each year in this country from tobacco use and the 125,000 yearly fatalities from alcohol use, only an estimated 75 marijuana-related deaths occur each year. Most marijuana-related deaths take place in accidents while the individual is under the influence of this substance rather than as a direct result of any toxic effects of THC (Crowley, 1988). As these data would suggest, there has never been a documented case of a fatal marijuana overdose (Gruber & Pope, 2002; Schlosser, 2003). Indeed, in terms of its immediate toxicity, marijuana appears to be “among the least toxic drugs known to modern medicine” (Weil, 1986, p. 47).
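The safety margins quoted above are ratios of effective dose to estimated lethal dose, in the style of a therapeutic index. As a sketch of that arithmetic only (the 2.5 mg effective dose is the low end of the per-cigarette figure cited earlier, and the ratios are the estimates cited above, not new data):

```python
# Therapeutic-index-style arithmetic: if the effective dose is
# 1/N of the lethal dose, the implied lethal dose is N times the
# effective dose. Illustrative only, using the cited estimates.
def implied_lethal_dose_mg(effective_dose_mg, safety_ratio):
    """Implied lethal dose (mg) given an effective dose and margin."""
    return effective_dose_mg * safety_ratio

# With a 2.5 mg effective dose and a 10,000-fold margin, the implied
# lethal dose would be on the order of 25 grams of absorbed THC:
print(implied_lethal_dose_mg(2.5, 10_000))  # prints 25000.0 (mg)
```

An absorbed dose of that size is far beyond what any realistic amount of smoking could deliver, which is consistent with the "900 cigarettes" estimate in the text.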

Subjective Effects of Marijuana

At moderate dosage levels, marijuana will bring about a two-phase reaction (Brophy, 1993). The first phase begins shortly after the drug enters the bloodstream, when the individual will experience a period of mild anxiety; the second phase follows—a sense of well-being or euphoria as well as a sense of relaxation and friendliness (Kaplan et al., 1994). These subjective effects are consistent with the known physical effects of marijuana. Research has found that marijuana causes “a transient increase in the release of the neurotransmitter dopamine” (Friedman, 1987, p. 47), a neurochemical thought to be involved in the experience of euphoria. The individual’s expectations influence how he or she interprets the effects of marijuana. Marijuana users tend to anticipate that the drug will (a) impair cognitive function as well as the user’s behavior, (b) help the user relax, (c) help the user interact socially and

enhance sexual function, (d) enhance creative abilities and alter perception, (e) bring some negative effects, and (f) bring about a sense of “craving” (Schafer & Brown, 1991). Individuals who are intoxicated on marijuana frequently report an altered sense of time as well as mood swings (Kaplan et al., 1994) and feelings of well-being and happiness. Marijuana also seems to bring about a splitting of consciousness, in which the user will possibly experience depersonalization and/or derealization while under its influence (Johns, 2001). Marijuana users have often reported a sense of being on the threshold of a significant personal insight but not being able to put this insight into words. These reported drug-related insights seem to come about during the first phase of the marijuana reaction. The second phase of the marijuana experience begins when the individual becomes sleepy, which takes place following the acute intoxication caused by marijuana (Brophy, 1993).

Adverse Effects of Occasional Marijuana Use

More than 2,000 separate metabolites of the 400 chemicals in the marijuana plant may be found in the body after the individual has smoked marijuana (Jenike, 1991). Many of these metabolites may remain present in the body for weeks after a single episode of marijuana smoking. Unfortunately, scientists have not studied the long-term effects of these metabolites. Further, if the marijuana is adulterated (as it frequently is), the various adulterants add their own contribution to the flood of chemicals admitted to the body when the person uses marijuana. Again, there is little research into the long-term effects of these adulterants or their metabolites on the user. Although advocates of marijuana use point to its safety record, it is not a benign substance. In addition to the famous “bloodshot eyes” seen in marijuana smokers, which are caused by marijuana making the small blood vessels in the eyes dilate and thus become more easily seen, approximately 40% to 60% of users will experience at least one other adverse drug-induced effect (Hubbard et al., 1999). About 15% of marijuana abusers experience episodes of drug-induced anxiety or even full-blown panic attacks (Johns, 2001; Kaplan et al., 1994; Millman



& Beeder, 1994). Factors that seem to influence the development of marijuana-related panic reactions are the use of more potent forms of marijuana, the individual’s prior experience with marijuana, expectations for the drug, the dosage level being used, and the setting in which the drug is used. Such panic reactions are most often seen in the inexperienced marijuana user (Bloodworth, 1987; Gruber & Pope, 2002; Mirin et al., 1991). Usually the only treatment needed is simple reassurance that the drug-induced effects will soon pass (Millman & Beeder, 1994; Kaplan et al., 1994). Because smokers, more easily than oral users, are able to titrate the amount used, there is a tendency for panic reactions to occur more often after marijuana is ingested than when it is smoked (Gold et al., 2004). Marijuana use also contributes to impaired reflexes for at least 24 hours after the individual’s last use (Gruber & Pope, 2002; Hubbard et al., 1999; Meer, 1986). The team of Ramaekers, Berghaus, van Laar, and Drummer (2004) concluded that marijuana abuse caused a dose-related impairment of cognition and psychomotor function, with the risk of a motor vehicle accident being 300% to 700% higher in persons who had recently used marijuana. This finding was consistent with that of Schwartz (1987), who concluded that teenagers who smoked marijuana as often as six times a month “were 2.4 times more likely to be involved in traffic accidents” (p. 309) than were nonusers. A more serious but quite rare adverse reaction is the development of a marijuana-induced psychotic reaction, often called a toxic or drug-induced psychosis. The effects of a marijuana-induced toxic psychosis are usually short-lived and will clear up in a few days to, at most, a week (Johns, 2001). 
Fortunately, researchers currently think that marijuana-induced psychotic reactions are only the result of extremely heavy marijuana use, making the danger of a marijuana-induced psychosis for the casual user quite low (Johns, 2001). However, research has also demonstrated that marijuana use can exacerbate preexisting psychotic disorders or initiate a psychotic reaction in an individual predisposed to this form of psychiatric dysfunction (Johns, 2001; Linszen, Dingemans, & Lenior, 1994; Mathers & Ghodse, 1992; O’Brien, 2001). One possible mechanism through which marijuana might contribute to the emergence of schizophrenia

was suggested by Linszen et al. (1994). The authors noted that THC functions as a dopamine agonist in the nerve pathways of the region of the brain known as the medial forebrain bundle. Dysregulation of normal dopamine activity in this region of the brain has been suggested as one possible cause of schizophrenia, and this might be one mechanism through which marijuana might contribute to the emergence of psychotic symptoms in patients with schizophrenia. Marijuana is known to reduce sexual desire in the user and for male users may contribute to erectile problems and delayed ejaculation (Finger, Lund, & Slagel, 1997). Finally, there is a relationship between cannabis abuse and depression, although researchers are not sure whether the depression is a result of the cannabis use (Bovasso, 2001). This marijuana-related depression is most common in the inexperienced user and may reflect the activation of an undetected depression in the user. The depressive episode is usually mild, is short-lived, and does not require professional intervention except in rare cases (Millman & Beeder, 1994).

Consequences of Chronic Marijuana Abuse

Researchers have found precancerous changes in the cells of the respiratory tract of chronic marijuana abusers similar to those seen in cigarette smokers (Gold et al., 2004). However, as many marijuana smokers also use tobacco cigarettes, it is not clear to what degree, if any, their marijuana abuse has caused or contributed to these cellular changes. It has been demonstrated that the chronic use of THC reduces the effectiveness of the respiratory system’s defenses against infection (Gruber & Pope, 2002; Hubbard et al., 1999). Animal research also suggests the possibility of a drug-induced suppression of the immune system as a whole, although researchers do not know whether this effect is found in humans (Abrams et al., 2003; Gold et al., 2004). But given the relationship between HIV-1 virus infection and immune system impairment,11 it would seem that marijuana abuse by patients with HIV-1 infection is potentially dangerous.

11This topic is discussed in more detail in Chapter 33.


With the exception of nicotine, which is not found in the cannabis plant, marijuana smokers are exposed to virtually all of the toxic compounds found in cigarettes, and if they smoke a blunt12 their exposure to these compounds is even higher (Gruber & Pope, 2002). The typical marijuana cigarette has between 10 and 20 times as much “tar” as tobacco cigarettes (Nelson, 2000), and marijuana smokers are thought to absorb four times as much tar as cigarette smokers (Tashkin, 1993). In addition, the marijuana smoker will absorb five times as much carbon monoxide per joint as would a cigarette smoker who smoked a single regular cigarette (Oliwenstein, 1988; Polen, Sidney, Tekawa, Sadler, & Friedman, 1993; University of California, Berkeley, 1990b). Smoking just four marijuana joints appears to have the same negative impact on lung function as smoking 20 regular cigarettes (Tashkin, 1990). Marijuana smoke has been found to contain 5 to 15 times the amount of a known carcinogen, benzpyrene, as does tobacco smoke (Bloodworth, 1987; Tashkin, 1993). Indeed, the heavy use of marijuana was suggested as a cause of cancer of the respiratory tract and the mouth (tongue, tonsils, etc.) in a number of younger individuals who would not be expected to have cancer (Gruber & Pope, 2002; Hall & Solowij, 1998; Tashkin, 1993). There are several reasons for the observed relationship between heavy marijuana use and lung disease. In terms of absolute numbers, marijuana smokers tend to smoke fewer joints than cigarette smokers do cigarettes. However, they also smoke unfiltered joints, a practice that allows more of the particles from smoked marijuana into the lungs than is the case for cigarette smokers. Marijuana smokers also smoke more of the joint than cigarette smokers do cigarettes. This increases the smoker’s exposure to microscopic contaminants in the marijuana. 
Finally, marijuana smokers inhale more deeply than cigarette smokers and retain the smoke in the lungs for a longer period of time (Polen et al., 1993). Again, this increases the individual’s exposure to the potential carcinogenic agents in marijuana smoke. These factors help explain why, like tobacco smokers, marijuana users have an increased frequency of bronchitis and other upper respiratory infections (Hall

12Discussed earlier in this chapter.


& Solowij, 1998). The chronic use of marijuana also may contribute to the development of chronic obstructive pulmonary disease (COPD), similar to that seen in cigarette smokers (Gruber & Pope, 2002). Marijuana abuse has been implicated as the cause of a number of reproductive system dysfunctions. For example, there is evidence that marijuana use contributes to reduced sperm counts (Brophy, 1993) and a reduction in testicular size (Hubbard et al., 1999) in men. Further, chronic male marijuana users have been found to have 50% lower blood testosterone levels than men who do not use marijuana (Bloodworth, 1987). Women who are chronic marijuana users have been found to experience menstrual abnormalities and/or a failure to ovulate (Gold et al., 2004; Hubbard et al., 1999). Researchers are still divided on whether chronic marijuana use could result in fertility problems in the woman. However, on the basis of the limited data that are available at this time, there is no evidence to suggest that chronic marijuana use results in fertility problems (Grinspoon & Bakalar, 1997). People who have previously used hallucinogens may also experience marijuana-related “flashback” experiences (Jenike, 1991). Such flashbacks are usually limited to the 6-month period following the last marijuana use (Jenike, 1991) and will eventually stop if the person does not use any further mood-altering chemicals (Weiss & Mirin, 1988). The flashback experience will be discussed in more detail in the chapter on the hallucinogenic drugs, as there is little evidence that cannabis alone can induce flashbacks (Sadock & Sadock, 2003). There is little conclusive evidence that chronic cannabis use can cause brain damage (Grant, Gonzalez, Carey, Natarajan, & Wolfson, 2003) or permanent neurocognitive damage (Vik, Cellucci, Jarchow, & Hedt, 2004). At most, there seems to be a minor reduction in the user’s ability to learn new information while using cannabis (Grant et al., 2003).
Sussman and Westreich (2003) suggested that chronic marijuana abuse might result in a 20% to 30% reduction in the user’s level of cognitive performance. It is possible to detect evidence of cognitive deficits in chronic cannabis abusers for up to 7 days after their last use of marijuana (Pope, Gruber, Hudson, Huestis, & Yurgelun-Todd, 2001; Pope


& Yurgelun-Todd, 1996). Researchers have identified memory deficits associated with cannabis abuse that seem progressively worse in chronic users (Fletcher et al., 1996; Gruber, Pope, Hudson, & Yurgelun-Todd, 2003; Solowij et al., 2002). But these cognitive changes seem to reverse after 2 weeks of abstinence from marijuana (Vik et al., 2004). Researchers have also found changes in the electrical activity of the brain, as measured by electroencephalographic (EEG) studies, in chronic marijuana abusers. These EEG changes seem to last for at least 3 months after the individual’s last use of marijuana (Schuckit, 2000). However, the importance of these observed EEG changes is not known at this time, and neuropsychological testing of chronic marijuana users in countries such as Greece, Jamaica, and Costa Rica has failed to uncover evidence of permanent brain damage (Grinspoon & Bakalar, 1997). It is not known at this time whether these EEG changes are caused by the abuse of cannabis or the abuse of other recreational chemicals (Grant et al., 2003). Herning, Better, Tate, and Cadet (2001) used a technique known as transcranial Doppler sonography to determine the blood flow rates in the brains of 16 long-term marijuana abusers and 19 nonusers. The authors found evidence of increased blood flow resistance in the cerebral arteries of the marijuana abusers, suggesting that chronic marijuana abuse might increase the individual’s risk of a cerebral vascular accident (stroke). Within 4 weeks of their last use of cannabis, the blood flow patterns of young marijuana abusers were comparable to those seen in normal 60-year-old adults, according to the authors, who were unable to predict whether the blood flow patterns would return to normal with continued abstinence from marijuana.
This places cannabis in the paradoxical position of possibly contributing to the individual’s risk for stroke while possibly containing a compound that might limit the damage caused by a cerebrovascular accident after it occurs. While additional research is necessary to determine the degree to which chronic cannabis abuse might interfere with blood flow within the brain and the mechanism by which this might occur, the evidence suggesting that marijuana abuse might be a cause of permanent brain damage is mixed, at best, if not lacking at this time.


The “amotivational syndrome.” Scientists have found conflicting evidence as to whether chronic marijuana use might bring about an “amotivational syndrome.” The amotivational syndrome is thought to consist of decreased drive and ambition, short attention span, easy distractibility, and a tendency not to make plans beyond the present day (Mirin et al., 1991). Indirect evidence suggesting that the amotivational syndrome might exist was provided by Gruber et al. (2003). The authors compared psychological and demographic measures of 108 individuals who had smoked cannabis at least 5,000 times against 72 age-matched control subjects who admitted to having used marijuana no more than 50 times. The authors found that the heavier marijuana users reported significantly lower incomes and educational achievement than did the control group in spite of the fact that the two groups came from similar families of origin. While suggestive, this study does not answer the question of whether these findings reflect the effects of marijuana or whether individuals prone to marijuana abuse tend to have less drive and initiative and are drawn to marijuana because its effects are similar to their personalities. The “amotivational syndrome” has been challenged by many researchers in the field. Even chronic marijuana abusers may demonstrate “remarkable energy and enthusiasm in the pursuit of their goals” (Weiss & Millman, 1998, p. 211). It has been suggested that the amotivational syndrome might reflect nothing more than the effects of marijuana intoxication in chronic users (Johns, 2001), and there is little evidence of “a specific and unique ‘amotivational syndrome’” (Mendelson & Mello, 1998, p. 2514; Sadock & Sadock, 2003; Iversen, 2005).

Marijuana abuse as a cause of death. Although marijuana is, in terms of immediate lethality, quite safe, there is significant evidence that chronic marijuana use can contribute to or be the primary cause of a number of potentially serious medical problems.
For example, some of the chemicals in marijuana might function as “dysregulators of cellular regulation” (Hart, 1997, p. 60) by slowing the process of cellular renewal within the body. Marijuana users experience a 30% to 50% increase in heart rate that begins within a few minutes of use
and can last for up to 3 hours (Craig, 2004; Hall & Solowij, 1998). For reasons that are unknown, marijuana also causes a reduction in the strength of the heart contractions and in the amount of oxygen reaching the heart muscle, changes that are potentially serious for patients with heart disease (Barnhill et al., 1995; Schuckit, 2000). Although these changes are apparently insignificant for younger cannabis users, they may be the reason older users are at increased risk for heart attacks in the first hours following their use of marijuana (“Marijuana-related Deaths,” 2002; Mittleman, Lewis, Maclure, Sherwood, & Muller, 2001). Thus, it would appear that marijuana use is not as benign as advocates of this substance would have us believe.

The myth of marijuana-induced violence. In the 1930s and 1940s, it was widely believed that marijuana use would cause the user to become violent. Researchers no longer believe that marijuana is likely to induce violence. Indeed, the sedating and euphoric effects of marijuana would be more likely to reduce the tendency toward violence while the user is intoxicated than to bring it about (Husak, 2004). However, the chronic abuser, who is more tolerant of these sedating effects, will be more capable of violence than the occasional user (Walton, 2002). Even so, few clinicians now believe that marijuana use is associated with an increased tendency toward violent acting out.13

The Addiction Potential of Marijuana
Because marijuana does not cause the same dramatic withdrawal syndromes seen with alcohol or narcotic addiction, people tend to underestimate the addiction potential of cannabis. But tolerance, one of the hallmarks of addiction, does slowly develop to cannabis. Researchers believe that smoking as few as three marijuana cigarettes a week may result in tolerance to the effects of marijuana (Bloodworth, 1987). Further, perhaps 9% of cannabis abusers will become addicted to marijuana (Cloud, 2002; Fortgang, 1999; Gruber & Pope, 2002). Gruber and Pope (2002) suggested that one-third of the adolescents who use marijuana daily are addicted to it. One characteristic that seems to identify individuals at risk for becoming addicted to marijuana is a positive early-life (prior to age 16 years) experience with it (Fergusson, Horwood, Lynskey, & Madden, 2003).

The withdrawal syndrome from cannabis has not been examined in detail (Budney, Moore, Vandrey, & Hughes, 2003). A popular misconception is that there is no withdrawal syndrome from marijuana, but research has found that chronic marijuana abusers experience a withdrawal syndrome that includes such symptoms as irritability, aggressive behaviors, anxiety, insomnia, nausea, loss of appetite, sweating, and vomiting (Gruber & Pope, 2002; Kouri, Pope, & Lukas, 1999; Nahas, 1986). The withdrawal symptoms begin anywhere from 1 to 3 days after the last use of cannabis, peak between the 2nd and 10th days, and can last 28 days or more (Budney et al., 2003; Sussman & Westreich, 2003); the syndrome has been described as flu-like in intensity (Martin, 2004). It would thus appear that, despite claims to the contrary, marijuana meets the criteria necessary to be classified as an addictive compound.

13. However, if the marijuana were adulterated with any other chemical(s), or if the abuser had used marijuana along with other chemicals, then the effects of those chemicals must be considered as a possible cause of drug-induced violent behavior. For example, the hallucinogen PCP is known to trigger violent behaviors in some users, and it is a common adulterant in marijuana.

Summary
Marijuana has been the subject of controversy for the past several generations. In spite of its popularity as a drug of abuse, surprisingly little is actually known about marijuana. After a 25-year search, researchers have identified what appears to be the specific receptor site that the THC molecule uses to cause at least some of its effects on perception and memory. In spite of the fact that very little is known about this drug, some groups have called for its complete decriminalization. Other groups maintain that marijuana is a serious drug of abuse with a high potential for harm. Even the experts differ as to the potential of marijuana to cause harm: in contrast to Weil’s (1986) assertion that marijuana was one of the safest drugs known, Oliwenstein (1988) classified it as a dangerous drug.

In reality, the available evidence at this time would suggest that marijuana is not as benign as was once thought. Marijuana, either alone or in combination with cocaine, will increase heart rate, a matter of some significance to those with cardiac disease. There is evidence that chronic use of marijuana will cause physical changes in the brain, and the smoke from marijuana cigarettes has been found to be even more harmful than tobacco smoke. Marijuana remains such a controversial drug that the United States government refuses to sanction research into its effects, on the grounds that it does not want to run the risk that researchers might find something about marijuana that proponents of its legalization might use to justify their demands (D. Smith, 1997).


Opiate Abuse and Addiction

Introduction
Pain is perhaps the oldest problem known to medicine (Meldrum, 2003). Each year in the United States more than 70% of adults will experience at least one episode of acute pain (Williams, 2004). In spite of all the advances made by medical science in the past century, there is still no objective measure of pain, and the physician must rely almost exclusively on the patient’s assessment of his or her pain (Williams, 2004). The phenomenon of pain is the outcome of a complex neurophysiological process that at best is only poorly understood by scientists (Chapman & Okifuji, 2004).

With pain so poorly understood, it is not surprising to learn that the medications used to control pain, the narcotic analgesics, are a source of endless confusion not only for health care professionals but also for the general public. In spite of the relief that this family of medications offers those in pain, the general public and physicians alike view them with distrust because of their history of abuse (Herrera, 1997; Vourakis, 1998). Over the years, myths and mistaken beliefs about narcotic analgesics and pain management have been repeated from one health care professional to the next so often that they have been incorporated into professional journals and textbooks as medical “fact” and have then shaped patient care (Vourakis, 1998). Much of the literature published about narcotic analgesics in the 20th century focused on the problem of addiction to these medications, making physicians hesitate to prescribe large doses of opioids out of a fear that they would cause or contribute to a substance-use problem (Antoin & Beasley, 2004). Physicians continue to underprescribe narcotic analgesics because of this fear, causing patients to suffer needlessly (Carvey, 1998; Kuhl, 2002). One study found that only slightly more than half of the 300 physicians surveyed were able to correctly estimate the dose of morphine needed to control cancer-related pain (Herrera, 1997). As many as 73% of patients in moderate to severe distress are thought to suffer unnecessarily because their physicians do not prescribe adequate doses of the appropriate analgesics (Stimmel, 1997b). In addition, regulatory policies of the Drug Enforcement Administration (DEA) aimed at discouraging the diversion of prescribed narcotic analgesics1 often intimidate or confuse physicians who wish to prescribe these medications for patients in pain. As a result of physicians’ irrational fears of causing addiction coupled with federal supervisory edicts, only a minority of patients are thought to receive adequate doses of a narcotic analgesic to control pain (Herrera, 1997; Paris, 1996). This is unfortunate, for although the narcotic analgesics do have a significant abuse potential, they also remain potent and extremely useful medications.

To try to clear up some of the confusion that surrounds the legitimate use of narcotic analgesics, this chapter will be divided into two sections. In the first section, the role and applications of narcotic analgesics as pharmaceutical agents will be examined. In the second section, the narcotic analgesics as drugs of abuse will be discussed.

I. THE MEDICAL USES OF NARCOTIC ANALGESICS

A Short History of the Narcotic Analgesics
There is anthropological evidence suggesting that opium was used in religious rituals 10,000 years ago (Restak, 1994; Walton, 2002). There is also archeological evidence that the opium poppy was being cultivated as a crop in certain regions of Europe by the latter part of the Neolithic era (Booth, 1996; Spindler, 1994). Somehow, early humans had discovered that if an incision was made at the top of the Papaver somniferum plant during a brief period in its life cycle, the plant would extrude a thick resin. This resin is “an elaborate cocktail containing sugars, proteins, ammonia, latex, gums, plant wax, fats, sulphuric and lactic acids, water, meconic acid, and a wide range of alkaloids” (Booth, 1996, p. 4). Although the exact composition of this resin would not be determined for thousands of years, at some point early humans discovered that it could be used for ritual and medicinal purposes. Eventually, this resin was called opium. The English word opium can be traced to the Greek word opion, which means “poppy juice” (Stimmel, 1997b).

In a document known as the Ebers Papyri, which dates back to approximately 7,000 B.C.E., there is a reference to the use of opium as a treatment for children who suffer from colic (Darton & Dilts, 1998). Historical evidence suggests that the widespread use of opium had developed by around 4200 B.C.E. (Walton, 2002). For the thousands of years when physicians could offer few truly effective treatments to the sick, opium came to be viewed as a gift from the gods (Ray & Ksir, 1993; Reisine & Pasternak, 1995). It could relieve virtually every form of pain, and it could control diarrhea, especially massive diarrhea such as that of dysentery.2 Physicians also discovered that opium could control anxiety, and its limited antipsychotic potential made it marginally effective in controlling the symptoms of psychotic disorders in an era when physicians had no other effective treatment for psychosis (Beeder & Millman, 1995; Woody, McLellan, & Bedrick, 1995).

In 1803,3 a chemist named Friedrich W. A. Sertürner first isolated a pure alkaloid base from opium that was recognized as the active agent of opium. This chemical was later called morphine, after Morpheus, the Greek god of dreams. Surprisingly, morphine is a “nitrogenous waste product” (Hart, 1997, p. 59) produced by the opium poppy, and not the reason for the plant’s existence. But by happy coincidence this waste product happens to control many of the manifestations of pain in humans. As chemists explored the various chemical compounds found in the sap of the opium poppy, they discovered a total of 20 distinct alkaloids in addition to morphine, including codeine, that could be obtained from that plant (Gold, 1993; Reisine & Pasternak, 1995). After these alkaloids were isolated, medical science found a use for many of them. Unfortunately, a number of these alkaloids can be abused.

In 1857, about half a century after morphine was first isolated from opium, Alexander Wood invented the hypodermic needle. This device made it possible to quickly and painlessly inject a substance into the body. The ready availability of relatively pure morphine, the hypodermic needle, the mistaken belief that morphine was nonaddicting (it was an ingredient in patent medications used to treat every ailment imaginable), and the widespread use of morphine in military field hospitals of the era all combined to produce large epidemics of morphine addiction in both the United States and Europe in the last half of the 19th century.

The “patent medicine” phenomenon of the 19th century deserves special mention. In the latter half of that century, the average person placed little confidence in what medical science had to offer. Physicians were referred to as “croakers,” and it was not unusual for the patient to rely on time-honored folk remedies and patent medicines rather than to seek a physician’s advice (Norris, 1994). Both cocaine and morphine were common ingredients in many of the patent medicines sold throughout the United States. Even if users of a patent medicine were aware of the contents of the bottle they had purchased, they were unlikely to believe that the “medicine” could hurt them.

1. For many years, the issue of prescription diversion was not thought to be significant. Although the true scope of this problem is still unclear, it has become apparent that diversion of such compounds as Oxycontin is common (Meier, 2003).
2. See Glossary.
3. Restak (1994) suggested that morphine was isolated in 1805, not 1803, whereas Antoin and Beasley (2004) suggested that this event took place in 1806.
The idea of a medication as addictive was totally foreign to the average person, especially as the concept of “drug abuse” itself did not emerge until the latter years of the 19th century (Walton, 2002). As a result, large numbers of people unknowingly became addicted to one or more chemicals in the patent medicines they had come to rely on for every illness. In other cases, people had started to use either opium or morphine for the control of pain or to treat diarrhea, only to become physically dependent on that chemical. When users tried to stop, they would begin to experience withdrawal symptoms. Like magic, these withdrawal symptoms would disappear when they
resumed the use of the original medicine. As a result of these two phenomena—widespread availability of morphine for self-medication and its use in patent medicines—more than 1% of the entire population of the United States was addicted to opium or other narcotics at the start of the 20th century (Restak, 1994).

During this period, the practice of smoking opium had been introduced to the United States by Chinese immigrants, many of whom came to work on the railroads in the era following the Civil War. Opium smoking became somewhat popular, especially on the Pacific coast, and through this practice many opium smokers became addicted. By 1900 fully a quarter of the opium imported into the United States was used not for medicine but for smoking (Jonnes, 1995; Ray & Ksir, 1993). Two-thirds to three-fourths of those individuals addicted to opiates in the United States were women (Kandall, Doberczak, Jantunen, & Stein, 1999).

Faced with an epidemic of unrestrained opiate use, the United States Congress passed the Pure Food and Drug Act of 1906. This law required manufacturers to list the ingredients of their products on the label, revealing for the first time that many a trusted remedy contained narcotics. Later legislation, especially the Harrison Narcotics Act of 1914, prohibited the use of narcotics without a prescription signed by a physician. Since then, the battle against narcotic abuse and addiction has waxed and waned, but it has never entirely disappeared.

The Classification of Narcotic Analgesics
Since morphine was first identified, medical researchers have either isolated or developed a wide variety of compounds that, in spite of differences in their chemical structure, have pharmacological effects similar to those of morphine. Segal and Duffy (1999) classified these compounds as falling into one of three groups:
1. Natural opiates: obtained directly from opium; morphine and codeine are examples.
2. Semisynthetic opiates: chemically altered derivatives of natural opiates. Dihydromorphine and heroin are examples of this group of compounds.
3. Synthetic opiates: synthesized in laboratories and not derived from natural opiates at all. Methadone and propoxyphene are examples of these compounds.

Chapter Fourteen

Admittedly, there are significant differences in the chemical structures of the natural, semisynthetic, and synthetic opiates. However, for the sake of simplification, all of these compounds will be grouped together under the generic terms opiates or narcotic analgesics in this chapter, as they all have similar pharmacological effects.

The problem of pain is almost universal (Meldrum, 2003) and is poorly understood (Fishman & Carr, 1992). It is generally viewed as something to be avoided if possible, and the very word pain comes from the Latin word poena, which means a punishment or penalty (Stimmel, 1997). There are three basic types of pain (Holleran, 2002): acute, chronic, and cancer-induced pain. Acute pain is short, intense, and resolves when the cause of the pain (incision, broken bone, etc.) heals. Chronic pain4 is associated with a nonmalignant pathological condition in the body, and cancer pain is the result of a tumor’s growth or expansion (Holleran, 2002).

Because the experience of pain is so uncomfortable, there is a very real demand for medications that will control the individual’s suffering. To meet this demand, researchers have developed a group of medications collectively known as analgesics. An analgesic is a chemical that is able to bring about the “relief of pain without producing general anesthesia” (Abel, 1982, p. 192). There are two different groups of analgesics. The first consists of agents that cause local anesthesia, of which cocaine was once the prototype. Local anesthetics block the transmission of nerve impulses from the site of the injury to the brain, preventing the brain from receiving the nerve impulses that would otherwise transmit the pain message. The analgesics in the second group are more global in nature: they alter the individual’s perception of pain within the central nervous system (CNS) itself. This group of analgesics was further divided into two subgroups by Abel (1982). The first subgroup consists of the narcotics, which have both a CNS depressant capability and an analgesic effect. The second subgroup of global analgesics consists of drugs such as aspirin and acetaminophen, which will be discussed in Chapter 18.

4. The treatment of an addicted person with chronic pain is addressed in Chapter 31.


TABLE 14.1 Some Common Narcotic Analgesics*

Generic name     Brand name        Approximate equianalgesic parenteral dose
Morphine         —                 10 mg every 3–4 hours
Hydromorphone    Dilaudid          1.5 mg every 3–4 hours
Meperidine       Demerol           100 mg every 3 hours
Methadone        Dolophine         10 mg every 6–8 hours
Oxymorphone      Numorphan         1 mg every 3–4 hours
Fentanyl         Sublimaze         0.1 mg every 1–2 hours
Pentazocine      Talwin            60 mg every 3–4 hours
Buprenorphine    Buprenex          0.3–0.4 mg every 6–8 hours
Codeine          —                 75–130 mg every 3–4 hours**
Oxycodone        Percocet, Tylox   Not available in parenteral dosage forms

Source: Based on information contained in Medical Economics Company (2000) and Cherny & Foley (1996).
*This chart is for purposes of comparison only. It is not intended to serve as, nor should it be used for, a guide to patient care.
**It is not recommended that doses of codeine above 65 mg be used, because doses above this level do not result in significantly increased analgesia and may result in increased risk of unwanted side effects.
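The ratio arithmetic implied by an equianalgesic chart can be made concrete with a short sketch. The Python below is purely illustrative and echoes the table’s own caveat: it is a comparison aid, never a guide to patient care. The pairing of drug names to doses here is an assumption made for the example (with 10 mg of parenteral morphine taken as the reference), not a clinical reference.

```python
# Illustrative only: mirrors the comparison-only caveat of Table 14.1.
# The name-to-dose pairings below are assumed for this example and must
# not be used for patient care.

# mg considered roughly equianalgesic to 10 mg of parenteral morphine
EQUIANALGESIC_MG = {
    "morphine": 10.0,
    "hydromorphone": 1.5,
    "meperidine": 100.0,
}

def convert_dose(dose_mg: float, from_drug: str, to_drug: str) -> float:
    """Convert a parenteral dose of one opioid into the roughly
    equianalgesic dose of another by simple ratio arithmetic."""
    ratio = EQUIANALGESIC_MG[to_drug] / EQUIANALGESIC_MG[from_drug]
    return dose_mg * ratio

# 4 mg of parenteral morphine corresponds to about 0.6 mg of
# hydromorphone under these assumed table values.
print(round(convert_dose(4.0, "morphine", "hydromorphone"), 3))  # 0.6
```

Such linear conversions ignore incomplete cross-tolerance and individual variation, which is exactly why the table itself warns against using it as a dosing guide.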

Where Opium Is Produced
Surprisingly, the synthesis of morphine in the laboratory is extremely difficult, and most of the morphine used by physicians is still obtained from the opium poppy (Gutstein & Akil, 2001). Virtually the entire planet’s need for legitimate opium could be met by the opium produced in India alone. A single nation, Afghanistan, accounts for more than 75% of all the illicit opium produced on this planet each year (United Nations, 2004). Virtually all of the opium produced beyond that needed for medicine finds its way to the illicit narcotics market, which is a thriving, multinational industry.

Current Medical Uses of the Narcotic Analgesics
Since the introduction of aspirin, narcotics are no longer used to control milder levels of pain. As a general rule, the opiates are most commonly utilized to control severe, acute pain (O’Brien, 2001) and some forms of chronic pain5 (Belgrade, 1999; Marcus, 2003; Savage, 1999). In addition, they are of value in the control of severe diarrhea and of the cough reflex in some diseases. A number of different opiate-based analgesics have been developed, with minor variations in potency, absorption characteristics, and duration of effects. The generic and brand names of some of the more commonly used narcotic analgesics are provided in Table 14.1.

Pharmacology of the Narcotic Analgesics
The resin that is collected from the Papaver somniferum plant contains 10% to 17% morphine (Jenkins & Cone, 1998). Chemists isolated the compound morphine from this resin almost 200 years ago and quickly discovered that it was the active agent of opium. In spite of the time that has passed since then, morphine is still the standard against which other analgesics are measured (Nelson, 2000).

5. Although this use for narcotic analgesics is quite controversial (Antoin & Beasley, 2004).

More surprisingly, only since the 1970s

have researchers been able to begin unraveling some of the mystery of how we experience pain. In the brain, the narcotic analgesics mimic the actions of several families of endogenous opioid peptides, including the enkephalins and the endorphins (Jaffe & Jaffe, 2004). These opioid peptides function as neurotransmitters in the brain and spinal cord (Hirsch, Paley, & Renner, 1996). Although these compounds function as neurotransmitters or modulate the action of other neurotransmitters in some manner, their exact mechanism of action remains unclear (Gutstein & Akil, 2001). It is known that opioid peptides are involved in such diverse functions in the CNS as the perception of pain, the moderation of emotions, the perception of anxiety, the feeling of sedation, appetite suppression, anticonvulsant activity within the brain, smooth muscle motility, the regulation of a number of body functions (such as temperature, heart rate, respiration, and blood pressure), and perhaps even the perception of pleasure (Hawkes, 1992; Restak, 1994; Simon, 1997). In the body, opioid peptides help to regulate the movement of food and fluid through the intestines (Pasternak, 1998).

As this list suggests, the opioid peptides are quite powerful chemicals. In contrast, morphine and its chemical cousins are only crude copies of them. For example, the opioid peptide known as beta endorphin (β-endorphin) is thought to be 200 times as potent an analgesic as morphine. Currently, researchers believe that the narcotic analgesics function as opioid peptide agonists, occupying the receptor sites in the CNS normally utilized by the opioid peptides to simulate or enhance the action of these naturally occurring neurotransmitters. In the last decade of the 20th century, researchers identified a number of receptor sites within the brain that are utilized by the opioid peptides (Carvey, 1998). There is some disagreement as to the exact number of receptor sites; however, the different sites are identified by letters from the Greek alphabet. Table 14.2 summarizes what is known about the different receptor sites in the central nervous system utilized by narcotic analgesics and the function controlled by each receptor subtype.

TABLE 14.2 Brain Receptor Sites Utilized by Narcotic Analgesics

Opioid receptor    Biological activity associated with opioid receptor
Mu                 Analgesia, euphoria, respiratory depression, suppression of cough reflex
Delta              Analgesia, euphoria, endocrine effects, psychomotor functions
Kappa              Analgesia in spinal cord, sedation, miosis
Sigma              Dysphoria, hallucinations, increased psychomotor activity, respiratory activity

Source: Based on information provided in Ashton (1992), Jaffe (1989), and Zevin & Benowitz (1998).

There is strong evidence that opioids will alter the blood flow pattern within the human brain. Using single photon emission computed tomography (SPECT) scans to examine the cerebral blood flow in the brains of nine nondependent volunteers, Schlaepfer et al. (1998) studied changes in the blood flow patterns of various regions of the brains of their subjects. The authors found statistically significant changes in the regional blood flow pattern, with significantly more blood being sent to the anterior cingulate cortex, the thalamus, and the amygdalae regions of the brain when a drug known to occupy the Mu receptor was administered. Although it was not clear whether the observed increase in blood flow was associated with the analgesic effect of the drug, it is known that these are areas of the brain with high concentrations of the Mu receptor, suggesting that they play a role in pain perception in humans. Research has demonstrated that the region of the brain involved in pain perception is different from the area of the brain that is involved in the experience of euphoria. The thalamus seems to be involved in the perception of pain (Restak, 1994). On the other hand, the experience of euphoria often reported by narcotic abusers seems to be caused by the effects of the opioids on the ventral tegmental region of the brain (Kaplan, Sadock, & Grebb, 1994). This area uses
dopamine as its major neurotransmitter and connects the cortex of the brain with the limbic system. Sklair-Tavron et al. (1996) found that the chronic administration of morphine to rats caused these same dopamine-utilizing neurons to shrink in volume by approximately 25%, suggesting that the morphine causes these neurons to alter their function in some as yet undetermined manner.

Another region of the brain rich in opioid peptide receptors is the amygdalae (singular: amygdala) (Reeves & Wedding, 1994). These regions of the brain function as a halfway point between the senses and the hypothalamus, which is the “emotion center” of the brain, according to the authors. It is thought that the amygdala will release opioid peptides in response to sensory data, thus influencing the formation of memory. For example, the sense of pleasure that one feels upon solving an intricate mathematics problem is caused by the amygdala’s release of opioid peptides. This pleasure will make it more likely that the student will remember the solution to that problem if she or he should encounter it again.

When the Mu receptor site is occupied by a narcotic analgesic, the individual will experience a sense of well-being, an effect that might account for reports that morphine and similar agents reduce the individual’s awareness of pain without a significant loss of consciousness (Giannini, 2000). At first, narcotic analgesics also produce a sense of drowsiness, allowing a degree of sedation to be achieved in spite of the individual’s pain (American Medical Association, 1994; Jaffe, 1992). Through these effects, narcotic analgesics are able to reduce the individual’s anxiety level, promote drowsiness, and allow the person to sleep in spite of severe pain (Gutstein & Akil, 2001; Jaffe, Knapp, & Ciraulo, 1997). These effects seem to reflect the impact of the morphine molecule on the locus ceruleus region of the brain (Gold, 1993; Jaffe et al., 1997).

Codeine.
Codeine is also an alkaloid found in the same milky sap of the plant Papaver somniferum from which opium is obtained. It was first isolated in 1832 (Jaffe, 2000c; Melzack, 1990). Like its chemical cousin morphine, codeine is able to suppress the cough reflex, and it has a mild analgesic potential. As an analgesic, codeine is thought to be about one-fifth as potent as morphine (Karch, 2002). This is not surprising, as researchers have found
that about 10% of a dose of codeine is biotransformed into morphine, which researchers believe is responsible for codeine’s analgesic potential (Reisine & Pasternak, 1995). However, there is significant variability between individuals in the ability of their bodies to convert codeine into morphine, and thus there are differences in the amount of analgesia that different people might obtain from codeine (Karch, 2002). Following a single dose of codeine, peak blood levels are seen in 1 to 2 hours, and the half-life of codeine is between 2.4 and 3.6 hours (Karch, 2002). The analgesic potential of codeine is enhanced by over-the-counter (OTC) analgesics such as aspirin or acetaminophen (Gutstein & Akil, 2001), which is one reason it is commonly administered in combination with one of them (Cherny & Foley, 1996). Also, research has found that codeine is not as vulnerable to the first-pass metabolism effect as is morphine, allowing better pain control from oral doses of codeine than can be achieved with oral doses of morphine (Gutstein & Akil, 2001).

Codeine, like many narcotic analgesics, is also quite effective in the control of cough. This is accomplished through codeine’s ability to suppress the action of a portion of the brain known as the medulla, which is responsible for the maintenance of the body’s internal state (Jaffe et al., 1997; Jaffe & Martin, 1990). Except in extreme cases, codeine is the drug of choice for cough control (American Medical Association, 1994).

Morphine. Morphine is well absorbed from the gastrointestinal tract, but for reasons discussed later in this chapter, orally administered morphine is of limited value in the control of pain. Morphine is also easily absorbed from injection sites and is often administered through intramuscular or intravenous injections. Finally, morphine is easily absorbed through the mucous membranes of the body, and it is occasionally administered in the form of rectal suppositories.
The peak effects of a single dose of morphine are seen about 60 minutes after an oral dose and between 30 and 60 minutes after the drug is administered through intravenous injection (Shannon, Wilson, & Stang, 1995). After absorption into the circulation, morphine will go through a two-phase process of distribution throughout the body (Karch, 1996). In the first phase, which lasts only a few minutes, the morphine is distributed to various blood-rich tissues, including muscle tissue, the kidneys, liver, lungs, spleen, and the brain. In the second
phase, which proceeds quite rapidly, the majority of the morphine is biotransformed into a metabolite known as morphine-3-glucuronide (M3G), with a smaller amount being transformed into the metabolite morphine-6-glucuronide (M6G) or one of a small number of additional metabolites (Karch, 2002). The process of morphine biotransformation takes place in the liver, and within 6 minutes of an intravenous injection the majority of a single dose of morphine has been biotransformed into one of these two metabolites. Scientists have only recently discovered that M6G has biologically active properties, and it has been suggested that this metabolite might be even more potent than the parent compound, morphine (Karch, 2002).

About 90% of morphine metabolites are eventually eliminated from the body by the kidneys (Shannon et al., 1995); the other 10% will be excreted as unchanged morphine (Karch, 1996). The biological half-life of morphine ranges from 1 to 8 hours, depending on the individual’s biochemistry, with most textbooks giving an average figure of 2 to 3 hours (Drummer & Odell, 2001). Following a single dose, approximately one-third of the morphine becomes protein bound (Karch, 1996). The mechanism by which morphine is able to provide analgesic effects remains unclear, but it is known from experience that the effects of a single dose of morphine last for approximately 4 hours (Gutstein & Akil, 2001).

Although it is well absorbed when administered through intramuscular or intravenous injection, morphine takes 20 to 30 minutes to cross the blood-brain barrier to reach the target areas in the brain where it has its primary effect (Angier, 1990). Thus, there is a delay between the time the narcotic analgesic is injected and the moment the patient begins to experience some relief from pain.

Methadone.
Methadone has been found quite useful in the control of severe, chronic pain and is sometimes prescribed by physicians for this purpose (O’Brien, 2001). When used this way, methadone begins to exert an analgesic effect within 30 minutes; its analgesic action peaks in 4 hours, and it may remain effective for 6 to 8 hours (Gutstein & Akil, 2001). The analgesic doses of methadone are significantly higher than those used when the drug is part of a detoxification or opiate maintenance program. These applications of methadone will be discussed in Chapter 32.
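The half-life figures quoted above assume simple first-order elimination: a fixed fraction of the remaining drug is cleared per unit of time. A minimal sketch of that model follows; the 3-hour value is the chapter's average figure for morphine, and the function name is illustrative, not part of any source.

```python
# Illustrative sketch of first-order (exponential) elimination, the model
# implied by a fixed biological half-life.  The 3-hour default is the
# chapter's average figure for morphine; real values range from 1 to 8 hours.

def fraction_remaining(hours: float, half_life_hours: float = 3.0) -> float:
    """Fraction of an absorbed dose still present after `hours`."""
    return 0.5 ** (hours / half_life_hours)

# Half the dose remains at one half-life, a quarter at two, and so on.
for t in (3, 6, 12, 24):
    print(f"{t:>2} h: {fraction_remaining(t):.4f} of dose remaining")
```

With a 3-hour half-life, less than 1% of a dose remains after roughly seven half-lives (about 21 hours), which is why a fast metabolizer and a slow metabolizer can show very different blood levels at the same time point.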

Chapter Fourteen

Oxycontin. Introduced in December 1995 as a time-release form of oxycodone, Oxycontin is designed for use by patients whose long-term pain can be controlled through the use of oral medications as opposed to intravenously administered narcotic analgesics (Physicians’ Desk Reference, 2004). The time-release feature of Oxycontin allows the patient to achieve relatively stable blood levels of the medication after 24 to 36 hours of use, providing a better level of analgesia than could be achieved with shorter-acting agents. In theory, this feature would provide for fewer episodes of “breakthrough” pain, allowing the patient to experience better pain control. The abuse of Oxycontin will be discussed later in this chapter.

Heroin. Although it is used by physicians in other countries to treat severe levels of pain, heroin has no recognized medical use in the United States. Here, it is occasionally used in much the same manner as methadone: as an agonist replacement for illicit narcotics to control the patient’s withdrawal symptoms and allow him or her to function in society. Surprisingly, both animal studies and autopsy-based human data suggest that opioids such as heroin have a cardioprotective potential (Mamer, Penn, Wildmer, Levin, & Maslansky, 2003; Peart & Gross, 2004). The exact mechanism by which heroin (and morphine) protect cardiac tissue from ischemia is not known at this time, and it is not clear whether this compound offers the promise of reducing the damage to the muscle tissues of the heart during myocardial infarction in humans.

Neuroadaptation to Narcotic Analgesics

Analgesia is not a static process but is influenced by a host of factors such as disease progression, an increase in physical activity, lack of compliance in taking analgesics, and medication interaction effects (Pappagallo, 1998). Another factor that influences the effectiveness of a narcotic analgesic is the process of neuroadaptation, which is occasionally misinterpreted as evidence that the patient is addicted to the narcotic analgesic being used. The development of neuroadaptation is incomplete and uneven (Jaffe & Jaffe, 2004). Some patients have been known to develop a craving for opiates after having received intravenous injections of morphine


Opiate Abuse and Addiction

every 2 hours for just a single day (Nelson, 2000). Other patients have become tolerant of the analgesic effect of a given dose of a narcotic analgesic in as little as 1 to 2 weeks of continual use (Fulton & Johnson, 1993; McCaffery & Ferrell, 1994; Tyler, 1994). However, in contrast to the development of tolerance to the analgesic effect of opiates, the patient may never become fully tolerant of the drug’s ability to affect the size of the pupils of the eyes or of the drug-induced constipation brought on by this class of medications. As the patient gradually becomes tolerant of the analgesic effects of lower doses of a narcotic, his or her daily dosage might be raised to levels that would literally kill a patient who had not had time to complete the process of neuroadaptation. To illustrate, a single intravenous dose of 60 mg of morphine is potentially fatal to the opiate-naive person (Kaplan et al., 1994). In contrast to this is the patient whose daily morphine levels gradually increased from 60 mg per day to 3,200 mg per day before that patient died of cancer (Fulton & Johnson, 1993). When used in the control of pain, most dosage increases are made necessary by the progression of the disorder causing the patient to experience the pain (Savage, 1999). Only a minority of cases involve neuroadaptation to the analgesic effects of the opiate being prescribed. Clinical research has found that the concurrent administration of dextromethorphan, an NMDA receptor antagonist, with morphine slows the development of neuroadaptation and improves analgesia without the need for an increase in the morphine dose (O’Brien, 2001). Also, concurrent use of NSAIDs (non-steroidal anti-inflammatory drugs) such as aspirin, or of the analgesic acetaminophen, may potentiate the analgesic effect of narcotic analgesics through an unknown mechanism (Gutstein & Akil, 2001).
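The dose figures just cited can be put side by side with simple arithmetic. The sketch below only restates the chapter's numbers; the variable names are illustrative.

```python
# The dose escalation described above, expressed as a ratio.  Both figures
# are the chapter's: a 60 mg IV dose cited as potentially fatal in an
# opiate-naive person, versus a reported 3,200 mg/day tolerated by a
# highly neuroadapted cancer patient.
NAIVE_POTENTIALLY_FATAL_MG = 60
TOLERANT_DAILY_MG = 3_200

escalation = TOLERANT_DAILY_MG / NAIVE_POTENTIALLY_FATAL_MG
print(round(escalation, 1))  # about 53x, at or beyond the cited 10x-50x range
```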
Thus, physicians may attempt to offset the development of neuroadaptation to the analgesic effects of narcotic analgesics, or to enhance their analgesic potential, through the concurrent use of NSAID compounds. Unfortunately, many physicians incorrectly interpret neuroadaptation to an opiate as evidence of addiction, a mistake that results in the underutilization of opiates in patients experiencing severe pain (Herrera, 1997). Cherny (1996) termed the patient’s repeated requests for additional narcotic analgesics in such cases

pseudoaddiction, noting that in contrast to true addiction the patient ceases to request additional narcotics once his or her pain is controlled.

Drug interactions involving narcotic analgesics.6 Even a partial list of potential medication interactions clearly underscores the potential for narcotic analgesics to cause harm to the individual if they are mixed with the wrong medication(s). The synthetic narcotic analgesic meperidine should not be used in patients who are taking or have recently used monoamine oxidase inhibitors (MAOIs, or MAO inhibitors) (Peterson, 1997). The combination of these two classes of medications might prove fatal to the patient, even if she or he had stopped using MAOIs within the last 14 days (Peterson, 1997). Patients who are taking narcotic analgesics should not use any other chemical classified as a CNS depressant (antihistamines, benzodiazepines, barbiturates, etc.) except under a physician’s supervision, as there is a danger of excessive sedation from the combination of two or more of these (Ciraulo, Shader, Greenblatt, & Barnhill, 1995). There is evidence that the use of a selective serotonin reuptake inhibitor such as fluvoxamine might result in significantly increased blood levels of methadone, possibly to toxic levels (Drummer & Odell, 2001). Further, 21 of 30 methadone maintenance patients who started a course of antibiotic therapy with Rifampin experienced opiate withdrawal symptoms that were apparently caused by an unknown interaction between the methadone and the antibiotic (Barnhill, Ciraulo, Ciraulo, & Greene, 1995). The authors noted that the withdrawal symptoms did not manifest themselves until approximately the fifth day of Rifampin therapy, suggesting that the interaction between these two medications might require some time before the withdrawal symptoms develop.
The combination of opiates with other CNS depressants can result in a potentially fatal drug-induced reaction if certain medications are used at the same time. This list does not include every possible interaction between opiates and other chemical agents, but it does underscore the potential for harm that might result if narcotic analgesics are mixed with the wrong medication(s).

6. The reader is advised to always consult a physician or pharmacist before taking two different medications.

Subjective Effects of Narcotic Analgesics When Used in Medical Practice

As stated earlier, the primary use of narcotic analgesics is to reduce the distress caused by pain (Darton & Dilts, 1998). To understand how this is achieved, one must understand that

pain may be simplistically classified as acute or chronic. Acute pain implies sudden onset, often within minutes or hours. Usually, there is a clear-cut etiology, and the intensity of acute pain is severe, often reflecting the degree of pathology. Chronic pain is ongoing for weeks, months, or years; the original source of pain, if ever known, is often no longer apparent. This is particularly true of nonmalignant pain. (Katz, 2000, pp. 1–2)

Acute pain serves the function of warning the organism to rest until recovery from an acute injury can take place. Morphine is usually prescribed for the control of severe, acute forms of pain (Fulton & Johnson, 1993; Melzack, 1990). Many factors affect the degree of analgesia achieved through the use of morphine. These factors include (a) the route by which the medication was administered, (b) the interval between doses, (c) the dosage level, and (d) the half-life of the specific medication being used (Fishman & Carr, 1992). Other factors that influence people’s experience of pain are (e) their anxiety level, (f) their expectations for the narcotic, (g) the length of time they have been receiving narcotic analgesics, and (h) their general state of tension. The more tense, frightened, and anxious people are, the more likely they are to experience pain in response to a given stimulus. As discussed earlier in this chapter, between 80% and 95% of the patients who receive a dose of morphine experience a reduction in their level of fear, anxiety, and/or tension (Brown & Stoudemire, 1998), and they report that their pain becomes less intense or perhaps disappears entirely (Jaffe et al., 1997; Reisine & Pasternak, 1995).


Complications Caused by Narcotic Analgesics When Used in Medical Practice

Constriction of the pupils. When used at therapeutic dosage levels, the opiates will cause some degree of constriction of the pupils (miosis). Some patients will experience such constriction even in total darkness (Shannon et al., 1995). Although this is a diagnostic sign that physicians often use to identify the opioid abuser (discussed later in this chapter), it is not automatically a sign that the patient is abusing the medication. Rather, this is a side effect of opioids that the physician expects in the patient who is using a narcotic analgesic for legitimate medical reasons, and one that is unexpected in the patient who is not receiving such medication.

Respiratory depression. Another side effect seen at therapeutic dosage levels is some respiratory depression. Although the degree of this is not as significant when narcotics are given to a patient in pain (Bushnell & Justins, 1993), even following a single therapeutic dose of morphine (or a similar agent) respiration might be affected for up to 24 hours (Brown & Stoudemire, 1998). For this reason, many experts advise that narcotic analgesics be used with caution in individuals who suffer from respiratory problems such as asthma, emphysema, chronic bronchitis, and pulmonary heart disease. Some experts in the field have challenged the belief that morphine has a significant effect on respiration when used properly (Barnett, 2001; Peterson, 1997). For example, Peterson (1997) concluded that severe respiratory depression is uncommon in patients with no previous history of breathing problems. As these different reports suggest, physicians are still not sure how much respiratory depression might be caused by narcotic analgesics or whether this is a problem only for patients with respiratory disorders.
Thus, until a definitive answer arrives, health care workers should anticipate that narcotics will cause the respiratory center of the brain to become less sensitive to rising blood levels of carbon dioxide and thus should expect some degree of respiratory depression (Bushnell & Justins, 1993; Darton & Dilts, 1998). Gastrointestinal side effects. When used at therapeutic dosage levels, narcotic analgesics can cause nausea and vomiting, especially in the first 48 hours after the



patient starts the medication or receives a major dose increase (Barnett, 2001). At normal dosage levels, approximately 10% to 40% of ambulatory patients will experience some degree of nausea and approximately 15% will actually vomit as a result of receiving a narcotic analgesic (Brown & Stoudemire, 1998; Cherny & Foley, 1996). Ambulatory patients seem most likely to experience nausea or vomiting, and patients should rest for a period of time after receiving their medication to avoid this potential side effect. Whereas opiate-induced nausea is a dose-related side effect, some individuals who are quite sensitive to the opiates might experience drug-induced nausea and vomiting even at low dosage levels. This may reflect the individual’s genetic predisposition toward sensitivity to opiate-induced side effects (Melzack, 1990). There is experimental evidence that ultra-low doses of the narcotic blocker naloxone might provide some relief from morphine-induced nausea in postsurgical patients without blocking the desired analgesic effect of the morphine (Cepeda, Alvarez, Morales, & Carr, 2004).

At therapeutic dosage levels, morphine and similar drugs have been found to affect the gastrointestinal tract in a number of ways. All of the narcotic analgesics decrease the secretion of hydrochloric acid in the stomach and slow the muscle contractions of peristalsis (which push food along the intestines) (Shannon et al., 1995). In extreme cases, narcotic analgesics may actually cause spasm in the muscles involved in peristalsis and possibly even constipation (Jaffe & Jaffe, 2004). This is the side effect that makes morphine so useful in the treatment of dysentery and severe diarrhea. But constipation is the most common adverse side effect encountered when narcotic analgesics are used for extended periods of time at therapeutic levels (Cherny & Foley, 1996; Herrera, 1997). This problem can usually be corrected by using over-the-counter laxatives (Barnett, 2001; Herrera, 1997).
Blood pressure effects. Under normal conditions, narcotic analgesics will cause the patient to experience a mild degree of respiratory depression. In those patients who have experienced some form of head trauma, this might contribute to an increase in intracranial blood pressure as the body attempts to compensate for the increased levels of carbon dioxide in the blood by pumping more blood to the brain (Pagliaro & Pagliaro, 1998). Thus, narcotic analgesics should be

used with caution in patients with head injuries to avoid the potential complications caused by drug-induced intracranial blood pressure increase.

Other side effects. Another troublesome side effect of the narcotic analgesics is a stimulation of the smooth muscle tissue surrounding the bladder. This, plus a tendency for narcotic analgesics to reduce the voiding reflex, may cause the patient to experience some degree of urinary retention (Jaffe et al., 1997; Tyler, 1994). Twenty-five percent of the patients who receive a dose of morphine experience some degree of sedation, 4% to 35% experience some drug-induced irritability, and 4% to 25% experience some degree of depression as a side effect of the morphine they receive for pain control. An unknown percentage will experience morphine-induced nightmares.

The danger of addiction. Many health care workers admit to being afraid they will cause the patient to become addicted to narcotic analgesics by giving him or her too much medication.7 In reality, the odds that a patient with no prior history of alcohol or drug addiction will become addicted to narcotic analgesics when these medications are used for the short-term control of severe pain have been estimated at only 1 in 12,000 to 14,000 (Roberts & Bush, 1996). Most patients who develop a psychological dependence on opiates after receiving them for pain control seem to have a preexisting addictive disorder (Paris, 1996). Further, as noted earlier in this chapter, neuroadaptation to the analgesic effects of opioids over time is a normal phenomenon and should not automatically be interpreted as a sign of developing addiction to these medications (Hirsch et al., 1996; McCaffery & Ferrell, 1994). As the process of neuroadaptation progresses, some patients might require 10 to 50 times as much morphine as drug-naive individuals to experience the same degree of analgesia (Brown & Stoudemire, 1998).
Unfortunately, some physicians do not understand the process of neuroadaptation and consequently under-medicate the individual prior to and following surgery (Imhof, 1995). Few physicians realize that opiate-tolerant patients will require higher-than-normal doses of opiates to control their pain. Fearing that they will bring about an overdose or that they are contributing to the patient’s abuse of medications, physicians often under-medicate patients, leaving them in needless pain just because they have become tolerant to the drug’s effects.

7. This would, technically, be an iatrogenic addiction, as opposed to the usual form of addiction to narcotics that will be discussed later in this chapter.

Routes of administration for narcotic analgesics in medical practice. Although the narcotic analgesics are well absorbed from the gastrointestinal tract, orally administered narcotic analgesics are useful only in the control of mild to moderate levels of pain (Shannon et al., 1995). This is because the first pass metabolism effect severely limits the amount of the drug that is able to reach the brain. For example, the liver biotransforms 70% to 80% of the morphine that is absorbed through the gastrointestinal tract before it reaches the brain (Drummer & Odell, 2001). Thus, orally administered narcotics are of limited value in the control of severe levels of pain. A standard conversion formula is that 60 mg of orally administered morphine will give the same level of analgesia as 10 mg of injected morphine (Cherny & Foley, 1996). The intravenous administration of narcotics allows for the greatest degree of control over the amount of drug that actually reaches the brain, so this is the primary method of administration for narcotic analgesics (Jaffe & Martin, 1990). However, there are exceptions. For example, there is a new transdermal patch, developed for the narcotic fentanyl. This will be discussed in more detail in the section on fentanyl.

Withdrawal from narcotic analgesics when used in medical practice. Most patients who receive narcotic analgesics for the control of pain, even when they do so for extended periods of time, are able to discontinue the medication without problems. A small number of patients will develop a “discontinuance syndrome” similar to that seen in patients who receive benzodiazepines for an extended period.
This discontinuance syndrome is usually mild but may require the patient to gradually reduce daily intake of narcotic analgesics rather than to stop using the medication all at once. Thus, narcotic analgesics are relatively benign medications when used properly.
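The standard oral-to-injected conversion formula quoted earlier (60 mg oral morphine for 10 mg injected) can be cross-checked against the 70% to 80% first-pass figure with a few lines of arithmetic. The sketch below is illustrative only, not a dosing tool, and the names in it are mine.

```python
# Equianalgesic arithmetic for morphine, restating the chapter's standard
# formula: 60 mg oral ~ 10 mg injected, i.e., a 6:1 ratio.
# Illustrative only -- not for clinical use.
ORAL_TO_PARENTERAL_RATIO = 60 / 10  # 6.0

def oral_equivalent_mg(parenteral_mg: float) -> float:
    """Oral morphine dose giving roughly the analgesia of an injected dose."""
    return parenteral_mg * ORAL_TO_PARENTERAL_RATIO

print(oral_equivalent_mg(10))  # 60.0

# The 6:1 ratio implies that roughly 1 - 10/60 (about 83%) of an oral dose
# never reaches the brain unchanged, broadly in line with the 70% to 80%
# first-pass metabolism figure cited in the text.
implied_first_pass_loss = 1 - 10 / 60
print(f"{implied_first_pass_loss:.0%}")  # 83%
```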

Fentanyl

In 1968, a new synthetic narcotic, fentanyl, was introduced. Because of its short duration of action, fentanyl has become an especially popular analgesic during and

immediately after surgery (Shannon et al., 1995). It is well absorbed from muscle tissue, and a common method of administration is intramuscular injection. Because fentanyl can also be absorbed through the skin, a transdermal patch has been developed on the theory that by slowly absorbing small amounts of fentanyl through the skin the patient might experience some relief from chronic pain. Unfortunately, the medication is only slowly absorbed through the skin, and therapeutic blood levels of fentanyl are not achieved for up to 12 hours after the individual first starts to use the patch (Tyler, 1994). Recently, a new dosage form, fentanyl-laced candy, has been introduced as a premedication for children about to undergo surgery (“Take Time to Smell,” 1994). It is interesting that opium was once used in Rome to calm infants who were crying (Ray & Ksir, 1993). After thousands of years of medical progress, we have returned to the starting point of using opiates to calm the fears of children.

Pharmacology and subjective effects of fentanyl. Fentanyl is extremely potent, but there is some controversy over exactly how potent it is. It is estimated to be 50 to 100 times as potent as morphine (Drummer & Odell, 2001; Gutstein & Akil, 2001), although Ashton (1992) suggested that fentanyl was 1,000 times as potent as morphine. Kirsch (1986) concluded that fentanyl is “approximately 3,000 times stronger than morphine, (and) 1,000 times stronger than heroin” (p. 18). The active dose of fentanyl in the human is 1 microgram (Kirsch, 1986). As a basis of comparison, the average postage stamp weighs 60,000 micrograms. Thus, the average effective dose of fentanyl is 1/60,000th the weight of the typical postage stamp. Fentanyl is highly lipid soluble and thus reaches the brain quickly after it is administered. This is a characteristic of value when the drug is used in surgical procedures.
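The potency comparisons above reduce to simple ratios. The sketch below only restates the chapter's figures; the 50x and 100x multipliers are the cited estimates, and the function is illustrative, not a clinical conversion.

```python
# Arithmetic behind the fentanyl potency comparisons in the text.
# All figures are the chapter's; potency multipliers are cited estimates.
ACTIVE_DOSE_MICROGRAMS = 1          # cited active dose of fentanyl
STAMP_WEIGHT_MICROGRAMS = 60_000    # cited weight of an average postage stamp

dose_as_fraction_of_stamp = ACTIVE_DOSE_MICROGRAMS / STAMP_WEIGHT_MICROGRAMS
print(dose_as_fraction_of_stamp)    # about 1.67e-05, i.e., 1/60,000

def morphine_equivalent_mg(fentanyl_mg: float, potency: float) -> float:
    """Morphine dose matching a fentanyl dose at an assumed potency multiple."""
    return fentanyl_mg * potency

# 0.1 mg of fentanyl at the cited 50x and 100x estimates:
print(morphine_equivalent_mg(0.1, 50))   # 5.0
print(morphine_equivalent_mg(0.1, 100))  # 10.0
```

The spread of the published estimates (50x to 3,000x) is itself the point: even the low end leaves the effective dose so small that tiny measurement errors translate into large swings in effect.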
The biological half-life of a single intravenous dose of fentanyl is rather short, ranging between 1 and 6 hours depending on the individual’s biochemistry8 (Drummer & Odell, 2001). Laurence and Bennett (1992) offered a middle-of-the-road figure of 3 hours, which is the average therapeutic half-life of fentanyl. Fentanyl’s primary site of action is the Mu receptor site (Brown & Stoudemire, 1998), and the duration of fentanyl’s analgesic effect is 30 to 120 minutes. The drug is rapidly biotransformed by the liver and excreted from the body in the urine (Karch, 1996). The effects of fentanyl on the individual’s respiration might last longer than the analgesia produced by the drug (Shannon et al., 1995). This is a characteristic that must be kept in mind when the patient requires long-term analgesia. The major reason fentanyl is so useful is that in a medical setting, fentanyl produces a more rapid analgesic response than does morphine. The analgesic effects of fentanyl are often seen just minutes after injection. This is a decided advantage when physicians seek to control pain during and after surgery.

8. Because of differences between individuals, people biotransform and/or eliminate drugs at different rates. Depending on the specific compound, there might be a difference of several orders of magnitude between those who are “fast metabolizers” of a specific drug and those whose bodies make them “slow metabolizers.”

Side effects of fentanyl. About 10% of patients who receive a dose of fentanyl experience somnolence and/or confusion; 3% to 10% experience dizziness, drug-induced anxiety, hallucinations, and/or feelings of depression (Brown & Stoudemire, 1998). Approximately 1% of patients who receive a dose of fentanyl experience agitation and/or a drug-induced state of amnesia, and about 1% experience a drug-induced state of paranoia. Other side effects include blurred vision, a sense of euphoria, nausea, vomiting, dizziness, delirium, lowered blood pressure, constipation, and possible respiratory— and in extreme cases cardiac—arrest (Shannon et al., 1995). At high dosage levels, muscle rigidity is possible (Foley, 1993). Physicians have noted that when fentanyl is administered to a patient, the blood pressure might drop by as much as 20% and heart rate might drop by as much as 25% (Beebe & Walley, 1991). Thus, the physician must balance the potential benefits to be gained by using fentanyl against the drug’s potential to cause adverse effects.
Unfortunately, although fentanyl is an extremely useful pharmaceutical, it is also a popular drug of abuse. This aspect of fentanyl will be discussed in the next section.

Buprenorphine

Buprenorphine, a synthetic analgesic that was introduced in the 1960s, is estimated to be 25 to 50 times as potent as morphine (Karch, 2002). Medical researchers quickly discovered that orally administered doses of buprenorphine are extremely useful in treating


postoperative and cancer pain. Further, as will be discussed in Chapter 32, researchers have discovered that when administered orally, buprenorphine appears to be at least as effective as methadone in blocking the effects of illicit narcotics.

Buprenorphine has a rather unique absorption pattern. The drug is well absorbed from intravenous and intramuscular injection sites as well as when administered sublingually (Lewis, 1995). These methods of drug administration offer the advantage of rapid access to the general circulation without the danger of first pass metabolism. Unfortunately, when administered orally, buprenorphine suffers extensive first pass metabolism, a characteristic that limits its effectiveness as an analgesic. Thus, when used for analgesia, buprenorphine is injected into the patient’s body. Upon reaching the general circulation, approximately 95% of buprenorphine becomes protein bound (Walter & Inturrisi, 1995). The drug is biotransformed by the liver, with 79% of the metabolites being excreted in the feces and only 3.9% being excreted in the urine (Walter & Inturrisi, 1995). Surprisingly, animal research suggests that the various drug metabolites are unable to cross the blood-brain barrier (BBB). This suggests that the drug’s analgesic effects are achieved by the buprenorphine molecules that cross the BBB to reach the brain rather than any drug metabolites that might be produced during the biotransformation process. Once in the brain, buprenorphine binds to three of the same receptor sites in the brain that are utilized by morphine. Buprenorphine binds most strongly to the Mu and Kappa receptor sites, which is where narcotic analgesics tend to act to reduce the individual’s perception of pain. However, buprenorphine does not cause the same degree of activation at the Mu receptor site that morphine does.
For reasons that are still not clear, buprenorphine is able to cause clinically significant levels of analgesia with a lower level of activation of the Mu receptor site than morphine causes (Negus & Woods, 1995). Buprenorphine also tends to form weak bonds with the Sigma receptor site (Lewis, 1995). However, just because a drug is able to bind at a receptor site does not mean that it is always able to activate the receptor site. Buprenorphine is an excellent example of a drug that might bind to different receptor sites in the brain without having the same potential to activate these different



receptor sites in the brain. In the human brain, buprenorphine easily binds to both the Mu and Kappa receptor sites. However, the drug has relatively little effect on the Kappa receptor site, while more strongly affecting the activity of the Mu receptor site (Negus & Woods, 1995). Virtually all of the drug’s effects are achieved by buprenorphine’s ability to bind at, and activate, the Mu opiate receptors in the brain (Lewis, 1995). Indeed, the drug effectively functions as a Kappa receptor site antagonist at the same dosage level that it activates the Mu opiate receptor sites in the brain to cause analgesia (Negus & Woods, 1995). Finally, buprenorphine molecules only slowly “disconnect” from their receptor sites, thus blocking large numbers of other buprenorphine molecules from reaching those same receptor sites. Thus, at high dosage levels, buprenorphine seems to act as its own antagonist, limiting its own effects.

Buprenorphine causes significant degrees of sedation for 40% to 70% of the patients who receive a dose of this medication. Between 5% and 40% will experience dizziness, and in rare instances (1%) patients have reported drug-induced feelings of anxiety, euphoria, hallucinations, or depression (Brown & Stoudemire, 1998). As is obvious from this brief review of buprenorphine’s pharmacology, it is a unique narcotic analgesic—more selective and more powerful than morphine. As will be discussed in the following section, however, it is slowly becoming popular as a drug of abuse.

II. OPIATES AS DRUGS OF ABUSE

Many of the opiates are popular as drugs of abuse. In this section, opiate abuse and addiction will be discussed. Why do people abuse opiates? Simply put, opiate-based analgesics are popular with illicit drug users because they make the user feel good. When they are used by people who are not experiencing any significant degree of pain, opioids are able to activate the brain’s reward system, which normally is active when the individual is involved in life-enhancing activities such as eating or sex (Kosten & George, 2002). The abuser experiences a sensation of drug-induced euphoria that varies in intensity depending on how the abuser introduces the drug into his or her body.

When injected directly into the circulation, some opiates may cause the user to experience a rush or flash that is said to be similar to sexual orgasm (Bushnell & Justins, 1993; Hawkes, 1992; Jaffe, 1992, 2000c; Jaffe & Martin, 1990). This rush is different from the one reported by CNS stimulant abusers (Brust, 1998). Following the rush the user will experience a sense of euphoria, which usually lasts for 1–2 minutes (Jaffe, 2000c). Finally, the user often experiences a prolonged period of blissful drowsiness that may last several hours (Scaros, Westra, & Barone, 1990). These are characteristics that appeal to some drug users. Neuropsychopharmacologists believe that they have identified the reasons that narcotic analgesics are able to bring about these effects. Narcotic analgesics seem to mimic the action of naturally occurring neurotransmitters. Two different regions of the limbic system of the brain, the nucleus accumbens and the ventral tegmentum, seem to be associated with the pleasurable response that many users report when they use opioids (Kosten & George, 2002). When abused, opioids trigger the release of massive amounts of dopamine in the nucleus accumbens, which is experienced by the person as pleasure.

The Mystique of Heroin

There is widespread abuse of synthetic and semisynthetic narcotic analgesics such as Vicodin and Oxycontin, with more than 1.5 million people abusing these for the first time each year (Kalb et al., 2001). But it is heroin that people think of when the topic of opioid abuse/addiction is raised. Globally, 9 million people are thought to be addicted to heroin (diacetylmorphine) (United Nations, 2000), and between 600,000 and 1 million people in the United States are heroin addicts (Kranzler, Amin, Modesto-Lowe, & Oncken, 1999; O’Brien, 2001). Olmedo and Hoffman (2000) suggested an even higher number of 1.5 million “chronic” heroin users in the United States but did not identify what percentage of these people were addicted. Each year, heroin-related deaths account for about half of all illicit drug-use deaths in this country (Epstein & Gfroerer, 1997; Karch, 1996).

A short history of heroin. Like aspirin, heroin was first developed by chemists at the Bayer pharmaceutical company of Germany, and it was first introduced in 1898. Like its chemical cousin morphine, heroin is


obtained from raw opium. One ton of raw opium will, after processing, produce approximately 100 kilograms of heroin (“South American Drug Production,” 1997). When the chemists who developed diacetylmorphine first tried it, they reported that the drug made them feel heroic. Thus, the drug was given the brand name Heroin (Mann & Plummer, 1991). Following the Civil War in the United States, large numbers of men had become addicted to morphine. Because heroin at low doses was found to suppress the withdrawal symptoms of morphine addicts, physicians of the era thought it was nonaddicting, and it was initially sold as a cure for morphine addiction (Walton, 2002). Physicians were also impressed by the ability of morphine and its chemical cousin heroin to suppress the severe coughs seen in tuberculosis or pneumonia, both leading causes of death in the 19th century, and thus to comfort the patient. Not until 12 years after it was introduced, long after many morphine addicts had become addicted to heroin, was its true addiction potential finally recognized. However, by that time heroin abuse/addiction had become a fixture in the United States. During the 1920s, the term junkie was coined for the heroin addict who supported his or her drug use by collecting scrap metal from industrial dumps for resale to junk collectors (Scott, 1998).

Pharmacology of heroin. The heroin molecule is best visualized as a morphine molecule to which two acetyl groups have been chemically attached (hence the chemical name diacetylmorphine). The result is an analgesic that is more potent than morphine, and a standard conversion formula is that 4 milligrams (mg) of heroin is as powerful as 10 mg of morphine (Brent, 1995; Lingeman, 1974). Estimates of the half-life of intravenous heroin range from less than 2 minutes (Drummer & Odell, 2001), through 3 minutes (Kreek, 1997), to a high estimate of 36 minutes (Karch, 2002).
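The conversion formula and refining yield cited above reduce to two simple ratios. The sketch below only restates the chapter's figures; the variable names are mine.

```python
# Ratios implied by the chapter's figures for heroin.
# Standard conversion: 4 mg heroin ~ 10 mg morphine.
HEROIN_MG, MORPHINE_MG = 4, 10

potency_vs_morphine = MORPHINE_MG / HEROIN_MG
print(potency_vs_morphine)  # 2.5, i.e., heroin ~2.5x as potent, mg for mg

# Cited refining yield: one ton (1,000 kg) of raw opium -> ~100 kg heroin.
yield_fraction = 100 / 1000
print(f"{yield_fraction:.0%}")  # 10%
```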
9 See Glossary.

Surprisingly, research has shown that the heroin molecule does not bind to known opiate receptor sites in the brain, and researchers have suggested that it might more accurately be described as a prodrug9 than as a biologically active compound in its own right (Jenkins & Cone, 1998). Once in the body, heroin is biotransformed into morphine, a process that gives heroin its analgesic potential (Drummer & Odell, 2002; Jaffe, 1992; Karch, 2002; Reisine & Pasternak,
1995). But because of differences in its chemical structure, heroin is much more lipid soluble than morphine. This difference allows heroin to cross the blood-brain barrier 100 times faster than morphine (Angier, 1990), a characteristic that makes it especially attractive as a drug of abuse.

Subjective effects of heroin when abused. A number of factors influence the subjective effects of heroin, including (a) the individual's expectations for the drug and (b) the method of heroin abuse. For example, when heroin is used intranasally, only about 25% of the available drug is absorbed by the user's body, and the rate of absorption is slower than if the drug is injected directly into the circulation. In contrast, virtually 100% of intravenously administered heroin reaches the circulation. Whereas intranasal users report a gentle euphoria, individuals who inject heroin directly into the circulation report that it produces a rush or a flash very similar to a sexual orgasm that lasts for about 1 minute. Other sensations include a feeling of warmth under the skin, dry mouth, nausea, and a feeling of heaviness in the extremities. Users also report a sense of nasal congestion and itchy skin, both the result of heroin's ability to stimulate the release of histamine in the user's body. After this, the user will experience a sense of floating, or light sleep, that lasts for about 2 hours, accompanied by clouded mental function.

Heroin in the United States today. In contrast to countries where heroin is a recognized therapeutic agent, heroin is not a recognized pharmaceutical in the United States, and its possession or manufacture is illegal. In spite of this fact, heroin use has been viewed by many as a sign of rebellion, perhaps reaching its pinnacle with the rise of the "heroin chic" culture in the late 1990s (Jonnes, 2002).
Heroin abusers in the United States are estimated to consume between 13 and 18 metric tons of heroin each year (Office of National Drug Control Policy, 2004). The average age of the individual at first use of heroin dropped from 27 in 1988 to 19 by the middle of the 1990s (Cohen et al., 1996; Hopfer, Mikulich, & Crowley, 2000). Adolescents (12–17 years of age) make up just under 22% of those who admit using heroin in the United States (Hopfer et al., 2000). One major reason for this increase in popularity among younger drug
abusers in the late 1990s was the availability of increasingly high-potency heroin at relatively low prices. In the mid-1980s the average sample of heroin from the street was about 5% to 6% pure (Sabbag, 1994). By the start of the 21st century, heroin that was produced in South America and sold in the United States averaged 46% pure, and heroin produced in Mexico averaged 27% pure (Office of National Drug Control Policy, 2004). Heroin produced in Asia usually averaged about 29% pure when sold on the streets in the United States (Office of National Drug Control Policy, 2004). These figures reflect the glut of heroin available to illicit users in the United States. To put this oversupply in perspective, the entire world's need for pharmaceutical heroin10 could be met by the cultivation of 50 square miles of opium poppies; in contrast, an estimated 1,000 square miles of poppies are under cultivation at this time (Walton, 2002). The high purity of the heroin being sold, combined with its relatively low cost and the misperception that insufflated (snorted) heroin was nonaddicting, contributed to an increase in heroin use in the United States in the early 1990s (Ehrman, 1995). The level of heroin abuse/addiction in the United States reached a plateau in the early years of the 21st century and has remained at about that level since (Office of National Drug Control Policy, 2001).

10 This includes the medicinal use of heroin in countries where it is an accepted, and valuable, pharmaceutical agent.

Other Narcotic Analgesics That Might Be Abused

Codeine. Surprisingly, codeine has emerged as a popular opiate of abuse, accounting for 12% of all drug-related deaths (Karch, 2002). There is little information available on codeine abuse, although it is possible that some of the codeine-related deaths are those of heroin addicts who miscalculate the amount of codeine they will need to block their withdrawal discomfort when they are unable to obtain their primary drug of choice.

Oxycontin. Oxycontin has been a drug of abuse since its introduction in 1995. A generic form of this substance was released in 2004. Abusers will often crush the time-release spheres within the capsule and inject the material into a vein. Other abusers will simply ingest
a larger-than-prescribed dose for the euphoric effect. In part because of a number of media reports, Oxycontin quickly gained a reputation as a "killer" drug. However, clinical research has suggested that the vast majority of those who died from drug overdoses had ingested multiple agents such as benzodiazepines, alcohol, cocaine, or other narcotic analgesics (Cone et al., 2003); the authors found that in only about 3% of the drug-induced deaths was Oxycontin reported as the sole cause of death. Still, Oxycontin was heavily marketed by the pharmaceutical company that produced it, which also downplayed its abuse potential (Meier, 2003).

But, while prescription-drug abusers may differ in their pharmaceutical choices, the dynamic of abuse shares a common theme: whatever a manufacturer's claims about a drug's "abuse liability," both hardcore addicts and recreational users will quickly find ways to make a drug their own. (Meier, 2003, p. 89, quotes in original)

Oxycontin is estimated to be involved in approximately half of the 4 million episodes of nonprescribed narcotic analgesic abuse that occur each year in the United States (Office of National Drug Control Policy, 2004). Indeed, there is evidence that this medication may have unique dosing characteristics that make it especially attractive to drug abusers, which clouds the issue of whether it is a valuable tool in the fight against pain.

Buprenorphine. Another drug that is growing in popularity as an opiate of abuse is buprenorphine. As was noted earlier in this chapter, buprenorphine is a useful narcotic analgesic. Researchers are also considering oral doses of buprenorphine as an alternative to methadone (discussed in Chapter 32). Unfortunately, street addicts have discovered that intravenously administered buprenorphine has a significant abuse potential (Horgan, 1989; Moore, 1995). Buprenorphine is the most commonly abused opiate in Australia and New Zealand (Stimmel, 1997a), and there have been reports of its abuse from countries such as Ireland and India (Singh, Mattoo, Malhotra, & Varma, 1992) as well as from the United States (Torrens, San, & Cami, 1993). Researchers actually know very little about the abuse of buprenorphine (Fudala & Johnson, 1995). Apparently,
the user will inject either buprenorphine alone or a mixture of buprenorphine and diazepam, cyclizine, or temazepam. It is not clear how significant buprenorphine will be as a drug of abuse, but the reader should be aware that there are limited reports of intravenous buprenorphine abuse in this country.

Fentanyl. A popular drug of abuse, in part because of its potency, fentanyl is thought to be 50 to 100 times as potent as morphine. It is used by physicians for the control of pain following surgery or for chronic pain (Drummer & Odell, 2001). When it is abused, fentanyl can be injected, smoked, or snorted; transdermal skin patches may be heated and the fumes inhaled (Karch, 2002). Some abusers also empty the transdermal patches by poking holes in the patch material and draining the reservoir. The drug obtained in this manner is used orally, injected, or possibly smoked. Because standard urine toxicology screens do not detect fentanyl, it is not clear how widespread the abuse of this pharmaceutical actually is at this time.

Methods of Opiate Abuse

When opiates are abused, they might be injected under the skin (a subcutaneous injection, or "skin popping"), injected directly into a vein (mainlining), smoked, or used intranasally (technically, insufflation). As the potency of heroin sold on the streets has increased, skin popping has become less and less popular and insufflation has increased in popularity (Karch, 2002).

Opiates such as heroin are well absorbed through the lungs (as when the drug is smoked). Historically, the practice of smoking opium has not been common in the United States since the start of the 20th century. Supplies of opium are quite limited in the United States, and opium smoking wastes a great deal of the chemical. However, in parts of the world where supplies are more plentiful, the practice of smoking opium remains quite common.

The practices of snorting heroin powder and smoking heroin have become commonplace in the United States, fueled by a popular myth that one cannot become addicted unless the drug is injected (Drummer & Odell, 2001; Smith, 2001). By the time the person learns the grim truth, he or she has become dependent on heroin. The practice of snorting (insufflation) heroin is quite similar to the way that cocaine powder is inhaled. The user will use a razor

blade or knife to dice the powder until it has a fine, talcum-like consistency. The powder is then arranged in a small pile, or a line, and inhaled through a straw.

With the higher levels of potency that began to emerge in the middle to late 1990s, the practice of heroin smoking again became popular. However, the blood levels achieved when heroin is smoked are at best only 50% of those achieved when it is injected (Drummer & Odell, 2001). This is because up to 80% of the heroin is destroyed by the heat of smoking (Drummer & Odell, 2001). Nonetheless, heroin smoking remains popular, spurred on in part by the mistaken belief that one cannot become addicted to heroin by smoking it. One method by which heroin might be smoked is known as "chasing the dragon" (Strang, Griffiths, Powis, & Gossop, 1992). In this process, the user heats heroin powder in a piece of aluminum foil, using a cigarette lighter or match as the heat source. The resulting fumes are then inhaled, allowing the individual to get high without exposure to possibly contaminated needles (Karch, 2002).

Another way heroin is abused is by smoking a combination of heroin and crack cocaine pellets called "speedball rock," "moon rock," or "parachute rock" (Dygert & Minelli, 1993). This combination of chemicals reportedly results in a longer high and a less severe post-cocaine-use depression (Levy & Rutter, 1992). However, there is evidence that cocaine might exacerbate the respiratory depression produced by opiates.

The most common method of heroin abuse is intravenous injection. In this process, the addict mixes heroin in the spoon with water, or glucose and water, in order to dissolve it. Lemon juice, citric acid or vitamin C may be added to aid dissolving. This cocktail is heated until it boils, drawn into the syringe through a piece of cotton wool or cigarette filter to remove solid impurities, and injected whilst still warm. (Booth, 1996, p. 14)

11 See Glossary.

Where do opioid addicts obtain their drugs? Opiate abusers obtain their daily supply of the drug from many sources. The usual practice for street addicts is to buy street opiates unless they have access to a "pharmaceutical."11 Pharmaceuticals are obtained by either "making"
a doctor12 or by diverting medication from a patient with a legitimate need for it to illicit abusers. Some opioid addicts have been known to befriend a person with a terminal illness, such as cancer, in order to steal narcotic analgesics from the suffering patient for their own use. This is how most users obtain their supplies of pharmaceuticals such as Vicodin and Oxycontin.

Opiates such as heroin are obtained from supplies smuggled into the United States from other countries, especially Southeast Asia, Mexico, and South American countries such as Colombia (DEA, 1995). These are mixed with adulterants and then distributed for sale on the local level. The opiates are usually sold in powder form in small individual packets. The powder is mixed with water, then heated in a small container (usually a spoon) over a flame from a cigarette lighter or candle, and then injected by the user.

If users are health care professionals with access to pharmaceutical supplies, they might divert medications to themselves. This is difficult, however, because of the rigid controls on supplies of narcotics. Users will often inject the pharmaceutical, although some abusers will ingest an opioid. When users inject a pharmaceutical, they usually crush the tablet until it is a fine powder or take the capsule apart and mix the powder with water. The mixture is then heated in a small container (usually a spoon, but bottle caps or other small containers are also used for this purpose) over a small fire (usually a match, a candle, or a cigarette lighter), which helps mix the powder with the water. The resulting mixture is then injected, although the method of injection used by intravenous opiate abusers is different from the way a physician or nurse injects medication into a vein. The process has changed little in the past 50 years, and Lingeman's (1974) description of the technique called "booting" remains as valid today as when it first appeared three decades ago.
As the individual "boots" the drug, he or she injects it a little at a time, letting it back up into the eye dropper, injecting a little more, letting the blood-heroin mixture back up, and so on. The addict believes that this technique prolongs the initial pleasurable sensation of the heroin as it first takes effect—a feeling of warmth in the abdomen, euphoria, and sometimes a sensation similar to an orgasm. (p. 32)

12 See Glossary.


In the process, however, the hypodermic needle and the syringe (or the eye dropper attached to a hypodermic needle, a common substitute for a syringe) become contaminated with the individual's blood. When other intravenous drug abusers share the same needle, which is a common practice, contaminated blood from one individual is passed to the next, and the next, and the next.

Sometimes, the opiate abuser will attempt to inject a pharmaceutical tablet or capsule originally intended for oral ingestion. Unfortunately, this practice inserts starch or other fillers13 not intended for intravenous use directly into the bloodstream (Wetli, 1987). When tablets or capsules are used for intravenous injection, the fillers cannot be inactivated by the body's defenses. Further, repeated exposure to the compounds used as fillers or to the adulterants often found in street drugs can cause extensive scarring at the point of injection. These scars form the infamous "tracks" caused by repeated injections of illicit opiates.

The development of tolerance. Over time, opiate abusers become tolerant to the euphoric effects of narcotics. As a result of their growing tolerance, they do not experience the rush or flash with the same intensity as when they first started to use. To reacquire the rush experience, narcotics addicts will often increase the dosage level of the drugs being abused, possibly to phenomenal levels. Heroin addicts have been known to increase their daily dosage level 100-fold over extended periods of time in their attempt to overcome their developing tolerance to the euphoric effects of the drug (O'Brien, 2001). One reason for the loss of drug-induced euphoria in opiate addicts may be that with the chronic administration of narcotics, the brain reduces the amount of endorphins it produces (Klein & Miller, 1986). Over time, the brain comes to substitute the chemical opiates for natural endorphins, and the effect of the narcotics on the person becomes less intense.
13 See Glossary.

Further, there appears to be a "threshold effect" (Parry, 1992, p. 350), a level beyond which the user will experience a "stable genial state" (p. 350) without becoming high on the opiate he or she is using. When chronic opioid abusers reach this state, they are no longer using the drug to get high. At this point, they are taking narcotics just
to function in a normal state ("to maintain," as many people say when they reach this point).

As is true when narcotic analgesics are used in a medical setting, the illicit user will develop tolerance to each of the various effects of the opiates at a different rate (Jaffe & Jaffe, 2004). Although individuals might develop some degree of tolerance to the respiratory depression induced by narcotic analgesics, for example, they are unlikely to become tolerant to the constipating side effect of this class of drugs (Zevin & Benowitz, 1998). For this reason the chronic abuse of narcotics can (and often does) cause significant constipation problems for illicit users (Karch, 2002; Reisine & Pasternak, 1995). Opiate abusers also never develop tolerance to the pupillary constriction induced by this class of medications (Nestler, Hyman, & Malenka, 2001).

Scope of the Problem of Opiate Abuse and Addiction

Opiate abuse around the world. Although heroin is the drug that comes to mind when people think about the abuse of opiates, it is not the most commonly abused narcotic in the world. The United Nations (2004) estimated that there are 15 million opiate abusers worldwide, of whom half live in Asia. Another 25% live in Europe, and only 2.5 million (or 16.67% of the total) live in the United States (United Nations, 2004).

The abuse of prescribed narcotic analgesics. Each year, approximately 1.6 million people in the United States alone are thought to abuse a prescribed narcotic analgesic for the first time (Zickler, 2001). Reynolds and Bada (2003) gave an even higher estimate, noting that each year 1.1 million women between the ages of 14 and 55 took a nonprescribed narcotic analgesic. The United Nations (2004) gave a lower estimate of 1.1 million prescription narcotic abusers in the United States.

Prescription drug abuse might take many different forms. For example, a man who had received a prescription for a narcotic analgesic after breaking a bone might share a leftover pill or two with a family member who had the misfortune to sprain his or her ankle and be in severe pain. With the best of intentions, this person has provided another with medications that are, technically, being abused, in the sense that the second person did not receive a prescription for the narcotic analgesic that she or he ingested. Nationally, an estimated
11 million people have abused opioid medications not prescribed for them at some point in their lives (Kreek, 2000). Fully 13% of the high school seniors of the class of 2003 admitted to having abused an opiate other than heroin at least once, and 1.5% admitted to the use of heroin at least once (Johnston, O’Malley, & Bachman, 2003a). Most people who abuse narcotic analgesics on a regular basis try to avoid being identified as medication abusers, or as “drug seeking.” It is not uncommon for patients to visit different physicians or different hospital emergency rooms to obtain multiple prescriptions for the same disorder. Patients have also been known to manufacture symptoms (after doing a bit of research) to allow them to simulate the signs of a disorder virtually guaranteed to result in a prescription for a narcotic analgesic. Finally, patients with actual disorders have been known to exaggerate their distress in the hope of being able to obtain a prescription for a narcotic analgesic from an overworked physician. Thus, one of the warning signs that a physician will look for in a medication-seeking patient is that he or she has had multiple consultations for the same problem. Heroin abuse/addiction. The reputation of heroin is that it is the most potent and most commonly abused narcotic analgesic. During the latter part of the 20th century heroin was reputed to enslave any person foolish enough to abuse it, as evidenced by Lingeman’s statement that “the majority [of heroin abusers] go on to mainlining” (1974, p. 106). However, much of its reputation has been exaggerated at best or is wildly inaccurate. Clinical research suggests that as an analgesic it is no more potent than hydromorphone (O’Brien, 2001). 
14 However, because it is not possible to predict in advance who will become addicted, and who will not, the abuse of narcotic analgesics is not recommended.

Further, researchers have concluded that only a fraction of those who briefly abuse opiates, perhaps one of every four people, will become addicted (O'Brien, 2001).14 But one should remember that heroin remains a potentially addictive substance and that approximately half of those who repeatedly abuse an opioid such as heroin will go on to become addicted (Jenike, 1991). The most realistic estimates suggest that 3 million people in the United States have used heroin at least
once (O’Connor, 2000) and that there are about 980,000 current users (including those who are addicted to it) (D’Aunno & Pollack, 2002). The United Nations (2004) estimated that there are 1.4 million heroin abusers/addicts in the United States. It is not known what percentage of this number are addicted to heroin. Actually, scientists know very little about the natural history of heroin abuse/addiction. Users are presumed to take approximately 2 years between the initiation of heroin abuse and the development of physical dependence on this chemical (Hoegerman & Schnoll, 1991). Further, there is a wide variation in individual opiate abuse patterns. This is clearly seen in a subpopulation of narcotic abusers who engage in occasional abuse without becoming addicted (Shiffman, Fischer, Zettler-Segal, & Benowitz, 1990). These people are called “chippers.” Chippers seem to use opiates more in response to social stimuli or because of transient states of internal distress than because they are addicted to one of these compounds. They also seem to have no trouble abstaining from opiates when they wish to do so. But because research in this area is prohibited, scientists know virtually nothing about heroin chipping or what percentage of those who start out as chippers progress to a more addictive pattern of heroin use. Researchers generally agree that the typical heroin addict is estimated to spend about $250 a week to support his or her habit (Abt Associates, Inc., 1995a). They also agree that males make up about three-fourths of the total of those who are addicted to heroin in the United States (Kaplan & Sadock, 1996). But this ratio also suggests that of the estimated 900,000 heroin addicts in this country, perhaps 675,000 are males and 225,000 are female. If the higher estimate of 1 million active heroin addicts is used, then some 250,000 women are addicted to heroin in the United States. 
Geographically, heroin-addicted persons are thought to be concentrated on the coasts, with New York City and California accounting for the vast majority of heroin addicted people in this country.

Complications Caused by Chronic Opiate Abuse

Withdrawal from opioids for the addicted person. The hallmark sign of an addiction to opiates is the existence of the classic pattern of opioid withdrawal symptoms.

The symptoms of withdrawal from narcotics will vary in intensity as a result of several different factors: (a) the dose of the opiate that was abused, (b) the length of time the person has used the drug,15 (c) the speed with which withdrawal is attempted (Jaffe & Jaffe, 2004), and (d) the half-life of the opioid being abused (Jaffe & Jaffe, 2004; Kosten & O'Connor, 2003). Heroin withdrawal symptoms peak 36 to 72 hours after the last dose, and the acute withdrawal discomfort lasts for 7 to 10 days; the acute phase of methadone withdrawal peaks 4 to 6 days after the last dose and continues for approximately 14 to 21 days (Collins & Kleber, 2004; Kosten & O'Connor, 2003). As a general rule, an opiate-addicted person who has been using the equivalent of 50 mg of morphine a day for 3 weeks will have an easier detoxification than would someone who has been using the equivalent of 50 mg of morphine a day for 3 months. Also, an opiate addict who is gradually withdrawn from opiates at the equivalent of 10 mg of morphine a day will have an easier detoxification than would the opiate-dependent person who suddenly stops using the drug ("cold turkey").

A number of aspects of the phenomenon of withdrawal from narcotics make it unique. First, in many patients, the symptoms of narcotics withdrawal can be managed through the use of hypnotic suggestion (Erlich, 2001). Second, the individual's perception of and response to the withdrawal process is influenced to a large degree by his or her cognitive "set." This set is, in turn, influenced by such factors as the individual's knowledge, attention, motivation, and degree of suggestibility. Opiate withdrawal discomfort is a learned phenomenon. This seems to be confirmed in real-life settings where narcotics addicts are forced to go through the withdrawal process cold turkey.
For example, when the individual is in a therapeutic community that actively discourages reports of withdrawal discomfort, opiate-dependent individuals do not go through the dramatic withdrawal displays so often noted in methadone detoxification programs (Peele, 1985). Further, when narcotics addicts are incarcerated and denied further access to the drug, they are often able to go through withdrawal without the dramatic symptoms seen at a detoxification center.

15 However, after 2–3 months of continuous use, there is generally no increase in the severity of the withdrawal symptoms.
Acute withdrawal. To avoid withdrawal-related symptoms, opiate-dependent individuals must either inject another dose of their drug of choice or substitute another drug. Withdrawal symptoms include a craving for more narcotics, tearing of the eyes, running nose, repeated yawning, sweating, restless sleep, dilated pupils, anxiety, anorexia, irritability, insomnia, weakness, abdominal pain, nausea, vomiting, GI upset, chills, diarrhea, muscle spasms and muscle aches, and, in males, possible ejaculation (Collins & Kleber, 2004; Gold, 1993; Hoegerman & Schnoll, 1991; Kosten & O'Connor, 2003). It has been suggested that 600 to 800 mg of ibuprofen can provide significant relief from the muscle pain experienced in opiate withdrawal (Collins & Kleber, 2004). Constipation is a potential complication of narcotics withdrawal and in rare cases can result in fecal impaction and intestinal obstruction (Jaffe, 1990; Jaffe & Jaffe, 2004). On very rare occasions, withdrawal can cause or contribute to seizures, especially if the opiate being abused was one that could precipitate seizures (Collins & Kleber, 2004).

Anxiety is a common withdrawal-induced emotion and might make the person so uncomfortable as to reinforce the tendency toward continued drug use (Bauman, 1988; Collins & Kleber, 2004). Indeed, the individual's fear of withdrawal-induced distress might almost reach phobic proportions (Collins & Kleber, 2004). Rather than a benzodiazepine, Seroquel (quetiapine fumarate) has been suggested as a means to control opiate withdrawal-related anxiety (Winegarden, 2001).

In a medical setting, opiate-dependent individuals will often emphasize their physical distress during withdrawal in an attempt to obtain additional drugs. Such displays are often quite dramatic but are hardly a reflection of reality.
Withdrawal from narcotics may be uncomfortable, but it is not fatal if the patient is in good health, and it is rarely a medical emergency (Henry, 1996; Mattick & Hall, 1996; O'Brien, 1998, 2001; Sadock & Sadock, 2003).16

16 This assumes that the patient is using only opioids and that the individual has no concurrent medical problems such as a seizure disorder or cardiac disease. A physician should supervise any drug withdrawal program in order to reduce the potential danger to life that might exist if the patient is a polydrug user.

The subjective experience
has been compared to a bad case of influenza (Brust, 1998; Kosten & O'Connor, 2003; Mattick & Hall, 1996; Weaver, Jarvis, & Schnoll, 1999).17 The acute symptoms of the opiate withdrawal syndrome will eventually abate in the healthy individual, even in the absence of treatment.

Extended withdrawal symptoms. There is evidence of a second phase of withdrawal from narcotics that lasts beyond the period of acute withdrawal. During this time, which may last for several months, the individual may experience feelings of fatigue, heart palpitations, and a general sense of restlessness (Satel, Kosten, Schuckit, & Fischman, 1993). There is evidence that this phase of protracted abstinence might extend for up to 30 weeks after acute withdrawal (O'Brien, 1996; Satel et al., 1993). During this stage of protracted abstinence, the physical functioning of the individual slowly returns to normal. The authors support this hypothetical phase of protracted abstinence by citing research studies that found significant changes in respiration rate, pupil size, blood pressure, and body temperature in recovering narcotics addicts for more than 17 weeks after the last dose of narcotics. However, Mattick and Hall (1996) suggested that the case for the existence of a protracted phase of withdrawal was quite weak and that this phenomenon is not an accepted part of the recovery process from opiate addiction.

Although opiate-dependent persons often attempt to taper or withdraw from opiates on their own, little is known about this phenomenon (Collins & Kleber, 2004; Gossop, Battersby, & Strang, 1991). Some individuals will simply go cold turkey and stop using opioids; others will attempt to control their withdrawal distress through the use of benzodiazepines or other pharmaceuticals.

Organ damage.
17 The problem of opiate withdrawal in the infant will be discussed in Chapter 20.

Scientists have long known that patients in extreme pain (such as is found in some forms of cancer) who receive massive doses of narcotic analgesics for extended periods of time fail to show evidence of opiate-induced damage to any of the body's organ systems. This is consistent with historical evidence from early in the 20th century, before the strict safeguards imposed by the government were
instituted, when cases would come to light in which a physician (or less often a nurse) had been addicted to morphine for years or even decades. The health care professional involved would take care to use proper sterile technique, thus avoiding the danger of infections inherent in using contaminated needles. With the exception of the opiate addiction itself, the addicted physician or nurse would appear to be in good health. For example, the famed surgeon William Halsted was addicted to morphine for 50 years without suffering any apparent physical problems (Smith, 1994).

However, health care professionals have access to pharmaceutical-quality narcotic analgesics, not street drugs. The typical opiate addict must inject drugs purchased from illicit sources and of questionable purity. In addition, the lifestyle of the opioid addict carries with it serious health risks. For example, morphine abuse has been implicated as a cause of decreased sexual desire in both men and women as well as a cause of erectile problems in men (Finger, Lung, & Slagel, 1997). Other common health complications found in heroin abusers include cerebral vascular accidents (strokes), cerebral vasospasms, infectious endocarditis, liver failure, disorders of the body's blood clot formation mechanisms, malignant hypertension, heroin-related nephropathy, and uremia (Brust, 1993, 1997; Karch, 2002). Heroin addicts have been known to die from pulmonary edema, but the mechanism by which heroin may induce this condition is not clear (Karch, 2002). Also, chronic opiate abuse is known to be associated with a reduction in the effectiveness of the immune system, although the reasons are not known (Karch, 2002). The chronic abuse of opiates has also been identified as a cause of renal disease and rhabdomyolysis18 (Karch, 2002). Researchers have also found evidence suggesting an autoimmune syndrome in which the kidneys of chronic heroin abusers are damaged for reasons that are not well understood.
This is perhaps most clearly seen in chronic oxycodone abusers, who suffer from a drug-induced autoimmune syndrome resulting in damage to the kidneys (Hill, Dwyer, Kay, & Murphy, 2002). At this point it is not clear whether the heroin-induced kidney failure is caused by the same mechanism as that induced by oxycodone addiction. It is also not clear


whether these effects are due directly to the abuse of heroin or if they are due to the adulterants that are added to illicit opiates (for more information on drug fillers, see Chapter 36).

However, one complication of intravenous heroin abuse/addiction that occasionally develops in some users is what is known as cotton fever (Brent, 1995; Karch, 2002). The heroin abuser/addict will try to purify the heroin by using wads of cotton as a crude filter. During times of hardship, when heroin supplies are scarce, some users will try to use the residual heroin found in old cotton filters. When they inject the mixture, they will inject microscopic cotton particles as well as the impurities filtered out by the cotton; this can often cause pulmonary arteritis (a serious medical condition in which the pulmonary artery becomes inflamed).

There is much debate in the medical community as to whether prolonged exposure to narcotic analgesics alters the function of the nervous system. Studies involving rats, for example, have found that the chronic use of heroin seems to cause the shrinkage of dopamine-utilizing neurons in the brain’s “reward system” (Nestler, 1997). Further, there appears to be an associational learning process at work through which specific sights/sounds/smells/activities are associated with the impending use of opiates (Schroeder, Holahan, Landry, & Kelly, 2000). These microscopic neurological changes then contribute to the phenomenon of relapse in patients who are exposed to specific sights/sounds/smells/activities formerly associated with the use of the desired substance. These findings are consistent with the theories suggesting that chronic exposure to opiates can result in physical changes within the brain (Dole, 1988, 1989; Dole & Nyswander, 1965). This theory, however, has been challenged. Hartman (1995) stated that opiates, including heroin, do not appear to have neurotoxic effects on human cognition.
There is also evidence that the heroin-induced shrinkage in the dopamine-using neurons of the rat brain will reverse with abstinence (Nestler, 1997). These findings raise questions about whether the observed opiate-induced neurological changes are permanent. Generally, the complications seen when narcotics are abused at above-normal dosage levels are an exaggeration of the side effects observed when these medications


Opiate Abuse and Addiction

are used in medical practice. Thus, whereas morphine can cause constipation when used under medical supervision, morphine abusers/addicts may experience constipation so pronounced that it can reach the level of intestinal obstruction. Further, when abused at high dosage levels, many narcotics are capable of causing seizures (Foley, 1993). This rare complication of narcotics use is apparently caused by the high dosage level of the opioid being administered and usually responds to a narcotics blocker such as Narcan (naloxone), according to Foley. One exception to this rule is seizures caused by the drug meperidine. Naloxone may actually reduce the patient’s seizure threshold, making it more likely that he or she will continue to experience meperidine-induced seizures (Foley, 1993). Thus, the physician must identify the specific narcotic being abused in order to initiate the proper intervention for opioid-induced seizures.

There is research evidence suggesting that heroin abuse might be the cause of neurological damage, at least in isolated cases. In rare cases, the practice of inhaling heroin fumes has resulted in a progressive spongiform leukoencephalopathy, a condition similar to the “mad cow” disease seen in English cattle in the mid-1990s (Karch, 2002; Kriegstein et al., 1999). At this point very little is known about how inhaling heroin fumes might lead to a case of progressive spongiform leukoencephalopathy, and there is a chance that this condition is caused by one or more chemicals added to the heroin to dilute it rather than by the heroin itself (“Heroin Encephalopathy,” 2002; Kriegstein et al., 1999). There was an outbreak of heroin-induced progressive spongiform leukoencephalopathy in the Netherlands in the 1990s, with the first cases in the United States being identified in 1996. At first these cases were thought to be associated with the practice of “chasing the dragon,” but at least one case has been identified in an intravenous drug abuser (“Heroin Encephalopathy,” 2002).
There also has been one case report of a possible heroin-induced inflammation of the nerves in the spinal cord in a man from Holland who resumed the practice of smoking heroin after 2 months of abstinence (Nuffeler, Stabba, & Sturzenegger, 2003). However, the etiology of the inflammatory process in this patient’s spinal cord was not clear, and it is possible that heroin was not a factor in the development of this disorder.

Overdose of Illicit Opiates

A given opiate abuser might overdose on narcotics for many reasons. For example, it is difficult to estimate the potency of illicit narcotics, and the user might miscalculate the amount of heroin that he or she could safely inject, bringing on an overdose. Some of these individuals die before they reach the hospital, but others survive long enough for health care professionals to intervene and rescue them from the effects of the drug overdose. An overdose of narcotics will produce a characteristic pattern of reduced consciousness, pinpoint pupils, and respiratory depression, with death occurring from respiratory arrest (Carvey, 1998; Drummer & Odell, 2001; Henry, 1996). Without medical intervention, death usually occurs 5 to 10 minutes following an intravenous injection of an opiate overdose, and 30 to 90 minutes following an intramuscular injection of an overdose of narcotic analgesics (Hirsch et al., 1996). However, these data apply only to cases of overdose with pharmaceutical opiates.

Medical experts are still not sure whether deaths from illicit narcotics are caused by the drugs themselves, by the various combinations of substances commonly abused by illicit drug users, or by the multitude of other chemicals commonly added to street narcotics to dilute them. For example, there is evidence that the concurrent use of heroin and cannabis might increase the individual’s chances of a heroin overdose, although the exact mechanism for this is not known (Drummer & Odell, 2001). Illicit drugs are commonly adulterated before sale. In the early 1990s a typical sample of street heroin usually contained between 68 and 314 mg of the common adulterant quinine (Scaros, Westra, & Barone, 1990). If the addict were to inject the heroin over a 10-second period, he or she would be injecting between 6.8 and 31.4 mg of quinine per second. This is up to 182 times the maximum recommended rate of injection of quinine.
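As a check on the arithmetic, dividing the reported 68–314 mg quinine content of a typical street-heroin sample by the 10-second injection period gives the per-second injection rates:

```latex
\frac{68\ \text{mg}}{10\ \text{s}} = 6.8\ \text{mg/s}
\qquad\qquad
\frac{314\ \text{mg}}{10\ \text{s}} = 31.4\ \text{mg/s}
```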
This rate of quinine injection is in itself capable of causing a fatal reaction in many individuals. Thus, some question exists as to whether deaths by “narcotics overdoses” are indeed caused by the narcotics or by other substances that are mixed in with narcotics sold on the streets. Street myths and narcotics overdose. There are several street myths about the treatment of opiate overdose.


First, there is the myth that cocaine (or another CNS stimulant) will help in the control of an opiate overdose. Another myth is that it is possible to control the symptoms of an overdose by putting ice packs under the arms and on the groin of the overdose victim. A third myth is that the person who had the overdose should be kept awake and walking around until the drug wears off. Unfortunately, the treatment of an opiate overdose is a complicated matter and does not lend itself to such easy solutions. Even in the best-equipped hospital, a narcotics overdose may result in death.

The current treatment of choice for a narcotics overdose is a combination of respiratory and cardiac support as well as a trial dose of Narcan (naloxone hydrochloride) (Henry, 1996). Naloxone hydrochloride is thought to bind at the receptor sites within the brain occupied by opiate molecules, displacing them from the receptors and reversing the effects of the opiate overdose. Unfortunately, naloxone has a therapeutic half-life of only 60 to 90 minutes. Its effects are thus quite short-lived, and it might be necessary for the patient to receive several doses before he or she has fully recovered from the opiate overdose (Roberts, 1995). Although naloxone-induced complications are rare, they do occasionally develop when this drug is used to treat opiate overdoses (Henry, 1996). Finally, the patient might have ingested or injected a number of different chemicals, each of which has its own toxicological profile. For


these reasons, remember that known or suspected opiate overdoses are life-threatening emergencies that always require immediate medical support and treatment.

Summary

The narcotic family of drugs has been effectively utilized by physicians for several thousand years. After alcohol, the narcotics might be thought of as man’s oldest drug. Various members of the narcotic family have been found effective in the control of severe pain, severe cough, and severe diarrhea. The only factor that limits their application in the control of less grave conditions is the addiction potential that this family of drugs represents. The addiction potential of narcotics has been known for hundreds if not thousands of years. For example, opiate addiction was a common complication of military service in the 19th century, when it was called the “soldier’s disease.” But it was not until the advent of the chemical revolution, when synthetic narcotics were first developed, that new forms of narcotic analgesics became available to drug users. Fentanyl and its chemical cousins are products of the pharmacological revolution that began in the late 1800s and continues to this day. Fentanyl is estimated to be several hundred to several thousand times as powerful as morphine and promises to remain a part of the drug abuse problem for generations to come.


Hallucinogen Abuse and Addiction

Introduction

It has been estimated that about 6,000 different species of plants might be used for their psychoactive properties (Brophy, 1993), including several species of mushrooms that will, when ingested, produce hallucinations (Rold, 1993). Many of these plants and mushrooms have been used for centuries in religious ceremonies, in healing rituals, for predicting the future (Berger & Dunn, 1982; Metzner, 2002), and on occasion to prepare warriors for battle (Rold, 1993). Even today, certain religious groups use mushrooms with hallucinogenic properties as part of their worship, although their use is illegal in the United States (Karch, 2002). There are those who advocate the use of hallucinogenic substances as a way to explore alternative realities or gain knowledge about one’s self (Metzner, 2002). Hallucinogens are also popular drugs of abuse. In this chapter, the hallucinogens will be examined.

History of Hallucinogens in the United States

Over the years, researchers have identified approximately 100 different hallucinogenic compounds that might be found in various plants or mushrooms. In some cases, the active agent(s) has been isolated and studied by scientists. Psilocybin is an example of such a compound; it was isolated from certain mushrooms that are found in the southwestern region of the United States. However, many potential hallucinogens have not been subjected to systematic research, and much remains to be discovered about their mechanism of action in humans (Glennon, 2004). The family of organic compounds that has been subjected to the greatest level of scientific scrutiny is the group produced by the ergot fungus, which grows in various forms of grain. Historical evidence long suggested that this fungus could produce exceptionally strong compounds. For example, the ingestion of grain products infected by ergot fungus can cause vasoconstriction so severe that entire limbs have been known to auto-amputate, and sometimes the individual will die from gangrene (Walton, 2002). It was thought that the ingestion of ergot-infected bread caused the death of 40,000 people in the French district of Aquitaine around the year 1000 C.E. (Walton, 2002). Compounds produced by the ergot fungus were of interest to scientists eager to isolate chemicals that might have a use in the fight against disease. In 1943, during a clinical research project exploring the characteristics of one compound obtained from the rye ergot fungus Claviceps purpurea (Lingeman, 1974), lysergic acid diethylamide-25 (LSD-25, or LSD) was identified as a hallucinogen. Actually, this discovery was made by accident, as the purpose of the research was to find a cure for headaches (Monroe, 1994). A scientist accidentally ingested a small amount of LSD-25 while conducting an experiment and later that day began to experience LSD-induced hallucinations. After he recovered, the scientist correctly concluded that the source of the hallucinations was the specimen of Claviceps purpurea on which he had been working. He again ingested a small amount of the fungus and experienced hallucinations for the second time, confirming his original conclusion. Following World War II, there was a great deal of scientific interest in the various hallucinogens, especially in light of the similarities between the subjective effects of these chemicals and various forms of mental illness. Further, because they were so potent, certain agencies of the United States government, such as the


Chapter Fifteen


FIGURE 15.1 Lifetime LSD use by adolescents and young adults. Source: Johnston, O’Malley, Bachman, & Schulenburg (2004b).

Department of Defense and the Central Intelligence Agency, experimented with various chemical agents, including LSD, as possible chemical warfare weapons (Budiansky, Goode, & Gest, 1994). There is strong evidence that the United States Army administered doses of LSD to soldiers without their knowledge or permission between 1955 and 1975 as part of its research into possible uses for the compound (Talty, 2003). In the 1950s, the term psychedelic was coined to identify this class of compounds (Callaway & McKenna, 1998). By the 1960s these chemicals had moved from the laboratory into the streets where they quickly became popular drugs of abuse (Brown & Braden, 1987). The popularity and widespread abuse of LSD in the 1960s prompted the classification of this chemical as a controlled substance in 1970 (Jaffe, 1990), but the classification did not solve the problem of its abuse. Over the years, LSD abuse has waxed and waned, reaching a low point in the late 1970s and then increasing until it was again popular in the early 1990s. The abuse of LSD in the United States peaked in 1996, and it has gradually been declining since then (Markel, 2000). Whereas 12% of high school seniors in the class of 2000 admitted to having used LSD once and 8% reported that they had used it within the past year

(Markel, 2000), only 5.9% reported having ever used LSD in 2002 (Johnston, O’Malley, & Bachman, 2003a). The incidence of reported LSD abuse by young adults in recent years is reviewed in Figure 15.1. The hallucinogen phencyclidine (PCP) deserves special mention. Because of its toxicity, PCP fell into disfavor in the early 1970s (Jaffe, 1989), but in the 1980s a form of PCP that could be smoked was introduced, and the drug again became popular with illicit drug users, in part because smokers could more closely control how much of the drug they used. PCP remained a common drug of abuse until the middle to late 1990s, when it declined in popularity (Karch, 2002). PCP is still occasionally seen, especially in the big cities on the east and west coasts (Drummer & Odell, 2001), and it is often sold to unsuspecting users in the guise of other, more desired substances. It is also sold on the streets as part of the compound called “dip dope” or “dip”; cigarettes or marijuana cigarettes are dipped into this compound, a mixture of PCP, formaldehyde, and methanol, before being smoked (Mendyk & Fields, 2002). Another drug, N,alpha-dimethyl-1,3-benzodioxole-5-ethanamine (MDMA), became quite popular as a chemical of abuse in the late 1970s and early 1980s and continued to be a drug



of abuse in the 1990s and the first part of the 21st century. Both PCP and MDMA will be discussed in later sections of this chapter.

Scope of the Problem

It is difficult to estimate the number of casual hallucinogen abusers in the United States, but evidence suggests that the number has fallen over the past 5 to 10 years. In contrast to the 12.6% of 12th-grade students in 1998 who admitted using LSD at least once, only 5.9% of the seniors of the class of 2003 said they had used it at least once (Johnston, O’Malley, & Bachman, 2003a). In years past, the majority of those who used hallucinogens such as LSD experimented with them briefly and then either avoided further hallucinogen use entirely or abused these drugs only on an episodic basis (Jaffe, 1989). LSD was repackaged and reformulated in the mid-1990s so that the typical dose contained lower amounts than were seen in the 1960s and 1970s (Gold, Schuchard, & Gleaton, 1994), which may have contributed to the resurgence of interest in LSD in the early to mid 1990s.

Pharmacology of the Hallucinogens

To comprehend how the hallucinogenic compounds affect the user, it is necessary to understand that normal consciousness rests on a delicate balance of neurological function. Compounds such as serotonin and dopamine, although classified as neurotransmitters, might better be viewed as neuromodulators that shift the balance of brain function from normal waking states to the pattern of neurological activity seen in sleep or various abnormal brain states (Hobson, 2001). The commonly abused hallucinogens can be divided into two major groups: the indolealkylamines1 and the phenylalkylamines2 (Glennon, 2004). The “classic” hallucinogens such as LSD seem to act as agonists at the 5-HT serotonin receptor site, and their effects are blocked by experimental 5-HT antagonists (Drummer & Odell, 2001; Glennon, 2004). In spite of the differences between hallucinogens in chemistry

1LSD is a member of this group of hallucinogens.
2Subcategories of each major group exist but will not be discussed further in this text. See Glennon (2004) for more information about these subcategories of hallucinogenic compounds.

and potency, illicit drug abusers tend to adjust their intake of the drugs being abused to produce similar effects (Schuckit, 2000). The “classic” hallucinogens all produce hallucinations, or hallucinatory-like experiences, by altering the normal function of serotonin in the raphe nuclei of the brain. This has the effect of allowing acetylcholine neurons that normally are most active during dream states to express themselves during the waking state. In other words, the user begins to dream while remaining in an altered state of waking, a condition interpreted as hallucinations by the individual (Hobson, 2001). One exception to this rule is DMT. The effects of DMT last only about 20 minutes, and for this reason DMT is often called a “businessman’s high.” The drug experience may fit into a typical half-hour lunch break, making it a popular drug of abuse for some members of the business community. With this one exception, however, DMT is very similar to the other hallucinogens to be discussed in this chapter.

It is common for a person under the influence of one of the hallucinogens to believe that he or she has a new insight into reality. But these drugs do not generate new thoughts so much as alter one’s perception of existing sensory stimuli (Snyder, 1986). The waking-dreams that are called hallucinations are usually recognized by the user as being drug induced (Lingeman, 1974). Thus, the terms hallucinogen or hallucinogenic are usually applied to this class of drugs. As LSD is the most popular hallucinogen, this chapter will focus on it as the prototypical hallucinogen; other drugs in this class will be discussed only as needed.

The Pharmacology of LSD

There is much to be discovered about how LSD affects the human brain (Sadock & Sadock, 2003). LSD is one of the most potent chemicals known to man.
Researchers have compared it to hallucinogenic chemicals naturally found in plants, such as psilocybin and peyote, and found that LSD is between 100 and 1,000 times as powerful as these “natural” hallucinogens (Schwartz, 1995). It has been estimated to be 3,000 times as potent as mescaline (O’Brien, 2001) but is also weaker than synthetic chemicals such as the hallucinogenic DOM/STP (Schuckit, 2000).


For the casual user, LSD might be effective at doses as low as 50 micrograms, although the classic LSD “trip” usually requires twice that amount of the drug (Schwartz, 1995). Users in the 1960s might have ingested a single 100–200 microgram dose, but current LSD doses on the street seem to fall in the 20–80 microgram range, possibly to make the drug more appealing to first-time users (Gold & Miller, 1997c). Although it is possible to inject LSD directly into a vein, the most common method of abuse is oral ingestion (Henderson, 1994a). The LSD molecule is water soluble and similar in structure to the neurotransmitter serotonin (Klein & Kramer, 2004). Indeed, it seems to bind to the 5-HT2a receptor site in the human brain (Glennon, 2004). Although many drug abusers whose urine toxicology tests have detected LSD claim that they must have absorbed the drug through the skin, this is not possible (Henderson, 1994a). LSD is usually administered orally but can be taken intranasally, intravenously, and by inhalation (Klein & Kramer, 2004). It is rapidly absorbed from the gastrointestinal tract after oral ingestion and is distributed to all body tissues (Mirin, Weiss, & Greenfield, 1991). Only about 0.01% of the original dose actually reaches the brain (Lingeman, 1974). Although much remains to be discovered about how LSD affects the brain, it is known that LSD functions as a serotonin agonist. Classified as a hallucinogenic compound, LSD actually causes misinterpretations of reality that are, for the most part, better called illusions, with actual hallucinations being seen only when very high doses of LSD are taken (Pechnick & Ungerleider, 2004). The majority of the serotonin-based neurons in the brain are located in the region known as the midbrain raphe nuclei, also known as the dorsal midbrain raphe (Hobson, 2001; Mirin et al., 1991).
Evidence emerging from sleep research suggests that one function of the raphe nuclei of the brain is to suppress neurons that are most active during rapid eye movement (REM) sleep. By blocking the action of this region of the brain, acetylcholine-induced REM sleep begins to slip over into the waking state, causing perceptual and emotional changes normally seen only when the individual is asleep (Henderson, 1994a; Hobson, 2001; Lemonick, Lafferty, Nash, Park, & Thompson, 1997).


Tolerance to the effects of LSD develops quickly, often within 2 to 4 days of continual use (Henderson, 1994a; Mirin et al., 1991; Schwartz, 1995). Once the user has become tolerant to the effects of LSD, increasing the dosage level will have little effect (Henderson, 1994a). However, the individual’s tolerance to LSD will also abate after 2 to 4 days of abstinence (Henderson, 1994a; Jaffe, 1989). Cross-tolerance between the different hallucinogens is also common (Callaway & McKenna, 1998). Thus, most abusers alternate between periods of active hallucinogen use and times during which they abstain from further use of these compounds.

In terms of direct physical mortality, LSD is perhaps the safest compound known to modern medicine, and scientists have yet to identify a lethal LSD dosage level (Pechnick & Ungerleider, 2004). Some abusers have survived doses up to 100 times those normally used without apparent ill effect (Pechnick & Ungerleider, 2004). Reports of LSD-induced death are exceptionally rare and usually reflect accidental death caused by the individual’s misperception of sensory data rather than the direct effects of the compound (Drummer & Odell, 2001; Pechnick & Ungerleider, 2004). But this is not to say that LSD is entirely safe. The LSD currently available through illicit markets is much more potent than that used in the 1960s, and it is capable of inducing seizures in the user for more than 60 days after it was last used (Klein & Kramer, 2004).

The biological half-life of LSD has not been determined accurately. It is known that the drug is rapidly biotransformed by the liver and rapidly eliminated from the body. Indeed, so rapid is the process of LSD biotransformation and elimination that traces of the major metabolite of LSD, 2-oxy-LSD, will remain in the user’s urine for only 12 to 36 hours after the last use of the drug (Schwartz, 1995).
The estimates of the biological half-life of LSD range from 2–3 hours (Jaffe, 1989, 1990; Karch, 1996; Shepherd & Jagoda, 1990; Weiss, Greenfield, & Mirin, 1994) to 5 hours (Henderson, 1994a). The subjective effects of a single dose of LSD appear to last between 8 and 12 hours (Kaplan & Sadock, 1996; Klein & Kramer, 2004), although Mendelson and Mello (1998) suggested that the drug’s effects might last 18 hours.



The duration of an LSD-induced “trip” is apparently dose related, with larger doses having a longer effect on the person’s perception (Drummer & Odell, 2001). Only about 1% to 3% of a single dose of LSD is excreted unchanged, with the rest being biotransformed by the liver and excreted in the bile (Drummer & Odell, 2001). LSD continues to challenge researchers, who struggle with such questions as how LSD can have so profound an impact on the user’s state of mind when the person ingests such a small dose of the compound and when such a small portion of that dose actually reaches the brain. Obviously, there is much to learn about the hallucinogens.

Subjective Effects of Hallucinogens

Subjectively, users will begin to feel the first effects of a dose of LSD within 5 to 10 minutes. These initial effects include such symptoms as anxiety, gastric distress, and tachycardia (Schwartz, 1995). In addition, users might experience increased blood pressure, increased body temperature, dilation of the pupils, nausea, and muscle weakness after ingesting the drug (Jaffe, 1989). Other side effects include an exaggeration of normal reflexes (a condition known as hyperreflexia), dizziness, and some degree of muscle tremor (Jaffe, 1989). Lingeman (1974) characterized these changes as “relatively minor” (p. 133), although for the inexperienced user they might cause some degree of anxiety. The hallucinogenic effects of LSD usually begin 30 minutes to an hour after the user first ingests the drug, peak 2–4 hours later, and gradually wane after 8–12 hours (Pechnick & Ungerleider, 2004).

Scientists believe that the effects of a hallucinogen such as LSD will vary depending on a range of factors, including (a) the individual’s personality makeup, (b) the user’s expectations for the drug, (c) the environment in which the drug is used, and (d) the dose of the compound used (Callaway & McKenna, 1998). Users often refer to the effects of LSD as a trip, during which they might experience such effects as a loss of psychological boundaries, a feeling of enhanced insight, a heightened awareness of sensory data, enhanced recall of past events, a feeling of contentment, and a sense of being “one” with the universe (Callaway & McKenna, 1998). The LSD trip is made up of several distinct phases

(Brophy, 1993). First, within a few minutes of taking LSD there is a release of inner tension. This stage, which will last 1–2 hours (Brophy, 1993), is characterized by either laughing or crying as well as a feeling of euphoria (Jaffe, 1989). The second stage usually begins between 30–90 minutes (Brown & Braden, 1987) and 2–3 hours (Brophy, 1993) after the ingestion of the drug. During this portion of the LSD trip the individual will have perceptual distortions such as visual illusions and synesthesia that are the hallmark of the hallucinogenic experience (Pechnick & Ungerleider, 2004). The third phase of the hallucinogenic experience will begin 3–4 hours after the drug is ingested (Brophy, 1993). During this phase of the LSD trip users will experience a distortion of the sense of time. They may also have marked mood swings and a feeling of ego disintegration. Feelings of panic are often experienced during this phase, as are occasional feelings of depression (Lingeman, 1974). These LSD-related anxiety reactions will be discussed in the next section. It is during the third stage of the LSD trip that individuals express a belief that they possess quasi-magical powers or that they are magically in control of events around them (Jaffe, 1989). This loss of contact with reality is potentially fatal, and individuals have been known to jump from windows or attempt to drive motor vehicles during this phase of the LSD trip. Shea (2002) warned that on rare occasions LSD might induce suicidal thoughts or acts in the individual who has ingested it. The effects of LSD normally start to wane 4–12 hours after ingestion (Pechnick & Ungerleider, 2004). As the individual begins to recover, he or she will experience “waves of normalcy” (Mirin et al., 1991, p. 290; Schwartz, 1995), which gradually blend into the waking state of awareness.
Within 12 hours, the acute effects of LSD have cleared, although users might experience a “sense of psychic numbness, [that] may last for days” (Mirin et al., 1991, p. 290). The LSD “bad trip.” As noted earlier, it is not uncommon for the individual who has ingested LSD to experience significant levels of anxiety, which may reach the level of a panic reaction. This is known as a “bad trip” or a “bummer.” Scientists used to believe that a bad trip was more likely with novice users, but



now it is believed that even experienced LSD abusers might have a bad trip. The likelihood of a bad trip seems to be determined by three factors: (a) the individual’s expectations for the drug (known as the “set”), (b) the setting in which the drug is used, and (c) the psychological health of the user (Mirin et al., 1991).

If the person does develop a panic reaction to the LSD experience, he or she will often respond to calm, gentle reminders from others that these feelings are caused by the drug and that they will pass. This is known as “talking down” the LSD user. In extreme cases, the individual might require pharmacological intervention for the LSD-induced panic attack. There is some evidence that the newer, atypical antipsychotic medications clozapine and risperidone bind to the same receptor sites as LSD and that they can abort the LSD trip within about 30 minutes of the time the medication was administered (Walton, 2002). This finding has not been replicated by other researchers, however, and it remains somewhat controversial. The use of diazepam to control anxiety and haloperidol to treat psychotic symptoms has been suggested by some physicians (Jenike, 1991; Kaplan & Sadock, 1996; Schwartz, 1995); others (Jenike, 1991) have advised against the use of diazepam in controlling LSD-induced anxiety. In the latter case the theory is that diazepam distorts the individual’s perception, which might contribute to even more anxiety. Normally, this distortion is so slight as to go unnoticed, but when combined with the effects of LSD, the benzodiazepine-induced sensory distortion may cause the patient to experience even more anxiety than before (Jenike, 1991).

Many samples of hallucinogens sold on the street are adulterated with belladonna or other anticholinergics (Henderson, 1994a). These substances, when mixed with phenothiazines, may bring about coma and death through cardiorespiratory failure.
Thus, it is imperative that the physician treating a bad trip know what drug(s) have been used and if possible be provided with a sample of the drugs ingested to determine what medication is best in treating each patient. The LSD-induced bad trip normally lasts only a few hours and typically will resolve itself as the drug’s effects wear off (Henderson, 1994b). However, in rare cases LSD is capable of activating a latent psychosis (Henderson, 1994b). Carvey (1998) noted that

Chapter Fifteen

various Indian tribes who have used the hallucinogen mescaline for centuries do not have significantly higher rates of psychosis than the general population, suggesting that the psychosis seen in the occasional LSD user is not a drug effect; however, a definitive answer to this question is not yet available. One reason it is so difficult to identify LSD’s relationship to the development of psychiatric disorders such as a psychosis is that the “LSD experience is so exceptional that there is a tendency for observers to attribute any later psychiatric illness to the use of LSD” (Henderson, 1994b, p. 65, italics added for emphasis). Thus, as the author points out, psychotic reactions that develop weeks, months, or even years after the last use of LSD have been attributed to the individual’s use of this hallucinogen rather than to nondrug factors. It has been suggested that LSD is capable of causing long-term complications such as a drug-induced psychosis, but this theory has been challenged by other researchers.

One extremely rare complication of LSD use is the overdose (Schuckit, 2000). Symptoms of an LSD overdose include convulsions and hyperthermia. Medical care is necessary in any suspected drug overdose to reduce the risk of death. In a hospital setting, the physician can take appropriate steps to monitor the patient’s cardiac status and to counter drug-induced elevations in body temperature, cardiac arrhythmias, seizures, and other such effects.

The LSD flashback. The “flashback” is a spontaneous recurrence of the LSD experience, now classified as hallucinogen persisting perception disorder by the American Psychiatric Association (2000) (Pechnick & Ungerleider, 2004). The exact mechanism by which flashbacks occur remains unknown (Drummer & Odell, 2001).
They might develop days, weeks, or months after the individual’s last use of LSD, and even first-time abusers have been known to have them (Batzer, Ditzler, & Brown, 1999; Pechnick & Ungerleider, 2004). Flashbacks have been classified as being (a) perceptual, (b) somatic, or (c) emotional (Weiss & Millman, 1998). The majority of flashbacks involve visual sensory distortion, according to the authors. Somatic flashbacks consist of feelings of depersonalization, and emotional flashbacks involve periods when the individual reexperiences distressing emotions felt during the period of active LSD use (Weiss & Millman, 1998).


Hallucinogen Abuse and Addiction

The “majority” (Schwartz, 1995, p. 409) of those who use LSD at least 10 times can expect to experience at least one flashback. Flashbacks might be triggered by stress, fatigue, marijuana use, emerging from a dark room, illness, the use of certain forms of antidepressant medications, and occasionally by intentional effort on the part of the individual. The use of sedating agents such as alcohol might also trigger LSD-induced flashbacks, although the reasons for this are not understood (Batzer et al., 1999). Flashbacks usually last a few seconds to a few minutes, although they occasionally persist for 24–48 hours or even longer (Kaplan & Sadock, 1996). Approximately 50% of those who develop flashbacks will do so in the first 6 months following their last use of LSD. In about 50% of the cases, the individual will continue to experience flashbacks for longer than 6 months and possibly for as long as 5 years (Schwartz, 1995; Weiss, Greenfield, & Mirin, 1994).

Flashback experiences are often frightening to the inexperienced user; however, for the most part they seem to be accepted by seasoned LSD users in much the same way that chronic alcohol users accept some physical discomfort as being part of the price they must pay for their chemical use. LSD abusers might not report flashbacks unless specifically questioned about these experiences (Batzer et al., 1999). People’s reactions to LSD flashbacks vary from one individual to another. Some LSD abusers enjoy the visual hallucinations, “flashes” of color, halos around different objects, the perception that things are growing smaller or larger, and the feelings of depersonalization that are common in an LSD flashback (Pechnick & Ungerleider, 2004). Other individuals have been known to become depressed, develop a panic disorder, or even become suicidal after an LSD-related flashback (Kaplan & Sadock, 1996). The only treatment needed for the typical patient is reassurance that the episode will end.
On rare occasions a benzodiazepine might be used to control any flashback-induced anxiety that develops.

Post-hallucinogen perceptual disorder. Post-hallucinogen perceptual disorder is a rare, poorly understood complication of LSD use/abuse (Hartman, 1995). Some chronic users of LSD will experience a disturbance in their visual perceptual system that may or may not become permanent. Victims of this disorder report seeing afterimages or distorted “trails” following behind

objects in the environment for extended periods after their last use of LSD (Hartman, 1995). The exact mechanism by which LSD might cause these effects is not known at this time.

Although LSD has been studied by researchers for the past 50 years, much remains to be discovered about this elusive chemical. For example, there is one case report of a patient who developed grand mal seizures after ingesting LSD while on the antidepressant fluoxetine (Ciraulo, Creelman, Shader, & O’Sullivan, 1995). The reason for this interaction between the two chemicals is not known. Unfortunately, even before scientists were able to learn all that there was to learn about LSD, another popular hallucinogen had appeared: phencyclidine, or PCP.

Phencyclidine (PCP)

Phencyclidine (PCP) was first introduced in 1957 as an experimental intravenously administered surgical anesthetic (Milhorn, 1991). By the mid 1960s, researchers had discovered that 10% to 20% of the patients who had received PCP experienced a drug-induced delirium as well as a drug-induced psychosis that lasted up to 10 days in some patients, so the decision was made to discontinue using the drug with humans (McDowell, 2004; Milhorn, 1991). However, phencyclidine continued to be used in veterinary medicine in the United States until 1978, when all legal production of PCP in the United States was discontinued. The compound was classified as a controlled substance under the Comprehensive Drug Abuse Prevention and Control Act of 1970 (Slaby, Lieb, & Tancredi, 1981). PCP continues to be used as a veterinary anesthetic in other parts of the world and is legally manufactured by pharmaceutical companies outside of the United States (Kaplan, Sadock, & Grebb, 1994).

As a drug of abuse in this country, PCP’s popularity has waxed and waned. Currently it is not a popular drug of abuse, although it is still encountered from time to time. Only 2.5% of the class of 2003 admitted to having ever used PCP (Johnston et al., 2003a). It is occasionally used as a component of “dip dope,” discussed earlier (Mendyk & Fields, 2002). Although intentional PCP abuse is rare, unintentional PCP use remains a very real problem. PCP is easily manufactured in illicit laboratories by people


with minimal training in chemistry. Because of this, it is often mixed into other street drugs to enhance the effects of low-quality illicit substances. Further, misrepresentation is common, with PCP being substituted for other compounds that are not as easily obtained. When it is intentionally abused, it usually is smoked. The practice of smoking PCP, either alone or with compounds such as marijuana, allows abusers to titrate the dose to suit their taste or needs. If the individual finds the drug experience too harsh and aversive, he or she can simply stop smoking the PCP-laced cigarette for a while.

Methods of PCP administration. PCP can be smoked, used intranasally, taken by mouth, injected into the muscle tissue, or injected intravenously (Karch, 2002; Weaver, Jarvis, & Schnoll, 1999). The most common method is smoking it, either alone or mixed with other compounds.

Subjective experience of PCP abuse. Phencyclidine’s effects might last for several days, during which time users will experience rapid fluctuations in their level of consciousness (Weaver et al., 1999). The main experience for users is a sense of dissociation in which reality appears distorted or distant. Parts of their bodies might feel numb or as if they were no longer attached. These experiences might prove frightening, especially to novice users, resulting in panic reactions. Some of the other desired effects of PCP intoxication include a sense of euphoria, decreased inhibitions, a feeling of immense power, a reduction in the level of pain, and altered perception of time, space, and the user’s body image (Milhorn, 1991).

Not all of the drug’s effects are desired by the user. Indeed, “most regular users report unwanted effects” (Mirin et al., 1991, p. 295) caused by PCP. Some of the more common negative effects include feelings of anxiety, restlessness, and disorientation.
In some cases, the user retains no memory of the period of intoxication, a reflection of the anesthetic action of the drug (Ashton, 1992). Other negative effects of PCP include disorientation, mental confusion, assaultiveness, anxiety, irritability, and paranoia (Weiss & Mirin, 1988). Indeed, so many people have experienced so many different undesired effects from PCP that researchers remain at a loss to explain why the drug was ever a popular drug of abuse (Newell & Cosgrove, 1988). PCP can cause users to experience a drug-induced depressive


state, which in extreme cases might reach suicidal proportions (Jenike, 1991; Weiss & Mirin, 1988). This is consistent with the observations of Berger and Dunn (1982), who, drawing on the wave of PCP abuse that took place in the 1970s, reported that the drug would bring the user either to “the heights, or the depths” (p. 100) of emotional experience.

Scope of PCP use/abuse. Researchers have found that approximately 2.5% of high school seniors who graduated in 2002 admitted to having used PCP at least once, a figure that has remained relatively stable for the past decade (Johnston et al., 2003a).

Pharmacology of PCP

Chemically, phencyclidine is a weak base, soluble in both water and lipids. Because it is a weak base, when ingested orally it will be absorbed mainly through the small intestine rather than through the stomach lining (Zukin & Zukin, 1992). This will slow the absorption of the drug into the body, for the drug molecules must pass through the stomach to reach the small intestine. Even so, the effects of an oral dose of PCP are generally seen in just 20 to 30 minutes and last for between 3 and 8 hours (“Consequences of PCP Abuses Are Up,” 1994).

When smoked, PCP is rapidly absorbed through the lungs, and the user will begin to experience symptoms of PCP intoxication within about 2–3 minutes (Schnoll & Weaver, 2004). However, much of the PCP will be converted into the chemical phenylcyclohexene by the heat of the smoking process (Shepherd & Jagoda, 1990), and only about 30% to 50% of the PCP in the cigarette will actually be absorbed (Crowley, 1995). When injected or ingested orally, 70% to 75% of the available PCP will reach the circulation (Crowley, 1995). The effects of injected PCP last for about 3–5 hours. PCP is very lipid soluble; because of this it tends to accumulate in fatty tissues and in the tissues of the brain (Schnoll & Weaver, 2004).
Indeed, the level of PCP in the brain might be 31 to 113 times as high as blood plasma levels (Shepherd & Jagoda, 1990). Further, animal research data suggest that PCP remains in the brain for up to 48 hours after it is no longer detectable in the blood (Hartman, 1995). Once in the brain, PCP tends to act at a number of different receptor sites, including blocking those utilized by a neurotransmitter known as N-methyl-D-aspartic acid


(NMDA) (Drummer & Odell, 2001; Zukin, Sloboda, & Javitt, 1997). PCP functions as an NMDA channel blocker, preventing NMDA from being able to carry out its normal function (Zukin et al., 1997). PCP also binds to the sigma opioid receptor site, which is how it causes many of its less pleasant effects (Daghestani & Schnoll, 1994; Drummer & Odell, 2001); it is also found at some of the same cannabinoid receptor sites occupied by THC, which might explain its hallucinogenic effects (Glennon, 2004).

Among the factors that influence the subjective effects of PCP are the dosage level and the route of administration utilized by the individual. Another is the specific neurotransmitter system(s) being influenced by the dose of PCP (Roberts, 1995). Thus, PCP might function as an anesthetic, a stimulant, a depressant, or a hallucinogen (Brown & Braden, 1987; Weiss & Mirin, 1988).

PCP is biotransformed by the liver into a number of inactive metabolites, which are then excreted mainly by the kidneys (Zukin et al., 1997; Zukin & Zukin, 1992). Following a single dose of PCP, only about 10% (Karch, 2002) to 20% (Crowley, 1995) of the drug will be excreted unchanged. Unfortunately, one characteristic of PCP is that it takes the body an extended period of time to biotransform and excrete the drug. The half-life of PCP following an overdose may be as long as 20 hours (Kaplan et al., 1994) to 72 hours (Jaffe, 1989), and in extreme cases it might be several weeks (Grinspoon & Bakalar, 1990). One reason for the extended half-life of PCP is that it tends to accumulate in the body’s adipose (fat) tissues, where in chronic use it can remain for days or even weeks following the last dose of the drug.
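The practical impact of half-lives in this range can be illustrated with the standard first-order elimination relationship, in which the fraction of a dose remaining after time t is 0.5^(t/t½). The sketch below is illustrative only: it assumes simple one-compartment kinetics, which if anything understates PCP’s persistence because it ignores the redistribution from adipose tissue described above. The 20- and 72-hour values are the post-overdose half-life figures cited in the text.

```python
# Illustrative sketch: first-order elimination of a drug with a given
# half-life. Assumes simple one-compartment kinetics, which ignores the
# slow redistribution of PCP out of fatty tissue described in the text.

def fraction_remaining(hours: float, half_life: float) -> float:
    """Fraction of the original dose still in the body after `hours`."""
    return 0.5 ** (hours / half_life)

for half_life in (20, 72):  # hours; the post-overdose range cited for PCP
    for day in (1, 3, 7):
        frac = fraction_remaining(day * 24, half_life)
        print(f"t1/2 = {half_life:2d} h, day {day}: {frac:.1%} remaining")
```

At the long end of the cited range (t½ = 72 hours), half of an overdose is still present three days after ingestion and roughly a fifth remains after a week, which is consistent with the prolonged, fluctuating clinical course described above.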
There have even been cases of a chronic PCP user losing weight, either through intentional dieting or because of trauma, causing the adipose tissue to release unmetabolized PCP back into the general circulation and triggering flashback-type experiences long after the last use of the drug (Zukin & Zukin, 1992).

In the past, physicians believed it was possible to reduce the half-life of PCP in the body by making the urine more acidic. This was done by having the patient ingest large amounts of ascorbic acid or cranberry juice (Grinspoon & Bakalar, 1990; Kaplan & Sadock, 1996). However, patients receiving this treatment were discovered to be vulnerable to developing a condition known as myoglobinuria, which may cause the kidneys to fail


(Brust, 1993). Because of this potential complication, many physicians do not recommend the acidification of the patient’s urine for any reason.

There are virtually no research data on the possibility that the user might become tolerant to the effects of PCP. However, clinical experience with burn patients who have received repeated doses of the anesthetic agent ketamine, which is similar in chemical structure to PCP, suggests that some degree of tolerance to its effects is possible (Zukin et al., 1997). There is no evidence of physical dependence on PCP (Weiss et al., 1994; Zevin & Benowitz, 1998).

Symptoms of mild levels of PCP intoxication. Small doses of PCP, usually less than 1 mg, do not seem to have an effect on the user (Crowley, 1995). At dosage levels of about 5 mg, the individual will experience a state resembling that seen in alcohol intoxication (Crowley, 1995; Mirin et al., 1991), marked by muscle coordination problems, staggering gait, slurred speech, and numbness of the extremities (Jaffe, 1989). Other effects of mild doses of PCP include agitation, some feelings of anxiety, flushing of the skin, visual hallucinations, irritability, possible sudden outbursts of rage, feelings of euphoria, nystagmus, changes in the body image, and depression (Beebe & Walley, 1991; Crowley, 1995; Milhorn, 1991). The acute effects of a small dose of about 5 mg of PCP last between 4 and 6 hours. Following the period of acute effects is a post-PCP recovery period that can last 24 to 48 hours (Beebe & Walley, 1991; Milhorn, 1991), during which the user will gradually “come down,” or return to normal.

Symptoms of moderate levels of PCP intoxication. As the dosage level increases to the 5–10 mg range, many users will experience a range of symptoms, including a disturbance of body image in which different parts of their bodies will no longer seem “real” (Brophy, 1993).
The user may also experience slurred speech, nystagmus, dizziness, ataxia, tachycardia, and an increase in muscle tone (Brophy, 1993; Weiss & Mirin, 1988). Other symptoms of moderate levels of PCP intoxication might include paranoia, severe anxiety, belligerence, and assaultiveness (Grinspoon & Bakalar, 1990) as well as demonstration of unusual feats of strength (Brophy, 1993; Jaffe, 1989) and extreme salivation (Brendel, West, & Hyman, 1996). Some people



have exhibited drug-induced fever, drug-induced psychosis, and violence.

Symptoms of severe levels of PCP intoxication. As the dosage level reaches the 10–25 mg level or higher, the individual’s life is in extreme danger. At this dosage level the PCP user might experience vomiting or seizures; even if the user is still conscious, his or her reaction time will be seriously impaired. The user who has ingested more than 10 mg of PCP might experience hypertension, severe psychotic reactions similar to schizophrenia (Grinspoon & Bakalar, 1990; Kaplan & Sadock, 1996; Weiss & Mirin, 1988), and possibly a drug-induced coma. Estimates of the period of time that a PCP-induced coma might last range from up to 10 days (Mirin et al., 1991) to several weeks (Zevin & Benowitz, 1998). Further, because of the absorption/distribution characteristics of the drug, the individual might slip into, and apparently recover from, a PCP-induced coma several times before the drug is eliminated from the body (Carvey, 1998). Other symptoms of severe PCP intoxication might include cardiac arrhythmias, encopresis, visual and tactile hallucinations, and a drug-induced paranoid state. PCP overdoses have caused death from respiratory arrest, convulsions, and hypertension (Brophy, 1993).

Complications of PCP Abuse

It is difficult to understand why people would be drawn to PCP, as the subjective experience of PCP abuse is positive only about half the time and is decidedly unpleasant or adverse the rest of the time. Paradoxically, the very fact that it is not possible to predict in advance which experience the user will have adds a measure of excitement and attractiveness to PCP abuse for many people (Schnoll & Weaver, 2004). One of the more uncomfortable consequences of PCP abuse is a drug-induced psychosis, which might not abate for days, weeks (Jaffe, 1989; Jenike, 1991; Weiss & Mirin, 1988), or months (Ashton, 1992).
It is theorized that a history of a previous psychotic episode or a preexisting vulnerability to psychosis may exist in those individuals who develop this complication of PCP abuse (Mirin et al., 1991; Weiss & Millman, 1998). But this is only a theory, and it is possible that PCP can induce a psychotic episode even in individuals who lack the genetic predisposition for such a reaction. It is known that PCP can cause “a long-lasting syndrome marked by neuropsychological deficits, social withdrawal, and affective blunting as well as hallucinations, formal thought disorder, paranoia and delusions” (Jentsch et al., 1997, p. 954). Thus, the effects of PCP abuse can be both profound and devastating for the individual.

It is known that the PCP psychosis usually will progress through three different stages, each of which lasts approximately 5 days (Mirin et al., 1991; Weiss & Mirin, 1988). The first stage is usually the most severe and is characterized by paranoid delusions, anorexia, insomnia, and unpredictable assaultiveness. During this phase, the individual is extremely sensitive to external stimuli (Jaffe, 1989; Mirin et al., 1991), and the “talking down” techniques that might work with an LSD bad trip often do not work with PCP (Brust, 1993; Jaffe, 1990). The middle phase is marked by continued paranoia and restlessness, but users are usually calmer and in intermittent control of their behavior (Mirin et al., 1991; Weiss & Mirin, 1988). This phase will again usually last 5 days and will gradually blend into the final phase of the PCP psychosis recovery process. This final phase is marked by a gradual recovery over 7 to 14 days; however, in some patients the PCP psychosis may last for months (Mirin et al., 1991; Slaby et al., 1981; Weiss & Mirin, 1988).

Social withdrawal and severe depression are also common following chronic use of PCP (Jaffe, 1990). There would appear to be some minor withdrawal symptoms following prolonged periods of hallucinogen use. Chronic PCP users have reported memory problems, which seem to clear when they stop using the drug (Jaffe, 1990; Newell & Cosgrove, 1988). Recent evidence suggests that chronic PCP users demonstrate the same pattern of neuropsychological deficits found in other forms of chronic drug use, suggesting that PCP might cause chronic brain damage (Grinspoon & Bakalar, 1990; Jentsch et al., 1997; Newell & Cosgrove, 1988).
Research has also revealed that PCP can, at high dosage levels, cause hypertensive episodes (Lange, White, & Robinson, 1992) that in extreme cases might last as long as 3 days after the drug was ingested (Weiss & Millman, 1998). These periods of unusually high blood pressure may then cause the individual to experience a cerebral vascular accident (CVA or stroke) (Brust,


1993; Daghestani & Schnoll, 1994). Although research into this area is lacking, the possibility does exist that this is the mechanism through which PCP is able to bring about brain damage in the user.

The majority of PCP users who die do so because of traumatic injuries suffered while under the drug’s effects (“Consequences of PCP Abuse,” 1994). For example, because of the assaultiveness frequently induced by PCP, many users end up as either the victim or the perpetrator of a homicide (Ashton, 1992). In spite of its extremely deleterious effects, at the start of the 21st century PCP continues to lurk in the shadows and may again become a popular drug of abuse, just as it has been in the past.

Ecstasy: Evolution of a New Drug of Abuse

History of ecstasy. The hallucinogen N,α-dimethyl-1,3-benzodioxole-5-ethanamine (MDMA) was first isolated in 1914.4 It was thought that MDMA would function as an appetite suppressant, but when the initial animal studies did not suggest that the compound was worth developing, researchers quickly lost interest in it. In the mid 1960s some psychiatrists suggested that MDMA might be useful as an aid in psychotherapy (Batki, 2001; Gahlinger, 2004; Rochester & Kirchner, 1999). MDMA also briefly surfaced as a drug of abuse during the 1960s but was eclipsed by LSD, which was more potent and did not cause the nausea or vomiting often experienced by MDMA users. The compound was considered unworthy of classification as an illegal substance when the drug classification system currently in use was set up in the early 1970s.

4Cook (1995) said that MDMA was patented in 1913, and Rochester and Kirchner (1999) suggested that the patent was issued in 1912 in Germany. Schuckit (2000) suggested that MDMA was first synthesized in 1912 and that the patent for this compound was issued in 1914. There obviously is some disagreement over the exact date that the patent for this chemical was issued.

Partially because it was not classified as an illicit substance, illicit drug producers became interested in MDMA in the mid 1970s. The marketing process behind MDMA was impressive: numerous possible product names were discussed before “ecstasy” was selected (Kirsch, 1986; McDowell, 2004), a demand for the “product” was generated, and supply/distribution networks evolved to meet this demand. The original samples of ecstasy included a “package insert” (Kirsch, 1986, p. 81) that “included unverified scientific research and an abundance of 1960s mumbo-jumbo” (p. 81) about how the drug should be used and its purported benefits. The package inserts also warned the user not to mix ecstasy with alcohol or other chemicals, to use it only occasionally, and to take care to ensure a proper “set” in which to use it. Within the span of a few years, MDMA had become a popular drug of abuse in both the United States and Europe.

The Drug Enforcement Administration (DEA) classified MDMA as a controlled substance with no recognized medical use in 1985 (McDowell, 2004). As of that date, “trafficking in MDMA [was made] punishable by fifteen years in prison and a $125,000 fine” (Kirsch, 1986, p. 84). Unfortunately, it has remained a popular drug of abuse, in part because of its reputation as a safe hallucinogen that helps the user feel closer to others. At the start of the 21st century MDMA has not only remained a popular drug of abuse but has actually become the most commonly abused stimulant in dance clubs (Gahlinger, 2004). Although there are reports of MDMA being produced in the United States, the majority is manufactured in Europe and smuggled into this country (United Nations, 2003).

Scope of the problem of MDMA abuse. Globally, 8 million people are thought to have abused MDMA in 2003, a number that is greater than the combined number of heroin and cocaine abusers around the world (United Nations, 2003, 2004). In the United States, MDMA abuse seems to have peaked around the year 2000 and has dropped by about 50% since then (Office of National Drug Control Policy, 2004).
Still, the total worldwide annual production of MDMA is estimated to be about 113 tons, and there is evidence that MDMA abuse continues to increase globally (United Nations, 2003, 2004). Much of the early abuse of MDMA was fueled by the belief that it was harmless (Ramcharan et al., 1998). It found wide acceptance in a subculture devoted to loud music and parties centered around the use of MDMA and dancing, a pattern similar to that of the LSD parties of the 1960s (Randall, 1992). Such parties, known as “raves,” began in Spain, spread to England in



the early 1980s, and from there to the United States (McDowell, 2004; Rochester & Kirchner, 1999). Such parties remain popular, and each weekend an estimated 20 million MDMA tablets are thought to be consumed in the United Kingdom alone (Rogers et al., 2003). These parties have been described as the modern equivalent of the Dionysian religious festivals of ancient Rome (Walton, 2002). MDMA is viewed by many as a “dance making drug” because users feel the urge to dance for extended periods of time (“The Agony of ‘Ecstasy,’” 1994). As one measure of its early popularity, between 10% and 40% of older adolescents and young adults admit to having used MDMA at least once (Schuckit, 2000). Researchers have found that 8.3% of the high school seniors who were surveyed in 2002 admitted to having used MDMA at least once (Johnston et al., 2003a).

Pharmacology of MDMA

The chemical structure of MDMA is so similar to that of the amphetamines that it was classified as a “semisynthetic hallucinogenic amphetamine” by Klein and Kramer (2004, p. 61). The chemical structure of MDMA is also similar to that of the hallucinogens MDA and mescaline (Creighton, Black, & Hyde, 1991; Gahlinger, 2004; Kirsch, 1986; Schuckit, 2000). MDMA is well absorbed from the gastrointestinal tract, and the most common method of MDMA use is oral ingestion (McDowell, 2004). The effects of a dose of MDMA usually begin in about 20 minutes and peak within an hour (Gahlinger, 2004; McDowell, 2004) to an hour and a half (Schwartz & Miller, 1997). Estimates of when peak blood levels are reached following a single dose range from 1–3 hours (Ramcharan et al., 1998) to 2–4 hours, and the half-life of a single dose is estimated to be between 4 and 7 hours (Karch, 2002) and 8 hours or more (Gahlinger, 2004; Klein & Kramer, 2004; Schwartz & Miller, 1997).
MDMA is biotransformed in the liver, and its elimination half-life (see Chapter 3 and the Glossary) is estimated to be approximately 8 hours. One major metabolite of MDMA is a compound that is itself a hallucinogen: MDA. However, one study, which used a single volunteer subject, found that almost
three-fourths of the MDMA ingested was excreted unchanged in the urine within 72 hours of ingestion. Because it is so highly lipid soluble, MDMA is able to cross the blood-brain barrier into the brain itself without significant delay. Within the brain, MDMA functions as an indirect serotonin agonist (McDowell, 2004). It first forces the release of, and then inhibits the reabsorption of, serotonin, with a smaller effect on norepinephrine and dopamine (Gahlinger, 2004; Parrott, Morinan, Moss, & Scholey, 2004). Scientists think that MDMA’s main effects involve the serotonin neurotransmitter system, but there is very little objective research into its effects on users, and virtually all that is known about the drug’s effects is based on studies done on illicit drug abusers.

Patterns of MDMA abuse. MDMA users tend to have drug-use patterns that are different from those seen with other chemicals of abuse. First, the typical MDMA abuser tends to be a polydrug user (Karch, 2002; McDowell, 2004; Schwartz & Miller, 1997). When MDMA is used by itself, the typical abuser will ingest 1–2 tablets of the drug, each of which contains 100–140 mg of the drug, and then abstain from further MDMA use for at least a week (Gouzoulis-Mayfrank et al., 2000). This rather unusual drug use pattern reflects the pharmacology of MDMA in the brain. Because MDMA blocks the reuptake of serotonin, subsequent doses tend to produce not the euphoria that is the goal of abusers but a range of side effects mediated by other neurotransmitter systems. Further, taking a double dose of the drug does not increase the desired effects of MDMA but makes the individual more likely to experience unpleasant side effects (Bravo, 2001; Peroutka, 1989) and increases the chances of MDMA-induced brain damage (McGuire & Fahy, 1991). The typical dosage levels ingested by abusers seem to be between 60 and 250 mg (Gouzoulis-Mayfrank et al., 2000).

Currently, at least six different methods of making MDMA are known to exist, and specific instructions on how to make the drug are available on the Internet (Rochester & Kirchner, 1999). Specialized equipment and training in organic chemistry are required to avoid the danger of contaminating the MDMA with toxins, but beyond these requirements the drug is easily synthesized. In past decades MDMA was usually produced in Europe and then shipped to the United States, but it is increasingly being made in this country.

Subjective and objective effects of MDMA abuse. Virtually all that is known about MDMA’s effects is based on observations of illicit drug users, as there has been little objective research into the subjective, pharmacological, or toxicological effects of this drug (Bravo, 2001; Karch, 2002). The subjective effects of MDMA are, to a large degree, dependent on the “set” and the individual’s expectations for the drug (Bravo, 2001). At dosage levels of 75–100 mg, users report experiencing a sense of euphoria, a sense of closeness to others, and improved self-esteem (Beebe & Walley, 1991; Bravo, 2001). At this dosage level, the user might also possibly experience mild visual hallucinations (Evanko, 1991). After the period of acute drug intoxication, some users will experience a degree of confusion, anxiety, headache, feelings of derealization and/or depersonalization, as well as depression and paranoia during or following their use of MDMA (Bravo, 2001; Buia, Gulton, Park, Shannon, & Thompson, 2000; Cohen, 1998). Many of these feelings may persist for several hours to several days following the last use of the drug.

Some of the subjective effects of MDMA include “tachycardia, an occasional ‘wired’ feeling, jaw clenching, nystagmus, a nervous desire to be in motion, transient anorexia, panic attacks, nausea and vomiting, ataxia, urinary urgency, . . . insomnia, tremors, inhibition of ejaculation, and rarely, transient hallucinations” (Climko, Roehrich, Sweeney, & Al-Razi, 1987, p. 365). The user’s tendency to clench the teeth while under the effects of MDMA is also known as bruxism (grinding of the teeth) and has been linked to excessive wear on the teeth (McDowell, 2004; Redfearn, Agrawl, & Mair, 1998). Many abusers will attempt to control this effect by sucking on baby pacifiers or candy after ingesting the drug (Gahlinger, 2004; Klein & Kramer, 2004).
Other effects of a "typical" dose of MDMA include an increase in heart rate, muscle tremor, tightness in the jaw muscles, nausea, insomnia, headache, difficulty concentrating, vertigo, dry mouth, a decrease in appetite, ataxia, and sweating (Bravo, 2001). People who are sensitive to the effects of MDMA might experience numbness and tingling in the extremities, vomiting, increased sensitivity to cold, visual hallucinations, crying, blurred vision, nystagmus, and the sensation that the floor is shaking.

MDMA has been implicated as a cause of decreased sexual desire and, in men, inhibition of the ejaculatory reflex and erectile problems (Finger, Lund, & Slagel, 1997; McDowell, 2004). However, males are often sexually aroused when the effects of MDMA begin to wear off (Buia et al., 2000).

Complications of MDMA Use

There is a significant overlap between the therapeutic and toxic levels of MDMA (Karch, 2002). Research using animals suggests that the LD50 following a single intravenous dose of MDMA is approximately 8–23 mg/kg in dogs and 17–28 mg/kg in rhesus monkeys (Karch, 1996). In the early 1950s, the United States Army conducted a series of secret research projects to explore MDMA's possible military applications, and the data from these studies suggest that just 14 of the more potent MDMA pills being produced in illicit laboratories might prove fatal to the user if ingested together (Buia et al., 2000).

MDMA-related cardiac problems. There is growing evidence that MDMA has a negative effect on cardiac function, a discovery that has profound implications for people who abuse this compound. Although the mechanism by which MDMA is able to cause death is not known at this time, it is known that the majority of deaths in MDMA abusers are the result of cardiac arrhythmias (Beebe & Walley, 1991; Schwartz & Miller, 1997). It is thought that MDMA, like its chemical cousins the amphetamines, shares the ability to alter cardiac function (Gahlinger, 2004; Karch, 2002; Klein & Kramer, 2004). Further, there is experimental evidence that MDMA functions as a cardiotoxin,6 causing inflammation of the heart muscle (Badon et al., 2002). Some chronic abusers have been found to suffer from cardiomyopathy (Klein & Kramer, 2004), and others have been found to have altered cardiac function when admitted to the hospital.
MDMA is able to cause tachycardia (Gahlinger, 2004), and one study of the records of 48 patients admitted to a hospital accident and trauma center following MDMA use found that two-thirds had heart rates above 100 beats per minute (Williams, Dratcu, Taylor, Roberts, & Oyefeso, 1998). It was recommended that MDMA overdoses be treated with the same protocols used to treat amphetamine overdoses, with special emphasis placed on assessing and protecting cardiac function (Gahlinger, 2004; Rochester & Kirchner, 1999). There is also evidence suggesting that the chronic use of MDMA might result in damage to the valves of the heart (Setola et al., 2003). The authors examined the impact of MDMA on tissue samples in laboratories and found that MDMA caused many of the same changes to the cardiac tissue samples that were found when tissue samples were exposed to the now-banned weight-loss medication fenfluramine.7 Given the widespread popularity of MDMA, these research findings hint at a possible future epidemic of MDMA-induced cardiac problems in chronic abusers.

MDMA-related neurological problems. There is preliminary evidence that, for reasons not understood, women might be more vulnerable to MDMA-induced brain damage than men (Greenfield, 2003). Researchers have found that MDMA impacts the system of blood vessels that serves the brain, as evidenced by reports of intracranial hemorrhage in some abusers (Sternbach & Varon, 1992) and nonhemorrhagic cerebrovascular accidents.8 There is one case report of a young woman who developed a condition known as cerebral venous sinus thrombosis (a blood clot) after ingesting MDMA at a rave party (Rothwell & Grant, 1993). The authors speculated that dehydration may have been a factor in the development of the cerebral venous sinus thrombosis in this case and warned of the need to maintain adequate fluid intake while exercising under the influence of MDMA. Unfortunately, animal research has demonstrated that MDMA causes the body to secrete abnormal amounts of the antidiuretic hormone (ADH) (Gahlinger, 2004; Henry & Rella, 2001). This hormone promotes water reabsorption by the kidneys, reducing urine production and forcing the water back into the body.

6See Glossary.
If the user ingests a great deal of water in an attempt to avoid dehydration, he or she might be vulnerable to developing abnormally low blood sodium levels (hyponatremia), which could cause or contribute to arrhythmias, seizures, or other problems (Henry & Rella, 2001; Parrott et al., 2004). Thus, the problem of how to deal with MDMA-related dehydration is far more complex than simply having the user ingest fluids.

Preliminary evidence suggests that MDMA might induce, or at least exacerbate, memory problems (Rogers et al., 2003). The authors found that the regular MDMA abusers in their sample achieved scores on memory function tests that were more than 20% lower than those of their control group. The methodology utilized by this study (volunteers solicited over the Internet) was unique, and it is not known whether these results will generalize to the population of MDMA abusers, but the initial findings do suggest that MDMA might interfere with normal memory function long after the drug's desired effects have ended. MDMA has also been implicated as the cause of the serotonin syndrome9 (Henry & Rella, 2001; Karch, 2002; Sternbach, 2003). Because temperature dysregulation is one effect of the serotonin syndrome, this process might explain why some abusers develop severe hyperthermia following MDMA ingestion (Klein & Kramer, 2004). MDMA has also been implicated as the cause of increased seizure activity in users, for reasons that are not well understood (Karch, 2002). On occasion, these MDMA-related seizures have been fatal (Henry, 1996; Henry & Rella, 2001). The available evidence suggests that MDMA is a neurotoxin10 in both humans and animals. This might explain the observed relationship between MDMA abuse and Parkinson's disease (Gahlinger, 2004). Animal studies suggest that MDMA functions as a neurotoxin for both dopaminergic neurons and serotonergic neurons.

7This medication was called "Fen-Phen" and was withdrawn from the market after reports suggested that patients who were taking it developed potentially life-threatening damage to their heart valves. 8See Glossary.
Earlier research studies had discovered evidence that MDMA functioned as a selective neurotoxin in humans that destroyed serotonergic neurons alone (Batki, 2001; Gouzoulis-Mayfrank et al., 2000; Marston, Reid, Lawrence, Olverman, & Butcher, 1999; McCann, Szabo, Scheffel, Dannals, & Ricaurte, 1998; Morgan, 1999; Reneman, Booij, Schmand, van den Brink, & Gunning, 2000; Ritz, 1999; Vik, Cellucci, Jarchow, & Hedt, 2004; Wareing, Risk, & Murphy, 2000). MDMA-induced brain damage has been found to be dose-related, with higher levels of impairment in individuals who had ingested greater amounts of MDMA. The frequency of use is not thought to be correlated with the degree of brain damage (Croft, Klugman, Baldeweg, & Gruzelier, 2001). Researchers disagree as to whether this MDMA-induced brain damage is permanent (Walton, 2002) or whether some limited degree of recovery is possible (Buchert et al., 2003; Buchert et al., 2004; Gouzoulis-Mayfrank et al., 2000; Reneman et al., 2001; Ritz, 1999). Although at one point it was suspected that the neurotoxic effects of MDMA were possibly due to contaminants in the MDMA rather than the drug itself (Rochester & Kirchner, 1999), positron emission tomographic (PET) studies have uncovered significant evidence suggesting global, dose-related decreases in the brain 5-HT transporter, a structural element of neurons that utilize serotonin (Buchert et al., 2004; McCann et al., 1998). Even limited MDMA use has been associated with a 35% reduction in 5-HT metabolism (an indirect measure of serotonin activity in the brain) for men and almost a 50% reduction in 5-HT metabolism in women (Hartman, 1995), findings that are highly suggestive of organic brain damage at a cellular level. However, there is preliminary evidence to suggest that a single dose of the selective serotonin reuptake inhibitor Prozac (fluoxetine) might protect neurons from MDMA-induced damage if it is ingested within 24 hours of the MDMA (Walton, 2002).

MDMA-related emotional problems. The MDMA user might experience flashbacks very similar to those seen with LSD use (Creighton et al., 1991). These MDMA flashbacks usually develop in the first few days following the use of the drug (Cook, 1995). In another interesting drug effect seen at normal dosage levels, the user will occasionally "relive" past memories.

9See Glossary. 10See Glossary.
The memories that are experienced anew are often ones that were suppressed because of the pain associated with the experience (Hayner & McKinney, 1986). Thus, users might find themselves reliving an experience they did not want to remember. This effect, which many psychotherapists thought might prove of benefit in the therapeutic relationship, may seem so frightening to the user as to be "detrimental to the individual's mental health" (p. 343). Long-time use has contributed to episodes of violence and also to suicide ("The Agony of 'Ecstasy,' " 1994). MDMA abuse might also result in such residual effects as anxiety attacks, persistent insomnia, irritability, rage reactions, and a drug-induced psychosis (Gahlinger, 2004; Hayner & McKinney, 1986; Karch, 2002; McGuire & Fahy, 1991). The exact mechanism by which MDMA might cause a paranoid psychosis is not clear at this time (Karch, 2002). It is theorized that MDMA is able to activate a psychotic reaction in a person who has a biological predisposition for this disorder (McGuire & Fahy, 1991). As the effects wane, users typically experience a depressive reaction that might be quite severe and last 48 hours or more (Gahlinger, 2004).

MDMA-related gastrointestinal problems. In Europe, where MDMA abuse is common, there have been a number of reports of liver toxicity and hepatitis in MDMA abusers. The exact relationship between MDMA abuse and the development of liver problems is not clear at this time, and it is possible that these were idiosyncratic reactions in isolated individuals (Karch, 2002). Another possibility is that the liver problems were induced by one or more contaminants in the MDMA dose consumed by the user (Cook, 1995; Henry, Jeffreys, & Dawling, 1992; Henry & Rella, 2001; Jones, Jarvie, McDermid, & Proudfoot, 1994).

Other MDMA-related physical problems. MDMA abuse has been identified as a cause of rhabdomyolysis,11 which appears to be a consequence of the motor activity induced by or associated with the abuse of this compound (Gahlinger, 2004; Karch, 2002; Klein & Kramer, 2004; Sauret, Marinides, & Wang, 2002).

MDMA overdose.
Symptoms of an MDMA overdose include restlessness, agitation, sweating, tachycardia, hypertension, hypotension, heart palpitations, renal failure, muscle rigidity, and visual hallucinations (Jaffe, 2000a; Williams et al., 1998). There have been rare reports of fatalities as a result of MDMA abuse. Many experienced MDMA users eventually have one or more of the complications noted above, suggesting that the possibility for an adverse reaction continues throughout the period of MDMA use (Williams et al., 1998). While fatalities involving MDMA alone are rare, the potential danger for abusers is increased if multiple agents are ingested (McDowell, 2004). The Economist ("Better than Well," 1996) estimated that MDMA causes one death for each 3 million doses. Although the use of β-blocking agents (beta blockers, or beta-adrenergic blockers) was recommended early in the 1990s (Ames, Wirshing, & Friedman, 1993), the team of Rochester and Kirchner (1999) advised against the use of these agents because they might make control of blood pressure more difficult, since the α-adrenergic system would remain unaffected.

Drug interactions involving MDMA. Little research has been done on the possible interactions between illicit drugs, such as MDMA, and pharmaceuticals (Concar, 1997). There have been case reports of interactions between the anti-HIV agent ritonavir and MDMA (Concar, 1997; "Ecstasy-using HIV Patients," 1997; Harrington, Woodward, Hooton, & Horn, 1999). Each agent affects the serotonin level in the blood, and the combination of these two chemicals results in a threefold higher level of MDMA than normal; some fatalities have been reported in users who have mixed these compounds (Concar, 1997).

11See Glossary.

Chapter Fifteen

Summary

Weil (1986) suggested that people initially use chemicals to alter the normal state of consciousness. Hallucinogen use in this country, at least in the last generation, has followed a series of waves, as first one drug and then another becomes the current drug of choice for achieving this altered state of consciousness. In the sixties, LSD was the major hallucinogen; in the seventies and early eighties, it was PCP. Currently, MDMA seems to be gaining in popularity as the hallucinogen of choice, although research suggests that MDMA may cause permanent brain damage, especially to those portions of the brain that utilize serotonin as a primary neurotransmitter. If we accept Weil's (1986) hypothesis as correct, it is logical to expect that other hallucinogens will emerge over the years as people look for a more effective way to alter their state of consciousness. One might expect that these drugs in turn will slowly fade as they are replaced by newer hallucinogens. Just as cocaine faded from the drug scene in the 1930s and was replaced for a period of time by the amphetamines, so one might expect wave after wave of hallucinogen abuse as new drugs become available. Thus, chemical dependency counselors will have to maintain a working knowledge of an ever-growing range of hallucinogens in the years to come.


Abuse of and Addiction to the Inhalants and Aerosols

Introduction

The inhalants are unlike the other chemicals of abuse. They are toxic substances that include various cleaning agents, herbicides, pesticides, gasoline, kerosene, certain forms of glue, lacquer thinner, and chemicals used in felt-tipped pens. These agents are not primarily intended to function as recreational substances. When inhaled, many of the chemicals in these compounds will alter the manner in which the brain functions. At low doses, inhalants may cause the user to experience a sense of euphoria. It is often possible for adolescents and even children to purchase many agents that have the potential to be abused by inhalation. For these reasons, children, adolescents, or even the rare adult will occasionally abuse chemical fumes. Because these chemicals are inhaled, they are often called inhalants, or volatile substances (Esmail, Meyer, Pottier, & Wright, 1993). For the purpose of this text, the term inhalants will be used. The inhalation of volatile substances has become a major concern in the European Union, where 1 in every 7 adolescents in the 15- to 16-year age group abuses inhalants ("Solvent Abuse Puts Teens at Risk," 2003). Because they are so easily accessible to children and adolescents, inhalants continue to be a major substance of abuse among adolescents in the United States as well. In this chapter, the problem of inhalant abuse will be discussed.

History of Inhalant Abuse

The first episodes of inhalant abuse in modern history involve anesthetics, dating back to the 19th century. Indeed, the earliest documented use of the anesthetic gases appears to have been for recreation, and historical records from the 1800s document the use of such agents as nitrous oxide for parties. The use of gasoline fumes to get high is thought to have started prior to World War II (Morton, 1987), with the first documentation of this practice being found in the early 1950s (Blum, 1984). By the mid-1950s and early 1960s, the popular press was reporting on "glue sniffing" (Morton, 1987; Westermeyer, 1987), a practice in which the individual uses model airplane glue as an inhalant. The active agent of model glue in the 1950s was often toluene. Nobody knows how the practice of glue sniffing first started, but there is evidence that it began in California, when teenagers accidentally discovered the intoxicating powers of toluene-containing model glue (Berger & Dunn, 1982). The first known reference to this practice appeared in 1959, in the magazine section of a Denver newspaper (Brecher, 1972). Local newspapers soon began to carry stories on the dangers of inhalant abuse, in the process explaining just how to use airplane glue to become intoxicated and what effects to expect. Within a short time, a "Nationwide Drug Menace" (Brecher, 1972, p. 321) emerged in the United States. Currently, inhalant abuse is thought to be a worldwide problem (Brust, 1993) and is especially common in Japan and Europe (Karch, 2002).

Brecher (1972) suggested that the inhalant abuse "problem" was essentially manufactured through distorted media reports. The author said that in response to media reports of deaths due to glue sniffing, one newspaper tracked down several stories and found only nine deaths that could be attributed to this practice. Of this number, six deaths were due to asphyxiation: Each victim had used an airtight plastic bag and had suffocated. In another case, there was evidence that asphyxiation was also the cause of death, and in the eighth case there was no evidence that the victim had


Chapter Sixteen

been using inhalants. Finally, in the ninth case, the individual had been using gasoline as an inhalant but was reported to be in poor health prior to this incident. Brecher (1972) noted that “among tens of thousands of glue-sniffers prior to 1964, no death due unequivocally to glue vapor had as yet been reported. The lifesaving advice children needed was not to sniff glue with their heads in plastic bags” (p. 331). Since these words were written, research has shown that the use of inhalants may introduce potentially toxic chemicals into the user’s body (Brunswick, 1989; Jaffe, 1989). Some of the consequences of inhalant abuse include cardiac arrhythmias, anoxia, damage to the visual perceptual system through increased intraocular pressure, and neuropathies (Greydanus & Patel, 2003). Thus, while the media might have played a role in the development of this crisis back in the late 1950s and early 1960s, by the 1990s it had become a legitimate health concern.

Pharmacology of the Inhalants

Many chemical agents reach the brain more rapidly and efficiently when they are inhaled rather than ingested by mouth or injected. When a chemical is inhaled, it is able to enter the bloodstream without its chemical structure being altered in any way by the liver. Once in the blood, one factor that influences how fast that chemical might reach the brain is whether the molecules are able to form chemical bonds with the lipids in the blood. As a general rule, inhalants are quite lipid soluble (Crowley & Sakai, 2004; Henretig, 1996). Further, inhalants share the characteristic of being able to rapidly cross the blood-brain barrier to reach the brain itself in an extremely short period of time, usually within seconds (Blum, 1984; Crowley & Sakai, 2004; Hartman, 1995; Heath, 1994; Watson, 1984).

Cone (1993) grouped all of the inhalants into two broad classifications: (a) anesthetic gases and (b) volatile hydrocarbons. In contrast to this classification scheme, Monroe (1995) suggested three classes of chemicals that might be inhaled:

1. The solvents, such as glues, paint, paint thinner, gasoline, kerosene, lighter fluid, fingernail polish, fingernail polish remover, correction fluids for use in the office, and felt-tip markers
2. Various gases, such as butane in cigarette lighters, propane gas, the propellant in whipping cream cans, and cooking sprays
3. The nitrites, such as butyl nitrite and amyl nitrite

However, Espeland (1997)1 suggested four classes of inhalants:

1. Volatile organic solvents, such as those found in paint and fuel2
2. Aerosols, such as hair sprays, spray paints, and deodorants
3. Volatile nitrites (such as amyl nitrite or its close chemical cousin, butyl nitrite)
4. General anesthetic agents, such as nitrous oxide

As these different classification systems suggest, there are many chemicals, with a multitude of uses, that produce fumes; when inhaled, many of these will alter the user's sense of reality. Of the different classes of inhalants commonly abused, children and adolescents will most often abuse the first two classes. Children or adolescents have limited access to the third category of inhalants, and extremely limited access to general anesthetics, the final class of inhalants.

Because so many different compounds might be abused, it is virtually impossible to speak of a single "pharmacology" of inhalants. Many of the more common inhalants are biotransformed by the liver before being eliminated from the circulation by the kidneys, but some are exhaled without extensive biotransformation taking place (Brooks, Leung, & Shannon, 1996; Crowley & Sakai, 2004). Further, many of the compounds that are abused were never intended for introduction into the human body but were designed as industrial solvents or for household use. Even research into the effects of a specific compound on the human body has only rarely involved the concentrations of these agents at the levels commonly used by inhalant abusers (Blum, 1984; Fornazzari, 1988; Morton, 1987). For example, the maximum permitted exposure to toluene fumes in the workplace is 50–100 parts per million (ppm) (Crowley, 2000). But when toluene is used as an inhalant, users may willingly expose themselves to levels 100 times as high as the maximum permitted industrial exposure level.

Once in the brain, the inhalants are thought to alter the normal function of the membranes of the neurons. There is preliminary evidence that the inhalants affect the gamma-amino-butyric acid (GABA) and/or N-methyl-D-aspartate (NMDA) neurotransmitter systems (Crowley & Sakai, 2004). However, the effect of a specific inhalant on neuron function is dependent on the exact compounds being abused. There is no standard formula by which to estimate the biological half-life of an inhalant, since so many different chemicals are abused. It should be noted, however, that the half-life of most solvents tends to be longer in obese users than in thin ones (Hartman, 1995). As a general rule, the half-life of the various compounds commonly abused through inhalation might range from hours to days, depending on the exact chemicals being abused (Brooks et al., 1996).

Either directly or indirectly, the compounds that are inhaled for recreational purposes are all toxic to the human body to one degree or another (Blum, 1984; Fornazzari, 1988; Morton, 1987). But most of what is known about the effects of these chemicals is based on their short-term impact on the individual. There is very little research into the effects of chronic exposure to many of the compounds abused by inhalant users. For these reasons, it is difficult to talk about the pharmacology of the inhalants.3 Ultimately, the material devoted to this topic would be many tens of thousands of pages long, as there are literally thousands of compounds that might be abused by inhalation. But behavioral observations of animals that have been exposed to inhalants suggest that many inhalants act like alcohol or barbiturates on the brain. Indeed, alcohol and the benzodiazepines have been found to potentiate the effects of many inhalants, such as toluene. Ultimately, however, the pharmacology of a given inhalant will depend on the various chemicals found in the specific compound being abused. Such compounds often contain dozens or scores of different chemicals.

1Children and adolescents have only limited access to volatile nitrites, although butyl nitrite is sometimes sold without a prescription in some states. Except in rare cases, the abuse of surgical anesthetics is usually limited to a small percentage of health care workers, because access to anesthetic gases is carefully controlled.
2Technically, alcohol might be classified as a solvent. However, since the most common method of alcohol use/abuse is through oral ingestion, ethyl alcohol will not be discussed in this chapter.
3Hartman (1995) provides an excellent technical summary of the neuropsychological effects of chronic exposure to some of the more common industrial solvents.

Scope of the Problem

Although the mass media most often focus on inhalant abuse in the United States, in reality it is a worldwide problem (Spiller & Krenzelok, 1997). Inhalant abuse is growing in popularity, increasing by 44% among sixth graders in recent years ("Huffing Can Kill Your Child," 2004). In the United States, the group most likely to abuse inhalants is children and adolescents, especially boys in their early teens who live in poor or rural areas where more expensive drugs of abuse are not easily available (Drummer & Odell, 2001; Henretig, 1996; Jaffe, 1989; Spiller & Krenzelok, 1997). Just under 16% of eighth graders surveyed in 2003 admitted to having abused an inhalant at least once, a percentage lower than the 21.6% of eighth graders who admitted to having abused an inhalant in 1996 (Anderson & Loomis, 2003; Johnston, O'Malley, & Bachman, 2003a). There is mixed evidence that inhalants are becoming increasingly popular with younger teens (Anderson & Loomis, 2003; Greydanus & Patel, 2003). Behaviorally, most adolescents who abuse inhalants will do so only a few times and then stop without going on to develop other drug-use problems (Crowley, 2000). The mean age for first-time inhalant abuse is about 13 years (Anderson & Loomis, 2003), and the mean age of inhalant abusers is about 16.6 years (with a standard deviation of 7.3 years) (Spiller & Krenzelok, 1997). Inhalant abuse is most popular among 11- to 13-year-olds, after which it becomes less and less common (Brooks et al., 1996). However, there are reports of children as young as 7 or 8 years of age abusing inhalants (Henretig, 1996). Physical dependence on inhalants is quite rare, with only about 4% of those who abuse inhalants becoming dependent on them (Crowley & Sakai, 2004). But it is believed that for children and adolescents, inhalants are the most commonly abused substance after alcohol and tobacco (Wilson-Tucker & Dash, 1995).

The practice of abusing inhalants appears to involve boys more often than girls, by a ratio of about 3:1 (Crowley, 2000). Inhalant users are usually between 10 and 15 years of age (Miller & Gold, 1991b). In England, 3% to 10% of the adolescents asked admitted to the use of inhalants at least once, and about 1% were thought to be current users (Esmail et al., 1993). The most commonly abused compounds appear to be spray paint and gasoline, which collectively accounted for 61% of the compounds abused by subjects in a study by Spiller and Krenzelok (1997). Unfortunately, for a minority of those who abuse them, the inhalants appear to function as a "gateway" chemical, setting the stage for further drug use in later years. Approximately one-third of the children/adolescents who abuse inhalants go on to abuse one or more of the traditional drugs of abuse within 4 years (Brunswick, 1989). Crowley (2000) reported, for example, that people who admitted to the use of inhalants were 45 times as likely as the general population to have injected drugs, whereas individuals who admitted to the use of both inhalants and marijuana were 89 times as likely to have injected drugs.

Why Are Inhalants So Popular?

The inhalants are utilized by children/adolescents for several reasons. First, these chemicals have a rapid onset of action, usually a few seconds. Second, inhalant users report pleasurable effects, including a sense of euphoria, when they use these chemicals. Third, and perhaps most important, the inhalants are relatively inexpensive and are easily available to children or adolescents (Cohen, 1977). Virtually all of the commonly used inhalants may be easily purchased, without legal restrictions being placed on their sale to teenagers. An additional advantage for the user is that the inhalants are usually available in small, easily hidden packages. Unfortunately, as we will discuss in the next section, many of the inhalants are capable of causing harm and even death to the user. The inhalant abuser thus runs a serious risk whenever he or she begins to "huff."4

Method of Administration

Inhalants can be abused in a number of ways depending on the specific chemical involved. Some compounds may be inhaled directly from the container, a practice called "sniffing" or "snorting" (Anderson & Loomis, 2003). Other compounds, such as glue and adhesives, may be poured into a plastic bag, which is then placed over the mouth and nose so that the individual can inhale the fumes, a practice called "bagging" (Anderson & Loomis, 2003; Esmail et al., 1993; Nelson, 2000). Sometimes, the compound is poured into a rag that is then placed over the individual's mouth and nose, a practice called "huffing" (Anderson & Loomis, 2003; Nelson, 2000). Fumes from aerosol cans may also be inhaled directly or sprayed into the mouth, according to Esmail et al. (1993). Finally, some users have attempted to boil the substance to be abused, so that they might inhale the fumes (Nelson, 2000). Obviously, if the substance being boiled is flammable, there is a significant risk of fire if the compound should ignite.

Subjective Effects of Inhalants

The initial effects of the fumes on the individual might include a feeling of hazy euphoria, somewhat like the feeling of intoxication caused by alcohol, although nausea and vomiting may also occur (Anderson & Loomis, 2003; Crowley, 2000; Henretig, 1996; McHugh, 1987). Inhalant-induced feelings of euphoria usually last less than 30 minutes (McHugh, 1987). Other reported effects include a floating sensation, decreased inhibitions, amnesia, slurred speech, excitement, double vision, ringing in the ears, and hallucinations (Blum, 1984; Kaminski, 1992; Morton, 1987; Schuckit, 2000). Occasionally, the individual will feel as if he or she is omnipotent, and episodes of violence have been reported (Morton, 1987). These effects are usually short-lived, lasting around 45 minutes (Mirin et al., 1991; Schuckit, 2000). After the initial euphoria, depression of the central nervous system (CNS) develops. The stages of inhalant intoxication are summarized in Figure 16.1. The inhalant-induced euphoria is not achieved without some aftereffects. Some inhalant abusers experience an inhalant-induced hangover, which usually will clear "in minutes to a few hours" (Westermeyer, 1987, p. 903). Abusers also report a residual sense of drowsiness and/or stupor, which will last for several hours after the last use of inhalants (Kaplan, Sadock, & Grebb, 1994; Miller & Gold, 1991b). Further, there have been reports of an inhalant-induced headache lasting for several days after the last use (Heath, 1994).

4See Glossary.

Abuse of and Addiction to the Inhalants and Aerosols

FIGURE 16.1 The stages of inhalant abuse.
Stage 1: Sense of euphoria, visual and/or auditory hallucinations, and excitement
Stage 2: Confusion, disorientation, loss of self-control, blurred vision, tinnitus, mental dullness
Stage 3: Sleepiness, ataxia, diminished reflexes, nystagmus
Stage 4: Seizures, EEG changes noted on examination, paranoia, bizarre behavior, tinnitus; possible death of inhalant user

A partial list of the possible consequences of inhalant abuse includes the following (Anderson & Loomis, 2003; Brunswick, 1989; Crowley & Sakai, 2004; Hartman, 1995; Henretig, 1996; Karch, 1996; Monroe, 1995; Morton, 1987; Weaver, Jarvis, & Schnoll, 1999):
Liver damage
Cardiac arrhythmias5
Kidney damage or failure, which may become permanent
Transient changes in lung function
Anoxia and/or respiratory depression, possibly to the point of respiratory arrest
Reduction in blood cell production, possibly to the point of aplastic anemia
Possible permanent organic brain damage (including dementia and inhalant-induced organic psychosis)
Permanent muscle damage secondary to the development of rhabdomyolysis6
Vomiting, with the possibility of the user aspirating some of the material being vomited, resulting in death

Complications From Inhalant Abuse
When the practice of abusing the inhalants first surfaced, most health care professionals did not think it could cause many serious complications. However, in the last quarter of the 20th century, researchers uncovered evidence that inhalant abuse might cause a wide range of physical problems. Depending on the concentration and the compound being abused, even a single episode of abuse might result in the user's developing symptoms of solvent toxicity (Hartman, 1995).

In addition to the effects listed above, inhalant abuse might also cause damage to the bone marrow, sinusitis (irritation of the sinus membranes), erosion of the nasal mucosal tissues, and laryngitis (Crowley & Sakai, 2004; Henretig, 1996; Westermeyer, 1987). The individual might develop a cough or wheezing, and inhalant abuse can exacerbate asthma in individuals prone to this disorder (Anderson & Loomis, 2003). There also may be chemical burns on the skin (Anderson & Loomis, 2003). The impact of the inhalants on the central nervous system (CNS) is perhaps the most profound, if only because inhalant abusers are usually so young. Many of the inhalants have been shown to cause damage to the central nervous system, resulting in such problems as cerebellar ataxia,7 tremor, peripheral neuropathies, memory problems, coma, optic neuropathy, and deafness (Anderson & Loomis, 2003; Brooks et al., 1996; Fornazzari, 1988; Maas, Ashe, Spiegel, Zee, & Leigh, 1991). One study found that 44% of chronic inhalant abusers had abnormal magnetic resonance imaging (MRI) results, compared with just 25% of chronic cocaine abusers (Mathias, 2002).

Inhalant abuse has been classified as "one of the leading causes of death in those under 18" (Esmail et al., 1993, p. 359). Death might occur the first time the individual uses one of these compounds or the 200th time ("Huffing Can Kill Your Child," 2004). Each year, between 100 and 1,000 deaths in the United States are directly attributable to inhalant abuse (Hartman, 1995; Wisneiwski, 1994).

Depending on the compound being used, there is a very real danger that the individual using an inhalant might be exposed to toxic levels of various heavy metals such as copper or lead (Crowley, 2000). For example, gasoline sniffing by children is a major cause of lead poisoning (Henretig, 1996; Monroe, 1995; Parras, Patier, & Ezpeleta, 1988). Exposure to lead is a serious condition that may have long-term consequences for the child's physical and emotional growth.

Further, although the standard neurological examination is often unable to detect signs of solvent-induced organic brain damage until it is quite advanced, sensitive neuropsychological tests often find signs of significant neurological dysfunction in workers who are exposed to solvent fumes on a regular basis (Hartman, 1995). Toluene is found in many forms of glue and is the solvent most commonly abused (Hartman, 1995). Researchers have found that chronic toluene exposure can result in intellectual impairment (Crowley & Sakai, 2004; Maas et al., 1991; Rosenberg, 1989).

5 See Glossary.
6 See Glossary.
7 A loss of coordination caused by physical damage to a region of the brain that is involved in motor coordination.
Finally, researchers have identified what appears to be a withdrawal syndrome that develops following extended periods of inhalant abuse and is very similar to the alcohol-induced "delirium tremens" (DTs) (Blum, 1984; Mirin et al., 1991). The exact withdrawal syndrome that develops after episodes of inhalant abuse depends on the specific chemicals being abused, the duration of inhalant abuse, and the dosage levels (Miller & Gold, 1991b). Some of the symptoms of inhalant withdrawal include muscle tremors, irritability, anxiety, insomnia,

Chapter Sixteen

muscle cramps, hallucinations, sweating, nausea, and possible seizures (Crowley, 2000).

Inhalant abuse and suicide. Espeland (1997) suggested a disturbing relationship between inhalant abuse and adolescent suicide. Some suicidal adolescents put an inhalant into a plastic bag and then place the bag over their heads. The plastic bag is then closed about the head/neck area, allowing the inhalant to cause the individual to lose consciousness. The person will quickly suffocate as the oxygen in the bag is used up and, unless found, will die. In such cases, it is quite difficult to determine whether the individual intended to end his or her own life or if the death was an unintended side effect of the inhalant-abuse method.

Anesthetic Misuse
Berger and Dunn (1982) reported that nitrous oxide and ether, the first two anesthetic gases to be used, were first introduced as recreational drugs rather than as surgical anesthetics. Indeed, these gases were routinely utilized as intoxicants for quite some time before they were adopted by medicine. Horace Wells, who introduced medicine to nitrous oxide, noted the pain-killing properties of this gas when he observed a person under its influence trip and gash his leg without any apparent pain (Brecher, 1972). As medical historians know, the first planned demonstration of nitrous oxide as an anesthetic was something less than a success. Because nitrous oxide has a duration of effect of about 2 minutes following a single dose and thus must be continuously administered, the patient returned to consciousness in the middle of the operation and started to scream in pain. In spite of this rather frightening beginning, however, physicians gradually learned how to use nitrous oxide properly to bring about surgical anesthesia, and it is now an important anesthetic agent (Brecher, 1972).

Julien (1992) noted that the pharmacological effects observed with the general anesthetics are the same as those observed with the barbiturates: a dose-related range of effects, from an initial period of sedation and relief from anxiety through sleep and analgesia. At extremely high dosage levels, the anesthetic gases can cause death.



Below is a discussion of one of the most commonly abused anesthetic gases: nitrous oxide.

Nitrous oxide. This gas presents a special danger, as precautions must be observed to maintain a proper oxygen supply to the individual's brain. Room air alone will not provide sufficient oxygen to the brain when nitrous oxide is used (Julien, 1992), and oxygen must be supplied under pressure to avoid the danger of hypoxia (a decreased oxygen level in the blood that can result in permanent brain damage if not corrected immediately). In surgery, the anesthesiologist takes special care to ensure that the patient has an adequate oxygen supply. However, few nitrous oxide abusers have access to supplemental oxygen sources, and thus they run the risk of serious injury, or even death, when they use this compound. It is possible to achieve a state of hypoxia from virtually any of the inhalants, including nitrous oxide (McHugh, 1987).

In spite of this danger, nitrous oxide is a popular drug of abuse in some circles (Schwartz, 1989). Nitrous oxide abusers report that the gas is able to bring about a feeling of euphoria, giddiness, hallucinations, and a loss of inhibitions (Lingeman, 1974). Dental students, dentists, medical school students, and anesthesiologists, all of whom have access to surgical anesthetics through their professions, will occasionally abuse agents such as nitrous oxide as well as ether, chloroform, trichloroethylene, and halothane. Also, children and adolescents will occasionally abuse the nitrous oxide used as a propellant in certain commercial products by finding ways to release the gas from the container. In rare cases, abusers might even make their own nitrous oxide, risking possible death from impurities in the compound they produce (Brooks et al., 1996).

The volatile anesthetics are not biotransformed by the body to any significant degree but enter and leave unchanged (Glowa, 1986).
Once the source of the gas is removed, the concentration of the gas in the brain begins to drop, and normal circulation returns the brain to a normal state of consciousness within moments. While the person is under the influence of the anesthetic gas, however, the ability of the brain cells to react to painful stimuli seems to be reduced. The medicinal use of nitrous oxide, chloroform, and ether is confined for the most part to dental or general

surgery. Very rarely, however, one will encounter a person who has abused or is currently abusing these agents. Little information is available concerning the dangers of this practice, nor is there much information about the side effects of prolonged use.

Abuse of Nitrites
Two different forms of nitrites are commonly abused: amyl nitrite and its close chemical cousins, butyl nitrite and isobutyl nitrite. When inhaled, these substances function as coronary vasodilators, allowing more blood to flow to the heart. This effect made amyl nitrite useful in the control of angina pectoris. The drug was administered in small glass containers embedded in cloth layers. The user would "snap" or "pop" the container with his or her fingers and inhale the fumes in order to control the chest pain of angina pectoris.8 With the introduction of nitroglycerine preparations, which are as effective as amyl nitrite but lack many of its disadvantages, amyl nitrite fell into disfavor, and few people now use it for medical purposes (Schwartz, 1989). It does continue to have a limited role in diagnostic medicine and in the medical treatment of cyanide poisoning.

Amyl nitrite is available only by prescription, but butyl nitrite and isobutyl nitrite are often sold legally by mail order houses or in specialty stores, depending on specific state regulations. In many areas, butyl nitrite is sold as a room deodorizer, packaged in small bottles that may be purchased for under 10 dollars. Both chemicals are thought to cause the user to experience a prolonged, more intense orgasm when they are inhaled just before the individual reaches orgasm. However, amyl nitrite is also known to be a cause of delayed orgasm and ejaculation in the male user (Finger, Lund, & Slagel, 1997). Aftereffects include an intense, sudden headache, increased pressure of the fluid in the eyes (a danger for those with glaucoma), possible weakness, nausea, and possible cerebral hemorrhage (Schwartz, 1989).

8 It was from the distinctive sound of the glass breaking within the cloth ampule that both amyl nitrite and butyl nitrite have come to be known as "poppers" or "snappers" by those who abuse these chemicals.


When abused, both amyl nitrite and butyl nitrite will cause a brief (90-second) rush that includes dizziness, giddiness, and the rapid dilation of blood vessels in the head (Schwartz, 1989), which in turn causes an increase in intracranial pressure ("Research on Nitrites," 1989). It is this increase in intracranial pressure that may on occasion contribute to the rupture of unsuspected aneurysms, causing the individual to suffer a cerebral hemorrhage (stroke).

The use of nitrites is common among male homosexuals and may contribute to the spread of the virus that causes AIDS ("Research on Nitrites," 1989; Schwartz, 1989). By causing the dilation of blood vessels in the body, including those of the anus, the use of either amyl or butyl nitrite during anal intercourse (a common practice for male homosexuals) may actually aid the transmission of HIV from the active to the passive member of the sexual unit ("Research on Nitrites," 1989). Given the multitude of adverse effects, one questions why the use of these substances is popular during sexual intercourse.


Summary
For many individuals, the inhalants are the first chemicals abused. For the most part, inhalant abuse seems to be a phase that mainly involves teenagers, although occasionally children will abuse an inhalant, and one during which the individual engages in abuse only on an episodic basis. Individuals who use these inhalants do not usually do so for more than 1 or 2 years, but some will continue to inhale the fumes of gasoline, solvents, certain forms of glue, or other substances for many years. The effects of these chemicals on the individual seem to be rather short-lived. There is evidence, however, that prolonged use of certain agents can result in permanent damage to the kidneys, brain, and liver. Death, either through hypoxia or through prolonged exposure to inhalants, is possible. Very little is known about the effects of prolonged use of this class of chemicals.


The Unrecognized Problem of Steroid Abuse and Addiction

Introduction
The problem of steroid abuse/addiction might be viewed as a social disease. Society places much emphasis on appearances and winning. To achieve the goal of victory, athletes look for something—anything—that will give them an edge over the competition. This might include the use of a certain coaching technique or special equipment, or the use of a chemical substance designed to enhance performance. A whole industry has evolved to help people modify their appearance so they might better approximate the social ideal of size, shape, and appearance. For decades persistent rumors have circulated that anabolic steroids are able to significantly enhance athletic performance or physical appearance (Dickensheets, 2001). These rumors are fueled by real or suspected use of a steroid by different athletes or teams of athletes. Rather than risk failure, others have initiated the unsupervised use of anabolic steroids, with disastrous results. In response to an ever-growing number of adverse reactions to the steroids, federal and state officials placed rigid controls on their use in the 1990s.1 However, the rumors and the problem still persist.

In reality, very little is known about anabolic steroid abuse (Karch, 2002). This is unfortunate, because in spite of their considerable potential to harm the user, these substances are viewed by many as a means to increase muscle mass or improve physical appearance (Pope & Brower, 2004). Recognition of the problem of anabolic steroid abuse is slowly growing, however, and mental health and chemical dependency professionals should have a working knowledge of the effects of this class of medications.

An Introduction to the Anabolic Steroids
The term anabolic refers to the action of this family of drugs to increase the speed of growth of body tissues (Redman, 1990) or to their ability to force body cells to retain nitrogen (and thus indirectly enhance tissue growth) (Bagatell & Bremner, 1996). The term steroid indicates that these compounds are chemically similar to testosterone, the male sex hormone. Because of this chemical similarity with testosterone, the steroids have a masculinizing (androgenic) effect upon the user (Landry & Primos, 1990). At times, the anabolic steroids are referred to as the anabolic-androgenic steroids.

Athletes abuse steroids because they are thought to (a) increase lean muscle mass, (b) increase muscle strength, (c) increase aggressiveness, and (d) reduce the period of time necessary for recovery between exercise periods (Karch, 2002). On occasion, they may be abused because of their ability to bring about a sense of euphoria (Johnson, 1990; Kashkin, 1992; Lipkin, 1989; Schrof, 1992). However, this is not the primary reason most people abuse the anabolic steroids. Repeated, heavy physical exercise can actually result in damage to muscle tissues. The anabolic steroids have been found to stimulate protein synthesis, a process that indirectly may help muscle tissue development, possibly increase muscle strength, and limit the amount of damage done to muscle tissues through heavy physical exercise (Congeni & Miller, 2002; Gottesman, 1992; Pettine, 1991; Pope & Katz, 1990).

Athletes are not the only people vulnerable to steroid abuse. Many nonathletic users believe that steroid use will help them look more physically attractive (Bahrke,


1 In response to these controls, a $4 billion a year industry has developed in what are known as "nutritional" supplements, which are composed of various combinations of amino acids, vitamins, proteins, and naturally occurring stimulants such as ephedrine (Solotaroff, 2002). As is true of the anabolic steroids, the consequences of long-term use of many of these compounds at high dosage levels are not known.



Chapter Seventeen

1990; Brower, 1993; Corrigan, 1996; Johnson, 1990; Pettine, 1991; Pope & Brower, 2004; Schrof, 1992). In addition, there is a subgroup of people, especially some law-enforcement/security officers, who abuse steroids because of their belief that these substances will increase their strength and aggressiveness (Corrigan, 1996; Galloway, 1997; Schrof, 1992).

Medical Uses of Anabolic Steroids
Although the anabolic steroids have been in use since the mid-1950s, there still is no clear consensus on how they work (Wadler, 1994). There are few approved uses for these compounds (Dobs, 1999; Sturmi & Diorio, 1998). It is thought that the steroids force the body to increase protein synthesis and inhibit the action of chemicals known as the glucocorticoids, which cause tissue breakdown. In a medical setting, the anabolic steroids might be used to promote tissue growth and help damaged tissue recover from injury (Shannon, Wilson, & Stang, 1995). Physicians may also use a steroid to treat certain forms of anemia, help patients regain weight after periods of severe illness, treat endometriosis, treat delayed puberty in adolescents, and serve as an adjunct to the treatment of certain forms of breast cancer in women (Bagatell & Bremner, 1996; Congeni & Miller, 2002). The steroids may also promote the growth of bone tissue following injuries to the bone in certain cases and might be useful in the treatment of certain forms of osteoporosis (Congeni & Miller, 2002). There is evidence that the steroids might be of value in treating AIDS-related weight loss (the so-called wasting syndrome) and certain forms of chronic kidney failure (Dobs, 1999).

The anabolic steroids can be broken down into two classes: (a) those that are active when used orally and (b) those that are active only when injected into muscle tissue. Anabolic steroids intended for oral use tend to be more easily administered but have a shorter half-life and are also more toxic to the liver than parenteral forms of steroids (Bagatell & Bremner, 1996; Tanner, 1995).

The Legal Status of Anabolic Steroids
Since 1990, anabolic steroids have been classified as a Category III2 controlled substance, available with

2 See Appendix 4.

a doctor’s prescription for certain medical purposes. The law identified 28 different anabolic steroids as being illegal for nonmedical purposes, and their sale by individuals who are not licensed to sell medications was made a crime punishable by a prison term of up to 5 years (10 years if the steroids are sold to minors) (Fultz, 1991).

Scope of the Problem of Steroid Abuse
The true scope of anabolic steroid abuse in the United States is not known (Karch, 2002). It is thought that males are more likely to abuse steroids than females, possibly by as much as a 13:1 ratio, in part because few adolescent girls are interested in adding muscle mass (Pope & Brower, 2004). Estimates of the total number of steroid abusers in the United States range from more than 1 million people who either are abusing or have abused steroids to as many as 3 million current users (Dickensheets, 2001). In Canada, some 83,000 people between the ages of 11 and 18 admitted to having used a steroid at least once in the past year (Peters, Copeland, & Dillon, 1999). Steroid abuse is not unknown in high school: 3.5% of the high school seniors of the class of 2003 admitted to the use of steroids (Johnston, O'Malley, & Bachman, 2003a). In contrast to the other recreational chemicals, however, steroids are usually not abused until early adulthood. The median age for anabolic steroid abusers is 18 (Karch, 2002), and most college-age steroid users did not begin to use these compounds until just before starting or after they entered college (Brower, 1993; Dickensheets, 2001).

Source and Methods of Steroid Abuse
Because of their illegal status and the strict controls on their prescription by physicians, most anabolic steroids are obtained from illicit sources (Galloway, 1997). These sources include drugs smuggled into the United States or legitimate pharmaceuticals that are diverted to the black market. There is a thriving market for what are known as "designer" (Knight, 2003, p. 114) steroids, which are not detected by the standard laboratory tests utilized by sports regulatory agencies. Another common source of steroids is veterinary products,



which are sold on the street for use by humans. These compounds are distributed through an informal network that frequently is centered around health clubs or gyms (Johnson, 1990; Schrof, 1992).

If a physician suspects that a patient has been abusing anabolic steroids, he or she might confront the individual and obtain an admission that the person has been using anabolic steroids for personal reasons. Some physicians will attempt to limit their patients' use of anabolic steroids, promising to prescribe medications for them if they will promise to use only the medications prescribed by the physician (Breo, 1990). This misguided attempt at "harm reduction"3 is made by the physician on the grounds that he or she would then be able to monitor and control the individual's steroid use. However, in most cases the user supplements the prescribed medications with steroids from other sources. Thus, this method of harm reduction is not recommended for physicians (Breo, 1990).

Rarely, users will obtain their steroids by diverting4 prescribed medications or by obtaining multiple prescriptions for steroids from different physicians. But between 80% (Bahrke, 1990) and 90% (Tanner, 1995) of the steroids used by athletes come from the black market,5 with many of the steroids smuggled into the United States coming from the former Soviet Union (Karch, 2002). Various estimates of the scope of the illicit steroid market in the United States range from a $100 million (DuRant, Rickert, Ashworth, Newman, & Slavens, 1993; Middleman & DuRant, 1996) to a $300–500 million (Fultz, 1991; Wadler, 1994) to a $1 billion a year industry (Hoberman & Yesalis, 1995).

There are more than 1,000 known derivatives of the testosterone molecule (Sturmi & Diorio, 1998). Because performance-enhancing drugs are prohibited in many sports, chemists will attempt to alter the basic testosterone molecule to develop a designer steroid that might be invisible to the current tests used to detect such compounds.
An example of such a designer steroid is tetrahydrogestrinone (THG). This compound appears to have "all the hallmarks of an anabolic steroid, crafted to escape detection in urinalysis

3 See Glossary.
4 See Glossary.
5 As used here, black market is a term that is applied to any steroid obtained from illicit sources and then sold for human consumption.

tests” (Kondro, 2003, p. 1466). THG was undetectable by standard urine tests until late 2003. Acting on an anonymous tip and a syringe, the Olympic Analytical Laboratory in Los Angeles developed a test that would expose this steroid in the urine of athletes. Armed with the new test, various regulatory agencies have conducted urine toxicology tests on samples provided by athletes in various fields, prompting a flurry of reports that various athletes had tested positive for this substance, were suspected of having abused it, or were about to be suspended for having submitted a urine sample that had traces of THG in it (“Athletes Caught,” 2003; Knight, 2003). Anabolic steroids may be injected into muscle tissue, taken orally, or used both ways at once. Anabolic steroid abusers have developed a vocabulary of their own to describe many aspects of steroid abuse, the most common of which are summarized in Table 17.1. Many of the practices described in Table 17.1 are quite common among steroid abusers. For example, fully 61% of steroid-abusing weight lifters were found to have engaged in the practice of “stacking” steroids (Brower, Blow, Young, & Hill, 1991; Pope & Brower, 2004; Porcerelli & Sandler, 1998). Some steroid abusers who engage in the process of “pyramiding” are, at the midpoint of the cycle, using massive amounts of steroids. Episodes of pyramiding are interspaced with periods of abstinence from anabolic steroid use that may last several weeks, months (Landry & Primos, 1990), or even as long as a year (Kashkin, 1992). Unfortunately, during the periods of abstinence, much of the muscle mass gained by the use of steroids will be lost, sometimes quite rapidly. When this happens, anabolic steroid abusers often become frightened into prematurely starting another cycle of steroid abuse in order to recapture the muscle mass that has disappeared (Corrigan, 1996; Schrof, 1992; Tanner, 1995).

Problems Associated With Anabolic Steroid Abuse
Numerous adverse effects of anabolic steroids have been documented at relatively low doses when these medications were used to treat medical conditions (Hough & Kovan, 1990). The potential consequences of long-term steroid abuse are not known (Kashkin, 1992; Porcerelli & Sandler, 1998; Schrof, 1992; Wadler, 1994). One reason



TABLE 17.1 Some Terms Associated With Steroid Abuse

Term: Definition
Blending: Mixing different compounds for use at the same time.
Bulking up: Increasing muscle mass through steroid use. Nonusers also use the term to refer to the process of eating special diets and exercising in order to add muscle mass before a sporting event such as a football game or race.
Cycling: Taking multiple doses of a steroid(s) over a period of time, according to a schedule, with drug holidays built into the schedule.
Doping: Using drugs to improve performance.
Injectables: Steroids that are designed for injection.
Megadosing: Taking massive amounts of steroids, usually by injection or a combination of injection and oral administration.
Orals: Steroids designed for oral use.
Pyramiding: Taking anabolic steroids according to a schedule that calls for larger and larger doses each day for a period of time, followed by a pattern of smaller doses each day.
Shotgunning: Taking steroids on an inconsistent basis.
Tapering: Slowly decreasing the dosage level of a steroid being abused.
for this lack of information is that many steroid abusers utilize dosage levels that are often 10 (Hough & Kovan, 1990), 40–100 (Congeni & Miller, 2002), or even 1,000 times the maximum recommended therapeutic dosage level for these compounds (Council on Scientific Affairs, 1990a; Wadler, 1994). In one study, the dosage range of steroids being used by a sample of weight lifters was between 2 and 26 times the recommended dosage level for these agents (Brower, Blow, Young, & Hill, 1991). Another study found that the lowest dose of anabolic steroids being used by a group of weight lifters was still 350% above the usual therapeutic dose when the same drug was used by physicians (Landry & Primos, 1990). There is very little information available on the effects of the anabolic steroids on the user at these dosage levels (Johnson, 1990; Kashkin, 1992). It is known that the effects of the anabolic steroids on muscle tissue last for several weeks after the drugs are discontinued (Pope & Katz, 1991). This characteristic is known to muscle builders, who often discontinue their use of steroids before competition in order to avoid having

their steroid use detected by urine toxicological screens (Knight, 2003).

The adverse effects of anabolic steroids depend on (a) the route of administration used, (b) the specific drugs taken, (c) the dose utilized, (d) the frequency of use, (e) the health of the individual, and (f) the age of the individual (Johnson, 1990). However, even at recommended dosage levels, steroids are capable of causing sore throat or fever, vomiting (with or without blood being mixed into the vomit), dark-colored urine, bone pain, nausea, unusual weight gain or headache, and a range of other side effects (Congeni & Miller, 2002).

Although physicians and sports officials will often conduct blood/urine tests in an attempt to detect illicit steroid abuse, there is an ongoing "arms race" between steroid abusers and regulatory agencies. The former search for anabolic steroids or similar compounds that cannot be detected by urine/blood toxicology testing, whereas the latter search for new methods by which unauthorized steroid use might be detected. A good example of this is the controversy over tetrahydrogestrinone (THG) that erupted in late



2003, discussed above. Thus, a “clean” urine sample does not rule out steroid use in modern sporting events or the possibility that the individual is at risk for any of a wide range of complications.

Complications of Steroid Abuse
The reproductive system. Males who utilize steroids at the recommended dosage levels might experience enlargement of the breasts6 (to the point that breast formation is similar to that seen in adolescent girls). The male steroid abuser might also experience increased frequency of erections or continual erections (a condition known as priapism, which is a medical emergency), unnatural hair growth/hair loss, reduced sperm production, and a frequent urge to urinate. In men, steroid abuse may cause the degeneration of the testicles, enlargement of the prostate gland, difficulty in urination, impotence, and sterility (Blue & Lombardo, 1999; Galloway, 1997; Kashkin, 1992; Pope & Brower, 2004; Pope & Katz, 1994; Sturmi & Diorio, 1998). On rare occasions steroid abuse has resulted in carcinoma (cancer) of the prostate (Johnson, 1990; Landry & Primos, 1990; Tanner, 1995) and urinary obstruction (Council on Scientific Affairs, 1990a). Both men and women might experience infertility and changes in libido as a result of steroid abuse (Sturmi & Diorio, 1998).

Women who use steroids at recommended dosage levels may experience an abnormal enlargement of the clitoris, irregular menstrual periods, unnatural hair growth and/or hair loss, a deepening of the voice, and a possible reduction in the size of the breasts (Galloway, 1997; Pope & Brower, 2004; Pope & Katz, 1988; Redman, 1990; Tanner, 1995). The menstrual irregularities caused by steroid use will often disappear after the steroids are discontinued (Johnson, 1990). The Council on Scientific Affairs (1990a) suggested that women who use steroids may experience beard growth, one example of the unnatural hair growth pattern anabolic steroids might cause. Another possible outcome is for the woman who is using anabolic steroids to develop "male pattern" baldness. Often, steroid-induced baldness in a woman is irreversible (Tanner, 1995).

6 Technically, this is called gynecomastia.

The liver, kidneys, and digestive system. Steroid abusers may experience altered liver function, which may be detected through blood tests such as the serum glutamic-oxaloacetic transaminase (SGOT) and the serum glutamic-pyruvic transaminase (SGPT) (Johnson, 1990; Karch, 2002; Sturmi & Diorio, 1998). Oral forms of anabolic steroids might be more likely to result in liver problems than injected forms (Tanner, 1995). Anabolic steroid abuse has been implicated as a cause of hepatotoxicity7 (Pope & Brower, 2004; Stimac, Milic, Dintinjana, Kovac, & Ristic, 2002). In addition, there is evidence that steroids, when used for periods of time at excessive doses, might contribute to the formation of both cancerous and benign liver tumors (Karch, 1996; Sturmi & Diorio, 1998; Tanner, 1995).

The cardiovascular system. Anabolic steroids are mainly abused by those who wish to increase muscle size. Unfortunately, the heart is itself a muscle, and it too is affected by steroid use ("Steroids and Growth Hormones," 2003). Anabolic steroid abuse may cause hypertension, cardiomyopathy, and heart disease for some abusers. One mechanism for these effects is a steroid-induced reduction in high-density lipoprotein levels and a concurrent increase in low-density lipoprotein levels by up to 36%, contributing to accelerated atherosclerosis of the heart and its surrounding blood vessels (Blue & Lombardo, 1999; Fultz, 1991; Johnson, 1990; Tanner, 1995). Anabolic steroid abuse might also result in the user's experiencing a thrombotic stroke—a stroke caused by a blood clot in the brain (Karch, 2002; Tanner, 1995). Such strokes are a side effect of high doses of the anabolic steroids, which cause blood platelets to clump together, forming clots. Researchers have also found evidence that steroids have a direct, dose-related cardiotoxic effect (Slovut, 1992).
Indeed, there is evidence of physical changes in the structure of the heart in some steroid users, although the mechanism by which steroids cause this effect is not known (Middleman & DuRant, 1996).

The central nervous system. Although the point was disputed in the latter part of the 20th century, researchers now accept that anabolic steroids cause behavioral changes in the user. The massive doses of steroids used by some athletes have been identified as the trigger of a drug-induced psychosis in some cases (Johnson, 1990; Kashkin, 1992; Pope & Brower, 2004; Pope & Katz, 1994; Pope, Katz, & Champoux, 1986). Kashkin (1992) reported that about 50% of steroid abusers will abuse other substances in an effort to control the side effects of the anabolic steroids. Drugs abused for this purpose include diuretics (to counteract steroid-induced bloating) and antibiotics (to control steroid-induced acne). Although most abusers reported minimal impact on measured aggression levels, Pope, Kouri, and Hudson (2000) found that 2% to 10% of male abusers became manic and/or developed other neuropsychiatric problems after abusing steroids. The authors found no significant premorbid sign that might identify those steroid abusers who would develop such problems, raising questions as to why these individuals responded so strongly to the chemicals they injected. Other responses noted in some of their subjects included depressive reactions and drug-induced psychotic reactions (Pope & Brower, 2004; Pope & Katz, 1987, 1988). Sometimes the individual becomes violent after using steroids, a condition known among abusers as "roid rage" (Fultz, 1991; Galloway, 1997; Johnson, 1990). In rare cases, steroid-induced violence has resulted in the death of the user or of a victim who became the target of the abuser's anger (Pope, Phillips, & Olivardia, 2000), and it has been recommended that large, muscular perpetrators of interpersonal violence be screened for steroid abuse (Pope & Brower, 2004).

In 1994, Pope and Katz carried out an investigation into the psychiatric side effects of anabolic steroid abuse. Their research sample consisted of 88 steroid-abusing athletes and 68 individuals who were not abusing steroids. Twenty-three percent of the steroid-abusing athletes were found to have experienced a major mood disturbance, such as mania or depression, and an increased level of aggressiveness, which was attributed to their steroid use.
This was illustrated by an incident in which one member of the sample of steroid abusers reportedly smashed three different automobiles out of frustration over a traffic delay (Pope & Katz, 1994). A second individual was implicated in a murder plot, while a third beat his dog to death. Still another individual in the research sample rammed his head through a wooden door, and several others were expelled from their homes because of their threatening behavior ("The Back Letter," 1994). Other psychiatric effects of anabolic steroid abuse include loss of inhibition, lack of judgment, irritability, a "strange edgy feeling" (Corrigan, 1996, p. 222), impulsiveness, and antisocial behavior (Corrigan, 1996).

In the early 1990s a number of researchers challenged the suspected relationship between anabolic steroid abuse and increased violent tendencies. Yesalis, Kennedy, Kopstein, and Bahrke (1993) suggested that, for some unknown reason, anabolic steroid abusers might have exaggerated the self-reports of violent behavior noted in earlier studies. A second possibility, according to the authors, was that violent individuals are prone to abuse steroids, giving the illusion of a causal relationship. By the start of the 21st century, however, evidence of a relationship between aggressive behavior and steroid abuse in at least a minority of steroid abusers had been clearly established (Pope, Kouri, & Hudson, 2000; Pope, Phillips, & Olivardia, 2000).

Steroids have also been identified as a cause of depressive reactions, especially during the withdrawal phase (Pope & Brower, 2004). Such depressive reactions seem to respond well to simple discontinuation of the offending substance(s) (Schuckit, 2000) or to the selective serotonin reuptake inhibitors (SSRIs) (Pope & Brower, 2004). On occasion, steroid abusers have developed a form of body dysmorphic disorder, especially following the decision to discontinue steroid use (Pope & Brower, 2004). According to the authors, this condition responds well to psychotherapy combined with the use of an appropriate SSRI.

Other complications. Patients with medical conditions such as certain forms of breast cancer; diabetes mellitus; diseases of the blood vessels, kidney, liver, or heart; or, in males, prostate problems should not use steroids unless the prescribing physician is aware that the patient has these problems (United States Pharmacopeial Convention, 1990).
The anabolic steroids are thought to be possibly carcinogenic (Johnson, 1990), and their use is not recommended for patients with either active tumors or a history of tumors except under a physician's supervision. Other side effects of steroid use include severe acne (especially across the back) and possibly a foul odor on the breath (Redman, 1990). There has been one isolated case of unnatural bone degeneration attributed to the long-term use of steroids by a weight lifter (Pettine, 1991). Also, animal research suggests that anabolic steroids may contribute to the degeneration of tendons, a finding consistent with clinical case reports of tendon ruptures in athletes who were using anabolic steroids (Karch, 1996).

Surprisingly, although anabolic steroids are often abused to improve athletic performance, the evidence that steroids actually do improve the user's athletic abilities is mixed (Tanner, 1995). One factor that complicates research into athletic performance is the individual's belief that these drugs will improve his or her abilities: the athlete's expectation of improved performance might contribute, at least in part, to the gains observed in the user. There is also evidence that anabolic steroid abuse might prove to be a "gateway" to the abuse of other compounds such as the narcotic analgesics (Kanayama, Cohane, Weiss, & Pope, 2003). The authors suggested that the abuse of anabolic steroids might be one avenue through which some individuals begin to abuse opiates, especially as eight of their subjects first purchased opiates from the same source that sold them anabolic steroids. The authors proposed that a history of anabolic steroid abuse might be an underrecognized problem among those admitted to treatment for more traditional substance-use problems.

Growth patterns in the adolescent. Adolescents who use steroids run the risk of stunted growth, as these drugs may permanently stop bone growth (Johnson, 1990; Schrof, 1992). A further complication of steroid abuse by adolescents is that the tendons do not grow at the same accelerated rate as the bone tissues, producing increased strain on the tendons and a higher risk of injury to them (Galloway, 1997; Johnson, 1990).

Anabolic steroid abuse and blood infections.
In addition to the complications of steroid abuse itself, individuals who abuse steroids through intramuscular or intravenous injection often share needles. These individuals run the same risk of contracting infections from contaminated needles as heroin or cocaine addicts. Indeed, there have been cases of athletes contracting AIDS after using a needle that had previously been used by an infected athlete (Kashkin, 1992).

Drug interactions between steroids and other chemicals. The anabolic steroids interact with a wide range of medications, including several drugs of abuse. Potentially serious drug interactions have been noted in cases where the individual used high doses of acetaminophen while on steroids; the combination of these two drugs should be avoided except when the individual is being supervised by a physician. Patients who take Antabuse (disulfiram) should not use steroids, nor should individuals who are taking Trexan (naltrexone), anticonvulsant medications such as Dilantin (phenytoin) or Depakene (valproic acid), or any of the phenothiazines (United States Pharmacopeial Convention, 1990).

Are Anabolic Steroids Addictive?

Surprisingly, when used for extended periods at high dosage levels, the anabolic steroids do have an addictive potential. Some users have reported a preoccupation with the use of these chemicals and a craving for them when they were not using steroids (Middleman & DuRant, 1996). Further, anabolic steroids have been known to bring about a sense of euphoria both when used for medical purposes and when abused (Fultz, 1991; Middleman & DuRant, 1996). This may explain why steroid use is so attractive to at least some of those who abuse this family of drugs. There also is evidence to suggest that the user might become either physically or psychologically dependent on the anabolic steroids (Johnson, 1990). It has been estimated that 14%–69% of abusers will ultimately become addicted to anabolic steroids (Pope & Brower, 2005). Withdrawal from steroid addiction is