The Epidemic of Mental Illness: Why?

© United Artists/Photofest: Louise Fletcher and Jack Nicholson in One Flew Over the Cuckoo's Nest, 1975

The Emperor’s New Drugs: Exploding the Antidepressant Myth
by Irving Kirsch
Basic Books, 226 pp., $15.99 (paper)

Anatomy of an Epidemic: Magic Bullets, Psychiatric Drugs, and the Astonishing Rise of Mental Illness in America
by Robert Whitaker
Crown, 404 pp., $26.00

Unhinged: The Trouble With Psychiatry – A Doctor’s Revelations About a Profession in Crisis
by Daniel Carlat
Free Press, 256 pp., $25.00

It seems that Americans are in the midst of a raging epidemic of mental illness, at least as judged by the increase in the numbers treated for it. The tally of those who are so disabled by mental disorders that they qualify for Supplemental Security Income (SSI) or Social Security Disability Insurance (SSDI) increased nearly two and a half times between 1987 and 2007 – from one in 184 Americans to one in seventy-six. For children, the rise is even more startling – a thirty-five-fold increase in the same two decades. Mental illness is now the leading cause of disability in children, well ahead of physical disabilities like cerebral palsy or Down syndrome, for which the federal programs were created.

A large survey of randomly selected adults, sponsored by the National Institute of Mental Health (NIMH) and conducted between 2001 and 2003, found that an astonishing 46 percent met criteria established by the American Psychiatric Association (APA) for having had at least one mental illness within four broad categories at some time in their lives. The categories were “anxiety disorders,” including, among other subcategories, phobias and post-traumatic stress disorder (PTSD); “mood disorders,” including major depression and bipolar disorders; “impulse-control disorders,” including various behavioral problems and attention-deficit/hyperactivity disorder (ADHD); and “substance use disorders,” including alcohol and drug abuse. Most met criteria for more than one diagnosis. Of a subgroup affected within the previous year, a third were under treatment – up from a fifth in a similar survey ten years earlier.

Nowadays treatment by medical doctors nearly always means psychoactive drugs, that is, drugs that affect the mental state. In fact, most psychiatrists treat only with drugs, and refer patients to psychologists or social workers if they believe psychotherapy is also warranted. The shift from “talk therapy” to drugs as the dominant mode of treatment coincides with the emergence over the past four decades of the theory that mental illness is caused primarily by chemical imbalances in the brain that can be corrected by specific drugs. That theory became broadly accepted, by the media and the public as well as by the medical profession, after Prozac came to market in 1987 and was intensively promoted as a corrective for a deficiency of serotonin in the brain. The number of people treated for depression tripled in the following ten years, and about 10 percent of Americans over age six now take antidepressants. The increased use of drugs to treat psychosis is even more dramatic. The new generation of antipsychotics, such as Risperdal, Zyprexa, and Seroquel, has replaced cholesterol-lowering agents as the top-selling class of drugs in the US.

An advertisement for Prozac, from The American Journal of Psychiatry, 1995

What is going on here? Is the prevalence of mental illness really that high and still climbing? Particularly if these disorders are biologically determined and not a result of environmental influences, is it plausible to suppose that such an increase is real? Or are we learning to recognize and diagnose mental disorders that were always there? On the other hand, are we simply expanding the criteria for mental illness so that nearly everyone has one? And what about the drugs that are now the mainstay of treatment? Do they work? If they do, shouldn’t we expect the prevalence of mental illness to be declining, not rising?

These are the questions, among others, that concern the authors of the three provocative books under review here. They come at the questions from different backgrounds – Irving Kirsch is a psychologist at the University of Hull in the UK, Robert Whitaker a journalist and previously the author of a history of the treatment of mental illness called Mad in America (2001), and Daniel Carlat a psychiatrist who practices in a Boston suburb and publishes a newsletter and blog about his profession.

The authors emphasize different aspects of the epidemic of mental illness. Kirsch is concerned with whether antidepressants work. Whitaker, who has written an angrier book, takes on the entire spectrum of mental illness and asks whether psychoactive drugs create worse problems than they solve. Carlat, who writes more in sorrow than in anger, looks mainly at how his profession has allied itself with, and is manipulated by, the pharmaceutical industry. But despite their differences, all three are in remarkable agreement on some important matters, and they have documented their views well.

First, they agree on the disturbing extent to which the companies that sell psychoactive drugs – through various forms of marketing, both legal and illegal, and what many people would describe as bribery – have come to determine what constitutes a mental illness and how the disorders should be diagnosed and treated. This is a subject to which I’ll return.

Second, none of the three authors subscribes to the popular theory that mental illness is caused by a chemical imbalance in the brain. As Whitaker tells the story, that theory had its genesis shortly after psychoactive drugs were introduced in the 1950s. The first was Thorazine (chlorpromazine), which was launched in 1954 as a “major tranquilizer” and quickly found widespread use in mental hospitals to calm psychotic patients, mainly those with schizophrenia. Thorazine was followed the next year by Miltown (meprobamate), sold as a “minor tranquilizer” to treat anxiety in outpatients. And in 1957, Marsilid (iproniazid) came on the market as a “psychic energizer” to treat depression.

In the space of three short years, then, drugs had become available to treat what at that time were regarded as the three major categories of mental illness – psychosis, anxiety, and depression – and the face of psychiatry was totally transformed. These drugs, however, had not initially been developed to treat mental illness. They had been derived from drugs meant to treat infections, and were found only serendipitously to alter the mental state. At first, no one had any idea how they worked. They simply blunted disturbing mental symptoms. But over the next decade, researchers found that these drugs, and the newer psychoactive drugs that quickly followed, affected the levels of certain chemicals in the brain.

Some brief – and necessarily quite simplified – background: the brain contains billions of nerve cells, called neurons, arrayed in immensely complicated networks and communicating with one another constantly. The typical neuron has multiple filamentous extensions, one called an axon and the others called dendrites, through which it sends and receives signals from other neurons. For one neuron to communicate with another, however, the signal must be transmitted across the tiny space separating them, called a synapse. To accomplish that, the axon of the sending neuron releases a chemical, called a neurotransmitter, into the synapse. The neurotransmitter crosses the synapse and attaches to receptors on the second neuron, often a dendrite, thereby activating or inhibiting the receiving cell. Axons have multiple terminals, so each neuron has multiple synapses. Afterward, the neurotransmitter is either reabsorbed by the first neuron or metabolized by enzymes so that the status quo ante is restored. There are exceptions and variations to this story, but that is the usual way neurons communicate with one another.

When it was found that psychoactive drugs affect neurotransmitter levels in the brain, as evidenced mainly by the levels of their breakdown products in the spinal fluid, the theory arose that the cause of mental illness is an abnormality in the brain’s concentration of these chemicals that is specifically countered by the appropriate drug. For example, because Thorazine was found to lower dopamine levels in the brain, it was postulated that psychoses like schizophrenia are caused by too much dopamine. Or later, because certain antidepressants increase levels of the neurotransmitter serotonin in the brain, it was postulated that depression is caused by too little serotonin. (These antidepressants, like Prozac or Celexa, are called selective serotonin reuptake inhibitors (SSRIs) because they prevent the reabsorption of serotonin by the neurons that release it, so that more remains in the synapses to activate other neurons.) Thus, instead of developing a drug to treat an abnormality, an abnormality was postulated to fit a drug.

That was a great leap in logic, as all three authors point out. It was entirely possible that drugs that affected neurotransmitter levels could relieve symptoms even if neurotransmitters had nothing to do with the illness in the first place (and even possible that they relieved symptoms through some other mode of action entirely). As Carlat puts it, “By this same logic one could argue that the cause of all pain conditions is a deficiency of opiates, since narcotic pain medications activate opiate receptors in the brain.” Or similarly, one could argue that fevers are caused by too little aspirin.

But the main problem with the theory is that after decades of trying to prove it, researchers have still come up empty-handed. All three authors document the failure of scientists to find good evidence in its favor. Neurotransmitter function seems to be normal in people with mental illness before treatment. In Whitaker’s words:

Prior to treatment, patients diagnosed with schizophrenia, depression, and other psychiatric disorders do not suffer from any known “chemical imbalance.” However, once a person is put on a psychiatric medication, which, in one manner or another, throws a wrench into the usual mechanics of a neuronal pathway, his or her brain begins to function…abnormally.

Carlat refers to the chemical imbalance theory as a “myth” (which he calls “convenient” because it destigmatizes mental illness), and Kirsch, whose book focuses on depression, sums up this way: “It now seems beyond question that the traditional account of depression as a chemical imbalance in the brain is simply wrong.” Why the theory persists despite the lack of evidence is a subject I’ll come to.

Do the drugs work? After all, regardless of the theory, that is the practical question. In his spare, remarkably engrossing book, The Emperor’s New Drugs, Kirsch describes his fifteen-year scientific quest to answer that question about antidepressants. When he began his work in 1995, his main interest was in the effects of placebos. To study them, he and a colleague reviewed thirty-eight published clinical trials that compared various treatments for depression with placebos, or compared psychotherapy with no treatment. Most such trials last for six to eight weeks, and during that time, patients tend to improve somewhat even without any treatment. But Kirsch found that placebos were three times as effective as no treatment. That didn’t particularly surprise him. What did surprise him was the fact that antidepressants were only marginally better than placebos. As judged by scales used to measure depression, placebos were 75 percent as effective as antidepressants. Kirsch then decided to repeat his study by examining a more complete and standardized data set.

The data he used were obtained from the US Food and Drug Administration (FDA) instead of the published literature. When drug companies seek approval from the FDA to market a new drug, they must submit to the agency all clinical trials they have sponsored. The trials are usually double-blind and placebo-controlled, that is, the participating patients are randomly assigned to either drug or placebo, and neither they nor their doctors know which they have been assigned. The patients are told only that they will receive an active drug or a placebo, and they are also told of any side effects they might experience. If two trials show that the drug is more effective than a placebo, the drug is generally approved. But companies may sponsor as many trials as they like, most of which could be negative – that is, fail to show effectiveness. All they need is two positive ones. (The results of trials of the same drug can differ for many reasons, including the way the trial is designed and conducted, its size, and the types of patients studied.)
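
To see why "two positive trials" is such a weak hurdle when a company may sponsor as many trials as it likes, a back-of-the-envelope simulation helps. This sketch is mine, not Kirsch's or the FDA's, and it assumes, purely for illustration, a drug with no real effect at all, so that each trial has roughly a 1-in-40 chance of coming out "positive" by luck alone; a drug with even a small true effect would clear the bar more easily still.

    import random

    # Illustrative sketch only (assumed numbers): suppose the drug is no better
    # than placebo, so each trial "wins" by chance with probability ~0.025
    # (a one-sided false positive at the conventional p < 0.05 level).
    # How often does a sponsor who runs n trials obtain the two positive
    # results needed for approval?

    def chance_of_two_positives(n_trials, p_positive=0.025, n_sim=100_000):
        hits = 0
        for _ in range(n_sim):
            positives = sum(random.random() < p_positive for _ in range(n_trials))
            if positives >= 2:
                hits += 1
        return hits / n_sim

    for n in (2, 10, 20, 40):
        print(f"{n:2d} trials -> P(at least two positives) ~ {chance_of_two_positives(n):.3f}")

Even under this deliberately pessimistic assumption, the probability of clearing the approval bar rises steadily with the number of trials sponsored – and, as the next paragraph notes, the negative trials need never see the light of day.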

For obvious reasons, drug companies make very sure that their positive studies are published in medical journals and doctors know about them, while the negative ones often languish unseen within the FDA, which regards them as proprietary and therefore confidential. This practice greatly biases the medical literature, medical education, and treatment decisions.

Kirsch and his colleagues used the Freedom of Information Act to obtain FDA reviews of all placebo-controlled clinical trials, whether positive or negative, submitted for the initial approval of the six most widely used antidepressant drugs approved between 1987 and 1999 – Prozac, Paxil, Zoloft, Celexa, Serzone, and Effexor. This was a better data set than the one used in his previous study, not only because it included negative studies but because the FDA sets uniform quality standards for the trials it reviews and not all of the published research in Kirsch’s earlier study had been submitted to the FDA as part of a drug approval application.

Altogether, there were forty-two trials of the six drugs. Most of them were negative. Overall, placebos were 82 percent as effective as the drugs, as measured by the Hamilton Depression Scale (HAM-D), a widely used score of symptoms of depression. The average difference between drug and placebo was only 1.8 points on the HAM-D, a difference that, while statistically significant, was clinically meaningless. The results were much the same for all six drugs: they were all equally unimpressive. Yet because the positive studies were extensively publicized, while the negative ones were hidden, the public and the medical profession came to believe that these drugs were highly effective antidepressants.
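
Kirsch's distinction between statistical and clinical significance turns largely on sample size, and a little arithmetic makes the point concrete. Apart from the 1.8-point HAM-D gap cited above, the numbers below are assumptions of mine chosen only to show the mechanism; they are not figures from his analysis.

    import math

    # Minimal illustration with assumed inputs (only the 1.8-point gap comes
    # from the review): with a large pooled sample, even a small mean
    # difference is "statistically significant" by a simple z-test.

    mean_difference = 1.8   # drug minus placebo, in HAM-D points
    assumed_sd = 8.0        # assumed spread of HAM-D change scores in each arm
    n_per_arm = 1500        # assumed pooled number of patients per arm

    standard_error = assumed_sd * math.sqrt(2.0 / n_per_arm)
    z_score = mean_difference / standard_error
    print(f"z = {z_score:.1f}")  # about 6.2, far beyond the 1.96 needed for p < 0.05

Statistical significance, in other words, says only that the small difference is unlikely to be exactly zero; it says nothing about whether a 1.8-point change is large enough for a patient or clinician to notice.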

Kirsch was also struck by another unexpected finding. In his earlier study and in work by others, he observed that even treatments that were not considered to be antidepressants – such as synthetic thyroid hormone, opiates, sedatives, stimulants, and some herbal remedies – were as effective as antidepressants in alleviating the symptoms of depression. Kirsch writes, “When administered as antidepressants, drugs that increase, decrease or have no effect on serotonin all relieve depression to about the same degree.” What all these “effective” drugs had in common was that they produced side effects, which participating patients had been told they might experience.

It is important that clinical trials, particularly those dealing with subjective conditions like depression, remain double-blind, with neither patients nor doctors knowing whether or not they are getting a placebo. That prevents both patients and doctors from imagining improvements that are not there, something that is more likely if they believe the agent being administered is an active drug instead of a placebo. Faced with his findings that nearly any pill with side effects was slightly more effective in treating depression than an inert placebo, Kirsch speculated that the presence of side effects in individuals receiving drugs enabled them to guess correctly that they were getting active treatment – and this was borne out by interviews with patients and doctors – which made them more likely to report improvement. He suggests that the reason antidepressants appear to work better in relieving severe depression than in less severe cases is that patients with severe symptoms are likely to be on higher doses and therefore experience more side effects.

To further investigate whether side effects bias responses, Kirsch looked at some trials that employed “active” placebos instead of inert ones. An active placebo is one that itself produces side effects, such as atropine – a drug that selectively blocks the action of certain types of nerve fibers. Although not an antidepressant, atropine causes, among other things, a noticeably dry mouth. In trials using atropine as the placebo, there was no difference between the antidepressant and the active placebo. Everyone had side effects of one type or another, and everyone reported the same level of improvement. Kirsch reported a number of other odd findings in clinical trials of antidepressants, including the fact that there is no dose-response curve – that is, high doses worked no better than low ones – which is extremely unlikely for truly effective drugs. “Putting all this together,” writes Kirsch,

leads to the conclusion that the relatively small difference between drugs and placebos might not be a real drug effect at all. Instead, it might be an enhanced placebo effect, produced by the fact that some patients have broken [the] blind and have come to realize whether they were given drug or placebo. If this is the case, then there is no real antidepressant drug effect at all. Rather than comparing placebo to drug, we have been comparing “regular” placebos to “extra-strength” placebos.

That is a startling conclusion that flies in the face of widely accepted medical opinion, but Kirsch reaches it in a careful, logical way. Psychiatrists who use antidepressants – and that’s most of them – and patients who take them might insist that they know from clinical experience that the drugs work. But anecdotes are known to be a treacherous way to evaluate medical treatments, since they are so subject to bias; they can suggest hypotheses to be studied, but they cannot prove them. That is why the development of the double-blind, randomized, placebo-controlled clinical trial in the middle of the past century was such an important advance in medical science. Anecdotes about leeches or laetrile or megadoses of vitamin C, or any number of other popular treatments, could not stand up to the scrutiny of well-designed trials. Kirsch is a faithful proponent of the scientific method, and his voice therefore brings a welcome objectivity to a subject often swayed by anecdotes, emotions, or, as we will see, self-interest.

Whitaker’s book is broader and more polemical. He considers all mental illness, not just depression. Whereas Kirsch concludes that antidepressants are probably no more effective than placebos, Whitaker concludes that they and most of the other psychoactive drugs are not only ineffective but harmful. He begins by observing that even as drug treatment for mental illness has skyrocketed, so has the prevalence of the conditions treated:

The number of disabled mentally ill has risen dramatically since 1955, and during the past two decades, a period when the prescribing of psychiatric medications has exploded, the number of adults and children disabled by mental illness has risen at a mind-boggling rate. Thus we arrive at an obvious question, even though it is heretical in kind: Could our drug-based paradigm of care, in some unforeseen way, be fueling this modern-day plague?

Moreover, Whitaker contends, the natural history of mental illness has changed. Whereas conditions such as schizophrenia and depression were once mainly self-limited or episodic, with each episode usually lasting no more than six months and interspersed with long periods of normalcy, the conditions are now chronic and lifelong. Whitaker believes that this might be because drugs, even those that relieve symptoms in the short term, cause long-term mental harms that continue after the underlying illness would have naturally resolved.

The evidence he marshals for this theory varies in quality. He doesn’t sufficiently acknowledge the difficulty of studying the natural history of any illness over a fifty-some-year time span during which many circumstances have changed, in addition to drug use. It is even more difficult to compare long-term outcomes in treated versus untreated patients, since treatment may be more likely in those with more severe disease at the outset. Nevertheless, Whitaker’s evidence is suggestive, if not conclusive.

If psychoactive drugs do cause harm, as Whitaker contends, what is the mechanism? The answer, he believes, lies in their effects on neurotransmitters. It is well understood that psychoactive drugs disturb neurotransmitter function, even if that was not the cause of the illness in the first place. Whitaker describes a chain of effects. When, for example, an SSRI antidepressant like Celexa increases serotonin levels in synapses, it stimulates compensatory changes through a process called negative feedback. In response to the high levels of serotonin, the neurons that secrete it (presynaptic neurons) release less of it, and the postsynaptic neurons become desensitized to it. In effect, the brain is trying to nullify the drug’s effects. The same is true for drugs that block neurotransmitters, except in reverse. For example, most antipsychotic drugs block dopamine, but the presynaptic neurons compensate by releasing more of it, and the postsynaptic neurons take it up more avidly. (This explanation is necessarily oversimplified, since many psychoactive drugs affect more than one of the many neurotransmitters.)

With long-term use of psychoactive drugs, the result is, in the words of Steve Hyman, a former director of the NIMH and until recently provost of Harvard University, “substantial and long-lasting alterations in neural function.” As quoted by Whitaker, the brain, Hyman wrote, begins to function in a manner “qualitatively as well as quantitatively different from the normal state.” After several weeks on psychoactive drugs, the brain’s compensatory efforts begin to fail, and side effects emerge that reflect the mechanism of action of the drugs. For example, the SSRIs may cause episodes of mania, because of the excess of serotonin. Antipsychotics cause side effects that resemble Parkinson’s disease, because of the depletion of dopamine (which is also depleted in Parkinson’s disease). As side effects emerge, they are often treated by other drugs, and many patients end up on a cocktail of psychoactive drugs prescribed for a cocktail of diagnoses. The episodes of mania caused by antidepressants may lead to a new diagnosis of “bipolar disorder” and treatment with a “mood stabilizer,” such as Depakote (an anticonvulsant) plus one of the newer antipsychotic drugs. And so on.

Some patients take as many as six psychoactive drugs daily. One well-respected researcher, Nancy Andreasen, and her colleagues published evidence that the use of antipsychotic drugs is associated with shrinkage of the brain, and that the effect is directly related to the dose and duration of treatment. As Andreasen explained to The New York Times, “The prefrontal cortex doesn’t get the input it needs and is being shut down by drugs. That reduces the psychotic symptoms. It also causes the prefrontal cortex to slowly atrophy.”*

Getting off the drugs is exceedingly difficult, according to Whitaker, because when they are withdrawn the compensatory mechanisms are left unopposed. When Celexa is withdrawn, serotonin levels fall precipitously because the presynaptic neurons are not releasing normal amounts and the postsynaptic neurons no longer have enough receptors for it. Similarly, when an antipsychotic is withdrawn, dopamine levels may skyrocket. The symptoms produced by withdrawing psychoactive drugs are often confused with relapses of the original disorder, which can lead psychiatrists to resume drug treatment, perhaps at higher doses.

Unlike the cool Kirsch, Whitaker is outraged by what he sees as an iatrogenic (i.e., inadvertent and medically introduced) epidemic of brain dysfunction, particularly that caused by the widespread use of the newer (“atypical”) antipsychotics, such as Zyprexa, which cause serious side effects. Here is what he calls his “quick thought experiment”:

Imagine that a virus suddenly appears in our society that makes people sleep twelve, fourteen hours a day. Those infected with it move about somewhat slowly and seem emotionally disengaged. Many gain huge amounts of weight – twenty, forty, sixty, and even one hundred pounds. Often their blood sugar levels soar, and so do their cholesterol levels. A number of those struck by the mysterious illness – including young children and teenagers – become diabetic in fairly short order…. The federal government gives hundreds of millions of dollars to scientists at the best universities to decipher the inner workings of this virus, and they report that the reason it causes such global dysfunction is that it blocks a multitude of neurotransmitter receptors in the brain – dopaminergic, serotonergic, muscarinic, adrenergic, and histaminergic. All of those neuronal pathways in the brain are compromised. Meanwhile, MRI studies find that over a period of several years, the virus shrinks the cerebral cortex, and this shrinkage is tied to cognitive decline. A terrified public clamors for a cure.

Now such an illness has in fact hit millions of American children and adults. We have just described the effects of Eli Lilly’s best-selling antipsychotic, Zyprexa.

If psychoactive drugs are useless, as Kirsch believes about antidepressants, or worse than useless, as Whitaker believes, why are they so widely prescribed by psychiatrists and regarded by the public and the profession as something akin to wonder drugs? Why is the current against which Kirsch and Whitaker and, as we will see, Carlat are swimming so powerful? I discuss these questions in Part II of this review.

Part II: The Illusions of Psychiatry

In my article in the last issue, I focused mainly on the recent books by psychologist Irving Kirsch and journalist Robert Whitaker, and what they tell us about the epidemic of mental illness and the drugs used to treat it.1 Here I discuss the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM) – often referred to as the bible of psychiatry, and now heading for its fifth edition – and its extraordinary influence within American society. I also examine Unhinged, the recent book by Daniel Carlat, a psychiatrist, who provides a disillusioned insider’s view of the psychiatric profession. And I discuss the widespread use of psychoactive drugs in children, and the baleful influence of the pharmaceutical industry on the practice of psychiatry.

One of the leaders of modern psychiatry, Leon Eisenberg, a professor at Johns Hopkins and then Harvard Medical School, who was among the first to study the effects of stimulants on attention deficit disorder in children, wrote that American psychiatry in the late twentieth century moved from a state of “brainlessness” to one of “mindlessness.”2 By that he meant that before psychoactive drugs (drugs that affect the mental state) were introduced, the profession had little interest in neurotransmitters or any other aspect of the physical brain. Instead, it subscribed to the Freudian view that mental illness had its roots in unconscious conflicts, usually originating in childhood, that affected the mind as though it were separate from the brain.

But with the introduction of psychoactive drugs in the 1950s, and sharply accelerating in the 1980s, the focus shifted to the brain. Psychiatrists began to refer to themselves as psychopharmacologists, and they had less and less interest in exploring the life stories of their patients. Their main concern was to eliminate or reduce symptoms by treating sufferers with drugs that would alter brain function. An early advocate of this biological model of mental illness, Eisenberg in his later years became an outspoken critic of what he saw as the indiscriminate use of psychoactive drugs, driven largely by the machinations of the pharmaceutical industry.

When psychoactive drugs were first introduced, there was a brief period of optimism in the psychiatric profession, but by the 1970s, optimism gave way to a sense of threat. Serious side effects of the drugs were becoming apparent, and an antipsychiatry movement had taken root, as exemplified by the writings of Thomas Szasz and the movie One Flew Over the Cuckoo’s Nest. There was also growing competition for patients from psychologists and social workers. In addition, psychiatrists were plagued by internal divisions: some embraced the new biological model, some still clung to the Freudian model, and a few saw mental illness as an essentially sane response to an insane world. Moreover, within the larger medical profession, psychiatrists were regarded as something like poor relations; even with their new drugs, they were seen as less scientific than other specialists, and their income was generally lower.

In the late 1970s, the psychiatric profession struck back – hard. As Robert Whitaker tells it in Anatomy of an Epidemic, the medical director of the American Psychiatric Association (APA), Melvin Sabshin, declared in 1977 that “a vigorous effort to remedicalize psychiatry should be strongly supported,” and he launched an all-out media and public relations campaign to do exactly that. Psychiatry had a powerful weapon that its competitors lacked. Since psychiatrists must qualify as MDs, they have the legal authority to write prescriptions. By fully embracing the biological model of mental illness and the use of psychoactive drugs to treat it, psychiatry was able to relegate other mental health care providers to ancillary positions and also to identify itself as a scientific discipline along with the rest of the medical profession. Most important, by emphasizing drug treatment, psychiatry became the darling of the pharmaceutical industry, which soon made its gratitude tangible.

These efforts to enhance the status of psychiatry were undertaken deliberately. The APA was then working on the third edition of the DSM, which provides diagnostic criteria for all mental disorders. The president of the APA had appointed Robert Spitzer, a much-admired professor of psychiatry at Columbia University, to head the task force overseeing the project. The first two editions, published in 1952 and 1968, reflected the Freudian view of mental illness and were little known outside the profession. Spitzer set out to make the DSM-III something quite different. He promised that it would be “a defense of the medical model as applied to psychiatric problems,” and the president of the APA in 1977, Jack Weinberg, said it would “clarify to anyone who may be in doubt that we regard psychiatry as a specialty of medicine.”

When Spitzer’s DSM-III was published in 1980, it contained 265 diagnoses (up from 182 in the previous edition), and it came into nearly universal use, not only by psychiatrists, but by insurance companies, hospitals, courts, prisons, schools, researchers, government agencies, and the rest of the medical profession. Its main goal was to bring consistency (usually referred to as “reliability”) to psychiatric diagnosis, that is, to ensure that psychiatrists who saw the same patient would agree on the diagnosis. To do that, each diagnosis was defined by a list of symptoms, with numerical thresholds. For example, having at least five of nine particular symptoms got you a full-fledged diagnosis of a major depressive episode within the broad category of “mood disorders.” But there was another goal – to justify the use of psychoactive drugs. The president of the APA last year, Carol Bernstein, in effect acknowledged that. “It became necessary in the 1970s,” she wrote, “to facilitate diagnostic agreement among clinicians, scientists, and regulatory authorities given the need to match patients with newly emerging pharmacologic treatments.”3

The DSM-III was almost certainly more “reliable” than the earlier versions, but reliability is not the same thing as validity. Reliability, as I have noted, is used to mean consistency; validity refers to correctness or soundness. If nearly all physicians agreed that freckles were a sign of cancer, the diagnosis would be “reliable,” but not valid. The problem with the DSM is that in all of its editions, it has simply reflected the opinions of its writers, and in the case of the DSM-III mainly of Spitzer himself, who has been justly called one of the most influential psychiatrists of the twentieth century.4 In his words, he “picked everybody that [he] was comfortable with” to serve with him on the fifteen-member task force, and there were complaints that he called too few meetings and generally ran the process in a haphazard but high-handed manner. Spitzer said in a 1989 interview, “I could just get my way by sweet talking and whatnot.” In a 1984 article entitled “The Disadvantages of DSM-III Outweigh Its Advantages,” George Vaillant, a professor of psychiatry at Harvard Medical School, wrote that the DSM-III represented “a bold series of choices based on guess, taste, prejudice, and hope,” which seems to be a fair description.

Not only did the DSM become the bible of psychiatry, but like the real Bible, it depended a lot on something akin to revelation. There are no citations of scientific studies to support its decisions. That is an astonishing omission, because in all medical publications, whether journal articles or textbooks, statements of fact are supposed to be supported by citations of published scientific studies. (There are four separate “sourcebooks” for the current edition of the DSM that present the rationale for some decisions, along with references, but that is not the same thing as specific references.) It may be of much interest for a group of experts to get together and offer their opinions, but unless these opinions can be buttressed by evidence, they do not warrant the extraordinary deference shown to the DSM. The DSM-III was supplanted by the DSM-III-R in 1987, the DSM-IV in 1994, and the current version, the DSM-IV-TR (text revised) in 2000, which contains 365 diagnoses. “With each subsequent edition,” writes Daniel Carlat in his absorbing book, “the number of diagnostic categories multiplied, and the books became larger and more expensive. Each became a best seller for the APA, and DSM is now one of the major sources of income for the organization.” The DSM-IV sold over a million copies.

As psychiatry became a drug-intensive specialty, the pharmaceutical industry was quick to see the advantages of forming an alliance with the psychiatric profession. Drug companies began to lavish attention and largesse on psychiatrists, both individually and collectively, directly and indirectly. They showered gifts and free samples on practicing psychiatrists, hired them as consultants and speakers, bought them meals, helped pay for them to attend conferences, and supplied them with “educational” materials. When Minnesota and Vermont implemented “sunshine laws” that require drug companies to report all payments to doctors, psychiatrists were found to receive more money than physicians in any other specialty. The pharmaceutical industry also subsidizes meetings of the APA and other psychiatric conferences. About a fifth of APA funding now comes from drug companies.

Drug companies are particularly eager to win over faculty psychiatrists at prestigious academic medical centers. Called “key opinion leaders” (KOLs) by the industry, these are the people who through their writing and teaching influence how mental illness will be diagnosed and treated. They also publish much of the clinical research on drugs and, most importantly, largely determine the content of the DSM. In a sense, they are the best sales force the industry could have, and are worth every cent spent on them. Of the 170 contributors to the current version of the DSM (the DSM-IV-TR), almost all of whom would be described as KOLs, ninety-five had financial ties to drug companies, including all of the contributors to the sections on mood disorders and schizophrenia.5

The drug industry, of course, supports other specialists and professional societies, too, but Carlat asks, “Why do psychiatrists consistently lead the pack of specialties when it comes to taking money from drug companies?” His answer: “Our diagnoses are subjective and expandable, and we have few rational reasons for choosing one treatment over another.” Unlike the conditions treated in most other branches of medicine, there are no objective signs or tests for mental illness – no lab data or MRI findings – and the boundaries between normal and abnormal are often unclear. That makes it possible to expand diagnostic boundaries or even create new diagnoses, in ways that would be impossible, say, in a field like cardiology. And drug companies have every interest in inducing psychiatrists to do just that.

In addition to the money spent on the psychiatric profession directly, drug companies heavily support many related patient advocacy groups and educational organizations. Whitaker writes that in the first quarter of 2009 alone,

Eli Lilly gave $551,000 to NAMI [National Alliance on Mental Illness] and its local chapters, $465,000 to the National Mental Health Association, $130,000 to CHADD (an ADHD [attention deficit/hyperactivity disorder] patient-advocacy group), and $69,250 to the American Foundation for Suicide Prevention.

And that’s just one company in three months; one can imagine what the yearly total would be from all companies that make psychoactive drugs. These groups ostensibly exist to raise public awareness of psychiatric disorders, but they also have the effect of promoting the use of psychoactive drugs and influencing insurers to cover them. Whitaker summarizes the growth of industry influence after the publication of the DSM-III as follows:

In short, a powerful quartet of voices came together during the 1980’s eager to inform the public that mental disorders were brain diseases. Pharmaceutical companies provided the financial muscle. The APA and psychiatrists at top medical schools conferred intellectual legitimacy upon the enterprise. The NIMH [National Institute of Mental Health] put the government’s stamp of approval on the story. NAMI provided a moral authority.

Like most other psychiatrists, Carlat treats his patients only with drugs, not talk therapy, and he is candid about the advantages of doing so. If he sees three patients an hour for psychopharmacology, he calculates, he earns about $180 per hour from insurers. In contrast, he would be able to see only one patient an hour for talk therapy, for which insurers would pay him less than $100. Carlat does not believe that psychopharmacology is particularly complicated, let alone precise, although the public is led to believe that it is:

Patients often view psychiatrists as wizards of neurotransmitters, who can choose just the right medication for whatever chemical imbalance is at play. This exaggerated conception of our capabilities has been encouraged by drug companies, by psychiatrists ourselves, and by our patients’ understandable hopes for cures.

His work consists of asking patients a series of questions about their symptoms to see whether they match up with any of the disorders in the DSM. This matching exercise, he writes, provides “the illusion that we understand our patients when all we are doing is assigning them labels.” Often patients meet criteria for more than one diagnosis, because there is overlap in symptoms. For example, difficulty concentrating is a criterion for more than one disorder. One of Carlat’s patients ended up with seven separate diagnoses. “We target discrete symptoms with treatments, and other drugs are piled on top to treat side effects.” A typical patient, he says, might be taking Celexa for depression, Ativan for anxiety, Ambien for insomnia, Provigil for fatigue (a side effect of Celexa), and Viagra for impotence (another side effect of Celexa).

As for the medications themselves, Carlat writes that “there are only a handful of umbrella categories of psychotropic drugs,” within which the drugs are not very different from one another. He doesn’t believe there is much basis for choosing among them. “To a remarkable degree, our choice of medications is subjective, even random. Perhaps your psychiatrist is in a Lexapro mood this morning, because he was just visited by an attractive Lexapro drug rep.” And he sums up:

Such is modern psychopharmacology. Guided purely by symptoms, we try different drugs, with no real conception of what we are trying to fix, or of how the drugs are working. I am perpetually astonished that we are so effective for so many patients.

While Carlat believes that psychoactive drugs are sometimes effective, his evidence is anecdotal. What he objects to is their overuse and what he calls the “frenzy of psychiatric diagnoses.” As he puts it, “if you ask any psychiatrist in clinical practice, including me, whether antidepressants work for their patients, you will hear an unambiguous ‘yes.’ We see people getting better all the time.” But then he goes on to speculate, like Irving Kirsch in The Emperor’s New Drugs, that what they are really responding to could be an activated placebo effect. If psychoactive drugs are not all they’re cracked up to be – and the evidence is that they’re not – what about the diagnoses themselves? As they multiply with each edition of the DSM, what are we to make of them?

In 1999, the APA began work on its fifth revision of the DSM, which is scheduled to be published in 2013. The twenty-seven-member task force is headed by David Kupfer, a professor of psychiatry at the University of Pittsburgh, assisted by Darrel Regier of the APA’s American Psychiatric Institute for Research and Education. As with the earlier editions, the task force is advised by multiple work groups, which now total some 140 members, corresponding to the major diagnostic categories. Ongoing deliberations and proposals have been extensively reported on the APA website (www.DSM5.org) and in the media, and it appears that the already very large constellation of mental disorders will grow still larger.

In particular, diagnostic boundaries will be broadened to include precursors of disorders, such as “psychosis risk syndrome” and “mild cognitive impairment” (possible early Alzheimer’s disease). The term “spectrum” is used to widen categories, for example, “obsessive-compulsive disorder spectrum,” “schizophrenia spectrum disorder,” and “autism spectrum disorder.” And there are proposals for entirely new entries, such as “hypersexual disorder,” “restless legs syndrome,” and “binge eating.”

Even Allen Frances, chairman of the DSM-IV task force, is highly critical of the expansion of diagnoses in the DSM-V. In the June 26, 2009, issue of Psychiatric Times, he wrote that the DSM-V will be a “bonanza for the pharmaceutical industry but at a huge cost to the new false positive patients caught in the excessively wide DSM-V net.” As if to underscore that judgment, Kupfer and Regier wrote in a recent article in the Journal of the American Medical Association (JAMA), entitled “Why All of Medicine Should Care About DSM-5,” that “in primary care settings, approximately 30 percent to 50 percent of patients have prominent mental health symptoms or identifiable mental disorders, which have significant adverse consequences if left untreated.”6 It looks as though it will be harder and harder to be normal.

At the end of the article by Kupfer and Regier is a small-print “financial disclosure” that reads in part:

Prior to being appointed as chair, DSM-5 Task Force, Dr. Kupfer reports having served on advisory boards for Eli Lilly & Co, Forest Pharmaceuticals Inc, Solvay/Wyeth Pharmaceuticals, and Johnson & Johnson; and consulting for Servier and Lundbeck.

Regier oversees all industry-sponsored research grants for the APA. The DSM-V (used interchangeably with DSM-5) is the first edition to establish rules to limit financial conflicts of interest in members of the task force and work groups. According to these rules, once members were appointed, which occurred in 2006–2008, they could receive no more than $10,000 per year in aggregate from drug companies and could own no more than $50,000 in company stock. The website shows their company ties for three years before their appointments, and that is what Kupfer disclosed in the JAMA article and what is shown on the APA website, where 56 percent of members of the work groups disclosed significant industry interests.

'Give me the first thing that comes to hand'; lithograph by Grandville, 1832

The pharmaceutical industry influences psychiatrists to prescribe psychoactive drugs even for categories of patients in whom the drugs have not been found safe and effective. What should be of greatest concern for Americans is the astonishing rise in the diagnosis and treatment of mental illness in children, sometimes as young as two years old. These children are often treated with drugs that were never approved by the FDA for use in this age group and have serious side effects. The apparent prevalence of “juvenile bipolar disorder” jumped forty-fold between 1993 and 2004, and that of “autism” increased from one in five hundred children to one in ninety over the same decade. Ten percent of ten-year-old boys now take daily stimulants for ADHD – “attention deficit/hyperactivity disorder” – and 500,000 children take antipsychotic drugs.

There seem to be fashions in childhood psychiatric diagnoses, with one disorder giving way to the next. At first, ADHD, manifested by hyperactivity, inattentiveness, and impulsivity usually in school-age children, was the fastest-growing diagnosis. But in the mid-1990s, two highly influential psychiatrists at the Massachusetts General Hospital proposed that many children with ADHD really had bipolar disorder that could sometimes be diagnosed as early as infancy. They proposed that the manic episodes characteristic of bipolar disorder in adults might be manifested in children as irritability. That gave rise to a flood of diagnoses of juvenile bipolar disorder. Eventually this created something of a backlash, and the DSM-V now proposes partly to replace the diagnosis with a brand-new one, called “temper dysregulation disorder with dysphoria,” or TDD, which Allen Frances calls “a new monster.”7

One would be hard pressed to find a two-year-old who is not sometimes irritable, a boy in fifth grade who is not sometimes inattentive, or a girl in middle school who is not anxious. (Imagine what taking a drug that causes obesity would do to such a girl.) Whether such children are labeled as having a mental disorder and treated with prescription drugs depends a lot on who they are and the pressures their parents face.8 As low-income families experience growing economic hardship, many are finding that applying for Supplemental Security Income (SSI) payments on the basis of mental disability is the only way to survive. It is more generous than welfare, and it virtually ensures that the family will also qualify for Medicaid. According to MIT economics professor David Autor, “This has become the new welfare.” Hospitals and state welfare agencies also have incentives to encourage uninsured families to apply for SSI payments, since hospitals will get paid and states will save money by shifting welfare costs to the federal government.

Growing numbers of for-profit firms specialize in helping poor families apply for SSI benefits. But to qualify nearly always requires that applicants, including children, be taking psychoactive drugs. According to a New York Times story, a Rutgers University study found that children from low-income families are four times as likely as privately insured children to receive antipsychotic medicines.

In December 2006 a four-year-old child named Rebecca Riley died in a small town near Boston from a combination of Clonidine and Depakote, which she had been prescribed, along with Seroquel, to treat “ADHD” and “bipolar disorder” – diagnoses she received when she was two years old. Clonidine was approved by the FDA for treating high blood pressure. Depakote was approved for treating epilepsy and acute mania in bipolar disorder. Seroquel was approved for treating schizophrenia and acute mania. None of the three was approved to treat ADHD or for long-term use in bipolar disorder, and none was approved for children Rebecca’s age. Rebecca’s two older siblings had been given the same diagnoses and were each taking three psychoactive drugs. The parents had obtained SSI benefits for the siblings and for themselves, and were applying for benefits for Rebecca when she died. The family’s total income from SSI was about $30,000 per year.9

Whether these drugs should ever have been prescribed for Rebecca in the first place is the crucial question. The FDA approves drugs only for specified uses, and it is illegal for companies to market them for any other purpose – that is, “off-label.” Nevertheless, physicians are permitted to prescribe drugs for any reason they choose, and one of the most lucrative things drug companies can do is persuade physicians to prescribe drugs off-label, despite the law against it. In just the past four years, five firms have admitted to federal charges of illegally marketing psychoactive drugs. AstraZeneca marketed Seroquel off-label for children and the elderly (another vulnerable population, often administered antipsychotics in nursing homes); Pfizer faced similar charges for Geodon (an antipsychotic); Eli Lilly for Zyprexa (an antipsychotic); Bristol-Myers Squibb for Abilify (another antipsychotic); and Forest Labs for Celexa (an antidepressant).

Despite having to pay hundreds of millions of dollars to settle the charges, the companies have probably come out well ahead. The original purpose of permitting doctors to prescribe drugs off-label was to enable them to treat patients on the basis of early scientific reports, without having to wait for FDA approval. But that sensible rationale has become a marketing tool. Because of the subjective nature of psychiatric diagnosis, the ease with which diagnostic boundaries can be expanded, the seriousness of the side effects of psychoactive drugs, and the pervasive influence of their manufacturers, I believe doctors should be prohibited from prescribing psychoactive drugs off-label, just as companies are prohibited from marketing them off-label.

The books by Irving Kirsch, Robert Whitaker, and Daniel Carlat are powerful indictments of the way psychiatry is now practiced. They document the “frenzy” of diagnosis, the overuse of drugs with sometimes devastating side effects, and widespread conflicts of interest. Critics of these books might argue, as Nancy Andreasen implied in her paper on the loss of brain tissue with long-term antipsychotic treatment, that the side effects are the price that must be paid to relieve the suffering caused by mental illness. If we knew that the benefits of psychoactive drugs outweighed their harms, that would be a strong argument, since there is no doubt that many people suffer grievously from mental illness. But as Kirsch, Whitaker, and Carlat argue convincingly, that expectation may be wrong.

At the very least, we need to stop thinking of psychoactive drugs as the best, and often the only, treatment for mental illness or emotional distress. Both psychotherapy and exercise have been shown to be as effective as drugs for depression, and their effects are longer-lasting, but unfortunately, there is no industry to push these alternatives and Americans have come to believe that pills must be more potent. More research is needed to study alternatives to psychoactive drugs, and the results should be included in medical education.

Comment: In addition to psychotherapy and exercise as alternatives to psychiatric drugs, we could add dietary changes (eliminating gluten grains, dairy products, and industrial oils, for example) and breathing/meditation programs such as Éiriú Eolas.

In particular, we need to rethink the care of troubled children. Here the problem is often troubled families in troubled circumstances. Treatment directed at these environmental conditions – such as one-on-one tutoring to help parents cope or after-school centers for the children – should be studied and compared with drug treatment. In the long run, such alternatives would probably be less expensive. Our reliance on psychoactive drugs, seemingly for all of life’s discontents, tends to close off other options. In view of the risks and questionable long-term effectiveness of drugs, we need to do better. Above all, we should remember the time-honored medical dictum: first, do no harm (primum non nocere).

1. See Marcia Angell, “The Epidemic of Mental Illness: Why?,” The New York Review, June 23, 2011.
2. Eisenberg wrote about this transition in “Mindlessness and Brainlessness,” British Journal of Psychiatry, No. 148 (1986). His last paper, completed by his stepson, was published after his death in 2009. See Eisenberg and L.B. Guttmacher, “Were We All Asleep at the Switch? A Personal Reminiscence of Psychiatry from 1940 to 2010,” Acta Psychiatrica Scand, No. 122 (2010).
3. Carol A. Bernstein, “Meta-Structure in DSM-5 Process,” Psychiatric News, March 4, 2011, p. 7.
4. The history of the DSM is recounted in Christopher Lane’s informative book Shyness: How Normal Behavior Became a Sickness (Yale University Press, 2007). Lane was given access to the American Psychiatric Association’s archive of unpublished letters, transcripts, and memoranda, and he also interviewed Robert Spitzer. His book was reviewed by Frederick Crews in The New York Review, December 6, 2007, and by me, January 15, 2009.
5. See L. Cosgrove et al., “Financial Ties Between DSM-IV Panel Members and the Pharmaceutical Industry,” Psychotherapy and Psychosomatics, Vol. 75 (2006).
6. David J. Kupfer and Darrel A. Regier, “Why All of Medicine Should Care About DSM-5,” JAMA, May 19, 2010.
7. Greg Miller, “Anything But Child’s Play,” Science, March 5, 2010.
8. Duff Wilson, “Child’s Ordeal Reveals Risks of Psychiatric Drugs in Young,” The New York Times, September 2, 2010.
9. Patricia Wen, “A Legacy of Unintended Side-Effects: Call It the Other Welfare,” The Boston Globe, December 12, 2010.

Marcia Angell
The New York Review of Books
Sun, 26 Jun 2011 14:33 CDT

Gluten Then and Now

© glutendoctors.blogspot.com

Over the past decade, conversations about gluten intolerance (GI) and celiac disease (CD) in the United States have gone from almost unheard of to commonplace. Chances are your local supermarket sells dozens of items labeled “gluten free” where none existed five years ago. Restaurants and school lunch programs frequently offer gluten-free alternatives. What happened?

Before I dive into that discussion, I want to clarify some terms to minimize confusion. “Gluten” is the general term for a mixture of tiny protein fragments (called polypeptides), which are found in cereal grains such as wheat, rye, barley, spelt, farro, and kamut. Gluten is classified into two groups: prolamines and glutelins. The most troublesome component of gluten is the prolamine gliadin. Gliadin is the cause of the painful inflammation in gluten intolerance and instigates the immune response and intestinal damage found in celiac disease. Although both conditions have similar symptoms (pain, gas, bloating, diarrhea), or sometimes no gastrointestinal symptoms at all, celiac disease is an autoimmune reaction to gluten that can cause severe degradation of the small intestine, whereas gluten intolerance/sensitivity is an inability to digest gliadin that causes no damage to the intestines.

The medical community’s use of improved diagnostic tools (saliva, blood, and stool tests; and bowel biopsies) as well as self-diagnosis by aware individuals has certainly contributed to the swelling ranks of people afflicted with these maladies; however, that’s not the whole story. A combination of hybridized grains, America’s growing appetite for snacks and fast food, and the genetics of gluten intolerance and celiac disease have brought discussions of these once uncommon conditions front and center.

New evidence indicates that the hybrid versions of grains we eat today contain significantly more gluten than traditional varieties of the same grains. Experts such as Dr. Alessio Fasano, medical director of the Center for Celiac Research at the University of Maryland School of Medicine, believe this recent increase in the amount of gluten in our diet has given rise to the number of people suffering from gluten intolerance and celiac disease.

According to Fasano,

“The prevalence of celiac disease in this country is soaring partly because changes in agricultural practices have increased gluten levels in crops.” He further states, “We are in the midst of an epidemic.”

For example, the ancient wheat that Moses ate was probably very different from our wheat today. Moses lived about 3,500 years ago, when wheat, spelt, and barley were all popular grains. Modern wheat varieties, however, have been bred to grow faster, produce bigger yields, harvest more efficiently, and bake better bread. The downside to today’s hybridized cereal grains is that they contain more gluten.

Celiac disease was once considered a rare malady, estimated to afflict approximately 1 in 2,000 people in the United States. According to research done by the Mayo Clinic, CD is four times more common today than it was five decades ago. This increase is due in part to increased awareness and better diagnostics, and the estimate today is that 1 out of every 133 people in the United States has celiac disease. For more facts and figures, see the University of Chicago Celiac Disease site.

Here are estimates for other parts of the world (a quick conversion of these figures to percentages follows the list):

  • 3 in 100: United Kingdom
  • 1 in 370: Italy
  • 1 in 122: Northern Ireland
  • 1 in 99: Finland
  • 1 in 133: United States
  • 1 in 236: African-, Hispanic-, and Asian-Americans (populations in which CD was once thought rare)
  • 1 in 30: estimated prevalence of gluten intolerance in the United States
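
To put these “1 in N” figures on a common scale, here is a minimal Python sketch that converts each estimate quoted above into an approximate percentage of the population; the numbers are simply the ones listed, nothing new is added:

    # Convert the "1 in N" prevalence estimates quoted above to percentages.
    estimates = {
        "United Kingdom (CD)": 100 / 3,   # "3 in 100" expressed as 1 in N
        "Italy (CD)": 370,
        "Northern Ireland (CD)": 122,
        "Finland (CD)": 99,
        "United States (CD)": 133,
        "African-, Hispanic-, and Asian-Americans (CD)": 236,
        "United States (gluten intolerance)": 30,
    }

    for group, one_in_n in estimates.items():
        print(f"{group}: about {100 / one_in_n:.1f}% of the population")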

More than 6,000 years before Moses was born, an agricultural revolution took place in the Middle East that allowed humans to embrace farming (sowing and harvesting wild seeds), herding, and other forms of agriculture, and to move away from the hunter-fisher-gatherer way of life of our ancestors. This was the first major introduction of gluten into the human diet.

Comment: For a more in-depth look at the shift from hunter-gatherer societies to the agricultural revolution, read the excellent article Paradise Lost in issue 13 of the Dot Connector Magazine.

According to Dr. Loren Cordain, PhD, author of The Paleo Diet,

“The foods that agriculture brought us – cereals, dairy products, fatty meats, salted foods, and refined sugars and oils – proved disastrous for our Paleolithic bodies…. Studies of the bones and teeth of early farmers revealed that they had more infectious diseases, more childhood mortality, shorter life spans, more osteoporosis, rickets, and other bone mineral density disorders than their ancestors thanks to the cereal-based diet. They were plagued with vitamin and mineral deficiencies and developed cavities in their teeth.”

In other words, people traded their health for sustainable food sources and a less nomadic way of life.

Two hundred years ago, the global diet received another big injection of gluten with the birth of the Industrial Revolution and steam-powered mills that were able to produce refined-grain flours that had significantly longer shelf lives, making flour (aka: gluten) more accessible and available to an almost global market. “We were able to mill and process grains for consumption and eat them in larger quantities than we had ever done in the past,” writes Cordain.

Jack Challem, “The Nutrition Reporter,” offers a different long view of human consumption of gluten:

“Looked at another way, 100,000 generations of people were hunter-gatherers, 500 generations have depended on agriculture, only 10 generations have lived since the start of the industrial age, and only two generations have grown up with highly processed fast foods. The short period of time, in the course of man’s existence, that grains have been around has proven that many of us are not physiologically able to tolerate gluten.”

Historical evidence of people having trouble digesting gluten was first documented in the 2nd century A.D., when the Greek physician Aretaeus of Cappadocia diagnosed patients with celiac disease. The symptoms included “wasting and characteristic stools.” Since Aretaeus’ time, the disease has gone by a variety of names, including “non-tropical sprue,” “celiac sprue,” “non-celiac gluten intolerance,” “gluten intolerance enteropathy,” and “gluten sensitive enteropathy.”

Fast forward to 1950, when the Dutch pediatrician Willem-Karel Dicke proposed that wheat gluten was the cause of the disease. His theory was based on observations that celiac children improved during World War II, when wheat was scarce in Holland.

As Challem points out, today, thanks in large part to the fast food and snack food industries, gluten is in just about every kind of food imaginable.

So Why Can’t Everyone Handle Gluten?

People who carry any of the genes for CD and GI (expressed or not) are more susceptible to developing either condition. You can carry two dominant genes for celiac disease and perhaps end up developing CD, or you can carry one dominant gene and one recessive gene and develop only GI. Your genes determine the body’s immune response in the presence of gluten, and many different health problems may result from that response. Some people may have their brain affected and develop cognitive problems such as depression or impaired brain function, while others suffer pancreatic problems and develop diabetes. Research still needs to be done to answer the question of why these maladies affect different parts of the body in different people.
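
To make that simplified dominant/recessive description concrete, here is a hypothetical Python sketch. The gene encoding and the mapping to outcomes are illustrative assumptions drawn only from the paragraph above, not a clinical tool; real celiac genetics (the HLA-DQ2/DQ8 variants) are considerably more complex.

    def susceptibility(gene1: str, gene2: str) -> str:
        """Toy encoding of the model above: 'D' = dominant CD/GI gene,
        'r' = recessive gene, '-' = neither. Illustrative only."""
        dominant = [gene1, gene2].count("D")
        recessive = [gene1, gene2].count("r")
        if dominant == 2:
            return "may develop celiac disease (CD)"
        if dominant == 1 and recessive == 1:
            return "may develop gluten intolerance (GI) only"
        return "lower susceptibility in this simplified model"

    print(susceptibility("D", "D"))  # may develop celiac disease (CD)
    print(susceptibility("D", "r"))  # may develop gluten intolerance (GI) only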

When populations that are genetically predisposed to CD and GI are exposed to cereal grains with higher gluten content, it is little wonder that more people are having these genes “turned on” and are developing gluten intolerance on a much larger scale – especially now that the flour made from these grains is among the “hidden ingredients” in foods from ice cream to lunch meats.

OK, Now What?

So, gluten has changed, and we have changed, and it appears not for the better. Fortunately or unfortunately, depending on how you look at it, identifying and eliminating the foods and ingredients from your life that do not work for your body is the only answer. There is no magic pill to take to make it all go away.

If you, or someone you know, is experiencing major health issues that aren’t getting better, enlisting a knowledgeable physician who understands the complexities of CD and GI testing is an excellent idea; however, on average, it takes the medical community 10 years to diagnose people who are suffering from severe health problems caused by undiagnosed CD and GI.

The Bottom Line

Gluten intolerance is not a fad diet. I have seen countless cases display miraculous improvements in long-standing ailments – simply by adopting this lifestyle. Even if your test for CD comes back negative and the medical community clears you to continue eating gluten, if you feel better without it, listen to your body. You know yourself far better than anyone else and you deserve good health. If you have doubts about your diet, try going gluten-free for two weeks and see how you feel. Those with more advanced illnesses (autoimmune diseases and such) will usually not experience changes until they have been gluten-free for six months to a year.

About the author

Julie McGinnis, M.S., R.D., certified herbalist, holds a Master’s degree in nutrition from the University of Bridgeport in Connecticut and has been involved in the field of nutrition for twenty years. Upon completing her herbal certification she began her career in complementary health and worked for years in research and development for a professional line of nutrition supplements. She has written professional nutrition and health literature for national retailers and other small businesses. She is one of three owners of The Gluten Free Bistro in Boulder, CO.

Comment: For more information on wheat and gluten intolerance, read the following articles:

The Dark Side of Wheat – New Perspectives on Celiac Disease and Wheat Intolerance
Opening Pandora’s Bread Box: The Critical Role of Wheat Lectin in Human Disease
Gluten: What You Don’t Know Might Kill You
Facts you might not know about gluten
Book Review: Gluten Toxicity – The Mysterious Symptoms of Celiac Disease, Dermatitis Herpetiformis, and Non-Celiac Gluten Intolerance
Can You Stomach Wheat? How Giving up Grain May Better Your Health
Just because someone doesn’t have coeliac disease, doesn’t mean they don’t have a problem with gluten
Beyond Gluten-Free: The Critical Role of Chitin-Binding Lectins in Human Disease
Gluten Sensitivity and the Impact on the Brain


You Are A Radical, And So Am I: Paleo Reaches The Ominous "Stage 3"

As Mahatma Gandhi once said:

“First, they ignore you.
Then they laugh at you.
Then they fight you.
Then you win.”

The paleo movement grew slowly for many years in the obscurity of Stage 1, and spent perhaps a year in Stage 2 being mocked as the “caveman diet”. Now, after several years of exponential growth and a stubborn refusal to be co-opted, we have finally achieved Stage 3, “Then they fight you.”

The latest example, of course, is the dismissive baloney pushed by the “experts” hired by US News and World Report, which ranks paleo dead last among 20 different diets – behind such food-free nutritional gimmicks as Slim-Fast and the Volumetrics Diet.

I have my differences with Loren Cordain, but his rebuttal is both comprehensive and devastating. Respect.

Also, the reader votes for “Did this diet work for you?” show that paleo is by far the most successful of the diets, with several thousand “Yes” votes and under 100 “No” votes. The only other diets to win more “Yes” than “No” votes were Weight Watchers and Atkins!

Before that, we saw the repeated hatchet jobs on grass-fed beef by John Stossel (one in print and one on video), both of which use as their sole source non-peer-reviewed “research” from a scientist sponsored entirely by Elanco – a subsidiary of Eli Lilly that manufactures antibiotics and hormonal growth promoters for feedlot cattle, pigs, and chickens! President Obama’s personal trainer, in a breathtakingly stupid article, has called paleo “the newest nutritional fad”. (Solid rebuttal here.) And the avalanche of anti-paleo articles by “qualified experts” has just begun.

We might expect this sort of offensive from the American Dietetic Association and the American Diabetes Association, because anyone eating any variation on the paleo diet is essentially telling them “All of you have been completely wrong for decades, and your bad advice is killing millions of people each year.” And we might expect this sort of offensive from the PCRM, because they’re just a front for vegans and animal rights activists.

But why such hostility from the mainstream press? The paleo movement doesn’t have the anti-establishment politics of, say, the vegan movement – or, as far as I can tell, any clear politics at all. If anything, it tends towards a casual sort of conservative libertarianism, but that’s probably just because so many paleo bloggers are semi-retired techies. It seems unlikely that “going paleo” is the cause of anyone’s political beliefs.

The answer, of course, lies in the age-old admonition:

“Follow the money.”

Adding Value: The Foundation Of Functional Economies

An economy based entirely on selling the same things back and forth to each other for ever-increasing amounts of money is doomed to eventual collapse. As we found out just a couple years ago, flipping houses isn’t the same as having a manufacturing base and a world that wants to buy what we make.

Stated more generally, an economy is only sustainable to the degree that its participants add value by their actions. Factory workers add value by turning raw materials into clothes and cars and electronics; farmers turn seeds and soil and sunlight into crops; ranchers turn calves and grass into beef; engineers turn ideas into buildable products; truckers move things from where they are to where they’re wanted; cashiers and stockers and managers and janitors turn a locked building full of things into a system by which you can find what you want and exchange money for it; and so on. Added value – cost of design – cost of production = profit.
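
As a minimal worked example of that identity (the dollar figures below are invented purely for illustration, not taken from the article or any real business):

    def profit(added_value: float, design_cost: float, production_cost: float) -> float:
        """Profit = added value - cost of design - cost of production."""
        return added_value - design_cost - production_cost

    # A hypothetical coat rack: buyers value it at $40 over the raw lumber,
    # it cost $5 to design and $15 to build, leaving $20 of profit.
    print(profit(40.0, 5.0, 15.0))  # 20.0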

This is not to be confused with the labor theory of value, which claims that labor has intrinsic value, and indeed, is the only ‘fair’ measure of value. This is hogwash. It’s easy to spend years of effort and not add a single penny of value – because value is determined by the buyer, not the seller. It doesn’t matter how much time I’ve spent making a coat rack if it’s ugly and no one wants to buy it!

Moving ahead: the more value we can add, the more profit we can make. It should be obvious that one way to add value is to transform cheap raw materials into an expensive finished product.

Adding Value: Why Grains (And Soy) Are Profitable

There’s one big reason that industrial food manufacturers like Kraft (Nabisco, Snackwells, General Foods, many more), Con-Agra (Chef Boy-Ar-Dee, Healthy Choice, many more), Pepsico (Frito-Lay, Quaker), and Kellogg’s (Kashi, Morningstar Farms, Nutrigrain, more) are huge and profitable.

It’s because grains are cheap, but the “foods” made from them aren’t.

One reason grains are so cheap in the USA, of course, is gigantic subsidies for commodity agriculture that, while advertised as helping farmers, go mostly to agribusinesses like Archer Daniels Midland ($62 billion in sales), Cargill ($108 billion), ConAgra ($12 billion), and Monsanto ($11 billion) – and result in a corn surplus so large that we are forced to turn corn into ethanol and feed it to our cars, at a net energy loss!

“There isn’t one grain of anything in the world that is sold in a free market. Not one! The only place you see a free market is in the speeches of politicians. People who are not in the Midwest do not understand that this is a socialist country.”
-Dwayne Andreas, then-CEO of Archer Daniels Midland

“At least 43 percent of ADM’s annual profits are from products heavily subsidized or protected by the American government. Moreover, every $1 of profits earned by ADM’s corn sweetener operation costs consumers $10.” (Source.)

(And if you’re not clear on just how deeply in control of our government these corporations are, here’s another example: Leaked cables reveal that US diplomats take orders directly from Monsanto.)

That cheapness, however, doesn’t translate to profits for farmers or cheap food at the supermarket. Let’s do some math!

(Note: these are regular prices from the CBOT and my local supermarket, as of today. Supermarket prices will be somewhat cheaper on sale or at Costco.)

A bushel of corn weighs 56 pounds and costs $6.85. That’s 12.2 cents per pound.
A bag of Tostitos contains about 10 cents worth of corn, and costs $4.00.
That’s a 4000% increase.

A bushel of wheat weighs 60 pounds and costs $7.62. That’s 12.7 cents per pound.
A loaf of Wonder Bread contains about 16 cents worth of wheat, and sells for $4.40.
That’s a 2700% increase.

A bushel of soybeans weighs 60 pounds and costs $13.64. That’s 22.7 cents per pound.
A box of “Silk” soy milk contains about 4.5 cents worth of soybeans, and sells for $2.90.
That’s a 6400% increase.

In other words, it’s highly profitable to turn the products of industrial agriculture – cereal grains and soybeans – into highly processed “food”.

It’s not the snack aisle, the cereal aisle, or even the bread aisle…it’s the profit aisle.

Note that the profit for the processors and middlemen comes out of the pockets of the producer and the consumer. Farmers are squeezed by the 12 cents per pound, and consumers are squeezed by the $4.40 per loaf.

In contrast, pork bellies cost $1.20 per pound today.
A pound of bacon costs about $5.
That’s a 400% increase…

…which looks like a lot until you compare it with 2700%-6400% for grains. Also, unlike grain products, bacon must be stored, shipped, and sold under continuous refrigeration – and it has a much shorter shelf life.
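
Here is a short Python sketch that reproduces the markup arithmetic above, using only the bushel weights, commodity prices, and retail figures quoted in this article (spot and supermarket prices obviously change, so treat the output as illustrative). Note that the “percent increase” figures above correspond to retail-to-ingredient price ratios expressed as percentages, which is what the sketch computes:

    # Reproduce the markup arithmetic above from the article's own figures.
    products = [
        # (name, lbs per bushel, $ per bushel, $ of commodity in item, retail $)
        ("Tostitos (corn)",      56, 6.85,  0.10,  4.00),
        ("Wonder Bread (wheat)", 60, 7.62,  0.16,  4.40),
        ("Silk soy milk (soy)",  60, 13.64, 0.045, 2.90),
        ("Bacon (pork bellies)", None, None, 1.20, 5.00),  # both figures per pound
    ]

    for name, lbs_per_bushel, price_per_bushel, ingredient_cost, retail_price in products:
        if lbs_per_bushel is not None:
            cents_per_lb = 100 * price_per_bushel / lbs_per_bushel
            print(f"{name}: commodity costs about {cents_per_lb:.1f} cents per pound")
        ratio = 100 * retail_price / ingredient_cost
        print(f"{name}: retail price is roughly {ratio:.0f}% of the ingredient cost")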

It’s clear that it’s far more profitable to sell us processed grain products than meat, eggs, and vegetables…which leaves a lot of money available to spend on persuading us to buy them. Are you starting to understand why grains are encased in colorful packaging, pushed on us as “heart-healthy” by the government, and advertised continually in all forms of media?

And when we purchase grass-fed beef directly from the rancher, eggs from the farmer, and produce from the grower, we are bypassing the entire monumentally profitable system of industrial agriculture – the railroads, grain elevators, antibiotics, growth hormones, plows, combines, chemical fertilizers (the Haber process, by which ammonium nitrate fertilizer is made, uses 3-5% of world natural gas production!), processors, inspectors, fortifiers, manufacturers, distributors, and advertisers that profit so handsomely by turning cheap grains into expensive food-like substances.

Conclusion: You Are A Radical (And So Am I)

Simply by eating a paleo diet, we have made ourselves enemies of the establishment, and will be treated henceforth as dangerous radicals.

This is not a conspiracy theory. By eschewing commodity crops and advocating the consumption of grass-fed meat, pastured eggs, and local produce, we are making several very, very powerful enemies.

  • The medical and nutritional establishments hate paleo, because we’re exposing the fact that they’ve been wrong for decades and have killed millions of people with their bad advice.
  • The agribusinesses and industrial food processors hate paleo, because we’re hurting their business by not buying their highly profitable grain- and soy-based products.
  • The mainstream media hates paleo, because they profit handsomely from advertising those grain- and soy-based products.
  • The government hates paleo, because they’re the enforcement arm of big agribusinesses, industrial food processors, and mainstream media – and because their subsidy programs create mountains of surplus grain that must be consumed somehow.

Is anyone surprised that a government which spends billions of dollars subsidizing the production of corn, soy, and wheat, would issue nutritional recommendations emphasizing the consumption of corn, soy, and wheat?

And this is why, despite all their rhetoric, the vegans end up on the same team as Monsanto and Pepsico: their interests are aligned.


In summary:

  • Expect more hatchet jobs against “paleo” in the media.
  • Expect more statistical manipulation marketed as “science”.
  • Expect outright scientific fraud in support of “hearthealthywholegrains” and in opposition to meat and eggs – particularly grass-fed, pastured, and local meats.
  • Expect more breathless news articles that purposely misinterpret archaeological evidence of a few specks of root starches to imply that Paleolithic humans ate a diet based on cereal grains.
  • Expect subtle attempts to subvert “paleo” by repackaging ancestral varieties of grain (e.g., kamut, spelt, amaranth) and obscure fruit extracts (mostly made of apple and grape juice) and selling them back to us in colorful boxes.
  • Expect even more regulations and restrictions that destroy local, sustainable farming by forcing its nutritionally superior products into the commodity system, instead of allowing us as individuals to buy them directly – which, in the case of meat, we already can’t. (The regulations have gotten even worse since Joel Salatin wrote this woefully true article, “Everything I Want To Do Is Illegal.”)
  • Expect bogus scare stories in the media about the dangers of local meat, eggs, cheese, and produce from small farmers. Expect bogus salmonella scares and fraudulent inspections that claim meaningless or nonexistent bacterial contamination.
  • Expect school lunches to become even more disgusting and empty of nutrition. If you want your child to grow up healthy, expect to help them pack a lunch every day. Expect to be grilled by suspicious administrators who think you’re damaging your child by feeding them real food.
  • Expect friends and co-workers to see and believe this propaganda. Expect that you will have to defend your dietary choices against a steadily increasing chorus of suspicion and hostility.

None of this should discourage any of us from continuing to do what’s right for ourselves, our families, our children, and the Earth. We are healthy, we are happy, we are strong, and we will win because everyone else wants to be as healthy as we are.

Be proud of your health and how you’ve attained it. Do not be pushy, but do not apologize for how you eat and live. And, most importantly, be prepared for the storm.

Live in freedom, live in beauty.

J. Stanton
Gnolls.org
Sun, 19 Jun 2011 10:51 CDT


America's Most Dangerous Pill?

It’s not Adderall or Oxy. It’s Klonopin. And doctors are doling it out like candy, causing a surge of hellish withdrawals, overdoses and deaths

You could argue that the deadliest “drug” in the world is the venom from a jellyfish known as the Sea Wasp, whose sting can kill a human being in four minutes—up to 100 humans at a time. Potassium chloride, which is used to trigger cardiac arrest and death in the 38 U.S. states that enforce the death penalty, is also pretty deadly. But when it comes to prescription drugs that are not only able to kill you but can drag out the final reckoning for years on end, with worsening misery at every step of the way, it is hard to top the benzodiazepines. And no “benzo” has been more lethal to millions of Americans than a popular prescription drug called Klonopin.

Klonopin is the brand name for the pill known as clonazepam, which was originally brought to market in 1975 as a medication for epileptic seizures. Since then, Klonopin, along with the other drugs in this class, has become a prescription of choice for drug abusers from Hollywood to Wall Street. In the process, these Schedule III and IV substances have also earned the dubious distinction of being second only to opioid painkillers like OxyContin as our nation’s most widely abused class of drug.

Seventies-era rock star Stevie Nicks is the poster girl for the perils of Klonopin addiction. In almost every interview, the former lead singer of Fleetwood Mac makes a point of mentioning the toll her abuse of the drug has taken on her life. This month, while promoting her new solo album, In Your Dreams, she told Fox that she blamed Klonopin for the fact that she never had children. “The only thing I’d change [in my life] is walking into the office of that psychiatrist who prescribed me Klonopin. That ruined my life for eight years,” she said. “God knows, maybe I would have met someone, maybe I would have had a baby.”

Nicks checked herself into the Betty Ford Clinic in 1986 to overcome a cocaine addiction. After her release, the psychiatrist in question prescribed a series of benzos—first Valium, then Xanax, and finally Klonopin—supposedly to support her sobriety. “[Klonopin] turned me into a zombie,” she told US Weekly in 2001, according to the website Benzo.org, one of many patient-run sites on the Internet offering information about benzodiazepine addiction, withdrawal and recovery. Nicks has called Klonopin a “horrible, dangerous drug,” and said that her eventual 45-day hospital detox and rehab from the drug felt like “somebody opened up a door and pushed me into hell.” Others have described Klonopin’s effects as beginning with an energized sense of euphoria but ending in a horrifying sense of anxiety and paralysis, akin to sticking your tongue into an electric outlet or suddenly feeling that your brain is on fire.

When benzodiazepines first came to market in the 1950s and 1960s, they were prescribed for a range of neurological disorders such as epilepsy, as well as for anxiety-related complaints and insomnia. But over time, a loophole in federal drug-control laws known as the “practice of medicine exception” has permitted psychiatrists and other physicians to prescribe the drugs for any perceived disorder or symptom imaginable, from panic attacks to weight-control problems. In much the same way, Valium became infamous as “mother’s little helper,” a sedative used to pacify a generation of bored and frustrated suburban housewives.

Alcoholics and drug addicts are most likely to run into Klonopin during detox, when it is used to prevent seizures and control the symptoms of acute withdrawal. Klonopin takes longer to metabolize and passes through your system more slowly than other benzos, so in theory you don’t need to take it as frequently. But if you like the high it gives you and keep increasing your dosage, the addictive effects of the drug accumulate quickly and can often be devastating. The drug’s label clearly specifies that it is “recommended” only for short-term use—say, seven to 10 days—but once exposed to the pill’s seductive side effects, many patients come back for more. And not surprisingly, many doctors are happy to refill prescriptions to meet this consumer demand. In the process, countless people swap one addiction for another, often worse than the one they were trying to treat. Although benzodiazepines are rarely reported to be the cause of single-drug overdoses, they show up with great frequency in deaths from so-called combined drug intoxication, or CDI. In recent years there have been thousands of deaths caused by this lethal combination. The drug has also helped hasten the deaths of a long list of otherwise healthy celebrities:

In 1996, actress Margaux Hemingway committed suicide by overdosing on a barbiturate-benzodiazepine cocktail. Weeks later, Hollywood movie producer Don Simpson (Beverly Hills Cop) also died from an unintentional benzo-based overdose. Klonopin was one of 11 different prescription drugs—all written by the same doctor—found in the body of Playboy centerfold model Anna Nicole Smith, who OD’d in 2007. In 2008, the well-known Los Angeles author David Foster Wallace, who was suffering from a profound depression when a doctor prescribed him Klonopin, went into his backyard on a September evening and hanged himself with a leather belt he had nailed to an overhead beam on his patio. Klonopin has been striking down more than just troubled celebrities, however. In 2008, reports began to surface of soldiers returning from Iraq with post-traumatic stress disorder who were dying in their sleep, the victims of a psych-med cocktail of Klonopin, Paxil (an antidepressant), and Seroquel, an antipsychotic that is routinely prescribed by VA hospitals.

Hospital emergency room visits for benzodiazepine abuse now dwarf those for illegal street drugs by more than a three-to-one margin, a trend that has been building for at least the last five years. In 2006, the U.S. government’s Substance Abuse and Mental Health Services Administration published data showing that prescription drugs were the number two reason for drug-abuse-related ER admissions that year, slightly behind illicit substances like heroin and cocaine. But a survey released by the agency earlier this year claims that benzos, opioids and other prescription meds are now responsible for the majority of drug-related hospital visits.

 

Scientists can’t say for sure what Klonopin does when ingested, except that it dramatically affects the functioning of the brain. This much we know: if your brain is on fire with electrical signals—like, say, you’re having an epileptic seizure—a dose of clonazepam will help put out the flames. It does so by lowering the electrical activity of the brain, but precisely which electrical activities it suppresses is something no one really seems to know for sure. And therein lies the reason why clonazepam, like nearly the entire class of benzos, causes such unpredictable reactions in people. Put simply, the brain is just too complex a structure for its owners to understand—and when you start monkeying around with the way it functions, it’s anybody’s guess what will happen next.

Here’s how the respected neurosurgeon Frank Vertosick, Jr., describes the brain in his book When The Air Hits Your Brain: Parables of Neurosurgery: “The human brain: a trillion nerve cells storing electrical patterns more numerous than the water molecules of the world’s oceans.” So, if clonazepam is given to a patient with a history of epileptic seizures, it is likely to bring the symptoms under control. But give the same drug to a person suffering from a completely different problem (an eating or sleeping disorder, for example), and it might actually cause an epileptic seizure.

Clonazepam has wreaked such havoc on people partly because it is so highly addictive; anyone who takes it for more than a few weeks may well develop a dependence on it. As a result, you can be prescribed Klonopin as a short-term treatment for, say, insomnia, and wind up so hooked on it that you’ll begin frantically “doctor shopping” for new prescriptions if the first physician who gave it to you refuses to renew the prescription. As with all benzos, use of Klonopin for more than a month can lead to a dangerous condition known as “benzodiazepine withdrawal syndrome,” featuring elevated heart rate and blood pressure along with insomnia, nightmares, hallucinations, anxiety, panic, weight loss, muscular spasms or cramps, and seizures.

Along with Klonopin, here are the three other benzos that, by general agreement, have made it into the top ranks of the world’s worst and most widely abused drugs: temazepam, alprazolam, and lorazepam.

Temazepam: Sold in the U.S. under the brand name Restoril, this benzo was developed and approved in the 1960s as a short-term treatment for insomnia. It is basically what is commonly called a “knockout drop.” Taken even in relatively modest dosages, temazepam can produce a powerfully hypnotic effect that numbs users and makes them extremely compliant and susceptible to control. But thanks to the “practice of medicine exception,” physicians can prescribe it for anything they want.

During the Cold War, the Soviet Union reportedly used temazepam extensively to keep political dissidents in a drugged-out state in government-run psychiatric hospitals. Both the CIA and the KGB are also said to have used the sleeping pill in prisoner interrogations and in research into mind control, brainwashing and social engineering.

Temazepam is sometimes referred to as a “date rape” drug, and it figures frequently in drug-related crimes of violence. In the drug world underground, where it is often sold as an alternative to heroin and crack cocaine, it goes by such street names as “tams,” “Vitamin T,” “terminators,” “big T,” “mind eraser” and “Mommy’s Big Helper.” Common side-effects include confusion, clumsiness, chronic drowsiness, impaired learning, memory and motor functions, as well as extreme euphoria, dizziness and amnesia.

Alprazolam: Brand name Xanax, this benzo now accounts for as many as 60% of all hospital admissions for drug addiction, according to some research. What’s more, violent and psychotic responses to Xanax are not limited to humans. In February 2009, a 200-lb chimpanzee being kept as a house pet by a Stamford, Conn., woman went on a rampage after being dosed with Xanax, escaping into the neighborhood and ripping off the face of a friend of its owner.

 

Lorazepam: Brand name Ativan, this drug has figured in an array of well-publicized homicides and suicides by those using it. Ativan surfaced in the 2000 divorce case between Washington, D.C., socialite Patricia Duff and her husband, Wall Street billionaire Ronald Perelman. In deposition testimony, Perelman acknowledged taking Ativan as an anti-anxiety drug during his separation from Duff and the commencement of divorce proceedings. The period was marked by numerous outbursts by Perelman and at least two physical assaults on Duff. In 2008, news reports revealed that Ativan was being used by the U.S. Customs Service to keep suspected terrorists sedated while deporting them to detention facilities abroad.

You can buy any of these “feel-good” drugs without a doctor’s signature by simply typing the name into any Internet search engine. Instantly, you’ll be presented with dozens of websites, both foreign and domestic, where you can make your purchase, no prescription required. (Most of the websites accept all major credit cards.)

Why has all this happened? In large measure you can thank the 47,000 members of the American psychiatric profession for this dreadful state of affairs. Neither the pharmaceutical industry nor the psychiatric profession would be anywhere near as lucrative as they are today without their mutual support system. Together they have created a marketing juggernaut that over the last 20 years has spawned a seemingly nonstop gusher of profits that is only now beginning to slow—and probably only temporarily.

The scholarly journals of the psychiatric profession were filled with early warnings, beginning almost 50 years ago, from those who could see where the encroaching influence of the drug companies was destined to lead the profession. Now, even the medical journals themselves have been corrupted by the hidden hand of Big Pharma. In 2008, the New York Times reported that a survey of the six top medical journals showed that on average almost 8% of the bylined articles published in their pages were ghostwritten by freelance writers, then published under the names of cooperating doctors and researchers to give the pro-drug messages contained in the articles the appearance of impartiality. The scheme is bankrolled, of course, by the company that makes the drug.

Consider Dr. Joseph Biederman, the world-renowned Harvard University psychiatrist and father of modern psychopharmacology for children, who, it now turns out, has been taking secret “consulting fees” from drug companies for years. Biederman is widely credited with legitimizing the concept of “bipolar disorder” as a chemical imbalance in the brain that can be corrected with psychiatric drugs. But documents uncovered by Senate investigators probing ties between the psychiatric profession and the drug industry, which have resulted in an explosion in medically approved uses for psychiatric drugs for children, show that Biederman received more than $1.6 million in undisclosed payments since 2000 from the pharmaceutical companies manufacturing the drugs he was encouraging parents to give to their children if they appeared to be “bipolar.”

No surveys that I am aware of have ever been conducted regarding the public’s impression of what psychiatrists actually do. But judging from pop-culture characters such as the fictional female psychiatrist Dr. Jennifer Melfi in the HBO series The Sopranos, the general belief seems to be that psychiatrists are learned and humane professionals who counsel their patients through hour-long “talk therapy” sessions in their offices once a week, or more frequently if necessary, to help them resolve their conflicts.

In fact, many do nothing of the sort. It may be only a patient’s first session with a psychiatrist that lasts any meaningful amount of time. In this initial consultation the psychiatrist relies on the DSM manual as the diagnostic tool to decide precisely what the patient suffers from. Once that is established, the psychiatrist can begin prescribing psych meds as therapy, free of fear about the danger of a medical malpractice suit lurking down the road.

The follow-up sessions (weekly, monthly, etc.) that come after the initial consultations—that is, the sessions that are portrayed on The Sopranos as the occasions when Mafia killer Tony Soprano sits down in Dr. Melfi’s darkened office and pours out his guts about his troubled childhood—usually last as little as 15 minutes. During these so-called “med checks,” a psychiatrist typically charges $100 or more for asking the patient little more than how he or she is responding to the prescribed medication—a question that can usually be answered by a quick glance at the patient’s demeanor.

At the end of such a med-check, the psychiatrist may decide to renew the patient’s current prescription, substitute or add a new one—or even offer the patient a free sample of some new psych-med, courtesy of a sales rep from a pharmaceutical company. At four med-checks per hour, a psychiatrist with enough patients to fill up his workdays can easily make $120,000 annually from his med-check practice alone and still take a month-long summer vacation.
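
As a rough sketch of that arithmetic: the four-checks-per-hour pace and the $100 fee come from the paragraph above, while the number of med-check hours per year is an assumption chosen only to show how quickly the $120,000 figure is reached.

    def med_check_revenue(checks_per_hour: int, fee_per_check: float, hours_per_year: float) -> float:
        """Annual revenue from med-check visits alone."""
        return checks_per_hour * fee_per_check * hours_per_year

    # At 4 checks/hour and ~$100 each, about 300 hours of med-checks a year
    # (roughly 6-7 hours a week; an assumption, not a figure from the article)
    # already yields the $120,000 mentioned above.
    print(med_check_revenue(4, 100.0, 300))  # 120000.0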

It’s obvious that this system financially incentivizes doctors to keep prescribing drugs in order to keep patients returning for med-checks. But Big Pharma offers a whole host of additional income opportunities. Last year, ProPublica, the Pulitzer Prize–winning public-interest investigative website, published an extensive report on the financial compensation drug companies shower on physicians. Aptly titled “Dollars for Docs,” the series included a database of more than 17,000 doctors who accepted “speaker fees” and other money from eight drug companies in 2009 and 2010, totaling $320 million.

That accounting is only the tip of the iceberg, however, as most pharmaceutical companies have refused to disclose their physician payments. Not surprisingly, most doctors interviewed by ProPublica denied that their medical decisions and prescribing habits were influenced by drug company payments. The new healthcare reform bill calls for greater transparency, requiring all drug-makers to disclose all fees paid to all doctors by 2014. Until then, you can type your doctor’s name into the database to find out if he or she is on the pharma take, and for how much.

 

Christopher Byron is a prize-winning investigative journalist and New York Times best-selling author. His columns and articles have appeared in dozens of major publications, including New York Magazine, Fortune, The New York Times and The New York Post. He has also been a regular guest commentator on CNN, Fox, and CNBC. This article is exclusively excerpted from his forthcoming book, Mind Drugs, Inc.: How Big Pharma and Modern Psychiatry Have Corrupted Washington and Destroyed Mental Health in America.
