Serious Mental Illness Blog

Official blog for LIU Post's Clinical Psychology Doctorate SMI Specialty Concentration

Posts tagged psychotherapy

20 notes


Why Doctors Need Stories
By Peter D. Kramer, The New York Times Opinion Pages

 A FEW weeks ago, I received an email from the Danish psychiatrist Per Bech that had an unexpected attachment: a story about a patient. I have been writing a book about antidepressants — how well they work and how we know. Dr. Bech is an innovator in clinical psychometrics, the science of measuring change in conditions like depression. Generally, he forwards material about statistics.

Now he had shared a recently published case vignette. It concerned a man hospitalized at age 30 in 1954 for what today we call severe panic attacks. The treatment, which included “narcoanalysis” (interviewing aided by a “truth serum”), afforded no relief. On discharge, the man turned to alcohol. Later, when sober again, he endured increasing phobias, depression and social isolation.

Four decades later, in 1995, suicidal thoughts brought this anxious man back into the psychiatric system, at age 70. For the first time, he was put on an antidepressant, Zoloft. Six weeks out, both the panic attacks and the depression were gone. He resumed work, entered into a social life and remained well for the next 19 years — until his death.

If the narrative was striking, so was its inclusion in a medical journal. In the past 20 years, clinical vignettes have lost their standing. For a variety of reasons, including a heightened awareness of medical error and a focus on cost cutting, we have entered an era in which a narrow, demanding version of evidence-based medicine prevails. As a writer who likes to tell stories, I’ve been made painfully aware of the shift. The inclusion of a single anecdote in a research overview can lead to a reprimand, for reliance on storytelling.

My own view is that we need storytelling in medicine, need it for any number of reasons.

Repeatedly, I have been surprised by the impact that even lightly sketched case histories can have on readers. In my book “Listening to Prozac,” I wrote about personality and how it might change on medication. “Should You Leave?” concerned theories of intimacy. Readers, however, often used the books for a different purpose: identifying depression. Regularly, I received — and still receive — phone calls: “My husband is just like — ” one or another figure from a clinical example. For a decade and more, public health campaigns had circulated symptom lists meant to get people to recognize mood disorders, and still there remained a role for narrative to complete the job.

Other readers wrote to say that they’d recognized themselves. Seeing that they were not alone gave them hope. Encouragement is another benefit of case description, familiar to us in this age of memoir.

But vignettes can do more than illustrate and reassure. They convey what doctors see and hear, and those reports can set a research agenda.

Consider my experience prescribing Prozac. When it was introduced, certain of my patients, as they recovered from their depression or obsessionality, made note of personality effects. These patients said that, in responding to treatment, they had become “myself at last” or “better than baseline” — often, less socially withdrawn. I presented these examples first in essays for psychiatrists and then in my book, where I surrounded the narrative material with accounts of research. (Findings in cell biology, animal ethology and personality theory suggested that such antidepressants, which altered the way the brain handled serotonin, might increase assertiveness.)

My loosely buttressed descriptions — and colleagues’ similar observations — led in time to controlled trials that confirmed the “better than well” phenomenon. (One study of depressed patients found that Paxil drastically decreased their “neuroticism,” or emotional instability. Patients who became “better than well” appeared to gain extra protection from further bouts of mood disorder.) But doctors had not waited for controlled trials. In advance, the better-than-well hypothesis had served as a tentative fact. Treating depression, colleagues looked out for personality change, even aimed for it. Because clinical observations often do pan out, they serve as low-level evidence — especially if they jibe with what basic science suggests is likely.

To be sure, this approach, giving weight to the combination of doctors’ experience and biological plausibility, stands somewhat in conflict with the principles of evidence-based medicine. The movement’s manifesto, published in the Journal of the American Medical Association in 1992, proclaimed a new era that would see near-exclusive reliance on systematic clinical research — the direct assessment of treatments in patients. But even the manifesto conceded that less formal expertise would remain important in areas of practice that had not been subject to high-level testing.

THAT concession covers much of the territory. Making decisions about prescribing, often I exhaust the guidance that trials can give — and then I consult experts who tell me about this case and that outcome. Practicing psychotherapy, I employ methods that will never be subject to formal assessment. Among my teachers I number colleagues I know only through their descriptions of patient encounters. One psychoanalyst, Hellmuth Kaiser, imparted his wisdom through a fictional case portrayed in a stage play. I follow his precepts daily, hourly.

I have long felt isolated in this position, embracing stories, which is why I warm to the possibility that the vignette is making a comeback. This summer, Oxford University Press began publishing a journal devoted to case reports. And this month, in an unusual move, the New England Journal of Medicine, the field’s bellwether, opened an issue with a case history involving a troubled mother, daughter and grandson. The contributors write: “Data are important, of course, but numbers sometimes imply an order to what is happening that can be misleading. Stories are better at capturing a different type of ‘big picture.’ ”

Stories capture small pictures, too. I’m thinking of the anxious older man given Zoloft. That narrative has power. As Dr. Bech and his co-author, Lone Lindberg, point out, spontaneous recovery from panic and depression late in life is rare. (Even those who put great stock in placebo pills don’t imagine that they do much for conditions that are severe and chronic.) The degree of transformation in the Danish patient is impressive. So is the length of observation. No formal research can offer a 40-year lead-in or a 19-year follow-up. Few studies report on both symptoms and social progress. Research reduces information about many people; vignette retains the texture of life in one of its forms.

How far should stories inform practice? Faced with an elderly patient who was anxious, withdrawn and never medicated, a well-read doctor might weigh many potential sources of guidance, this vignette among them. Often the knowledge that informs clinical decisions emerges, like a pointillist image, from the coalescence of scattered information.

HERE is where I want to venture a radical statement about the worth of anecdote. Beyond its roles as illustration, affirmation, hypothesis-builder and low-level guidance for practice, storytelling can act as a modest counterbalance to a straitened understanding of evidence.

Take psychotherapy. Most of the research into its efficacy concerns cognitive behavioral therapy, or C.B.T., the treatment that teaches patients to moderate their habitual maladaptive thoughts. The reasons for this concentration are historical and temperamental. C.B.T. is rooted in a branch of psychology devoted to research, and the school of therapy attracts students who favor the practical and systematic over the spontaneous and poetic. There are no trials of existential psychotherapy.

But where the comparison has been made — primarily in the treatment of depression — C.B.T. does not outperform alternative approaches. (The alternatives tested are mostly distant derivatives of psychoanalysis.) And detailed research suggests that where C.B.T. works, specific techniques are not the reason. Studies of the components of therapy find that it is factors common to all schools, like the practitioner’s commitment and the alliance with the patient, that do the job.

If we weigh “evidence” by the pound or the page, we risk moving toward a monoculture of C.B.T., a result I would consider unfortunate, since there are many ways to influence people for the better. Here’s where case description shines. We hear the existential psychoanalyst Leston Havens describe his use of imitative statements, exclamations by the therapist that seem to come from within the patient: “What is one supposed to do?” For me, Dr. Havens’s approach — sitting beside the patient metaphorically and looking outward, hand-crafting interventions on the spot — carries what I call psychological plausibility. The vignette corresponds to a convincing account of how people change.

It has been my hope that, while we wait for conclusive science, stories will preserve diversity in our theories of mind. For 17 years, starting in the 1980s, I ran a psychotherapy seminar for psychiatry residents. As readings, I assigned only case vignettes, trusting that one or another would speak to each trainee.

My recent reading of outcome trials of antidepressants has strengthened my suspicion that the line between research and storytelling can be fuzzy. In psychiatry — and the same is true throughout medicine — randomized trials are rarely large enough to provide guidance on their own. Statisticians amalgamate many studies through a technique called meta-analysis. The first step of the process, deciding which data to include, colors the findings. On occasion, the design of a meta-analysis stacks the deck for or against a treatment. The resulting charts are polemical. Effectively, the numbers are narrative.

Because so little evidence stands on its own, incorporating research results into clinical practice requires discernment. Thoughtful doctors consider data, accompanying narrative, plausibility and, yes, clinical anecdote in their decision making. To put the same matter differently, evidence-based medicine, properly enacted, is judgment-based medicine in which randomized trials, carefully assessed, are given their due.

I don’t think that psychiatry — or, again, medicine in general — need be apologetic about this state of affairs. Our substantial formal findings require integration. The danger is in pretending otherwise. It would be unfortunate if psychiatry moved fully — prematurely — to squeeze the art out of its science. And it would be unfortunate if we marginalized the case vignette. We need storytelling, to set us in the clinical moment, remind us of the variety of human experience and enrich our judgment.

Peter D. Kramer, a clinical professor of psychiatry at Brown University, is the author of several books, including “Against Depression” and “Listening to Prozac.”

For more mental health resources, Click Here to access the Serious Mental Illness Blog

Click Here to access original SMI Blog content

Filed under therapy therapist psychotherapy psychotherapist shrink psychologist psychology psychiatrist psychiatry counselor counseling social worker social work mind body brain wellness recover recovery hope healthy mental health mental health mental illness illness diagnosis disorder

26 notes

Annita Sawyer - Is Diagnosis Destiny?
Posted on the Yale University Youtube Channel

From the related article, Sawyer, A. (2011). Let’s talk: A narrative of mental illness, recovery, and the psychotherapist’s personal treatment. Journal of Clinical Psychology, 67(8), 776–788:

This article describes the author’s experience in psychotherapy, beginning as a suicidal teenager with a dismal prognosis, through 5 years of hospitalization, including shock treatment that erased most memory before age 20, through an Ivy League education, and successful professional career. Retraumatization triggered by reading her hospital records 40 years later adds a unique perspective, as the author watched, but could not control, a process within herself that she regularly addressed as therapist with her own patients. Healing aspects of relationships with three psychodynamic psychotherapists (two psychiatrists and a social worker), credited with her survival and success, are examined. A dramatic interview with Harold Searles, her psychiatrist’s supervisor, and its role in her recovery is considered. Lasting lessons concerning the healing aspects of psychotherapy, the effects of repressed early trauma encountered late in life, the need to counter stigma, and the value of personal psychotherapy are discussed.



For more mental health news, Click Here to access the Serious Mental Illness Blog

Filed under serious mental illness serious mental illness mental illness mental health health psychology psychologist psychoanalysis treatment psychotherapist therapist therapy psychotherapy psychiatrist psychiatry diagnosis diagnostic dsm dsm 5 dsm iv clinical psychology clinical psychological research science news suicide suicidal

70 notes


Should Mental Health Be a Primary-Care Doctor’s Job?
By Suzanne Koven, who is a primary care doctor at Massachusetts General Hospital in Boston and writes the column “In Practice” at the Boston Globe.

Patients occasionally ask me if I’ll be the doctor who’ll take out a gallbladder or deliver a baby. I tell them, “You deserve better.” I’m a primary-care internist, and my expertise is broader than it is deep. I manage high blood pressure and cholesterol but refer people with heart attacks to cardiologists; I perform Pap tests and prescribe birth-control pills but send pregnant women to obstetricians; and I often diagnose, but never treat, cancer.
With mental illness, though, the limits of my role are less clear. I’m comfortable helping people get through life’s more common emotional challenges, like divorce, retirement, disappointing children. If you’re hearing voices, or if you walk into my office and announce that you’ve decided to kill yourself, as someone did not long ago, I know exactly what to do: escort you to a psychiatrist. But what about the lawyer who’s having trouble meeting deadlines and wants medication for attention-deficit disorder? Or the businesswoman whose therapist told her to see me about starting an antidepressant? Or the civil servant trying to shake his Oxycontin addiction? They’ve all asked me to treat them because they don’t want or can’t easily access psychiatric care.
This winter, I’ll see more patients with seasonal-affective disorder than the flu, and the tissues in my exam room will dry tears more often than they muffle sneezes. The problem is, I lack the time or training to diagnose and manage many psychiatric disorders. And some studies, such as this one about low rates of detection of anxiety and depression by primary-care doctors, show that I’m probably not all that great at doing so. Still, over a third of all mental-health care in the U.S. is now provided by primary-care doctors, nurse practitioners, pediatricians, and family practitioners.
One reason is that there aren’t enough psychiatrists. I recall discussions, fifteen years ago, among members of my internal-medicine group about whether it was ethical for us to prescribe antidepressants when we practiced in a hospital with dozens of mental-health professionals on staff. We no longer have those discussions. Demand by patients for mental-health care has increased such that if primary-care doctors didn’t offer it, many people would go without it. It’s estimated that seventy per cent of a primary-care doctor’s practice now involves management of psychosocial issues ranging from marriage counselling to treatment of anxiety and depression.
Some argue that the increased demand is artificial, driven by overdiagnosis of mental illness and overuse of psychiatric medications. With one in four adults and one in five children currently carrying a psychiatric diagnosis, and one in five Americans taking psychiatric medications regularly, such skepticism seems warranted. Regardless, access to psychiatric care is nowhere near large enough to meet the growing demand. Fewer medical students are going into psychiatry, partly because psychiatrists, like primary-care doctors, earn among the lowest salaries of all physicians. Those who do choose psychiatry often don’t accept insurance, including Medicare and Medicaid, requiring patients to pay out of pocket. Affordable psychiatric treatment is especially limited for children and in rural states. Wyoming, for example, which has one of the highest suicide rates in the nation, had, according to one count from a few years ago, a mere twenty-seven psychiatrists—just over five per hundred thousand residents. Massachusetts, by comparison, had over two thousand psychiatrists, or around thirty-two per hundred thousand residents.
But even in Boston, where I practice, primary-care doctors are treating more mental illness. Some patients don’t have adequate insurance to obtain specialized mental-health care, despite legislative efforts, including the Affordable Care Act, to create parity between mental-health coverage and coverage for other medical conditions. Some people want psychiatric care without having to see a psychiatrist. Having finally confided a long-held secret of compulsive hand-washing or bulimia to me, some patients would rather not share it all over again with someone else. And some wish to avoid having, as one of my patients put it, “a psychiatric rap sheet”—a record that an insurance company, employer, or nosy family member might discover. They’d prefer to have their psychiatric diagnoses tucked discreetly between my notes about their heartburn and their eczema.
Harvard Medical School’s Center for Primary Care recently announced a new program to improve the quality of psychiatric care offered by primary-care doctors. In its initial phases, it will place mental-health workers in six Boston-area primary-care clinics and target the treatment of depression. It will also outfit the clinics with videoconferencing technology to enable consultations with psychiatrists and other specialists.
Programs like Harvard’s aren’t only responding to a shortage of psychiatrists, though. They’re part of a movement toward what’s called the “Patient-Centered Medical Home.” First conceived in the nineteen-sixties by pediatricians who were trying to provide better-coordinated care for chronically ill children, the medical-home model urges patients to receive most of their care in the offices of their primary-care doctors, with consultants coming and going. When a patient needs to see a specialist, the primary-care provider arranges and oversees the consultation. Often it occurs in the primary-care doctor’s office, or even in the patient’s actual home, via something like Skype; this modern medical home depends heavily on technology, such as electronic health records and video and digital communication between patients and their doctors—and between the primary-care team and consultants.
To get a sense of how this model differs from current norms, I told Dr. Russell Phillips, director of the Center for Primary Care, about the businesswoman whose therapist had recommended that I prescribe her an antidepressant. I mentioned that I’d prescribed it, arranged to meet with the patient frequently, and crossed my fingers that the drug would be effective and wouldn’t cause side effects. If the medication didn’t work, or if she didn’t tolerate it, I’d likely have to convince her to see a psychopharmacologist—if I could find one who accepted her insurance. How would things have been different if I practiced in one of the clinics participating in his new program?
Phillips said that first, my patient would fill out a PHQ-9 survey, a nine-question screener for depression. The survey isn’t perfect, but it might cut down on some of the antidepressant prescriptions written by primary-care doctors too busy to verify that a patient is clinically depressed. If she met the criteria for depression set by the PHQ-9, her name would be entered in a registry of patients in my practice who were assigned follow-up care with a psychiatric nurse, social worker, or other mental-health professional on my medical team. If appropriate, I’d prescribe an antidepressant, but with access to consultation by phone or videoconferencing with a psychopharmacologist paid to assist me.
While it sounds reasonable for a primary-care doctor to get an opinion about a rash or a chest X-ray via computer, it’s less obvious that a patient’s mental health could be assessed this way. But, it turns out, “telemental health” works surprisingly well. A 2013 review of several programs in which patients received psychiatric evaluation and counselling by phone, e-mail, or video showed that telemedicine can improve symptoms, reduce length of hospital stays, and help people adhere to medication as well as face-to-face psychiatric care. For children and adolescents, telemedicine often works better than face-to-face care.
Still, I confessed to Phillips that surveys, registries, and videoconferencing didn’t sound like the kind of patient interactions that made me choose primary care in the first place. He argued that, actually, the type of care he’s proposing is simply a modern version of what an old-fashioned general practitioner offered. A few generations ago, the family doctor was a one-stop resource for health care and emotional support. He might deliver you, take out your tonsils, write your college-recommendation letter, and, if he outlived you, preside over your deathbed. Phillips envisions twenty-first-century primary care as being no less inclusive. “Our patients are coming in to see us,” he told me. “They have needs. We should be able to address as many of those needs as possible. And we know behavioral-health disorders are front and center, so it should be something that primary-care doctors can manage.”
He also pointed out that the current system, in which a doctor who cares for a patient’s body often has little contact with the doctor who cares for her mind, doesn’t make much sense. Psychiatric drugs and conditions can affect physical health, and drugs for medical conditions, as well as the medical conditions themselves, have psychological effects. And people with mental illness are two to four times as likely to die from their medical conditions as people without mental illness. Several studies have shown that when primary-care doctors team up with mental-health workers, their patients’ physical health improves.
The key to making team-based medical care work, Phillips said, is helping the patient feel that his or her relationship with the primary-care provider is at its center. “I’ve actually done some of this,” he told me. “And it’s very meaningful to patients to have a connection to a member of the team when they realize that the team is an extension of their physician. So it can’t be a faceless person who’s anonymous and is a robo-caller.”
Not long ago, a patient of mine came to my office, accompanied by his worried family. He’d been acting peculiarly, and it wasn’t clear whether his behavior was caused by some longstanding psychiatric issues or by his many medical problems and medications. I phoned a psychiatrist at my hospital to see if I might expedite an appointment for an evaluation to complement my medical work. “Is he with you now?” the psychiatrist asked. I said that he was, and she told me that she happened to be free, and would come to my office and meet with him there.
The patient and his family were greatly reassured by the psychiatrist’s visit. I have no doubt that much of that reassurance came from seeing the psychiatrist and me, even briefly, in the same room together—from a sense that I, the doctor who knows the patient best, was running the show.
I thought, Wouldn’t it be great if a psychiatrist appeared in my office every time a patient needed one? In the future, one will—most likely on a screen.



For more mental health news, Click Here to access the Serious Mental Illness Blog

(Source: newyorker.com)

Filed under new news psychology psychological psychologist therapist therapists psychotherapy medical medicine health illness mental mental health mental illness mental disorder disorder primary care primary care doctor doctors boston massachusetts united states usa america internist

58 notes


[Article of Interest] The Problem With How We Treat Bipolar Disorder

By Linda Logan

Excerpt:

The last time I saw my old self, I was 27 years old and living in Boston. I was doing well in graduate school, had a tight circle of friends and was a prolific creative writer. Married to my high-school sweetheart, I had just had my first child. Back then, my best times were twirling my baby girl under the gloaming sky on a Florida beach and flopping on the bed with my husband — feet propped against the wall — and talking. The future seemed wide open.

I don’t think there is a particular point at which I can say I became depressed. My illness was insidious, gradual and inexorable. I had a preview of depression in high school, when I spent a couple of years wearing all black, rimming my eyes in kohl and sliding against the walls in the hallways, hoping that no one would notice me. But back then I didn’t think it was a very serious problem.

The hormonal chaos of having three children in five years, the pressure of working on a Ph.D. dissertation and a genetic predisposition for a mood disorder took me to a place of darkness I hadn’t experienced before. Of course, I didn’t recognize that right away. Denial is a gauze; willful denial, an opiate. Everyone seemed in league with my delusion. I was just overwhelmed, my family would say. I should get more help with the kids, put off my Ph.D.

When I told other young mothers about my bone-wearying fatigue, they rolled their eyes knowingly and mumbled, “Right.” But what they didn’t realize was that I could scarcely push the stroller to the park, barely summon the breath to ask the store clerk, “Where are the Pampers?” I went from doctor to doctor, looking for the cause. Lab tests for anemia, low blood sugar and hypothyroidism were all negative.

Any joy I derived from my children was now conjoined with grief. I couldn’t breathe the perfume of their freshly shampooed hair without being seized by the realization that they would not always be under my roof. While stroking their backs, I would mentally fast-forward their lives — noses elongating, tongues sharpening — until I came to their leave-taking, until I reached my death and, ultimately, theirs.

I lost my sense of competence. If a colleague remarked on my intelligence, I mentally derided him as being too stupid to know how dumb I was. If someone asked what I did for a living, I would say, “Nothing” — a remarkably effective conversation stopper. I couldn’t bear the thought of socializing; one night I jumped out of the car as my husband and I were driving to a party.

Despite having these feelings in my mid-30s, when my kids were 8, 5 and 3, I was thriving professionally: I had recently completed my Ph.D. in geography, had just finished co-teaching a semester at M.I.T. as a lecturer and was revising my dissertation on spec for a respected university press. Yet several nights a week, I drove to the reservoir near my home, sat under a tree and, as joggers and their dogs ran past, thought about ending it all. There was a gun shop on the way to my poetry group; I knew exactly where to go when the time came.

My day, once broken by naps, gradually turned into lengthy stretches of sleep, punctuated by moments of wakefulness. My husband and I didn’t explain to the kids that I was depressed. “Mommy’s a little tired today,” we would say. A year or so earlier, a therapist told us to tell the children. “But they’re just kids,” we said. “What do they know?” “They know,” she said. When we eventually spoke to them, my oldest daughter came to me and asked: “Why did you keep it a secret? I thought all mothers were like you.”

After a few weeks of stopping at the reservoir, as suicide eclipsed all other thoughts, I finally told my husband about my worsening psychic pain. The next day I was hospitalized. It was June 1989. Even though we were living in Boston, we decided I should go to Chicago to work with the psychopharmacologist who, 15 years earlier, restored the health of my father, who had also been hospitalized for depression. As the cab pulled away from our house, I turned and saw three children’s hands pressed against the screen of an upstairs window. This is the way the world breaks.

The moment the psych-unit doors locked behind me, I was stripped of my identity as wife, mother, teacher and writer and transformed into patient, room number and diagnosis. I couldn’t open a refrigerator without permission. If I was on suicide watch, I had to ask before going to the bathroom. I was told when to sleep and when to wake, when to eat and when to go to group. My routine, which at home had cleaved so closely to my children’s, now revolved around the clattering sounds of the food trays being brought three times each day from the service elevators into our unit. With my husband and children nearly 1,000 miles away, I was severed from my fixed stars. I missed my children’s smells, the way they used to wrap their bodies around my legs when I was on the phone. I brought my son’s comforter to the hospital for my bed. I remembered him with one leg thrown across the covers, a small foot peeking out from his pajamas.

When my children visited, I had to resuscitate my maternal self, if only for an hour. I dragged myself to the shower, pulled on a pair of clean sweat pants and a fresh T-shirt and ran a streak of lipstick across my lips, hoping to look like a reasonable facsimile of a mother.

My doctor used my first hospitalization as a so-called washout, a period during which he planned to take me off the medication I was on and introduce several drugs in several different combinations. The prospect of polypharmacy — taking many drugs at once — seemed foreboding. I read about Prozac’s giving some people entirely new personalities: happier, lighter, even buoyant. “Who are you going to turn me into?” I asked my doctor.

“I’m not turning you into anyone,” he said. “You’ll be yourself, only happier.”

“I don’t think I even have a self anymore.”

“We’ll find your self.”

I was wary. “Just don’t turn me into Sandy Duncan.”

How much insult to the self is done by the symptoms of the disorder and how much by the drugs used to treat it? Paradoxically, psychotropic drugs can induce anxiety, nervousness, impaired judgment, mania, hypomania, hallucinations, feelings of depersonalization, psychosis and suicidal thoughts, while being used to treat the same symptoms. Before getting to the hospital, my daily moods ranged from bad to worse, each state accompanied by a profound depth of feeling. The first drug I was given was amitriptyline (Elavil), which, in the process of reducing my despair, blunted all my other emotions. I no longer felt anything. It was like going from satellite TV to one lousy channel.

[…]

For many people with mental disorders, the transformation of the self is one of the most disturbing things about being ill. And their despair is heightened when doctors don’t engage with the issue, don’t ask about what parts of the self have vanished and don’t help figure out strategies to deal with that loss.

Some in the mental-health field are beginning to recognize this need. Janina Fisher, a psychologist and the assistant director of the Sensorimotor Psychotherapy Institute in Broomfield, Colo., told me that there has been a “sea change” in the role the self plays in the therapeutic dialogue since the decades when I was sick. New therapies and treatment philosophies, founded mostly by clinical psychologists and other practitioners who are not medical doctors, recognize the role of the self in people with mental illness. Patients tell her, “I just want to be that person I used to be.” Fisher encourages her patients to recognize that their mental trauma is a part of their life, but shouldn’t dominate it.

In my own experience with Scheftner, whom I began seeing after my father’s doctor moved away, we talk about the self but only when I bring it up. That’s why I have enjoyed helping to run a support group for people with mental disorders, something I’ve been doing for the last three years. There are usually 8 of us, sometimes 12. We sit in the basement of a local library every Wednesday afternoon. Though we know one another’s innermost thoughts, we are intimate strangers, not friends. Like A.A. and other self-help groups, we’re peer-led: run by and for people with mental disorders. We talk one by one about the past week — small achievements, setbacks, doctor appointments, family conflicts. While the self is not always an explicit topic, the loss of self — or for those doing better, the reconstruction of the self — is a hovering presence in the group.

One day, not long ago, a middle-aged man came to our group. He told us that he had spent the past year attending different grief groups, but none of them were right. “Why not?” someone asked. The man said: “Because everyone there was grieving over the loss of another person. I was grieving for myself. For who I used to be before I got sick and who I am now.”

During the 20-odd years since my hospitalizations, many parts of my old self have been straggling home. But not everything made the return trip. While I no longer jump from moving cars on the way to parties, I still find social events uncomfortable. And, although I don’t have to battle to stay awake during the day, I still don’t have full days — I’m only functional mornings to midafternoons. I haven’t been able to return to teaching. How many employers would welcome a request for a cot, a soft pillow and half the day off?

One morning, about five years ago, my husband and I were talking on the family-room sofa. I was still wearing my pajamas and had wool hiking socks on. As he rubbed my feet, he told me he was leaving. It was, at once, a scene of tenderness and savagery. A little later, he threw some clothes into a suitcase and moved out. But my self — devastated, grieving, angry — remained intact.

Today, my mind is nimble. Creative writing has crept back into my life. I’ve made a couple of close friends in Chicago. My greatest pleasure is still my children — they’re starting careers, marrying, on the brinks of their lives. I’m looking forward to grandchildren, to singing the 1950s favorite “Life Is but a Dream” while spinning those babies under the stars of a falling night on a Florida beach. This June, I’m turning 60. I’m having a small party to celebrate my ingathering of selves. My old self was first to R.S.V.P.

Filed under Science History News bipolar bipolarity antipsychotic isps psychiatric psychiatry psychoanalysis psychological psychology psychopathology psychopharmacology psychosis psychotherapy psychotic Crime Extreme america documentary med medication meds mental mental illness pharmacy hospital dsm dsm 5

15 notes




[Article of Interest] Childhood Depression May Be Tied to Later Heart Risk


For these kids, obesity, smoking and inactivity more likely in adolescence, preliminary research shows

Teens who were depressed as children are more likely to be obese, to smoke and to be sedentary, a new study finds.
The findings suggest that depression during childhood can increase the risk of heart problems later in life, according to the researchers.
The study included more than 500 children who were followed from ages 9 to 16. There were three groups: those diagnosed with depression as children, their depression-free siblings and a control group of unrelated youngsters with no history of depression.
Twenty-two percent of the kids who were depressed at age 9 were obese at age 16, the study found. “Only 17 percent of their siblings were obese, and the obesity rate was 11 percent in the unrelated children who never had been depressed,” study first author Robert Carney, a professor of psychiatry at Washington University School of Medicine in St. Louis, said in a university news release.
The researchers found similar patterns when they looked at smoking and physical activity.
“A third of those who were depressed as children had become daily smokers, compared to 13 percent of their nondepressed siblings and only 2.5 percent of the control group,” Carney said.
Teens who had been depressed as children were the least physically active, their siblings were a bit more active and those in the control group were the most active, according to the study, which is scheduled for presentation Friday at the annual meeting of the American Psychosomatic Society in Miami. Although the study showed an association between childhood depression and obesity, smoking habits and inactivity later in life, it did not prove a cause-and-effect relationship.
These findings are cause for concern because “a number of recent studies have shown that when adolescents have these cardiac risk factors, they’re much more likely to develop heart disease as adults and even to have a shorter lifespan,” Carney said.
“Active smokers as adolescents are twice as likely to die by the age of 55 than nonsmokers, and we see similar risks with obesity, so finding this link between childhood depression and these risk factors suggests that we need to very closely monitor young people who have been depressed,” he said.

Note: Data and conclusions presented at meetings are typically considered preliminary until published in a peer-reviewed medical journal.

(Source: Childhood Depression May Be Tied to Later Heart Risk)

Filed under antipsychotic isps psychiatric psychiatry psychoanalysis psychological psychology psychopathology psychopharmacology psychosis psychotherapy psychotic News Science Neuroscience teen teenager child children smoke smoking cig cigarette cigarettes History Major Depression depressed depression depressive health