
What Evidence Should Guide Treatment?

It’s an old lament: They don’t like us, and we don’t like them. Practitioners and researchers, that is; us and them being whichever one you are, and aren’t, respectively. So why don’t we get along? We all care about the same things, don’t we?

Researchers complain that practitioners don’t read the research. “That’s not responsible!” they say. “They should be using what works, not just whatever they feel like doing. Why don’t they listen to us?” Good point. Practitioners should be paying attention to the research, to improve practice based on what has been shown to work. So why don’t they?

Well… Practitioners complain that researchers are too focused on doing studies that are not relevant to practice. “So why bother to read that junk? It’s got nothing to do with my work!” Hmm. Good point. So why do they do those studies?

The academic culture has traditionally valued papers published in the highest-status journals, which generally have the highest scientific standards. Of course, the more trivial the subject of study, the easier it is to achieve the high level of control required for publication in those journals. And although the scholarly journals may also include some clinically relevant nuggets, many practitioners don’t have the time or patience to dig for them.

This state of affairs leaves everyone frustrated. Researchers really do want their work to be used to improve practice; and practitioners really would like to know about clinically relevant research – but only that, without all the clutter.

The field has moved beyond mere interest in bridging the research-practice gap; bridging it has become a necessity. This necessity is most clearly apparent in the dreaded and ever-growing pressure to provide empirically supported treatments. This is, on the whole, a positive development, although with a major caveat.

First, the positive. It is reasonable to expect mental health professionals to provide effective treatments when known-effective treatments are available. Otherwise the customer risks paying for an inferior product/service. Today plenty of people are receiving months or even years of treatment for a problem that could routinely be resolved in relatively few sessions using a known-effective treatment. This is a scandal; it’s bad for the public and it’s bad for the field. It should be corrected, and the movement towards empirically supported treatments has the potential to effect that correction.

Now for the caveat. Too much enthusiasm to promote empirically supported treatments can actually do harm and prevent clients from receiving the most effective available treatments. This is because not all empirically supported treatments are actually effective, and not all actually-effective treatments have been established as empirically supported treatments.

What is empirically supported is not necessarily effective. Many so-called empirically supported treatments were tested in laboratory settings, with trained and supervised therapists following treatment manuals, and with research participants who had only the problem under study and no other problems. This can produce a cherry-picking effect: the treatment may be proven efficacious in the laboratory setting, for uniquely easy-to-treat clients. However, this does not tell us whether the same treatment, as used by therapists in field/practice settings, will work with their real-world clients.

For a proven-efficacious treatment to be embraced by clinicians, the treatment should be tested in the field. Unfortunately, this step is not always taken, and those regulating and/or funding treatment may not know the difference between an efficacious (laboratory-tested) and an effective (field-tested) treatment. But practitioners know the difference. This is why the laboratory studies, often called “efficacy” studies, although highly valued in the scholarly journals, tend to be scorned by practitioners.

What is effective is not necessarily empirically supported. This does not mean that it cannot be empirically supported – only that it has not yet been subjected to the testing that would earn it that designation. Not every study has been done yet. Also, because of limited resources, psychotherapy research tends to be conducted on brief symptom-focused treatments that can be studied neatly and efficiently with a relatively limited expenditure. Thus, the literature on empirically supported treatments tends to favor the cognitive-behavioral “procedure” treatments that strictly target specific symptoms.

Although such treatments may indeed be effective, or even superior, for certain symptoms or disorders, the reality is more complex. For example, the Consumer Reports study (Seligman, 1995), using a retrospective research design, found that most therapy clients reported greater benefit from longer treatments. This sharply contrasted with much other psychotherapy research, which found greater benefit for the shorter, easier-to-study symptom-focused treatments (typically compared, in those studies, only to other brief treatments). The finding of greater benefit for longer-term treatment may reflect the effectiveness of certain treatment approaches, of longer-term treatment per se, and/or the impact of so-called non-specific factors (such as empathy, positive regard, and the therapeutic alliance) that have been shown to contribute to positive outcomes (Norcross, 2002). The Consumer Reports findings highlight the risk that the easy-to-study treatments may overshadow other treatments that may actually be, in the long run, more effective and more beneficial for clients.

Identifying what works. To determine which treatments are most worthy of doing (and paying for), the literature should be analyzed in a way that values effectiveness over mere efficacy. Such an analysis might also look for the presence/absence of proven-effective treatment components, as well as for the non-specific factors that may be more present/potent in some treatment approaches, even when a particular approach has not yet undergone formal testing.

In the absence of appropriate analysis, there is a real risk that preference for empirically supported treatments could inappropriately lead to the funding/use of efficacious but ineffective treatments, saddling clinicians with requirements to use treatments that do not work well with their clients. For example, I’m currently collaborating with an agency in which all of the therapists are trained in the most-research-supported child trauma therapy method, and it is their primary treatment modality, yet only a tiny percentage of their clients complete the trauma resolution work. By contrast, on a project with another agency – similarly focused on child victims of crime – that used a less established method, nearly every client made it through the trauma work, with excellent outcomes (Descilo, Greenwald, Schmitt, & Reslan, 2010).

The preference for empirically supported treatment may also lead to refusal to fund/support actually-effective treatments that clinicians find useful but that have not yet been formally tested. For example, I worked with an agency that had been providing excellent therapy on a county contract, but then lost the contract to another agency that provided one of the evidence-based treatment approaches. Unfortunately, the new agency’s treatment was much less effective, and the clients suffered.

In conclusion, we don’t want to throw the baby out with the bathwater, but neither do we want to leave the baby soaking in that old bathwater if we have a better way to wash it. The movement to bring proven-effective methods into clinical practice (Norcross, Beutler, & Levant, 2006) has to balance what the research supports with what works with real clients. This means that treatment researchers should focus more on real clients in field/practice settings, and that practitioners should systematically collect outcome data. When research and practice work together, we get better at helping our clients.

References

Descilo, T., Greenwald, R., Schmitt, T. A., & Reslan, S. (2010). Traumatic incident reduction for urban at-risk youth and unaccompanied minor refugees: Two open trials. Journal of Child & Adolescent Trauma, 3, 181-191.

Norcross, J. C. (Ed.). (2002). Psychotherapy relationships that work: Therapist contributions and responsiveness to patients. New York: Oxford University Press.

Norcross, J. C., Beutler, L. E., & Levant, R. F. (Eds.). (2006). Evidence-based practices in mental health. Washington, DC: American Psychological Association.

Seligman, M. E. P. (1995). The effectiveness of psychotherapy: The Consumer Reports study. American Psychologist, 50, 965-974.
