Science and Pseudoscience in Clinical Psychology
Posted by: corboy ()
Date: February 21, 2004 11:26AM

by Lilienfeld, Lynn and Lohr

[www.guilford.com]

This article contains a valuable 10-point list of characteristics that distinguish pseudoscience from science.

Here is my condensed version of this valuable 10-point list. Read it and see how many of these you can recognize!

Especially take note of Point Five: science puts the burden of proof on those making the claims; in pseudoscience, the proponent offers no convincing evidence and instead shifts the burden of proof onto the skeptic, demanding that the skeptic do the work and supply the disproof.

I think we can all give examples of that particular gambit.

Characteristics of Pseudoscience

1. An overuse of ad hoc hypotheses designed to immunize claims from falsification.

The repeated invocation of ad hoc hypotheses to explain away negative findings is a common tactic among proponents of pseudoscientific claims. Moreover, in most pseudosciences, ad hoc hypotheses are simply “pasted on” to plug holes in the theory in question. When taken to an extreme, ad hoc hypotheses can provide an impenetrable barrier against potential refutation.

It is crucial to emphasize that the invocation of ad hoc hypotheses in the face of negative evidence is sometimes a legitimate strategy in science. In scientific research programs, however, such maneuvers tend to enhance the theory’s content, predictive power, or both (see Lakatos, 1978).

2. Absence of self-correction. Scientific research programs are not necessarily distinguished from pseudoscientific research programs in the verisimilitude of their claims, because proponents of both programs frequently advance incorrect assertions. Nevertheless, in the long run most scientific research programs tend to eliminate these errors, whereas most pseudoscientific research programs do not. Consequently, intellectual stagnation is a hallmark of most pseudoscientific research programs (Ruscio, 2001). For example, astrology has changed remarkably little in the past 2,500 years (Hines, 1988).

3. Evasion of peer review. On a related note, many proponents of pseudoscience avoid subjecting their work to the often ego-bruising process of peer review… they may do so on the grounds that the peer review process is inherently biased against findings or claims that contradict well-established paradigms… In other cases, they may avoid the peer review process on the grounds that their assertions cannot be evaluated adequately using standard scientific methods.

Although the peer review process is far from flawless, it remains the best mechanism for self-correction in science, and assists investigators in identifying errors in their reasoning, methodology, and analyses. By remaining largely insulated from the peer review process, some proponents of pseudoscience forfeit an invaluable opportunity to obtain corrective feedback from informed colleagues.

4. Emphasis on confirmation rather than refutation. The brilliant physicist Richard Feynman (1985) maintained that the essence of science is a bending over backwards to prove oneself wrong… Science at its best involves the maximization of constructive criticism.

Ideally, scientists subject their cherished claims to grave risk of refutation ... In contrast, pseudoscientists tend to seek only confirming evidence for their claims. Because a determined advocate can find at least some supportive evidence for virtually any claim (Popper, 1959), this confirmatory hypothesis-testing strategy is not an efficient means of rooting out error in one’s web of beliefs. Moreover, as Bunge (1967) observed, most pseudosciences manage to reinterpret negative or anomalous findings as corroborations of their claims.

5. Reversed burden of proof. As noted earlier, the burden of proof in science rests invariably on the individual making a claim, not on the critic.

Proponents of pseudoscience frequently neglect this principle and instead demand that skeptics demonstrate beyond a reasonable doubt that a claim (e.g., an assertion regarding the efficacy of a novel therapeutic technique) is false.

This error is similar to the logician’s ad ignorantiam fallacy (i.e., the argument from ignorance), the mistake of assuming that a claim is likely to be correct merely because there is no compelling evidence against it (Shermer, 1997). For example, some proponents of unidentified flying objects (UFOs) have insisted that skeptics account for every unexplained report of an anomalous event in the sky (Hines, 1988; Sagan, 1995a). But because it is essentially impossible to prove a universal negative, this tactic incorrectly places the burden of proof on the skeptic rather than the claimant.

6. Absence of connectivity. In contrast to most scientific research programs, pseudoscientific research programs tend to lack “connectivity” with other scientific disciplines.

Pseudosciences often purport to create entirely new paradigms out of whole cloth rather than to build on extant paradigms. In so doing, they often neglect well-established scientific principles or hard-won scientific knowledge.

Although scientists should always remain open to the possibility that an entirely novel paradigm has successfully overturned all preexisting paradigms, they must insist on very high standards of evidence before drawing such a conclusion.

7. Overreliance on testimonial and anecdotal evidence. Testimonial and anecdotal evidence can be quite useful in the early stages of scientific investigation. Nevertheless, such evidence is typically much more helpful in the context of discovery (i.e., hypothesis generation) than in the context of justification (i.e., hypothesis testing; see Reichenbach, 1938). Proponents of pseudoscientific claims frequently invoke reports from selected cases (e.g., “This treatment clearly worked for Person X, because Person X improved markedly following the treatment”) as a means of furnishing dispositive evidence for these claims.

As Gilovich (1991) observed, however, case reports almost never provide sufficient evidence for a claim, although they often provide necessary evidence for this claim. For example, if a new form of psychotherapy is efficacious, one should certainly expect at least some positive case reports of improvement. But such case reports do not provide adequate evidence that the improvement was attributable to the psychotherapy, because this improvement could have been produced by a host of other influences (e.g., placebo effects, regression to the mean, spontaneous remission, maturation; see Cook & Campbell, 1979).

8. Use of obscurantist language. Many proponents of pseudoscience use impressive-sounding or highly technical jargon in an effort to provide their disciplines with the superficial trappings of science (see van Rillaer, 1991, for a discussion of “strategies of dissimulation” in pseudoscience).

Such language may be convincing to individuals unfamiliar with the scientific underpinnings of the claims in question, and may therefore lend these claims *an unwarranted imprimatur of scientific legitimacy.*

9. Absence of boundary conditions. Most well-supported scientific theories possess boundary conditions, that is, well-articulated limits under which predicted phenomena do and do not apply. In contrast, many or most pseudoscientific phenomena are purported to operate across an exceedingly wide range of conditions. As Hines (1988, 2001) noted, one frequent characteristic of fringe psychotherapies is that they are ostensibly efficacious for almost all disorders regardless of their etiology. For example, some proponents of Thought Field Therapy (see Chapter 9) have proposed that this treatment is beneficial for virtually all mental disorders. Moreover, the developer of this treatment has posited that it is efficacious not only for adults but for “horses, dogs, cats, infants, and very young children” (Callahan, 2001b, p. 1255).

10. The mantra of holism. Proponents of pseudoscientific claims, especially in organic medicine and mental health, often resort to the “mantra of holism” (Ruscio, 2001) to explain away negative findings. When invoking this mantra, they typically maintain that scientific claims can be evaluated only within the context of broader claims and therefore cannot be judged in isolation.

There are two major difficulties with this line of reasoning. First, it implies that clinicians can effectively integrate in their heads a great deal of complex psychometric information from diverse sources, a claim that is doubtful given the research literature on clinical judgment (see Chapter 2). Second, by invoking the mantra of holism, proponents of the Rorschach and other techniques can readily avoid subjecting their claims to the risk of falsification.

In other words, if research findings corroborate the validity of a specific Rorschach index, Rorschach proponents can point to these findings as supportive evidence, but if these findings are negative, Rorschach proponents can explain them away by maintaining that “clinicians never interpret this index in isolation anyway” (see Merlo & Barnett, 2001, for an example).

This “heads I win, tails you lose” reasoning places the claims of these proponents largely outside of the boundaries of science.


