Do you agree with Matt Normand’s assumption that as behavior analysts, we are first scientists? Why or why not? What are the implications of acting as a scientist and how can you ensure that you will practice along these guidelines?

APA format

Check for plagiarism and AI.

Attached are the discussion rubric and reading.

Must cite one outside source.

Discussion Post Rubric (20 Possible Points)

Length of Post – Enough content to convey a scholarly message

4 points: The author’s post consisted of 150–200 words (not counting reference citations).
2 points: The author’s post consisted of 100–149 words (not counting reference citations).
0 points: The author’s post consisted of fewer than 100 words (not counting reference citations).

Grammar, Usage, Spelling – The author proofread using software for obvious errors in grammar, usage, and spelling

4 points: The author’s post contained 2 or fewer grammar, usage, or spelling errors.
2 points: The author’s post contained 3–4 grammar, usage, or spelling errors.
0 points: The author’s post contained 5 or more grammar, usage, or spelling errors, and proofreading was not apparent.

Referencing and Utilizing Outside Sources – The author referenced all assigned readings and one (1) unique reference

4 points: The author posted a unique reference from a peer-reviewed document AND cited all the assigned readings.
2 points: The author was missing a unique reference from a peer-reviewed document or did not cite all the assigned readings.
0 points: The author neither used a unique reference from a peer-reviewed document nor cited all the assigned readings.

Promotes Discussion – The author produces content beyond a summary and applies it to a logical argument

4 points: The author’s post clearly responds to the assignment prompt, develops ideas cogently, organizes them logically, and supports them through empirical writing. The author’s post also raises questions or stimulates discussion.
2 points: The author’s post responds to the assignment prompt but relies heavily on definitional explanations and does not create, develop, and logically support original ideas. The author’s post may stimulate some discussion.
0 points: The author’s post does not correspond with the assignment prompt; it mainly discusses personal opinions or irrelevant information, or information is presented with limited logic and a lack of development and organization of ideas, and does not support any claims made.

Demonstrates Application – The author is able to apply content to an example or real-world application

4 points: The author’s post clearly demonstrates application and relationship to the week’s assigned reading/topic.
2 points: The author’s post refers to the assigned topic/reading tangentially but does not demonstrate application.
0 points: The author’s post does not demonstrate application of the week’s assigned topic/reading.

Be advised, there are also response costs associated with specific behaviors:

● A response cost of 3 points will be administered for not responding to a peer’s post.
● A response cost of 3 points will be administered for late submissions (up to 2 days).
● Discussion posts that are more than two days late will not be accepted unless excused by the instructor.


Science, Skepticism, and Applied Behavior Analysis

Matthew P. Normand, Ph.D., BCBA, University of the Pacific

ABSTRACT

Pseudoscientific claims concerning medical and psychological treatments of all varieties are commonplace. As behavior analysts, a sound skeptical approach to our science and practice is essential. The present paper offers an overview of science and skepticism and discusses the relationship of skepticism to behavior analysis, with an emphasis on the types of issues concerning behavior analysts in practice.

Descriptors: Behavior analysis, pseudoscience, science, skepticism

“In science, keeping an open mind is a virtue—just not so open that your brains fall out.”

– James Oberg1

In science, being skeptical does not mean doubting the validity of everything, nor does it mean being cynical. Rather, to be skeptical is to judge the validity of a claim based on objective empirical evidence. David Hume, the 18th-century philosopher, asserted that we should accept nothing as true unless the available evidence makes the non-existence of the thing more miraculous than its existence. Even extraordinary claims can be true, but the more extraordinary the claim, the more extraordinary the evidence required. Not too long ago, the notion of human flight seemed like pure fancy. Today, scores of people take to the sky almost as routinely as they take to the highway. To be skeptical does not mean dismissing claims—even extraordinary claims—out of hand. It means examining the available evidence before reaching a decision or withholding judgment until sufficient evidence is had. One should not start with the assumption that a claim cannot be true any more than one should start with the assumption that a claim must be true. All reasonable evidence on both sides should be considered.

Skepticism is a critical feature of a scientific repertoire. Indeed, many of the most prominent skeptics are and have been some of the world’s most prominent scientists, including Richard Dawkins, Stephen Jay Gould, and Carl Sagan. Even B. F. Skinner was among the signers of the 1976 letter announcing the formation of the Committee for the Scientific Investigation of Claims of the Paranormal, an organization dedicated to the promotion of scientific skepticism and publication of the Skeptical Inquirer (Kurtz, 1996).2 The relationship of skepticism to behavior analysis is the relationship between skepticism and science in general. The experimental analysis of behavior is a natural science, and this natural science is the foundation of all behavior analytic research and practice. Moreover, the practical importance of a skeptical repertoire for those engaged in behavior analytic practice cannot be overstated. Pseudoscience abounds in so many of the areas behavior analysts tread, including developmental disabilities, education, and psychotherapy. According to physicist Robert Park, pseudoscience is characterized by claims purportedly supported by well-established scientific evidence when, in truth, such evidence is misinterpreted, misunderstood, or wholly lacking (Park, 2000).

1 Quote attributed to James Oberg by Sagan (1996).

This paper is aimed primarily at behavior analysts in practice who are likely to encounter various pseudoscientific claims in the course of their work and who might not immediately identify themselves as scientists, although, it will be argued, they should. Pseudosciences know no professional boundaries and thrive in many areas of research and practice. Claims regarding the effectiveness of sensory integration therapy, facilitated communication, and inclusion qualify as pseudoscience. All are offered as legitimate therapies or useful practices when, in fact, the available evidence fails to support them (Jacobson, Foxx, & Mulick, 2005). Today, one would be hard pressed to find an area more widely affected by rampant pseudoscience than that of autism treatment, which also happens to be one of the largest single areas of application for behavior analysts (Shook, Johnston, & Mellichamp, 2004). In the sections that follow, I discuss scientific standards of evidence as they relate to the practice of behavior analysis, describe some of the common characteristics of pseudoscientific claims, and offer suggestions to promote skepticism in applied behavior analysis.

2 Now known as the Committee for Skeptical Inquiry (CSI).

Standards of Evidence

Interobserver Agreement

When gathering and evaluating relevant evidence, scientists take careful steps to minimize bias in observation. What scientists say should be controlled primarily by what is seen, rather than by what one hopes to see. Bias in observation cannot be entirely eliminated, but it can be controlled. The ideal case might be one in which some automated recording system can be utilized, as often is the case in basic research. Though not without precedent, such automated recording is fairly uncommon in applied research and practice. Where human observers collect behavioral data, steps must be taken to ensure that changes in behavior over time are actually changes in the behavior of interest, and not in the behavior of the observer (Baer, Wolf, & Risley, 1968). That is, the veracity of the data is assessed through some form of interobserver agreement measure or through the use of double-blind control procedures. In this way, the primary source of control over the verbal behavior of the observer is more likely to be the events that actually transpired.

Behavior Analysis in Practice, 1(2), 42–49.

Experimental Design

Even the most careful observations are not sufficient to inform about, say, the effects of a given cold remedy. Many of us do not visit the doctor when experiencing mild symptoms characteristic of the common cold. Instead, we visit the doctor only when we’ve been struggling with the symptoms for some prolonged period of time or when the symptoms become so severe that we have a difficult time coping. Typically, we receive a brief exam, are prescribed some medication, and go on our way. Within a few days we are feeling better and able to resume our normal activities. The wonders of modern medicine? Not necessarily. We might well have gotten better in about the same amount of time had we never visited the doctor. We visited the doctor only after some extended period of time suffering with symptoms or after we noticed the symptoms become severe. Either circumstance might suggest that we were nearing the end of our illness. The medication might have dampened our symptoms, but our recovery might not have been hastened. No matter how carefully we observed what happened, we would be unable to draw any firm conclusions about cause and effect.

Now consider an analogous case concerning a behavior analytic intervention. A young child is referred by his classroom teacher for behavior analytic services because he rarely works on assigned tasks during the class time allotted. The behavior analyst sets about taking careful records of the time the child is engaged in assigned class work for a period of one week, with observations distributed across times of day and academic domains. Once these data are analyzed, and it is determined that the child is engaged in assigned academic work about 30% of the time he should be so engaged, a token reinforcement system is implemented with points awarded each time he is engaged continuously with his work for 60 s. The points are, of course, later exchanged for back-up reinforcers such as preferred activities or items. The behavioral observation system is continued and, after a few weeks of intervention, the child is now observed to be on-task approximately 80% of the time and the teacher reports that his assignment completion is greatly improved, even better than some of his peers.

The wonders of modern behavioral science? Not necessarily. The intervention could have produced the changes observed, but so could have any number of other uncontrolled variables. Perhaps the type of work assignments changed during the same period of time, resulting in easier or more interesting assignments. Or the referral might have increased the overall amount of attention provided to the student by the teacher and other school personnel, thereby improving performance due to changed motivating conditions or more effective academic instruction or behavior management. It is impossible to know why the student’s performance improved based on the types of observations made. But, you say, we can be more certain of our success because what we did was based on solid behavioral principles and, moreover, we are successful again and again with different children. Perhaps, but it could very well be that our token economy intervention regularly recruits one or more of the extraneous variables mentioned (e.g., increased attention by school personnel), which is the actual agent of change. Then again, maybe not. The point is that we cannot know from the information obtained.

Experimental evaluation is critical for all sciences and is the mechanism that ultimately provides us the ability to predict and control our subject matter. In most behavior analytic experimental designs, prediction is made possible through repeated measures of behavior during a baseline condition before any experimental or clinical manipulation is made. Such measures then provide a basis against which to compare behavioral observations made under the changed conditions. We use the baseline measures to predict what we would see if our manipulation did not affect the behavior. If the observed behavior under our changed conditions (e.g., during intervention) deviates from our prediction, an experimental or clinical effect is suggested. The extent to which we are able to replicate this effect through experimental manipulations such as reversals to baseline or multiple-baseline arrangements determines the strength of the conclusions that can be drawn. When we can predict the likelihood of behavior occurring or not occurring under certain conditions, and when we can alter such likelihoods through our manipulations, we have demonstrated a cause-effect relationship.

Of course, a well-developed science of behavior should presumably offer well-established technologies for the practitioner, technologies that do not require continued experimental evaluation. In medicine, for example, the diagnosis of a bacterial infection can readily lead to a prescription of antibiotics. The effectiveness of the antibiotic prescription is, however, heavily predicated on an accurate diagnosis. In behavior analytic practice, the prescription of intervention strategies also is heavily predicated on accurate diagnosis or, in behavioral terms, a functional behavior assessment. At present, the varying rigor with which functional assessments are conducted across practitioners and settings suggests that the easy prescription of well-established behavioral technologies is not practically at hand, with some exceptions.


A powerful reinforcement-based intervention such as a token economy, superimposed on existing but unknown contingencies, is likely to be beneficial even without a rigorous functional assessment. Ideally, as behavioral science matures, we will have evidence-based procedures of a fairly standardized sort that have been demonstrated to work for a large majority of people with whom they are used. When non-responders are identified, more careful functional assessments can be conducted on an individual basis and individualized interventions prescribed as necessary, much the way a physician might alter the prescription of antibiotics if your health is not improved in the expected period of time.

Replication and Self-Correction

Methods applied in any specific case are not failsafe. Fortunately, the majesty of science is that although it is fallible, it also is self-correcting. Careful technological description of procedures allows others to replicate the same procedures at different times, in different places, and with different participants (Baer et al., 1968). In the best cases, the peer-review process of publication in scientific journals identifies flawed studies or erroneous conclusions drawn from otherwise solid studies before they are widely disseminated. Once findings are disseminated, failures to replicate the reported findings or the discovery of new findings that refute or attenuate some earlier findings lead to revisions of scientific language and, ultimately, to a greater ability to describe, predict, and control our world. Sadly, many non-scientists view this as insufferable flip-flopping. The politician who alters an opinion or policy is thereafter chastised for being indecisive or insincere. The government agency that revises the guidelines for a healthy diet is mistrusted. In the public arena, it often is better to be true to some core conviction than responsive to a changing world. Science embraces “flip-flopping” so long as it is due to changes in evidence rather than extraneous sources of control.

The scientific community arranges explicit and powerful contingencies of reinforcement for such behavior, and the scientist who treads lightly as preliminary data are gathered is in a much better position to alter his or her stance as emerging evidence dictates. A hallmark of the pseudoscientist is the propensity to make bold statements and draw firm conclusions in the absence of sufficient evidence. Once so committed, the aversive consequences for changing course can trump those arranged by the scientific community.

Perhaps it is not so difficult to see how one can succeed in making claims absent any supporting evidence, but how does someone succeed in promoting a claim in the face of existing evidence to the contrary? In psychology and the related social sciences, part of the answer is that markedly lesser standards of evidence are accepted than in the so-called hard sciences (e.g., physics, chemistry, and biology), and society seems to follow suit. It is not entirely clear why this is so. To be sure, a physicist need not labor to convince an engineer of the importance of basic physical laws. If the engineer does not abide by the laws of physics, the building falls down. This outcome is obvious and the cause is not attributed to some unknowable random process beyond the control of the engineer. It is attributed to some flaw in design or construction. Even the layperson doesn’t assume that buildings sometimes fall down spontaneously because we can’t hope to control nature well enough to ensure otherwise. As a result, the engineer or builder is blamed and the failed methods revised or discarded. However, when a psychological therapy fails to demonstrably change behavior, the blame is not necessarily laid upon the therapist or the therapy, though the consequences of the failure can be as great or greater than the collapsed building. Instead, many laypeople and scientists alike assume it impossible to reliably influence human behavior, because human behavior is complex and not entirely lawful. Therefore, to demonstrate that one therapy does not succeed as reliably as another is not necessarily a fatal blow for the less successful therapy. This is an unfortunate state of affairs.

So what is to be made of the proposition that some things cannot be known with certainty, human behavior or otherwise? Nothing is known for certain, but much is known for which the likelihood of alternative explanations is so small as to be unworthy of consideration. When discussing what we know, we are really describing the strength of a prediction we can make. If we state that the sun will rise in the east tomorrow, we state this because it has never been observed to do otherwise.3 Based on historical observations of both the daily rising of the sun and, more importantly, scores of physical regularities observed by scientists at multiple levels of analysis, we can state the probability of the sun rising in the east as being so high as to be practically certain. Is it possible that the sun will rise in the west? Yes, but to say something is possible is not to say much at all. Science deals with probability, not possibility.

But perhaps the foregoing description of the general philosophy of science is just one of many equally valid philosophies about the world and our knowledge of it. Rubbish. The superiority of science is quite well-established, as science is the only “philosophy” that regularly provides the ability to predict and control that which it purports to explain. One might argue that prediction and control are not the ultimate demonstrations of truth, but such arguments seem to hold better in conversation than in practice. As the biologist Richard Dawkins eloquently put it, “Show me a cultural relativist at 30,000 feet and I’ll show you a hypocrite” (Dawkins, 1995, p. 31). When it really matters, we rely on science; we fly in the plane designed in accordance with the laws of physics.

3 This is, of course, a geocentric description of the behavior of the earth and sun. Although wanting in scientific precision, it should serve the present purpose better than appeals to the regularity of the earth’s rotation as it revolves around the sun.


The Practical Limits of Scientific Rationalism

Ideally, we would behave as scientific rationalists in matters as diverse as nutrition, economics, and global warming. That is, we would be able to respond to direct empirical evidence as we confront important matters affecting our lives. But what if experimentation or the analysis of existing experimental data is beyond reach? Most of us are extremely limited in our ability to distinguish between fact and fiction in unfamiliar areas such as quantum mechanics or even automobile repair. What we “know” comes from our contact with others who describe the evidence for us rather than from our evaluation of the relevant research. As a result, we are almost unavoidably dogmatic in practice, insofar as a great deal of what we do is influenced by statements of truth professed by people of authority (or notoriety) rather than our own examination of the evidence. That this is so should be no great surprise. We haven’t the skills necessary to personally investigate all the phenomena that impact us in our day-to-day affairs.

So what is to be made of those areas that are beyond the scope of our direct study but do have an impact on our lives, both personal and professional? For example, how is a behavior analyst to deal effectively with the many claims made about the genetic underpinnings of a variety of conditions, including obesity, a learning disability, or autism? Ultimately, many of us will have to be dogmatic in approach, but we should be carefully dogmatic. At best, we are likely to consult reviews of the research literature in lieu of the literature itself. But in so doing, we are subject to the biases of interpretation in the writing of the reviewer. At worst, we learn of some new fad diet or therapy from someone already convinced of its effectiveness and thereby vested in convincing us of its effectiveness by providing only evidence seeming to support the claim. There is no easy way for the non-specialist to identify pseudoscience in unfamiliar disciplines. However, as discussed in the next section, one or more red flags typically accompany pseudoscientific claims.

Characteristics of Pseudoscience

He Said, She Said

Pseudoscientific claims often eschew objective experimental evidence in favor of anecdotes or testimonials. The current autism-vaccine controversy is a case in point. A large vocal contingent of parents and professionals contend that the Measles-Mumps-Rubella (MMR) vaccine or other vaccines that contained a mercury-based preservative called thimerosal are the cause behind the recent autism “explosion.” A commonly cited piece of evidence for the alleged link between certain vaccines and autism is that parents of children with autism report that their child only began to show signs of autism after receiving a vaccination. These parent reports have become even more important in the face of mounting empirical evidence failing to show even a correlation between vaccine administration and autism diagnosis (Normand & Dallery, 2007). When the available scientific evidence is examined, parent testimonies are essentially the only “evidence” that supports a link at all. Despite their best intentions, parent reports are poor sources of evidence, as parents rarely have extensive training in behavioral observation, their observations are not independently corroborated to ensure accuracy, and, being the parents of the children observed, they are far from objective.

Other times, the anecdotal nature of the evidence for a claim is dressed up in scientific garb, as is the case with claims that mega-vitamin regimens produce marked improvements in young children with autism (e.g., Barthelemy et al., 1981; Rimland, Callaway, & Dreyfus, 1978). The arguments for such treatments are replete with examples of children who reportedly improved after they began a mega-vitamin regimen. A critical problem with such evidence is that the published studies rely almost exclusively on parent reports of changes in child behavior. Rather than being presented as anecdotes, the reports are dressed up as scientific data (usually quantified in some way and analyzed statistically), giving the impression of something more substantial (e.g., Barthelemy et al., 1978). Additionally, steps must be taken to isolate the effects of the vitamins from any other intervention. If the vitamins are only one part of a larger collection of intervention strategies, including intensive behavior analytic intervention, it would be inappropriate to attribute the observed improvement in the child’s behavior to vitamins rather than to any of the other strategies or combinations thereof.

The Unfalsifiable Claim

Scientific studies refuting pseudoscientific claims often are criticized and dismissed on grounds of poor methodological rigor or problematic design. Such is the case with facilitated communication (FC) with persons diagnosed with autism. FC proponents claim that it enables these individuals to communicate through the aid of a “facilitator” who physically guides their hand over a keyboard so that they can type messages. A number of well-controlled experiments have demonstrated that it is the facilitators doing the communicating (Jacobson, Mulick, & Schwartz, 1995). Simply put, if the facilitator does not have access to the question posed, a correct answer is not given. Douglas Biklen, one of the main proponents of FC, frequently dismisses this sizeable body of experimental research on the grounds that the studies are poorly designed and conducted, though no acceptable scientific rationale for this claim is offered (Biklen, 1993). These studies all meet the well-established standards of experimental design and appear in reputable peer-reviewed scientific journals. As a defense, Biklen has suggested that the methods employed in the contradictory studies are predicated on the assumption that human behavior can be understood from a natural science perspective, and
