Building from last week's discussions.
Research Question and Introduction Development
In this assignment, you will use Chapter 3 of Rossi (2004) as a guide in the development of a proper research question that can be used as the foundation upon which you will build your Final Paper. To further assist you in the development of a practical, measurable, and valid research question, read the introduction and methodology sections of any of the journal articles listed within the Required Resources or Recommended Resources sections throughout the course. Examining the methodologies and opening sections of a journal article will provide examples of how other scholars have generated their hypotheses and research questions. The research question will appear in the introduction of the Final Paper as well as within its methodology section.
Next, you will use your research question to assist in the formation of a proper introduction for your Final Paper. Provide your readers with a summary of the topic under consideration, its place in the field of criminal justice, why it is important to undertake an evaluation of the topic, policy relevance, social significance, and anything else you discover that might be considered noteworthy. You can use this week’s recommended article or any scholarly articles you find to guide you as to the proper organization of the introduction. Your introduction and research question should be drafted in such a manner as to be suitable for presentation before an audience of criminal justice professionals.
In your paper,
- Formulate the research question you will address in your Final Paper.
- Write the introduction to your Final Paper.
The “Research Question and Introduction Development” assignment
- Must be 750 to 1,250 words in length (not including title and references pages), double spaced, and formatted according to APA style as outlined in the Writing Center.
- Must include a separate title page with the following:
- Title of paper
- Student’s name
- Course name and number
- Instructor’s name
- Date submitted
- Must use at least three scholarly sources in addition to the course text.
- Must document all sources in APA style as outlined in the Writing Center.
- Must include a separate references page that is formatted according to APA style as outlined in the Writing Center.
Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). Retrieved from https://content.uagc.edu
- Chapter 3: Identifying Issues and Formulating Questions
- Chapter 4: Assessing the Need for a Program
Farrington, D. P. (2006). Methodological quality and the evaluation of anti-crime programs. Journal of Experimental Criminology, 2(3), 329-337. doi: 10.1007/s11292-006-9012-y
- The full-text version of this article can be accessed through the ProQuest database in the Ashford University Library. The National Research Council report is an important and well-respected contribution to the knowledge base for the field of criminal justice. This report highlights the methods through which researchers undertake a program evaluation. A variety of methodological issues such as evaluability, effect size, benefit-cost ratio, attrition, observational methods, area-based programs, descriptive validity, and generalizability are discussed throughout the paper. This article is an excellent tool for demonstrating the host of considerations underlying program evaluations, and it will assist you in the completion of this week's discussion posts.
Identifying Issues and Formulating Questions
The previous chapter presented an overview of the many considerations that go into tailoring an evaluation. Although all those matters are important to evaluation design, the essence of evaluation is generating credible answers to questions about the performance of a social program. Good evaluation questions must address issues that are meaningful in relation to the nature of the program and also of concern to key stakeholders. They must be answerable with the research techniques available to the evaluator and formulated so that the criteria by which the corresponding program performance will be judged are explicit or can be determined in a straightforward way.
A set of carefully crafted evaluation questions, therefore, is the hub around which evaluation revolves. It follows that a careful, explicit formulation of those questions greatly facilitates the design of the evaluation and the use of its findings. Evaluation questions may take various forms, some of which are more useful and meaningful than others for stakeholders and program decisionmakers. Furthermore, some forms of evaluation questions are more amenable to the evaluator’s task of providing credible answers, and some address critical program effectiveness issues more directly than others.
This chapter discusses practical ways in which evaluators can fashion effective evaluation questions. An essential step is identification of the decisionmakers who will use the evaluation results, what information they need, and how they expect to use it. The evaluator’s own analysis of the program is also important. One approach that is particularly useful for this purpose is articulation of the program theory, a detailed account of how and why the program is supposed to work. Consideration of program theory focuses attention on critical events and premises that may be appropriate topics of inquiry in the evaluation.
A critical phase in an evaluation is the identification and formulation of the questions the evaluation is to address. One might assume that this step would be very straightforward, indeed, that the questions would be stipulated routinely as part of the process of commissioning the evaluation. As described in Chapter 2, however, it is rare for final, workable evaluation questions to be specified clearly by the evaluation sponsor at the beginning of an evaluation. Nor can the evaluator usually step in and define those questions solely on the basis of his or her professional expertise. That maneuver would increase the risk that the evaluation would not be responsive to stakeholder concerns, would not be useful or used, and would be attacked as irrelevant or inappropriate.
To ensure that the evaluation will focus on the matters of greatest concern to the pertinent decisionmakers and stakeholders, the initial evaluation questions are best formulated through discourse and negotiation with them. Equally important, engaging key stakeholders increases the likelihood that they will understand, appreciate, and make effective use of the findings when they become available.
Although input from stakeholders is critical, the evaluator should not depend solely on their perspective to identify the issues the evaluation will address. Sometimes the evaluation sponsors are very knowledgeable about evaluation and will have formulated a complete and workable set of questions to which the evaluation should attend. More often, however, the evaluation sponsors and program stakeholders are not especially expert at evaluation or, if so, have not done all the groundwork needed to focus the evaluation. This means that the evaluator will rarely be presented at the outset with a finished list of issues the evaluation should address for the results to be useful, interpretable, and complete. Nor will the questions that are put forward generally be formulated in a manner that permits ready translation into research design.
The evaluator, therefore, also has a crucial role in the framing of evaluation questions. The stakeholders will be the experts on the practical and political issues facing the program, but the evaluator should know the most about how to analyze a program and focus an evaluation. The evaluator must be prepared to raise issues that otherwise might be overlooked, identify aspects of the program’s operations and outcomes that might warrant inquiry, and work with stakeholders to translate their concerns into questions that evaluation research can actually answer.
It is generally wise for the evaluator to develop a written summary of the specific questions that will guide the evaluation design. This provides a useful reference to consult while designing the evaluation and selecting research procedures. Perhaps more important, the evaluator can discuss this summary statement with the evaluation sponsor and key stakeholders to ensure that it encompasses their concerns. Such a procedure also can safeguard against later misunderstanding of what the evaluation was supposed to accomplish.
The remainder of this chapter examines the two most important topics related to specifying the questions that will guide an evaluation: (1) how to formulate evaluation questions in such a way that they can be addressed using the research procedures available to the evaluator, and (2) how to determine the specific questions on which the evaluation should focus.
3.1 What Makes a Good Evaluation Question?
The form that evaluation questions should take is shaped by the functions they must perform. Their principal role is to focus the evaluation on the areas of program performance at issue for key decisionmakers and stakeholders. They should also facilitate the design of a data collection procedure that will provide meaningful information about that area of performance. In particular, a good evaluation question must identify a distinct dimension of relevant program performance and do so in such a way that the quality of the performance can be credibly assessed. Such assessment, in turn, requires an accurate description of the nature of the performance and some standard by which it can be evaluated (see Exhibit 3-A). Each of these aspects of good evaluation questions warrants further discussion.
EXHIBIT 3-A What It Means to Evaluate Something
There are different kinds of inquiry across practice areas, such as that which is found in law, medicine, and science. Common to each kind of inquiry is a general pattern of reasoning or basic logic that guides and informs the practice… . Evaluation is one kind of inquiry, and it, too, has a basic logic or general pattern of reasoning [that has been put forth by Michael Scriven]… . This general logic of evaluation is as follows:
1. Establishing criteria of merit. On what dimensions must the evaluand [thing being evaluated] do well?
2. Constructing standards. How well should the evaluand perform?
3. Measuring performance and comparing with standards. How well did the evaluand perform?
4. Synthesizing and integrating data into a judgment of merit or worth. What is the merit or worth of the evaluand?
… To evaluate anything means to assess the merit or worth of something against criteria and standards. The basic logic explicated by Scriven reflects what it means when we use the term to evaluate.
SOURCE: Quoted from Deborah M. Fournier, Establishing Evaluative Conclusions: A Distinction Between General and Working Logic, New Directions for Evaluation, no. 68 (San Francisco: Jossey-Bass, 1995), p. 16.
Dimensions of Program Performance
Good evaluation questions must first of all be reasonable and appropriate. That is, they must identify performance dimensions that are relevant to the expectations stakeholders hold for the program and that represent domains in which the program can realistically hope to have accomplishments. It would hardly be fair or sensible, for instance, to ask if a low-income housing weatherization program reduced the prevalence of drug dealing in a neighborhood. Nor would it generally be useful to ask a question as narrow as whether the program got a bargain in its purchase of file cabinets for its office. Furthermore, evaluation questions must be answerable; that is, they must involve performance dimensions that are sufficiently specific, concrete, practical, and measurable that meaningful information can be obtained about their status. An evaluator would have great difficulty determining whether an adult literacy program improved a community’s competitiveness in the global economy or whether the counselors in a drug prevention program were sufficiently caring in their relations with clients.
Evaluation Questions Must Be Reasonable and Appropriate
Program advocates often proclaim grandiose goals (e.g., improve the quality of life for children), expect unrealistically large effects, or believe the program to have accomplishments that are clearly beyond its actual capabilities. Good evaluation questions deal with performance dimensions that are appropriate and realistic for the program. This means that the evaluator must often work with relevant stakeholders to scale down and focus the evaluation questions. The manager of a community health program, for instance, might initially ask, “Are our education and outreach services successful in informing the public about the risk of AIDS?” In practice, however, those services may consist of little more than occasional presentations by program staff at civic club meetings and health fairs. With this rather modest level of activity, it may be unrealistic to expect the public at large to receive much AIDS information. If a question about this service is deemed important for the evaluation, a better version might be something such as “Do our education and outreach services raise awareness of AIDS issues among the audiences addressed?” and “Do those audiences represent community leaders who are likely to influence the level of awareness of AIDS issues among other people?”
There are two complementary ways for an evaluator, in collaboration with pertinent stakeholders, to assess how appropriate and realistic a candidate evaluation question is. The first is to examine the question in the context of the actual program activities related to it. In the example above, for instance, the low-key nature of the education and outreach services was clearly not up to the task of “informing the public about the risk of AIDS,” and there would be little point in having the evaluation attempt to determine if this was the actual outcome. The evaluator and relevant stakeholders should identify and scrutinize the program components, activities, and personnel assignments that relate to program performance and formulate the evaluation question in a way that is reasonable given those characteristics.
The second way to assess whether candidate evaluation questions are reasonable and appropriate is to analyze them in relationship to the findings reported in applicable social science and social service literature. For instance, the sponsor of an evaluation of a program for juvenile delinquents might initially ask if the program increases the self-esteem of the delinquents, in the belief that inadequate self-esteem is a problem for these juveniles and improvements will lead to better behavior. Examination of the applicable social science research, however, will reveal that juvenile delinquents do not generally have problems with self-esteem and, moreover, that increases in self-esteem are not generally associated with reductions in delinquency. In light of this information, the evaluator and the evaluation sponsor may well agree that the question of the program’s impact on self-esteem is not appropriate.
The foundation for formulating appropriate and realistic evaluation questions is detailed and complete program description. Early in the process, the evaluator should become thoroughly acquainted with the program—how it is structured, what activities take place, the roles and tasks of the various personnel, the nature of the participants, and the assumptions inherent in its principal functions. The stakeholder groups with whom the evaluator collaborates (especially program managers and staff) will also have knowledge about the program, of course. Evaluation questions that are inspired by close consideration of actual program activities and assumptions will almost automatically be appropriate and realistic.
Evaluation Questions Must Be Answerable
It is obvious that the evaluation questions around which an evaluation plan is developed should be answerable. Questions that cannot be answered may be intriguing to philosophers but do not serve the needs of evaluators and the decisionmakers who intend to use the evaluation results. What is not so obvious, perhaps, is how easy it is to formulate an unanswerable evaluation question without realizing it. This may occur because the terms used in the question, although seemingly commonsensical, are actually ambiguous or vague when the time comes for a concrete interpretation (“Does this program enhance family values?”). Or sensible-sounding questions may invoke issues for which there are so few observable indicators that little can be learned about them (“Are the case managers sensitive to the social circumstances of their clients?”). Also, some questions lack sufficient indication of the relevant criteria to permit a meaningful answer (“Is this program successful?”). Finally, some questions may be answerable only with more expertise, data, time, or resources than are available to the evaluation (“Do the prenatal services this program provides to high-risk women increase the chances that their children will complete college?”).
For an evaluation question to be answerable, it must be possible to identify some evidence or “observables” that can realistically be obtained and that will be credible as the basis for an answer. This generally means developing questions that involve measurable performance dimensions stated in terms that have unambiguous and noncontroversial definitions. In addition, the relevant standards or criteria must be specified with equal clarity. Suppose, for instance, that a proposed evaluation question for a compensatory education program like Head Start is, “Are we reaching the children most in need of this program?” To affirm that this is an answerable question, the evaluator should be able to do the following:
1. Define the group of children at issue (e.g., those in census tract such and such, four or five years old, living in households with annual income under 150% of the federal poverty level).
2. Identify the specific measurable characteristics and cutoff values that represent the greatest need (e.g., annual income below the federal poverty level, single parent in the household with educational attainment of less than high school).
3. Give an example of the evaluation finding that might result (e.g., 60% of the children currently served fall in the high-need category; 75% of the high-need children in the catchment area—the geographic area being served by the program— are not enrolled in the program).
4. Stipulate the evaluative criteria (e.g., to be satisfactory, at least 90% of the children in the program should be high need and at least 50% of the high-need children in the catchment area should be in the program).
5. Have the evaluation sponsors and other pertinent stakeholders (who should be involved in the whole process) agree that a finding meeting these criteria would, indeed, answer the question.
If such conditions can be met and, in addition, the resources are available to collect, analyze, and report the applicable data, then the evaluation question can be considered answerable.
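Once criteria like those in steps 1-4 are stipulated, checking whether a finding satisfies them is a mechanical comparison of measured percentages against agreed thresholds. The following sketch illustrates this using the illustrative figures from the Head Start example above; the function name and the counts are hypothetical, not real program data or part of the Rossi text.

```python
# Hypothetical sketch of applying the two evaluative criteria from the
# Head Start example: (1) at least 90% of enrolled children should be
# high need, and (2) at least 50% of high-need children in the catchment
# area should be enrolled. All numbers are illustrative.

def assess_targeting(high_need_enrolled, total_enrolled, high_need_in_area):
    """Return whether each of the two stipulated criteria is met."""
    # Criterion 1: share of enrolled children who are high need >= 90%
    enrollment_share = high_need_enrolled / total_enrolled
    # Criterion 2: share of the area's high-need children enrolled >= 50%
    coverage = high_need_enrolled / high_need_in_area
    return enrollment_share >= 0.90, coverage >= 0.50

# Example finding: 60 of 100 enrolled children are high need, and the
# catchment area contains 240 high-need children (so 75% are not enrolled).
criterion_1_met, criterion_2_met = assess_targeting(
    high_need_enrolled=60, total_enrolled=100, high_need_in_area=240)
print(criterion_1_met, criterion_2_met)  # prints: False False
```

On these figures neither standard is satisfied, which is exactly the kind of unambiguous answer that step 5 asks stakeholders to agree on in advance.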
Criteria for Program Performance
Beginning a study with a reasonable, answerable question or set of questions, of course, is customary in the social sciences (where the questions often are framed as hypotheses). What distinguishes evaluation questions is that they have to do with performance